Q-MKL: Matrix-induced Regularization in Multi-Kernel Learning with Applications to Neuroimaging*
Chris Hinrichs, Vikas Singh, Sterling C. Johnson
University of Wisconsin, Madison, WI
Wm. S. Middleton Memorial VA Hospital / Geriatric Research Education & Clinical Center, Madison, WI
{hinrichs@cs, vsingh@biostat, scj@medicine}.wisc.edu

Jiming Peng
University of Illinois, Urbana-Champaign, IL
[email protected]

*Supported by NIH (R01AG040396), (R01AG021155); NSF (RI 1116584), (DMS 09-15240 ARRA), and (CMMI-1131690); Wisconsin Partnership Proposal; UW ADRC; UW ICTR (1UL1RR025011); AFOSR (FA9550-09-1-0098); and NLM (5T15LM007359). The authors would like to thank Maxwell Collins and Sangkyun Lee for many helpful discussions.
Abstract
Multiple Kernel Learning (MKL) generalizes SVMs to the setting where one simultaneously trains a linear classifier and chooses an optimal combination of given base kernels. Model complexity is typically controlled using various norm regularizations on the base kernel mixing coefficients. Existing methods neither regularize nor exploit potentially useful information pertaining to how kernels in the input set "interact"; that is, higher order kernel-pair relationships that can be easily obtained via unsupervised (similarity, geodesics), supervised (correlation in errors), or domain knowledge driven mechanisms (which features were used to construct the kernel?). We show that by substituting the norm penalty with an arbitrary quadratic function $Q \succeq 0$, one can impose a desired covariance structure on the mixing weights, and use this as an inductive bias when learning the concept. This formulation significantly generalizes the widely used 1- and 2-norm MKL objectives. We explore the model's utility via experiments on a challenging Neuroimaging problem, where the goal is to predict a subject's conversion to Alzheimer's Disease (AD) by exploiting aggregate information from many distinct imaging modalities. Here, our new model outperforms the state of the art (p-values below $10^{-3}$). We briefly discuss ramifications in terms of learning bounds (Rademacher complexity).
1 Introduction
Kernel learning methods (such as Support Vector Machines) are conceptually simple, strongly rooted in statistical learning theory, and can often be formulated as a convex optimization problem. As a result, SVMs have come to dominate the landscape of supervised learning applications in bioinformatics, computer vision, neuroimaging, and many other domains. A standard SVM-based "learning system" may be conveniently thought of as a composition of two modules [1, 2, 3, 4]: (1) feature pre-processing, and (2) a core learning algorithm. The design of a kernel (feature pre-processing) may involve using different sets of extracted features, dimensionality reductions, or parameterizations of the kernel functions. Each of these alternatives produces a distinct kernel matrix. While much research has focused on efficient methods for the latter (i.e., support vector learning) step, specific choices of feature pre-processing are frequently a dominant factor in the system's overall
performance as well, and may involve significant user effort. Multi-kernel learning [5, 6, 7] transfers a part of this burden from the user to the algorithm. Rather than selecting a single kernel, MKL offers the flexibility of specifying a large set of kernels corresponding to the many options (i.e., kernels) available, and additively combining them to construct an optimized, data-driven Reproducing Kernel Hilbert Space (RKHS), while simultaneously finding a max-margin classifier. MKL has
turned out to be very successful in many applications: on several important vision problems (such as image categorization), some of the best known results on community benchmarks come from MKL-type methods [8, 9]. In the context of our primary motivating application, the current state of the art in multi-modality neuroimaging-based Alzheimer's Disease (AD) prediction [10] is achieved by multi-kernel methods [3, 4], where each imaging modality spawns a kernel, or set of kernels.
In allowing the user to specify an arbitrary number of base kernels for combination, MKL provides more expressive power, but this comes with the responsibility to regularize the kernel mixing coefficients so that the classifier generalizes well. While the importance of this regularization cannot be overstated, it is also a fact that commonly used $\ell_p$ norm regularizers operate on kernels separately, without explicitly acknowledging dependencies and interactions among them. To see how such dependencies can arise in practice, consider our neuroimaging learning problem of interest: the task of learning to predict the onset of AD. A set of base kernels $K_1, \ldots, K_M$ are derived from several different medical imaging modalities (MRI; PET), image processing methods (morphometric; anatomical modelling), and kernel functions (linear; RBF). Some features may be shared between kernels, or kernel functions may use similar parameters. As a result, we expect the kernels' behaviors to exhibit some correlational, or other, cluster structure according to how they were constructed. (See Fig. 2(a) and related text for a concrete discussion of these behaviors in our problem of interest.) We will denote this relationship as $Q \in \mathbb{R}^{M \times M}$.
Ideally, the regularizer should reflect these dependencies encoded by $Q$, as they can significantly impact the learning characteristics of a linearly combined kernel. Some extensions work at the level of group membership (e.g., [11]), but do not explicitly quantify these interactions. Instead, rather than penalizing covariances or inducing sparsity among groups of kernels, it may be beneficial to reward such covariances, so as to better reflect a latent cluster structure between kernels. In this paper, we show that a rich class of regularization schemes is possible under a new MKL formulation which regularizes on $Q$ directly: the model allows one to exploit domain knowledge (as above) and statistical measures of interaction between kernels, employ estimated error covariances in ways that are not possible with $\ell_p$-norm regularization, or encourage sparsity, group sparsity or non-sparsity as needed, all within a convex optimization framework. We call this form of multi-kernel learning Q-norm MKL or "Q-MKL". This paper makes the following contributions: (a) presents our new Q-MKL model which generalizes 1- (and 2-) norm MKL models, (b) provides a learning theoretic result showing that Q-MKL can improve MKL's generalization error rate, (c) develops efficient optimization strategies (to be distributed with the Shogun toolbox), and (d) provides empirical results demonstrating statistically significant gains in accuracy on the important AD prediction problem.
Background. The development of MKL methods began with [5], which showed that the problem of learning the right kernel for an input problem instance could be formulated as a Semi-Definite Program (SDP). Subsequent papers have focused on designing more efficient optimization methods, which have enabled its application to large-scale problem domains. To this end, the model in [5] was shown to be solvable as a Second Order Cone Program [12], a Semi-Infinite Linear Program [6], and via gradient descent methods in the dual and primal [7, 13]. More recently, efforts have focused on generalizing MKL to arbitrary p-norm regularizers with $p > 1$ [13, 14] while maintaining overall efficiency. In [14], the authors briefly mentioned that more general norms may be possible, but this issue was not further examined. A non-linear "hyperkernel" method was proposed in [15] which maps the kernels themselves to an implicit RKHS; however, this method is computationally very demanding (it has 4th order interactions among training examples). The authors of [16] proposed to first select the sub-kernel weights by minimizing an objective function derived from Normalized Cuts, and subsequently train an SVM on the combined kernel. In [17, 2], a method was proposed for selecting an optimal finite combination from an infinite parameter space of kernels. Contemporary to these results, [18] showed that if a large number of kernels had a desirable shared structure (e.g., followed directed acyclic dependencies), extensions of MKL could still be applied. Recently in [8], a set of base classifiers were first trained using each kernel and were then boosted to produce a strong multi-class classifier. At this time, MKL methods [8, 9] provide some of the best known accuracy on image categorization datasets such as Caltech101/256 (see www.robots.ox.ac.uk/~vgg/software/MKL/). Next, we describe in detail the motivation and theoretical properties of Q-MKL.
2 From MKL to Q-MKL
MKL Models. Adding kernels corresponds to taking a direct sum of Reproducing Kernel Hilbert Spaces (RKHS), and scaling a kernel by a constant $c$ scales the axes of its RKHS by $\sqrt{c}$. In the MKL setting, the SVM margin regularizer $\frac{1}{2}\|w\|^2$ becomes a weighted sum $\frac{1}{2}\sum_{m=1}^{M}\frac{\|w_m\|^2_{\mathcal{H}_m}}{\beta_m}$ over contributions from RKHS's $\mathcal{H}_1,\ldots,\mathcal{H}_M$, where the vector of mixing coefficients $\beta$ scales each respective RKHS [14]. A norm penalty on $\beta$ ensures that the units in which the margin is measured are meaningful (provided the base kernels are normalized). The MKL primal problem is given as

$$\min_{w,b,\beta\geq 0,\xi\geq 0}\ \frac{1}{2}\sum_{m=1}^{M}\frac{\|w_m\|^2_{\mathcal{H}_m}}{\beta_m} + C\sum_{i=1}^{n}\xi_i + \|\beta\|_p^2 \quad \text{s.t.}\ \ y_i\left(\sum_{m=1}^{M}\langle w_m,\phi_m(x_i)\rangle_{\mathcal{H}_m} + b\right) \geq 1-\xi_i, \quad (1)$$
where $\phi_m(x)$ is the (potentially unknown) transformation from the original data space to the $m$th RKHS $\mathcal{H}_m$. As in SVMs, we turn to the dual problem to see the role of kernels:

$$\max_{0\leq\alpha\leq C}\ \alpha^T\mathbf{1} - \frac{1}{2}\|G\|_q, \qquad G\in\mathbb{R}^M;\ \ G_m = (\alpha\circ y)^T K_m (\alpha\circ y), \quad (2)$$

where $\circ$ denotes element-wise multiplication, and the dual $q$-norm follows the identity $\frac{1}{p}+\frac{1}{q}=1$. Note that the primal norm penalty $\|\beta\|_p^2$ becomes a dual-norm on the vector $G$. At optimality, $w_m = \beta_m(\alpha\circ y)^T\phi_m(X)$, and so the term $G_m = (\alpha\circ y)^T K_m(\alpha\circ y) = \frac{\|w_m\|^2_{\mathcal{H}_m}}{\beta_m^2}$ is the vector of scaled classifier norms. This shows that the dual norm is tied to how MKL measures the margin in each RKHS.
The Q-MKL model. The key characteristic of Q-MKL is that the standard (squared) $\ell_p$-norm penalty on $\beta$, along with the corresponding dual-norm penalty in (2), is substituted with a more general class of quadratic penalty functions, expressed as $\beta^T Q\beta = \|\beta\|_Q^2$. $\|\beta\|_Q = \sqrt{\beta^T Q\beta}$ is a Mahalanobis (matrix-induced) norm so long as $Q \succ 0$. In this framework, the burden of choosing a kernel is deferred to a choice of $Q$-function. This approach gives the algorithm greater flexibility while controlling model complexity, as we will discuss shortly. The model we optimize is

$$\min_{w,b,\beta\geq 0,\xi\geq 0}\ \frac{1}{2}\sum_{m=1}^{M}\frac{\|w_m\|^2_{\mathcal{H}_m}}{\beta_m} + C\sum_{i=1}^{n}\xi_i + \beta^T Q\beta \quad \text{s.t.}\ \ y_i\left(\sum_{m=1}^{M}\langle w_m,\phi_m(x_i)\rangle_{\mathcal{H}_m} + b\right) \geq 1-\xi_i, \quad (3)$$

where the last objective term provides a bias relative to $\beta^T Q\beta$. The dual problem becomes $\max_\alpha\ \alpha^T\mathbf{1} - \frac{1}{2}\sqrt{G^T Q^{-1} G}$. It is easy to see that if $Q = \mathbf{1}_{M\times M}$, we obtain the $p=1$ form of (1), i.e., 1-norm MKL, as a special case because $\beta^T\mathbf{1}_{M\times M}\beta = \|\beta\|_1^2$. On the other hand, setting $Q$ to $I_{M\times M}$ (identity) reduces to 2-norm MKL.
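To make these two special cases concrete, the following minimal NumPy check (our own sketch; the variable names are not from the paper) verifies numerically that $Q = \mathbf{1}_{M\times M}$ reproduces the squared 1-norm penalty on $\beta \geq 0$, and $Q = I_{M\times M}$ the squared 2-norm:

```python
import numpy as np

M = 5
rng = np.random.default_rng(0)
beta = rng.random(M)  # mixing weights, beta >= 0

# Q = all-ones matrix: beta^T Q beta == (sum_m beta_m)^2 == ||beta||_1^2 for beta >= 0
Q_ones = np.ones((M, M))
assert np.isclose(beta @ Q_ones @ beta, np.sum(beta) ** 2)

# Q = identity: beta^T Q beta == ||beta||_2^2
Q_id = np.eye(M)
assert np.isclose(beta @ Q_id @ beta, np.sum(beta ** 2))
```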
3 The case for Q-MKL
Extending the MKL regularizer to arbitrary quadratics $Q \succeq 0$ significantly expands the richness of the MKL framework; yet we can show that for reasonable choices of $Q$, this actually decreases MKL's learning-theoretic complexity. Joachims et al. [19] derived a theoretical generalization error bound on kernel combinations which depends on the degree of redundancy between support vectors in SVMs trained on base kernels individually. Using this type of correlational structure, we can derive a $Q$ function between kernels to automatically select a combination of kernels which will maximize this bound. This type of $Q$ function can be shown to have lower Rademacher complexity (see below), while simultaneously decreasing the error bound from [19], which does not directly depend on Rademacher complexity.
3.1 Virtual Kernels, Rademacher Complexity and Renyi Entropy

If we decompose $Q$ into its component eigen-vectors, we can see that each eigen-vector defines a linear combination of kernels. This observation allows us to analyze Q-MKL in terms of these objects, which we will refer to as Virtual Kernels. We first show that as $Q^{-1}$'s eigen-values decay, so do the traces of the virtual kernels. Assuming $Q^{-1}$ has a bounded, non-uniform spectrum, this property can then be used to analyze (and bound) Q-MKL's Rademacher complexity, which has been shown to depend on the traces of the base kernels. We then offer a few observations on how $Q^{-1}$'s Renyi entropy [20] relates to these learning theoretic bounds.
Virtual Kernels. In the following, assume that $Q \succ 0$ has eigen-decomposition $Q = V\Lambda V^T$, with $V = \{v_1,\cdots,v_M\}$. First, observe that because $Q$'s eigen-vectors provide an orthonormal basis of $\mathbb{R}^M$, $\beta\in\mathbb{R}^M$ can be expressed as a linear combination in this basis with $\gamma$ as its coefficients: $\beta = \sum_i \gamma_i v_i = V\gamma$. Substituting in $\beta^T Q\beta$ we have

$$\beta^T Q\beta = (\gamma^T V^T)V\Lambda V^T(V\gamma) = \gamma^T(V^T V)\Lambda(V^T V)\gamma = \gamma^T\Lambda\gamma = \sum_i \gamma_i^2\lambda_i. \quad (4)$$
This simple observation offers an alternate view of what Q-MKL is actually optimizing. Each eigen-vector $v_i$ of $Q$ can be used to define a linear combination of kernels, which we will refer to as the virtual kernel $\widetilde{K}_i = \sum_m v_i(m)K_m$. Note that if $\widetilde{K}_i \succeq 0,\ \forall i$, then they each define an independent RKHS. This can be ensured by choosing $Q$ in a specific way, if desired. This leads to the following result:

Lemma 1. If $\widetilde{K}_i \succeq 0,\ \forall i$, then Q-MKL is equivalent to 2-norm MKL using virtual kernels instead of base kernels.
Proof. Let $\delta_i = \sqrt{\lambda_i}\,\gamma_i$. Then $\beta^T Q\beta = \|\delta\|_2^2$ (eq. 4), and

$$K^* = \sum_{m=1}^{M}\beta_m K_m = \sum_{i=1}^{M}\gamma_i\sum_{m=1}^{M}v_i(m)K_m = \sum_{i=1}^{M}\delta_i\,\lambda_i^{-\frac{1}{2}}\sum_{m=1}^{M}v_i(m)K_m = \sum_{i=1}^{M}\delta_i\widetilde{K}_i,$$

where $\widetilde{K}_i = \lambda_i^{-\frac{1}{2}}\sum_m v_i(m)K_m$ is the $i$th virtual kernel. The learned kernel $K^*$ is a weighted combination of virtual kernels, and the coefficients are regularized under a squared 2-norm.
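As a sanity check on Lemma 1, the short sketch below (our own construction; the $\lambda_i^{-1/2}$ rescaling follows the proof) builds the virtual kernels from random p.s.d. base kernels and confirms that $\sum_m \beta_m K_m = \sum_i \delta_i \widetilde{K}_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 4, 10
# random p.s.d. base kernels and a positive definite Q
Ks = [(lambda A: A @ A.T)(rng.standard_normal((n, n))) for _ in range(M)]
A = rng.standard_normal((M, M))
Q = A @ A.T + M * np.eye(M)

lam, V = np.linalg.eigh(Q)          # Q = V diag(lam) V^T
beta = rng.random(M)
gamma = V.T @ beta                   # coordinates of beta in the eigen-basis
delta = np.sqrt(lam) * gamma         # delta_i = sqrt(lam_i) * gamma_i

K_combined = sum(b * K for b, K in zip(beta, Ks))
K_virtual = sum(
    d * (lam[i] ** -0.5) * sum(V[m, i] * Ks[m] for m in range(M))
    for i, d in enumerate(delta)
)
assert np.allclose(K_combined, K_virtual)        # same learned kernel
assert np.isclose(beta @ Q @ beta, np.sum(delta ** 2))  # penalty = ||delta||_2^2
```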
Rademacher Complexity in MKL. With this result in hand, we can now evaluate the Rademacher complexity of Q-MKL by using a recent result for p-norm MKL. We first state a theorem from [21], which relates the Rademacher complexity of MKL to the traces of its base kernels.

Theorem 1. ([21]) The empirical Rademacher complexity on a sample set $S$ of size $n$, with $M$ base kernels, is given as follows (with $\eta_0 = \frac{23}{22}$):

$$R_S(H_M^p) \leq \sqrt{\frac{\eta_0\, q\, \|u\|_q}{n}}, \quad (5)$$

where $u = [\text{Tr}(K_1),\cdots,\text{Tr}(K_M)]^T$ and $\frac{1}{p}+\frac{1}{q}=1$.
The bound in (5) shows that the Rademacher complexity $R_S(\cdot)$ depends on $\|u\|_q$, a norm on the base kernels' traces. Assuming they are normalized to have unit trace, the bound for $p=q=2$-norm MKL is governed by $\|u\|_2 = \sqrt{M}$. However, in Q-MKL the virtual kernel traces are not equal, and are in fact given by $\text{Tr}(\widetilde{K}_i) = \sqrt{\lambda_i}\,\mathbf{1}^T v_i$, where $\lambda_i$ here denotes the $i$th eigen-value of $Q^{-1}$. With this expression for the traces of the virtual kernels, we can now prove that the bound given in (5) is strictly decreased as long as the eigen-values $\lambda_i$ of $Q^{-1}$ are in the range $(0,1]$. (Adding 1 to the diagonal of $Q$ is sufficient to guarantee this.)

Theorem 2. If $Q^{-1} \neq I_{M\times M}$ and $\widetilde{K}_i \succeq 0\ \forall i$, then the bound on Rademacher complexity given in (5) is strictly lower for Q-MKL than for 2-norm MKL.
Proof. By Lemma 1, we have that the bound in (5) will decrease if $\|u\|_2$, the norm on the virtual kernel traces, decreases. As shown above, the virtual kernel traces are given as $\text{Tr}(\widetilde{K}_i) = \sqrt{\lambda_i}\,\mathbf{1}^T v_i$, meaning that $\|u\|_2^2 = \sum_i \lambda_i(\mathbf{1}^T v_i)^2 = \sum_i \lambda_i \mathbf{1}^T v_i v_i^T\mathbf{1} = \mathbf{1}^T Q^{-1}\mathbf{1}$. Clearly, this sum is maximal for $\lambda_i = 1,\ \forall i$, which is true if and only if $Q^{-1} = I_{M\times M}$. This means that when $Q \neq I_{M\times M}$, the bound in (5) is strictly decreased.
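The identity $\|u\|_2^2 = \mathbf{1}^T Q^{-1}\mathbf{1}$ used in the proof is easy to confirm numerically. The sketch below (our own; the diagonal of $Q$ is shifted by 1 so that the eigen-values of $Q^{-1}$ lie in $(0,1]$) also checks that the resulting norm drops below the 2-norm MKL value of $\sqrt{M}$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6
A = rng.standard_normal((M, M))
Q = A @ A.T + np.eye(M)              # eigen-values of Q >= 1, so those of Q^{-1} lie in (0, 1]

Qinv = np.linalg.inv(Q)
lam, V = np.linalg.eigh(Qinv)        # eigen-pairs of Q^{-1}
ones = np.ones(M)

u_sq = np.sum(lam * (ones @ V) ** 2)           # sum_i lam_i (1^T v_i)^2
assert np.isclose(u_sq, ones @ Qinv @ ones)    # = 1^T Q^{-1} 1

# For 2-norm MKL (Q = I) the same quantity equals M; here it is strictly smaller.
print(np.sqrt(u_sq), np.sqrt(M))
```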
Note that requiring the virtual kernels to be p.s.d., while achievable (see supplements), is somewhat restrictive. In practice, such a $Q$ matrix may not differ substantially from $I_{M\times M}$. We therefore provide the following result which frees us from this restriction, and has more practical significance.
Theorem 3. Q-MKL is equivalent to the following model:

$$\min_{w,b,\xi,\delta\geq 0}\ \frac{1}{2}\sum_{m=1}^{M}\frac{\|w_m\|^2_{\mathcal{V}_m}}{\delta_m} + C\sum_{i=1}^{n}\xi_i + \|\delta\|_2^2 \quad \text{s.t.}\ \ y_i\left(\sum_{m=1}^{M}\langle w_m,\varphi_m(x_i)\rangle_{\mathcal{V}_m} + b\right) \geq 1-\xi_i, \quad Q^{-\frac{1}{2}}\delta \geq 0, \quad (6)$$

where $\varphi_m(\cdot)$ is the feature transform mapping data space to the $m$th virtual kernel, denoted as $\mathcal{V}_m$.
While the virtual kernels themselves may be indefinite, recall that $\delta = Q^{\frac{1}{2}}\beta$, and so the constraint $Q^{-\frac{1}{2}}\delta \geq 0$ is equivalent to $\beta \geq 0$, guaranteeing that the combined kernel will be p.s.d. This formulation is slightly different than the 2-norm MKL formulation, however it does not alter the theoretical guarantee of [21], providing a stronger result.
Renyi Entropy. Renyi entropy [20] significantly generalizes the usual notion of Shannon entropy [22, 23, 24], has applications in Statistics and many other fields, and has recently been proposed as an alternative to PCA [22]. Thm. 2 also points to an intuitive explanation of where the benefit of a $Q$ regularizer comes from, if we analyze the Renyi entropy of the distribution on kernels defined by $Q^{-1}$, treating $Q^{-1}$ as a kernel density estimator. The quadratic Renyi entropy of a probability measure is given as

$$H(p) = -\log\int p^2(x)\,dx.$$

Now, if we use a kernel function (i.e., $Q^{-1}$) and a finite sample (i.e., base kernels) as a kernel density estimator (cf. [15]), then with some normalization we can derive an estimate of the underlying probability $\hat{p}$, which is a distribution over base kernels. We can then interpret its Renyi entropy as a complexity measure on the space of combined kernels. Eq. (5.2) in [23] relates the virtual kernel traces to the Renyi entropy estimator of $Q^{-1}$ as $\int\hat{p}^2(x)\,dx = \frac{1}{N^2}\mathbf{1}^T Q^{-1}\mathbf{1}$,[1] which leads to a nice connection to Thm. 2. This view informs us that setting $Q^{-1} = I_{M\times M}$ (i.e., 2-norm MKL) has maximal Renyi entropy because it is maximally uninformative; adding structure to $Q^{-1}$ concentrates $\hat{p}$, reducing both its Renyi entropy and Rademacher complexity together.

[1] Note that this involves a Gaussian assumption, but [24] provides extensions to non-Gaussian kernels.
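Continuing the numerical sketches above, the quadratic Renyi entropy estimate for $Q^{-1}$ reduces to one line (again our own illustration; $N$ here plays the role of the number of base kernels):

```python
import numpy as np

def renyi_entropy_estimate(Q: np.ndarray) -> float:
    """Quadratic Renyi entropy estimate H = -log( (1/N^2) 1^T Q^{-1} 1 )."""
    N = Q.shape[0]
    ones = np.ones(N)
    return -np.log(ones @ np.linalg.solve(Q, ones) / N ** 2)

# Q^{-1} = I (2-norm MKL) attains the maximal value -log(1/N) = log(N).
N = 6
assert np.isclose(renyi_entropy_estimate(np.eye(N)), np.log(N))
```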
This series of results suggests an entirely new approach to analyzing the Rademacher complexity of MKL methods. The proof of Thm. 2 relies on decreasing a norm on the virtual kernel traces, which we now see directly relates to the Renyi entropy of $Q^{-1}$, as well as to decreasing the Rademacher complexity of the search space of combined kernels. It may even be possible, by directly analyzing Renyi entropy in a multi-kernel setting, to derive analogous bounds in, e.g., Indefinite Kernel Learning [25], because the virtual kernels are indefinite in general.
3.2 Special Cases: Q-SVM and relative margin

Before describing our optimization strategy, we discuss several variations on the Q-MKL model.

Q-SVM. An interesting special case of Q-MKL is Q-SVM, which generalizes several recent (but independently developed) models in the literature [26, 27]. If the base kernels are rank-1 (i.e., singleton features), then each coefficient $\beta_m$ effectively becomes a feature weight, and a 2-norm penalty on $\beta$ is a penalty on weights. Q-MKL therefore reduces to a form of SVM in which $\|w\|^2$ becomes $w^T Q w$. Thus, in such cases we can reduce the Q-MKL model to a simple QP, which we call Q-SVM. Please refer to the supplements for details, and some experimental results.

Relative Margin. Several interesting extensions to the SVM and MKL frameworks have been proposed which focus on relative margin methods [28, 29], which maximize the margin relative to the spread of the data. In particular, Q-MKL can be easily modified to incorporate the Relative Margin Machine (RMM) model [28] by replacing Module 1 in (7) with the RMM objective. Our alternating optimization approach (described next) is not affected by this addition; however, the additional constraints would mean that SMO-based strategies would not be applicable.
4 Optimization
We now present the core engine to solve (3). Most MKL implementations make use of an alternating minimization strategy which first minimizes the objective in terms of the SVM parameters, and then with respect to the sub-kernel weights $\beta$. Since the MKL problem is convex, this method leads to global convergence [7, 14], and minor modifications to standard SVM implementations are sufficient. Q-MKL generalizes $\|\beta\|_p^2$ to arbitrary convex quadratic functions, while the feasible set is the same as for MKL. This directly gives that the Q-MKL model in (3) is convex. We will broadly follow this strategy, but as will become clear shortly, interaction between sub-kernel weights makes the optimization of $\beta$ more involved (than [6, 14]), and requires alternative solution mechanisms. We may consider this process as a composition of two modules: one which solves for the SVM dual parameters ($\alpha$) with fixed $\beta$, and the other which solves for $\beta$ with fixed $\alpha$:
$$\text{(Module 1)}\qquad \max_{0\leq\alpha\leq C}\ \alpha^T\mathbf{1} - \alpha^T Y K Y\alpha \quad \text{s.t.}\ \ \alpha^T y = 0 \quad (7)$$

$$\text{(Module 2)}\qquad \min_{\beta\geq 0}\ \sum_m\frac{\|w_m\|^2}{\beta_m} \quad \text{s.t.}\ \ \beta^T Q\beta \leq 1 \quad (8)$$
Using a result from [14] we can replace the $\beta^T Q\beta$ objective term with a quadratic constraint, which gives the problem in (8). Notice that (8) has a sum of ratios with optimization variables in the denominator, while the constraint is quadratic; this means that standard convex optimization toolkits may not be able to solve this problem without significant reformulation from its canonical form in (8). Our approach is to search for a stationary point by representing the gradient as a non-linear system. Writing the gradient in terms of the Lagrange multiplier $\eta$, and setting it equal to 0, gives:

$$\frac{\|w_m\|^2_{\mathcal{H}_m}}{\beta_m^2} - \eta(Q\beta)_m = 0, \quad \forall m\in\{1,\cdots,M\}. \quad (9)$$
We now seek to eliminate $\eta$ so that the non-linear system will be limited to quadratic terms in $\beta$, allowing us to use a non-linear system solver. Let $W = \text{Diag}(\|w_1\|^2_{\mathcal{H}_1},\ldots,\|w_M\|^2_{\mathcal{H}_M})$, and $\beta^{-2} = (\beta_1^{-2},\ldots,\beta_M^{-2})$. We can then write $W\beta^{-2} = \eta(Q\beta)$. Now, solving for $\beta$ (on the right hand side) gives

$$\beta = \frac{1}{\eta}Q^{-1}W\beta^{-2}. \quad (10)$$
Because $Q \succ 0$ and $\beta \geq 0$, at optimality the constraint $\beta^T Q\beta \leq 1$ must be active. So, we can plug in the above identity to solve for $\eta$:

$$1 = \left(\frac{1}{\eta}Q^{-1}W\beta^{-2}\right)^T Q\left(\frac{1}{\eta}Q^{-1}W\beta^{-2}\right) \implies \eta = \sqrt{(W\beta^{-2})^T Q^{-1}(W\beta^{-2})} = \|W\beta^{-2}\|_{Q^{-1}}, \quad (11)$$
which shows that $\eta$ effectively normalizes $W\beta^{-2}$ according to $Q^{-1}$. We can now solve (10) in terms of $\beta$ using a nonlinear root finder, such as the GNU Scientific Library; in practice this method is quite efficient, typically requiring 10 to 20 outer iterations. Putting these parts together, we propose the following algorithm for optimizing Q-MKL:
Algorithm 1. Q-MKL
Input: Kernels $\{K_1,\cdots,K_M\}$; $Q \succ 0 \in \mathbb{R}^{M\times M}$; labels $y\in\{\pm 1\}^N$.
Outputs: $\alpha$, $b$, $\beta$
$\beta^{(0)} = \frac{1}{M}$; $t = 0$ (iterations)
while not optimal do
    $K^{(t)} \leftarrow \sum_m \beta_m^{(t)} K_m$
    $\alpha^{(t)}, b^{(t)} \leftarrow \text{SVM}(K^{(t)}, C, y)$  (Module 1, (7))
    $W_{mm}^{(t)} = \alpha^{(t)T} K_m \alpha^{(t)} (\beta_m^{(t)})^2$
    $\beta^{(t+1)} \leftarrow \arg\min$ (Problem (8))  (Module 2, (8))
    $t = t + 1$
end while
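For concreteness, a minimal Python sketch of this alternating scheme is given below. It is our own illustration, not the authors' Shogun implementation: Module 1 is delegated to scikit-learn's SVC with a precomputed kernel, and Module 2 solves the fixed point of Eq. (10) with scipy.optimize.root, with $\eta$ eliminated via Eq. (11); the function name and positivity safeguard are our own choices.

```python
import numpy as np
from scipy.optimize import root
from sklearn.svm import SVC

def qmkl_fit(Ks, Q, y, C=1.0, iters=20):
    """Alternating-optimization sketch of Algorithm 1 (Q-MKL).

    Ks : list of M precomputed (n x n) base kernel matrices
    Q  : (M x M) positive definite interaction matrix
    y  : labels in {-1, +1}
    """
    M, n = len(Ks), Ks[0].shape[0]
    Qinv = np.linalg.inv(Q)
    beta = np.full(M, 1.0 / M)

    for _ in range(iters):
        K = sum(b * Km for b, Km in zip(beta, Ks))       # combined kernel
        svm = SVC(C=C, kernel="precomputed").fit(K, y)   # Module 1, Eq. (7)
        # dual_coef_ holds alpha_i * y_i on support vectors, so
        # ||w_m||^2 = beta_m^2 (alpha.y)^T K_m (alpha.y)
        ay = np.zeros(n)
        ay[svm.support_] = svm.dual_coef_.ravel()
        W = np.diag([b**2 * ay @ Km @ ay for b, Km in zip(beta, Ks)])

        def fixed_point(b):
            # Eq. (10) with eta from Eq. (11): b = Q^{-1} W b^{-2} / ||W b^{-2}||_{Q^{-1}}
            v = W @ (b ** -2.0)
            eta = np.sqrt(v @ Qinv @ v)
            return Qinv @ v / eta - b

        sol = root(fixed_point, beta)                    # Module 2, Eq. (8)
        beta = np.abs(sol.x)                             # crude positivity safeguard
        beta /= np.sqrt(beta @ Q @ beta)                 # enforce beta^T Q beta = 1
    return beta, svm
```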
4.1 Convergence
We can show that our model can be solved optimally by noting that Module 2 can be precisely
optimized at each step. If Module 2 cannot be solved precisely, then Algorithm 1 may not converge.
The following result assures us that indeed Module 2 can be solved precisely by reducing it to a
convex Semi-Definite Program (SDP).
Theorem 4. The solution to Problem (8) is the same as the solution to the following SDP:

$$\min_{\beta\geq 0,\,\rho\geq 0,\,Z\in\mathbb{R}^{M\times M}}\ \sum_m \|w_m\|^2\rho_m \quad (12)$$

$$\text{subject to}\quad \begin{pmatrix}\rho_m & 1\\ 1 & \beta_m\end{pmatrix}\succeq 0\ \ \forall m, \qquad \begin{pmatrix}1 & \beta^T\\ \beta & Z\end{pmatrix}\succeq 0, \qquad \text{Tr}(QZ)\leq 1. \quad (13)$$

Proof. The first PSD constraint in (13) requires that $\rho_m = \beta_m^{-1}$ at optimality, meaning that objective (12) is the same as that of Problem (8). From the second we have $Z = \beta\beta^T$, and so $\text{Tr}(QZ) = \beta^T Q\beta$; therefore the feasible sets are equivalent.
Figure 1: Comparison of spatial smoothness of weights chosen by Q-SVM and SVM with gray matter (GM) density maps. Left (a-b): weights given by a standard SVM; right (c-d): weights given by Q-SVM.
The last PSD constraint is only necessary to ensure that $\beta^T Q\beta \leq 1$, and can be replaced with that quadratic constraint. Doing so yields a Second-Order Cone Program (SOCP) which is also amenable to standard solvers. Note that it is not necessary to solve for $\beta$ as an SDP, though it may nevertheless be an effective solution mechanism, depending on the size and characteristics of the problem.
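As an illustration of this quadratic-constraint variant, the following CVXPY sketch (ours; CVXPY is one possible off-the-shelf solver, not the one used by the authors) solves Module 2 by minimizing $\sum_m \|w_m\|^2\rho_m$ with $\rho_m \geq 1/\beta_m$ and $\beta^T Q\beta \leq 1$:

```python
import cvxpy as cp
import numpy as np

def solve_module2(w_sq: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Solve min sum_m w_sq[m] * rho_m  s.t.  rho_m >= 1/beta_m,  beta^T Q beta <= 1.

    w_sq : vector of squared classifier norms ||w_m||^2 (held fixed by Module 1)
    Q    : interaction matrix (must be numerically PSD for the quadratic form)
    """
    M = len(w_sq)
    beta = cp.Variable(M, nonneg=True)
    rho = cp.Variable(M, nonneg=True)
    constraints = [
        rho >= cp.inv_pos(beta),       # rho_m >= 1/beta_m, tight at the optimum
        cp.quad_form(beta, Q) <= 1,    # quadratic constraint replacing Tr(QZ) <= 1
    ]
    cp.Problem(cp.Minimize(w_sq @ rho), constraints).solve()
    return beta.value
```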
5 Experiments
We performed extensive experiments to validate Q-MKL, examine the effect it has on $\beta$, and assess its advantages in the context of our motivating neuroimaging application. In these main experiments, we demonstrate how domain knowledge can be adapted to improve the algorithm's performance. Our focus on a practical application is intended as a demonstration of how domain knowledge can be seamlessly incorporated into a learning model, giving significant gains in accuracy. We also performed experiments on the UCI repositories, which are described in detail in the supplements. Briefly, in these experiments Q-MKL performed as well as, or better than, 1- and 2-norm MKL on most datasets, showing that even in the absence of significant domain knowledge, Q-MKL can still perform about as well as existing MKL methods.
Image preprocessing. In our main experiments we used brain scans of AD patients and Cognitively Normal healthy controls (CN) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) [30] in a set of cross-validation experiments. ADNI is a landmark study sponsored by the NIH, major pharmaceuticals and others to determine the extent to which multi-modal brain imaging can help predict onset, and monitor progression of, AD. To this end, MKL-type methods have already defined the state of the art for this application [3, 4]. For our experiments, 48 AD subjects and 66 controls were chosen who had both T1-weighted MR scans and Fluoro-Deoxy-Glucose PET (FDG-PET) scans at two time-points two years apart. Standard diffeomorphic methods, known generally as Voxel-Based Morphometry (VBM) (see SPM, www.fil.ion.ucl.ac.uk/spm/), were used to register scans to a common template and calculate Gray Matter (GM) densities at each voxel in the MR scans. We also used Tensor-Based Morphometry (TBM) to calculate maps of longitudinal voxel-wise expansion or contraction over a two year period. Feature selection was performed separately in each set of images by sorting voxels by t-statistic (calculated using training data), and choosing the highest 2000, 5000, 10000, ..., 250000 voxels in 8 stages. We used linear, quadratic, and Gaussian kernels: a total of 24 kernels per set (GM maps, TBM maps, baseline FDG-PET, FDG-PET at 2-year follow-up), for a total of 96 kernels. For the Q-matrix we used the Laplacian of the covariance between single-kernel $\alpha$ parameters (recall the motivation from [19] in Section 3), plus a block-diagonal representing clusters of kernels derived from the same imaging modalities.
5.1 Spatial SVM

Before describing our main experiments, we first return to the Q-SVM model briefly mentioned in Section 3.2. To demonstrate that Q-regularizers indeed influence the learned classifier, we performed classification experiments with the Laplacian of the inverse distance between voxels as a $Q$ matrix, and voxel-wise GM density (VBM) as features. Using 10-fold cross-validation with 10 realizations, Q-SVM's accuracy was 0.819, compared to the regular SVM's accuracy of 0.792. These accuracies are significantly different at the $\alpha = 0.0005$ level under a paired t-test. In Fig. 1 we show a comparison of weights trained by a regular SVM (a-b), and those trained by a spatially regularized SVM (c-d). Note the greater spatial smoothness, and lower incidence of isolated "pockets".
5.2 Multi-modality Alzheimer's disease (AD) prediction
Next, we performed multi-modality AD prediction experiments using all 96 kernels across two modalities: MR provides structural information, while FDG-PET assesses hypo-metabolism. Further, we may use several image processing pipelines. Due to the inherent similarities in how the various kernels are derived, there are clear cluster structures / behaviors among the kernels, which we would like to exploit using Q-MKL. We used 10-fold cross-validation with 30 realizations, for a total of 300 folds. Accuracy, sensitivity and specificity were averaged over all folds. For comparison we also examined 1-, 1.5-, and 2-norm MKL. As MKL methods have emerged as the state of the art in this domain [3, 4], and have performed favorably in extensive evaluations against various baselines such as single-kernel methods and naïve combinations, we focus our analysis on comparison with existing MKL methods. Results are shown in Table 1. Q-MKL had the highest performance overall, reducing the error rate from 12.5% to 11.2% (significant at the $\alpha = 0.001$ level). Note that the in vivo diagnostic error rate for AD is believed to be near 8-10%, meaning that this improvement is quite significant. The primary benefit of current sparse MKL methods is that they effectively filter out uninformative or noisy kernels; however, the kernels used in these experiments are all derived from clinically relevant neuroimaging data, and are thus highly reliable. Q-MKL's performance suggests that it boosts the overall accuracy.

Regularizer                    Acc.    Sens.   Spec.
$\|\beta\|_1$-MKL              0.864   0.771   0.931
$\|\beta\|_{1.5}$-MKL          0.875   0.790   0.936
$\|\beta\|_2$-MKL              0.875   0.789   0.938
Cov$_\alpha$                   0.884   0.780   0.942
Lap.(Cov$_\alpha$)             0.884   0.785   0.955
Lap.(Cov$_\alpha$) + diag      0.888   0.786   0.956

Table 1: Comparison of Q-MKL & MKL. Bold numerals indicate methods not differing from the best at the 0.01 level using a paired t-test. Lap. = "Laplacian"; diag = "Block-diagonal".
Virtual kernel analysis. We next turn to an analysis of the covariance structures found in the data empirically, as a concrete demonstration of the type of patterns towards which the Q-MKL regularizer biases $\beta$. Recall that $Q$'s eigen-vectors can show which patterns are encouraged or deterred, in proportion to their eigen-values. In Fig. 2, we compare the $Q$ matrix used in the ADNI experiments, based on the correlations of single-kernel $\alpha$ parameters (a), the 3 least eigenvectors of its graph Laplacian (b-d), and the $\beta$ vector optimized by Q-MKL (e). In (a), we can see that while the VBM (first block of 24 kernels) and TBM (second block of kernels) are each highly correlated internally, they appear to be fairly uncorrelated to one another. The FDG-PET kernels (last 48 kernels) are much more strongly interrelated. Interestingly, the first eigenvector is almost entirely devoted to two large blocks of kernels: those which come from MRI data, and those which come from FDG-PET data. The positive elements in the off-diagonal encourage sparsity within these two super-blocks of kernels. Somewhat to the contrary, the next two eigenvectors have negative weights in the region between TBM and VBM kernels, encouraging non-sparsity between these two blocks. In (e) we see that the optimized $\beta$ discards most TBM kernels (but not all), puts the strongest weight on a few VBM kernels, and keeps a wider distribution over the FDG-PET kernels.
Conclusion. MKL is an elegant method for aggregating multiple data views, and is being extensively adopted for a variety of problems in machine learning, computer vision, and neuroimaging. Q-MKL extends this framework to exploit higher order interactions between kernels using supervised, unsupervised, or domain-knowledge driven measures. This flexibility can impart greater control over how the model utilizes cluster structure among kernels, and effectively encourages cancellation of errors wherever possible. We have presented a convex optimization method to efficiently solve the resultant model, and shown experiments on a challenging problem of identifying AD based on multi-modal brain imaging data (obtaining statistically significant improvements). Our implementation will be made available within the Shogun toolbox (www.shogun-toolbox.org).
Figure 2: Cov. Q used in AD experiments (a); three least graph Laplacian eigen-vectors (b-d); outer product of optimized $\beta$ (e). Note the block structure in (a-d) relating to the imaging modalities and kernel functions.
References
[1] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157-1182, 2003.
[2] P. V. Gehler and S. Nowozin. Let the kernel figure it out; principled learning of pre-processing for kernel classifiers. CVPR, 2009.
[3] C. Hinrichs, V. Singh, G. Xu, and S. C. Johnson. Predictive markers for AD in a multi-modality framework: An analysis of MCI progression in the ADNI population. Neuroimage, 55(2):574-589, 2011.
[4] D. Zhang, Y. Wang, L. Zhou, H. Yuan, and D. Shen. Multimodal classification of Alzheimer's disease and mild cognitive impairment. NeuroImage, 55(3):856-867, 2011.
[5] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27-72, 2004.
[6] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. JMLR, 7:1531-1565, 2006.
[7] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. JMLR, 9:2491-2521, 2008.
[8] P. V. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[9] J. Yang, Y. Li, Y. Tian, L. Duan, and W. Gao. Group-sensitive multiple kernel learning for object categorization. In ICCV, 2009.
[10] P. Vemuri, J. L. Gunter, M. L. Senjem, J. L. Whitwell, K. Kantarci, D. S. Knopman, et al. Alzheimer's disease diagnosis in individual subjects using structural MR images: validation studies. Neuroimage, 39(3):1186-1197, 2008.
[11] M. Szafranski, Y. Grandvalet, and A. Rakotomamonjy. Composite kernel learning. Machine Learning, 79(1):73-103, 2010.
[12] F. R. Bach, G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, 2004.
[13] F. Orabona, L. Jie, and B. Caputo. Online-batch strongly convex multi kernel learning. In CVPR, 2010.
[14] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien. $\ell_p$-norm multiple kernel learning. JMLR, 12:953-997, 2011.
[15] C. S. Ong, A. Smola, and B. Williamson. Learning the kernel with hyperkernels. JMLR, 6:1045-1071, 2005.
[16] L. Mukherjee, V. Singh, J. Peng, and C. Hinrichs. Learning kernels for variants of normalized cuts: Convex relaxations and applications. CVPR, 2010.
[17] P. V. Gehler and S. Nowozin. Infinite kernel learning. Technical Report 178, Max Planck Institute for Biological Cybernetics, October 2008.
[18] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In NIPS, 2008.
[19] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In ICML, 2001.
[20] A. Rényi. On measures of entropy and information. In Fourth Berkeley Symposium on Mathematical Statistics and Probability, pages 547-561, 1961.
[21] C. Cortes, M. Mohri, and A. Rostamizadeh. Generalization bounds for learning kernels. In ICML, 2010.
[22] R. Jenssen. Kernel entropy component analysis. IEEE Trans. PAMI, pages 847-860, 2009.
[23] M. Girolami. Orthogonal series density estimation and the kernel eigenvalue problem. Neural Computation, 14(3):669-688, 2002.
[24] D. Erdogmus and J. C. Principe. Generalized information potential criterion for adaptive system training. IEEE Trans. Neural Networks, 13(5):1035-1044, 2002.
[25] M. Kowalski, M. Szafranski, and L. Ralaivola. Multiple indefinite kernel learning with mixed norm regularization. In ICML, 2009.
[26] S. Bergsma, D. Lin, and D. Schuurmans. Improved natural language learning via variance-regularization support vector machines. In CoNLL, 2010.
[27] R. Cuingnet, M. Chupin, H. Benali, and O. Colliot. Spatial and anatomical regularization of SVM for brain image analysis. In NIPS, 2010.
[28] P. Shivaswamy and T. Jebara. Maximum relative margin and data-dependent regularization. JMLR, 11:747-788, 2010.
[29] K. Gai, G. Chen, and C. Zhang. Learning kernels with radiuses of minimum enclosing balls. In NIPS, 2010.
[30] S. G. Mueller, M. W. Weiner, et al. Ways toward an early diagnosis in Alzheimer's disease: The Alzheimer's Disease Neuroimaging Initiative. J. of the Alzheimer's Association, 1(1):55-66, 2005.
Persistent Homology for Learning Densities with
Bounded Support
Florian T. Pokorny, Carl Henrik Ek, Hedvig Kjellström, Danica Kragic*
Computer Vision and Active Perception Lab, Centre for Autonomous Systems
School of Computer Science and Communication
KTH Royal Institute of Technology, Stockholm, Sweden
{fpokorny, chek, hedvig, danik}@csc.kth.se

*This work was supported by the EU projects FLEXBOT (FP7-ERC-279933) and TOMSY (IST-FP7-270436) and the Swedish Foundation for Strategic Research.
Abstract
We present a novel method for learning densities with bounded support which enables us to incorporate "hard" topological constraints. In particular, we show how emerging techniques from computational algebraic topology and the notion of persistent homology can be combined with kernel-based methods from machine learning for the purpose of density estimation. The proposed formalism facilitates learning of models with bounded support in a principled way, and, by incorporating persistent homology techniques in our approach, we are able to encode algebraic-topological constraints which are not addressed in current state of the art probabilistic models. We study the behaviour of our method on two synthetic examples for various sample sizes and exemplify the benefits of the proposed approach on a real-world dataset by learning a motion model for a race car. We show how to learn a model which respects the underlying topological structure of the racetrack, constraining the trajectories of the car.
1 Introduction
Probabilistic methods based on Gaussian densities have celebrated successes throughout machine learning. They are the crucial ingredient in Gaussian mixture models (GMM) [1], Gaussian processes [2] and Gaussian mixture regression (GMR) [3], which have found applications in fields such as robotics, speech recognition and computer vision [1, 4, 5], to name just a few. While Gaussian distributions are convenient to work with for several theoretical and practical reasons (the central limit theorem, easy computation of means and marginals, etc.), they do fall into the class of densities $f$ on $\mathbb{R}^d$ for which $\text{supp}\,f = \mathbb{R}^d$; i.e. they assign a non-zero probability to every subset with non-zero volume in $\mathbb{R}^d$. This property of Gaussians can be problematic if an application dictates that certain subsets of space should constitute a "forbidden" region having zero probability mass. A simple example would be a probabilistic model of admissible positions of a robot in an indoor environment, where one wants to assign zero, rather than just "low", probability to positions corresponding to collisions with the environment. Encoding such constraints using e.g. a Gaussian mixture model is not natural since it assigns potentially low, but non-zero, probability mass to every portion of space.

In contrast to the above Gaussian models, we consider non-parametric density estimators based on spherical kernels with bounded support. As we shall explain, this enables us to study topological properties of the support region $\Omega_\epsilon$ for such estimators. Kernel-based density estimators are well-established in the statistical literature [6], with the basic idea being that one should put a rescaled version of a given model density over each observed data-point to obtain an estimate for the probability density from which the data was sampled. The choice of rescaling, or "bandwidth", $\epsilon$ has been studied with respect to the standard $L_1$ and $L_2$ error and is still an active area of research [7]. We focus particularly on spherical truncated Gaussian kernels here, which have been somewhat overlooked as a tool for probabilistic modelling. An important aspect of these kernels is that
their associated conditional and marginal distributions can be computed analytically, enabling us to
efficiently work with them in the context of probabilistic inference.
A different interpretation of a density with support in an $\epsilon$-ball can be given using the notion of bounded noise. There, one assumes that observations are distorted by noise following a density with bounded support (instead of e.g. Gaussian noise). Bounded noise models are used in the signal processing community for robust filtering and estimation [8, 9], but to our knowledge, we are the first to combine densities with bounded support and topology to model the underlying structure of data. Thinking of a set of observations $S = \{X_1,\ldots,X_n\}\subset\mathbb{R}^n$ as "fuzzy up to noise in an $\epsilon$-ball" naturally leads one to consider the space $\Omega_\epsilon(S) = \bigcup_i B_\epsilon(X_i)$ of balls of size $\epsilon$ around the data points. Persistent homology is a novel tool for studying topological properties of spaces such as $\Omega_\epsilon(S)$ which has emerged from the field of computational algebraic topology in recent years [10, 11]. Using persistent homology, it becomes possible to study clustering, periodicity and, more generally, the existence of "holes" of various dimensions in $\Omega_\epsilon(S)$ for $\epsilon$ lying in an interval. Starting from the basic observation that one can construct a kernel-based density estimator $\hat{f}_\epsilon$ whose region of support is exactly $\Omega_\epsilon(S)$, this paper investigates the interplay between the topological information contained in $\Omega_\epsilon(S)$ and a corresponding density estimate. Specifically, we make the following contributions:
- Given prior topological information about $\text{supp}\,f = \Omega$, we define a topologically admissible bandwidth interval $[\epsilon_{min},\epsilon_{max}]$ and propose and evaluate a topological bandwidth selector $\epsilon_{top}\in[\epsilon_{min},\epsilon_{max}]$.
- Given no prior topological information, we explain how persistent homology can be of use to determine a topologically admissible bandwidth interval.
- We describe how additional constraints defining a forbidden subset $F\subset\mathbb{R}^n$ of the parameter-space can be incorporated into our topological bandwidth estimation framework.
- We provide quantitative results on synthetic data in 1D and 2D evaluating the expected $L_2$ errors for density estimators with topologically chosen bandwidth values $\epsilon\in\{\epsilon_{min},\epsilon_{mid},\epsilon_{max},\epsilon_{top}\}$. We carry out this evaluation for various spherical kernels and compare our results to an asymptotically optimal bandwidth choice.
- We use our method in a learning by demonstration [12] context and compare our results with a current state of the art Gaussian mixture regression method.
2 Background

2.1 Kernel-based density estimation
Let $S = \{X_1,\ldots,X_n\}\subset\mathbb{R}^d$ be an i.i.d. sample arising from a probability density $f:\mathbb{R}^d\to\mathbb{R}$. Kernel-based density estimation [13, 14, 15] is an approach for reconstructing $f$ from the sample by means of an estimator

$$\hat{f}_{\epsilon,n}(x) = \frac{1}{n\epsilon^d}\sum_{i=1}^{n}K\left(\frac{x-X_i}{\epsilon}\right),$$

where the kernel function $K:\mathbb{R}^d\to\mathbb{R}$ is a suitably chosen probability density. In this context, $\epsilon > 0$ is called the bandwidth. If one is only interested in an estimator that minimizes the expected $L_2$ norm of $\hat{f}_{\epsilon,n}-f$, the choice of $\epsilon$ is crucial, while the particular choice of kernel $K$ is generally less important [7, 6]. Let $\{\epsilon_n\}_{n=1}^{\infty}$ be a sequence of positive bandwidth values depending on the sample size $n$. It follows from classical results [14, 15] that, for any sufficiently well-behaved density $K$, $\lim_{n\to\infty}E[(\hat{f}_{\epsilon_n,n}(x)-f(x))^2] = 0$ provided that $\lim_{n\to\infty}\epsilon_n = 0$ and $\lim_{n\to\infty}n\epsilon_n^d = \infty$. Despite this encouraging result, the question of determining the best bandwidth for a given sample is an ongoing research topic, and the interested reader is referred to the review [7] for an in-depth discussion. One branch of methods [6] tries to minimize the Mean Integrated Squared Error,

$$MISE(\epsilon_n) = E\left[\int\left(\hat{f}_{\epsilon_n,n}(x)-f(x)\right)^2 dx\right].$$
An asymptotic analysis reveals that, under mild conditions on $K$ and $f$ [6], $MISE(\epsilon_n)$ can be approximated asymptotically by $AMISE(\epsilon_n)$ as $n\to\infty$ if $\lim_{n\to\infty}\epsilon_n = 0$ and $\lim_{n\to\infty}n\epsilon_n^d = \infty$. Here, AMISE denotes the Asymptotic Mean Integrated Squared Error. If we consider only spherical kernels that are symmetric functions of the norm $\|x\|$ of their input variable $x$, an asymptotic analysis [6] shows that, in dimension $d$,

$$AMISE(\epsilon_n) = \frac{1}{n\epsilon_n^d}\int K(x)^2\,dx + \frac{\epsilon_n^4}{4}\mu_2(K)^2\int\{\text{tr}(\text{Hess}\,f(x))\}^2\,dx,$$

where $\mu_2(K) = \int x_j^2 K(x)\,dx$ is independent of the choice of $j\in\{1,\ldots,d\}$ by the spherical symmetry, and $\text{tr}(\text{Hess}\,f(x))$ denotes the trace of the Hessian of $f$ at $x$. Due to the availability of a relatively simple explicit formula for AMISE, a large class of bandwidth selection methods attempt to estimate and minimize AMISE instead of working with MISE directly. One finds that AMISE is minimized for

$$\epsilon_{amise}(n) = \left(\frac{1}{n}\cdot\frac{d\int K(x)^2\,dx}{\mu_2(K)^2\int\{\text{tr}(\text{Hess}\,f(x))\}^2\,dx}\right)^{\frac{1}{4+d}}.$$
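As a worked example of this formula (our own; the choice of a 1D standard normal target and a uniform kernel is purely illustrative), the constants in $\epsilon_{amise}$ can be evaluated with numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad

d = 1                                   # dimension
K = lambda x: 0.5 * (abs(x) <= 1)       # uniform kernel on [-1, 1]
f2 = lambda x: (x**2 - 1) * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # f'' for N(0,1)

int_K2 = quad(lambda x: K(x) ** 2, -1, 1)[0]          # integral of K^2 = 1/2
mu2 = quad(lambda x: x**2 * K(x), -1, 1)[0]           # mu_2(K) = 1/3
int_hess = quad(lambda x: f2(x) ** 2, -10, 10)[0]     # integral of (f'')^2 = 3/(8 sqrt(pi))

def eps_amise(n: int) -> float:
    return ((d * int_K2) / (n * mu2**2 * int_hess)) ** (1.0 / (4 + d))

print(eps_amise(1000))  # AMISE-optimal bandwidth for n = 1000 samples
```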
Since $f$ is assumed unknown in real world examples, so-called plug-in methods can be used to approximate $\epsilon_{amise}$ [7]. In this paper, we will work with two synthetic examples of densities for which we can compute $\epsilon_{amise}$ numerically in order to benchmark our topological bandwidth selection procedure. For our experiments, we choose three spherical kernels $K:\mathbb{R}^d\to\mathbb{R}$ that are defined to be zero outside the unit ball $B_1(0)$ and are defined, for $\|x\|\leq 1$, by

$$K_u = \text{Vol}(B_1(0))^{-1}\ \text{(uniform)}, \qquad K_c(x) = \frac{d(d+1)\Gamma(\frac{d}{2})}{2\pi^{\frac{d}{2}}}(1-\|x\|)\ \text{(conic)},$$

$$K_t(x) = (2\pi\sigma^2)^{-\frac{d}{2}}\left(1-\frac{\Gamma\left(\frac{d}{2},\frac{1}{2\sigma^2}\right)}{\Gamma\left(\frac{d}{2}\right)}\right)^{-1}e^{-\frac{\|x\|^2}{2\sigma^2}}\ \text{(truncated Gaussian)},$$

where $\Gamma(\cdot,\cdot)$ denotes the upper incomplete gamma function. These kernels can be defined in any dimension $d > 0$ and are spherical, i.e. they are functions of the radial distance to the origin only, which enables us to efficiently evaluate them and to sample from the corresponding estimator $\hat{f}_{\epsilon,n}$ even when the dimension $d$ is very large. We will denote the standard spherical Gaussian by $K_e(x) = (2\pi\sigma^2)^{-\frac{d}{2}}e^{-\frac{\|x\|^2}{2\sigma^2}}$.
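To make the bounded-support construction concrete, here is a small sketch (ours) of the uniform and conic kernels together with the estimator $\hat{f}_{\epsilon,n}$; note that $\hat{f}_{\epsilon,n}(x) > 0$ exactly when $x$ lies in $\Omega_\epsilon(S)$, the union of $\epsilon$-balls around the sample:

```python
import numpy as np
from scipy.special import gamma

def K_uniform(x, d):
    """Uniform kernel on the unit ball B_1(0) in R^d."""
    vol = np.pi ** (d / 2) / gamma(d / 2 + 1)          # Vol(B_1(0))
    return (np.linalg.norm(x) <= 1) / vol

def K_conic(x, d):
    """Conic kernel on B_1(0): proportional to (1 - ||x||)."""
    r = np.linalg.norm(x)
    c = d * (d + 1) * gamma(d / 2) / (2 * np.pi ** (d / 2))
    return c * (1 - r) * (r <= 1)

def f_hat(x, S, eps, kernel):
    """Bounded-support density estimate; supp = union of eps-balls around S."""
    d = S.shape[1]
    return sum(kernel((x - Xi) / eps, d) for Xi in S) / (len(S) * eps ** d)

S = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])      # three sample points
print(f_hat(np.array([0.5, 0.0]), S, eps=1.0, kernel=K_conic))  # inside Omega_eps
print(f_hat(np.array([3.0, 0.0]), S, eps=1.0, kernel=K_conic))  # outside -> exactly 0
```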
Figure 1: $\frac{1}{4^2}K(\frac{x}{4})$ for the indicated kernels ((a) $K_u$, (b) $K_c$, (c) $K_t$ with $\sigma^2 = \frac{1}{4}$) and a corresponding estimator $\hat{f}_{4,3}$ for three sample points.

2.2 Persistent homology
Consider the point cloud $S$ shown in Figure 2(a). For a human observer, it is noticeable that $S$ looks "circular". One can reformulate the existence of the "hole" in Figure 2(a) in a mathematically precise way using persistent homology [16], which has recently gained increasing traction as a tool for the analysis of structure in point-cloud data [10].

Figure 2: Noisy data concentrated around a circle (a) and corresponding barcodes in dimension zero (d) and one (e). In (b) and (c), we display $\Omega_\epsilon$ for $\epsilon = 0.25, 0.5$ respectively, together with the corresponding Vietoris-Rips complex $V_{2\epsilon}$ which we use for approximating the topology of $\Omega_\epsilon$. While the vertical axis in the $i$th barcode has no special meaning, the horizontal axis displays the $\epsilon$ parameter of $V_{2\epsilon}$. At any fixed $\epsilon$ value, the number of bars lying above and containing $\epsilon$ is equal to the $i$th Betti number of $V_{2\epsilon}$. The shaded region highlights the $\epsilon$-interval for which $V_{2\epsilon}$ has one connected component (i.e. $b_0(V_{2\epsilon}) = 1$) in (d), and for which a single "circle" (i.e. $b_1(V_{2\epsilon}) = 1$) is detected in (e).
In the approach of [10], one starts with a subset $\Omega \subset \mathbb{R}^d$ and assumes that there exists some probability density f on $\mathbb{R}^d$ that is concentrated near $\Omega$. Given an i.i.d. sample $S = \{X_1, \ldots, X_n\}$ from the corresponding probability distribution, one of the aims of persistent homology in this setting is to recover some of the topological structure of $\Omega$, namely the homology groups $H_i(\Omega, \mathbb{Z}_2)$ for $i = 1, \ldots, d$, from the sample S. Each $H_i(\Omega, \mathbb{Z}_2)$ is a vector space over $\mathbb{Z}_2$ and its dimension $b_i(\Omega)$ is called the $i$th Betti number. One of the properties of homology is that homology groups are invariant under a large class of deformations (i.e. homotopies) of the underlying topological space. A popular example of such a deformation is to consider a teacup that is continuously deformed into a doughnut. One can think of $b_0(\Omega)$ as measuring the number of connected components while, roughly, $b_i(\Omega)$, for $i > 0$, describes the number of $i$-dimensional holes of $\Omega$. A closed curve in $\mathbb{R}^d$ that does not self-intersect can for example be classified by $b_0 = 1$ (it has one connected component) and $b_1 = 1$ (it is topologically a circle). The reader is encouraged to consult [17] for a rigorous introduction to homotopies and related concepts.
Given a discrete sample S and a distance parameter $\epsilon > 0$, consider the set $\Sigma_\epsilon(S) = \bigcup_{i=1}^n B_\epsilon(X_i)$, for $\epsilon \in [0, \infty)$, where $B_\epsilon(p) = \{x \in \mathbb{R}^d : \|x - p\| \leq \epsilon\}$. In Figure 2(b) and 2(c) this set is displayed for increasing $\epsilon$ values. $\Sigma_\epsilon(S)$ is a topological space and, in the case where $\Omega$ is a smooth compact submanifold in $\mathbb{R}^d$ and f is in a very restrictive class of densities with support in a small tubular neighbourhood around $\Omega$, [18, 11] have proven results showing that $\Sigma_\epsilon(S)$ is homotopy equivalent to $\Omega$ with high probability for certain large sample sizes. The key insight of persistent homology is that we should study not just the homology of $\Sigma_\epsilon(S)$ for a fixed value of $\epsilon$ but for all $\epsilon \in [0, \infty)$ simultaneously. The idea is then to study how the homology groups $H_i(\Sigma_\epsilon(S), \mathbb{Z}_2)$ change with $\epsilon$, and one records the changes in Betti number using a barcode [10] (see e.g. Figure 2(d) and 2(e)).
Computing the barcode corresponding to $H_i(\Sigma_\epsilon(S), \mathbb{Z}_2)$ directly (via the Cech complex given by our covering of balls $B_\epsilon(X_1), \ldots, B_\epsilon(X_n)$ [10]) is computationally very expensive, and one hence computes the barcode corresponding to the homology groups of the Vietoris-Rips complex $V_{2\epsilon}(S)$. This complex is an abstract complex with vertices given by the elements of S, where we insert a k-simplex for every set of k+1 distinct elements of S such that any two are within distance less than $2\epsilon$ of each other (see [10]). The homology groups of $V_{2\epsilon}(S)$ are not necessarily isomorphic to the homology groups of $\Sigma_\epsilon(S)$, but can serve as an approximation due to the interleaving property of the Vietoris-Rips and Cech complexes; see e.g. Prop. 2.6 of [10]. For the computation of barcodes, we use the javaPlex software [19]. The computed $i$th barcode then records the birth and death times of topological features of $V_{2\epsilon}$ in dimension i as we increase $\epsilon$ from zero to some maximal value M, where M is called the maximal filtration value.
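The paper's barcodes are computed with javaPlex [19]; purely as an illustration, the same Vietoris-Rips pipeline can be sketched in Python with the gudhi library (our substitution, not the authors' toolchain). Gudhi's filtration value for an edge is the pairwise distance, so the complex alive at filtration value $2\epsilon$ plays the role of $V_{2\epsilon}(S)$.

```python
import gudhi  # any Vietoris-Rips implementation would serve equally well

def rips_barcode(S, max_filtration, max_dim=1):
    """Barcode of the Vietoris-Rips filtration of the sample S, returned
    as a list of (dimension, (birth, death)) pairs."""
    rips = gudhi.RipsComplex(points=S, max_edge_length=max_filtration)
    tree = rips.create_simplex_tree(max_dimension=max_dim + 1)
    return tree.persistence()

def betti_at(barcode, eps, dim):
    """Betti number b_dim(V_{2 eps}): the number of bars of dimension
    `dim` lying above and containing the filtration value 2*eps."""
    t = 2.0 * eps
    return sum(1 for d, (birth, death) in barcode
               if d == dim and birth <= t < death)
```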
3 Our framework
Given a dataset $S = \{X_1, \ldots, X_n\} \subset \mathbb{R}^d$, sampled in an i.i.d. fashion from an underlying probability distribution with density $f : \mathbb{R}^d \to \mathbb{R}$ with bounded support $\Omega$, we propose to recover f using a kernel density estimator $\hat{f}_{\sigma,n}$ in a way that respects the algebraic topology of $\Omega$. For this, we consider only $\hat{f}_{\sigma,n}$ based on kernels K with $\operatorname{supp} K = B_1(0)$, and in particular, we experiment with $K_t$, $K_u$ and $K_c$. For such kernels, $\operatorname{supp} \hat{f}_{\sigma,n} = \Sigma_\sigma(S) = \bigcup_{i=1}^n B_\sigma(X_i)$, whose topological features we can approximate by computing the barcodes for $V_{2\sigma}$.
If no prior information on the topological features of $\Omega$ is given, we can then inspect these barcodes and search for large intervals in which the Betti numbers do not change. This approach is used in [10], who demonstrated that topological features of data can be discovered in this way. Alternatively, one might be given prior information on the Betti numbers (e.g. using knowledge of periodicity, number of clusters, or inequalities involving Betti numbers) that one can incorporate by searching for $\sigma$-intervals on which such constraints are satisfied. Geometric constraints on the data can additionally be incorporated by restricting the allowable $\sigma$-intervals to values for which $\Sigma_\sigma(S)$ does not contain 'forbidden regions'. In the robotics setting, frequently encountered examples of such forbidden regions are singular points in the joint space of a robot, or positions in space corresponding to collisions with the environment.
Let us now assume that we are given constraints on some of the Betti numbers of $\Omega$. For a given sample S, we then compute the barcodes for $V_{2\sigma}$ in each dimension $i \in \{1, \ldots, d\}$ up to a large maximal value M using javaPlex [19] and determine the set A of admissible $\sigma$ values. If A is empty, we consider the topological reconstruction to have failed. This will happen, for example, if our assumptions about the data are incorrect, or if we do not have enough samples to reconstruct $\Omega$. If A is non-empty, we attempt to determine a finite union of disjoint intervals on which the Betti number constraints are satisfied. Since, in our experiments, the interval $I = [\sigma_{\min}(n), \sigma_{\max}(n)]$ (determined up to some fixed precision) with smallest possible $\sigma_{\min}(n)$ among those coincided with the largest such interval in most cases (indicating stable topological features), we decided to investigate this $I \subset A$ for further analysis. For $\sigma \in [\sigma_{\min}(n), \sigma_{\max}(n)]$, the resulting density $\hat{f}_{\sigma,n}$ then has a support region $\Sigma_\sigma(S)$ with the correct Betti numbers, as approximated by $V_{2\sigma}$. We note the following elementary observation:
Lemma 3.1. Let $d \in \mathbb{N}$ and $\sigma_{\min}(n), \sigma_{\max}(n) \in \mathbb{R}$ for all $n \in \mathbb{N}$. Suppose that $\lim_{n\to\infty} \sigma_{\min}(n) = 0$ and that there exist $a, b \in \mathbb{R}$ such that $0 < a < \sigma_{\max}(n) < b$ and $0 \leq \sigma_{\min}(n) < \sigma_{\max}(n)$ for all $n \in \mathbb{N}$. Then
$$\sigma_{\mathrm{top}}(n) = \sigma_{\min}(n) + \frac{\sigma_{\max}(n) - \sigma_{\min}(n)}{2}\, n^{-\frac{1}{4+d}}$$
satisfies i) $\sigma_{\mathrm{top}}(1) = \sigma_{\mathrm{mid}}(1)$ and $\sigma_{\mathrm{top}}(n) \in [\sigma_{\min}(n), \sigma_{\mathrm{mid}}(n)]$ for all $n \in \mathbb{N}$, where we define $\sigma_{\mathrm{mid}}(n) = \frac{\sigma_{\max}(n) + \sigma_{\min}(n)}{2}$; ii) $\lim_{n\to\infty} \sigma_{\mathrm{top}}(n) = 0$; and iii) $\lim_{n\to\infty} n\,\sigma_{\mathrm{top}}(n)^d = \infty$.
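A minimal sketch of the resulting selection procedure, reusing betti_at() from the barcode sketch above; the grid search and helper names are our own, and the interval is only determined up to the grid precision, as in the paper.

```python
import numpy as np

def admissible_interval(barcode, constraints, sigma_grid):
    """Return the admissible interval [sigma_min, sigma_max] with smallest
    sigma_min on which all Betti constraints ({dim: value}) hold, or None
    if the admissible set A is empty (reconstruction failed)."""
    ok = np.array([all(betti_at(barcode, s, k) == v
                       for k, v in constraints.items())
                   for s in sigma_grid])
    if not ok.any():
        return None
    idx = np.flatnonzero(ok)
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    first = runs[0]                  # the run with the smallest sigma_min
    return sigma_grid[first[0]], sigma_grid[first[-1]]

def sigma_top(sigma_min, sigma_max, n, d):
    """Selector of Lemma 3.1: sigma_min + (sigma_max - sigma_min)/2 * n^(-1/(4+d))."""
    return sigma_min + 0.5 * (sigma_max - sigma_min) * n ** (-1.0 / (4 + d))
```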
It is our intuition that, for a large class of constraints on the Betti numbers and for tame densities $f : \mathbb{R}^d \to \mathbb{R}$ (such as densities concentrated on a neighbourhood of a compact submanifold of $\mathbb{R}^d$ [11]), $\sigma_{\min}(n)$ and $\sigma_{\max}(n)$ exist for all large enough sample sizes n with high probability and that the conditions of Lemma 3.1 are satisfied. In that case, Lemma 3.1 provides a motivation for choosing $\{\sigma_{\mathrm{top}}(n)\}_{n=1}^{\infty}$ as a topological bandwidth selector since, while it is difficult to analyse $\sigma_{\min}(n)$ asymptotically, at least the second summand of $\sigma_{\mathrm{top}}(n)$ has the same asymptotics in n as the optimal AMISE solution. Furthermore, this choice of bandwidth then corresponds to a support region $\Sigma_{\sigma_{\mathrm{top}}(n)}(S)$ with the correct Betti numbers (as approximated by the Vietoris-Rips complex) since $\sigma_{\mathrm{top}}(n) \in [\sigma_{\min}(n), \sigma_{\max}(n)]$. Finally, ii) and iii) then imply that, point-wise, $\lim_{n\to\infty} \mathbb{E}[(\hat{f}_{\sigma_{\mathrm{top}}(n),n}(x) - f(x))^2] = 0$ due to the results of [14, 15].
We note here that many different methods for choosing $\sigma(n) \in [\sigma_{\min}(n), \sigma_{\max}(n)]$ can be considered. If the topologically admissible interval $[\sigma_{\min}(n), \sigma_{\max}(n)]$ is for example determined by the constraint of having three connected components of supp f as in Figure 3(a), $\sigma_{\max}(n)$ will increase if we shift the connected components of supp f further apart. $\sigma_{\mathrm{top}}(n)$ hence also increases and might not yield good $L_2$ error results for small sample sizes anymore. In that case, an estimator $\hat{\sigma}_{\mathrm{top}}(n) \in [\sigma_{\min}(n), \sigma_{\max}(n)]$ closer to $\sigma_{\min}(n)$ might be a better choice. To give an initial overview, we hence also display results for $\sigma_{\min}(n), \sigma_{\mathrm{mid}}(n), \sigma_{\max}(n)$ in our experiments. Note however also that the $L_2$ error might not be the right quality measure for applications where the topological features of supp f are most important; we illustrate an example of this situation in our racetrack data experiment. We will show that, in the absence of further problem-specific knowledge, $\sigma_{\mathrm{top}}(n)$ does yield a good bandwidth estimate with respect to the $L_2$ error in our examples.
4 Experiments
Results in 1D  We consider the probability density $f : \mathbb{R} \to \mathbb{R}$ displayed in grey in each of the graphs in Figure 3. To benchmark the performance of our topological bandwidth estimators, we then compute the AMISE-optimal bandwidth parameter $\sigma_{\mathrm{amise}}$ numerically from the analytic formula for f and for $K_t$, $K_u$, $K_c$ and $K_e$. Here, we include the Gaussian kernel $K_e$ for comparison purposes only.
Figure 3: Density f (grey) and reconstructions (black) for the indicated sample size, bandwidth and kernel: (a) $\hat{f}_{\sigma_{\mathrm{top}},10}$ using $K_t$; (b) $\hat{f}_{\sigma_{\mathrm{amise}},10}$ using $K_e$; (c) $\hat{f}_{\sigma_{\mathrm{top}},2500}$ using $K_t$.
In order to topologically reconstruct f, we then assume only the knowledge of some points sampled from f and that $b_0(\operatorname{supp} f) = 3$, and no further information about f; i.e. we assume to know a sample and that the support region of f has three components. We then find $\sigma_{\mathrm{top}}(n)$ by computing a topologically admissible interval $[\sigma_{\min}(n), \sigma_{\max}(n)]$ from the barcode corresponding to the given sample. To evaluate the quality of bandwidth parameters chosen inside $[\sigma_{\min}(n), \sigma_{\max}(n)]$, we then sample at various sampling sizes and compute the mean $L_2$ errors for the resulting density estimator $\hat{f}_{\sigma,n}$ for $\sigma = \sigma_{\mathrm{top}}, \sigma_{\min}, \sigma_{\max}$ and $\sigma_{\mathrm{mid}} = \frac{1}{2}(\sigma_{\max} + \sigma_{\min})$ for each of the spherical kernels that we have described, and compare our results to $\sigma_{\mathrm{amise}}$. We set $\sigma^2 = \frac{1}{4}$ for $K_e$ and $K_t$. The results, summarized in Figure 4, show that $\sigma_{\mathrm{top}}$ performs at a level comparable to $\sigma_{\mathrm{amise}}$ in our experiments. Note here that $\sigma_{\mathrm{amise}}$ can only be computed if the true density f is known, while, for $\sigma_{\mathrm{top}}$, we only
Figure 4: We generate samples from our 1D density using rejection sampling and consider sample sizes n from 10 to 100 in increments of 10 (small scale) and from 250 to 5000 in increments of 250 (larger scale), resulting in 30 increasing sample sizes $n_1, \ldots, n_{30}$. In order to obtain stable results, we perform the sampling for each sampling size 1000 times (small scale), 100 times (for 250, 500, 750, 1000) and 10 times (for n > 1000) respectively. We then compute the corresponding kernel density estimators $\hat{f}_{\sigma,n}$ and the mean $L_2$ norm of $f - \hat{f}_{\sigma_n,n}$. Panels (b)-(e) display these mean $L_2$ errors (vertical axis) for the indicated kernel function ((b) $K_t$ with $\sigma^2 = \frac{1}{4}$; (c) $K_e$ with $\sigma^2 = \frac{1}{4}$; (d) $K_u$; (e) $K_c$) and the bandwidth selectors $\sigma_{\mathrm{top}}, \sigma_{\mathrm{amise}}, \sigma_{\min}, \sigma_{\mathrm{mid}}, \sigma_{\max}$. Panel (a) displays the bandwidth values (vertical axis) for the given bandwidth selectors. In all the above plots, a horizontal coordinate of $i \in \{1, \ldots, 30\}$ corresponds to a sample size of $n_i$.
Figure 5: (a) 2D density f; (b) 100 samples with the inferred support region $\Sigma_{\sigma_{\mathrm{top}}}$ in grey; (c) topological reconstruction $\hat{f}_{\sigma_{\mathrm{top}},100}$ using just the 100 samples from (b) (using $K_t$, $\sigma^2 = \frac{1}{4}$); (d), (e) barcodes for $b_0$ and $b_1$ with the $[\sigma_{\min}, \sigma_{\max}]$ interval highlighted.
required the information that $b_0(\operatorname{supp} f) = 3$. In our experiments (sample sizes $n \geq 10$), we were able to determine a valid interval $[\sigma_{\min}(n), \sigma_{\max}(n)]$ in all cases and did not encounter a case where the topological reconstruction was impossible.
Results in 2D  Here, we consider the density f displayed in Figure 5(a). We chose this example to be representative of problems also arising in robotics, where the localization of a robot can be modelled as depending on a probability prior which encodes space occupied by objects by zero probability. In such scenarios, we might be able to obtain topological information about the unobstructed space X, such as knowing the number of components or holes in X. Such information could be particularly valuable in the case of deformable obstacles since their homology stays invariant under continuous deformations by homotopies. We set up the current experiment in a fashion similar to our 1D experiments, i.e. we iterate sampling from the given density for various sample sizes and compute the resulting mean $L_2$ errors to evaluate our results. As we can see from Figure 6, our results indicate that bandwidths $\sigma \in [\sigma_{\min}, \sigma_{\max}]$ yield errors comparable with the AMISE-optimal bandwidth choice. While $\sigma_{\mathrm{top}}$ does not perform as well as in the previous experiment, we can observe that the corresponding $L_2$ errors nonetheless follow a decreasing trend. Note also that, both in 1D and 2D, $\sigma_{\mathrm{top}}$ also yields good $L_2$ error results for the standard spherical Gaussian kernel here. In applications such as probabilistic motion planning, the inferred structure of supp f is however of importance as well (e.g. since path-connectedness of supp f is important), making a bounded support kernel a preferable choice (see also our racetrack example).
Figure 6: We generate samples from our 2D density using rejection sampling and consider sample sizes from 100 to 1500 in increments of 100. We perform sampling 10 times for each sample size and compute the corresponding kernel-based density estimator $\hat{f}_{\sigma,n}$ and the mean $L_2$ norm of $f - \hat{f}_{\sigma_n,n}$. Panels (b)-(e) display these mean $L_2$ errors (vertical axis) for the indicated sample size (horizontal axis) and kernel function ((b) $K_t$ with $\sigma^2 = \frac{1}{4}$; (c) $K_e$ with $\sigma^2 = \frac{1}{4}$; (d) $K_u$; (e) $K_c$). Panel (a) displays the bandwidth values (vertical axis) against sample size (horizontal axis) for the selectors $\sigma_{\mathrm{top}}, \sigma_{\mathrm{mid}}, \sigma_{\min}, \sigma_{\max}$ and the AMISE-optimal bandwidths for each kernel.
Figure 7: Panel (a) shows the positions of a race car driving 10 laps around a racetrack. In (b), the results of our proposed method are displayed (projection of the inferred support region, generated vector field and sample trajectories), while panel (c) shows the standard GMR approach (inferred vector field, position likelihood and sample trajectories). We exploit the topological information that a racetrack should be connected and 'circular' when learning the density. As can be seen, our model correctly infers the region of support as the track (grey). Using GMR, on the other hand, a non-zero probability is assigned to each location. We observe that the most probable regions are also lying over the track (black being more likely). However, when sampling new trajectories using the learned density, we can see that, whereas the trajectories using our method are confined to the track, the GMM results in undesirable trajectories.
Application to regression  We now consider how our framework can be applied to learn complex dynamics given a topological constraint. We consider GPS/timestamp data from 10 laps of a race car driving around a racetrack which was provided to us by [20]. For this dataset (see Figure 7(a)), we are given no information on what the boundaries of the racetrack are. One state-of-the-art approach to modelling data like this is to employ a learning by demonstration [12] technique which is prominent especially in the context of robotics, where one attempts to learn motion patterns by observing a few demonstrations. There, one uses data points $S = \{(P_k, V_k) \in \mathbb{R}^{2n}, k = 1, \ldots, n\}$, where $P_k$ describes the position and $V_k \in \mathbb{R}^n$ the associated velocity at the given position. In order to model the dynamics, one can then employ a Gaussian mixture model [12] in $\mathbb{R}^{2n}$ to learn a probability density $\tilde{f}$ for the dataset (usually using the EM-algorithm). To every position $x \in \mathbb{R}^n$, one can then associate the velocity vector given by $E(V|P = x)$ with respect to the learned density $\tilde{f}$; this uses the idea of Gaussian mixture regression (GMR). The resulting vector field can then be numerically integrated to yield new trajectories. Since $E(V|P = x)$ for a Gaussian mixture model can be computed easily, this method can be applied even in high-dimensional spaces. While it can be considered as a strength of the GMR approach that it is able to infer, from just a few examples, a vector field that is non-zero on a dense subset of $\mathbb{R}^n$, this can also be problematic since geometric and topological constraints are not naturally part of this approach and we cannot easily encode the fact that the vector field should be non-zero only on the racetrack.
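To make the baseline concrete: Gaussian mixture regression evaluates $E(V|P=x)$ in closed form from the fitted joint mixture. The following sketch is our own paraphrase of that standard computation; `weights`, `means` and `covs` stand for the EM-fitted GMM parameters over the concatenated (position, velocity) vectors.

```python
import numpy as np

def gmr_velocity(x, weights, means, covs):
    """E[V | P = x] for a GMM on joint (position, velocity) data.
    means[k] / covs[k] are the joint mean and covariance of component k;
    x is a position vector."""
    n = len(x)
    num, den = np.zeros(n), 0.0
    for w, mu, S in zip(weights, means, covs):
        mu_p, mu_v = mu[:n], mu[n:]
        S_pp, S_vp = S[:n, :n], S[n:, :n]
        diff = x - mu_p
        # responsibility of this component at position x
        r = w * np.exp(-0.5 * diff @ np.linalg.solve(S_pp, diff)) \
            / np.sqrt(np.linalg.det(2 * np.pi * S_pp))
        # conditional mean of V given P = x under this component
        num += r * (mu_v + S_vp @ np.linalg.solve(S_pp, diff))
        den += r
    return num / den
```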
From our GPS/timestamp data, we now compute velocity vectors for each data point and embed the data in the manner just described in $\mathbb{R}^4$. We then experimented with the software [21] to model our racetrack data with a mixture of a varying number of Gaussians. While the model breaks down completely for a low number of Gaussians, some interesting behaviour can be observed in the case of a mixture model with 50 Gaussians, displayed in Figure 7(c). We display the resulting velocity vector field together with several newly synthesized trajectories. We observe both an undesired periodic trajectory as well as a trajectory that almost completely traverses the racetrack before converging towards an attractor. The likelihood of a given position is additionally displayed in 7(c), with black being the most likely. While the most likely positions do occur over the racetrack, the mixture model does not provide a natural way of determining where the boundaries of the track should lie. The topmost trajectory in 7(c), for example, starts at a highly unlikely position.
Let us now consider how we can apply the density estimation techniques we have described in this paper in this case. Given that we know that the racetrack is a closed curve, we assume that the data should be modelled by a probability density $f : \mathbb{R}^4 \to \mathbb{R}$ whose support region $\Omega$ has a single component ($b_0(\Omega) = 1$) and $\Omega$ should topologically be a circle ($b_1(\Omega) = 1$). In order for the velocities of differing laps around the track not to lie too far apart, and so that the topology of the racetrack dominates in $\mathbb{R}^4$, we rescale the velocity components of our data to lie inside the interval $[-0.6, 0.6]$. Figure 8 displays the barcode for our data. Using our procedure, we compute that $[\sigma_{\min}, \sigma_{\max}] \approx [3.25, 3.97]$ is the bandwidth interval for which the topological constraints that we just defined are satisfied.

Figure 8: Barcodes in dimension zero (a) and one (b) and shaded $[\sigma_{\min}, \sigma_{\max}]$ interval for our racetrack.

Using the kernel $K_t$ with $\sigma^2 = \frac{1}{4}$ and the corresponding density estimator $\hat{f}_{\sigma_{\mathrm{top}}}$, we obtain $\sigma_{\mathrm{top}} \in \mathbb{R}$ with the correct topological properties. Figure 7(b) displays the projection of $\Sigma_{\sigma_{\mathrm{top}}}$ onto $\mathbb{R}^2$. As a next step, we suggest to follow the idea of the GMR approach to compute the posterior expectation $E(V|P = x)$, but this time for our density $\hat{f}_{\sigma_{\mathrm{top}}}$. It follows from the definition of our kernel-based estimator that, for x such that $(x, y) \in \Sigma_{\sigma_{\mathrm{top}}}$ for some $y \in \mathbb{R}^n$, we have
$$E(V|P = x) = \frac{\sum_{i=1}^{n} Y_i \int K_t\!\left(\frac{x - X_i}{\sigma_{\mathrm{top}}}, z\right) dz}{\sum_{i=1}^{n} \int K_t\!\left(\frac{x - X_i}{\sigma_{\mathrm{top}}}, z\right) dz}.$$
While we were not able to find a reference for the use or computation of these marginals for spherical truncated Gaussians, a reasonably simple calculation shows that these can in fact be computed analytically in arbitrary dimension:
Lemma 4.1. Consider $d, k \in \mathbb{N}$, $d > k$ and $x \in \mathbb{R}^k$. Let $K_t : \mathbb{R}^d \to \mathbb{R}$ denote the spherical truncated Gaussian with parameter $\sigma^2 > 0$. Then
$$\int_{\mathbb{R}^{d-k}} K_t(x, y)\, dy = \frac{1}{(2\pi\sigma^2)^{k/2}} \cdot \frac{P\!\left(\frac{d-k}{2}, \frac{1-\|x\|^2}{2\sigma^2}\right)}{P\!\left(\frac{d}{2}, \frac{1}{2\sigma^2}\right)} \cdot e^{-\frac{\|x\|^2}{2\sigma^2}}$$
for $\|x\| \leq 1$ and 0 otherwise. Here, $P(a, b) = 1 - \frac{\Gamma(a, b)}{\Gamma(a)}$ denotes the normalized Gamma P function.
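Since $P(a, b)$ is precisely the regularized lower incomplete gamma function, the marginal of Lemma 4.1 is straightforward to evaluate; a sketch of ours, assuming $d > k$:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, b)

def kt_marginal(x, d, sigma2):
    """Marginal of the spherical truncated Gaussian K_t over the last
    d - k coordinates, evaluated at x in R^k (Lemma 4.1)."""
    x = np.asarray(x, dtype=float)
    k = x.size
    r2 = float(x @ x)
    if r2 > 1.0:
        return 0.0
    num = gammainc((d - k) / 2.0, (1.0 - r2) / (2.0 * sigma2))
    den = gammainc(d / 2.0, 1.0 / (2.0 * sigma2))
    return (num / den) * np.exp(-r2 / (2.0 * sigma2)) \
           / (2.0 * np.pi * sigma2) ** (k / 2.0)
```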
For every point in the projection of $\Sigma_{\sigma_{\mathrm{top}}}$ onto the position coordinates, we can hence compute a velocity $E(V|P = x)$ to generate new motion trajectories. For points outside the support region, we postulate zero velocity. Figure 7(b) displays the resulting vector field and a few sample trajectories. As we can see, these follow the trajectory of the data points in Figure 7(a) very well. At the same time, the displayed support region looks like a sensible choice for the position of the racetrack.
5 Conclusion
In this paper, we have presented a novel method for learning density models with bounded support. The proposed topological bandwidth selection approach allows us to incorporate topological constraints within a probabilistic modelling framework by combining algebraic-topological information, obtained in terms of persistent homology, with tools from kernel-based density estimation. We have provided a first thorough evaluation of the $L_2$ errors for synthetic data and have exemplified the practical use of our approach through application in a learning by demonstration scenario.
References
[1] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1-3, pp. 19-41, 2000.
[2] C. E. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. MIT Press, 2006.
[3] D. A. Cohn, Z. Ghahramani, and M. I. Jordan, "Active learning with statistical models," Journal of Artificial Intelligence Research, no. 4, pp. 129-145, 1996.
[4] S. Calinon and A. Billard, "Incremental learning of gestures by imitation in a humanoid robot," in ACM/IEEE International Conference on Human-Robot Interaction, pp. 255-262, 2007.
[5] D.-S. Lee, "Effective Gaussian mixture learning for video background subtraction," PAMI, vol. 27, no. 5, pp. 827-832, 2005.
[6] M. P. Wand and M. C. Jones, Kernel Smoothing, vol. 60 of Monographs on Statistics and Applied Probability. Chapman and Hall/CRC, 1995.
[7] B. A. Turlach, "Bandwidth selection in kernel density estimation: A review," in CORE and Institut de Statistique, pp. 23-493, 1993.
[8] L. El Ghaoui and G. Calafiore, "Robust filtering for discrete-time systems with bounded noise and parametric uncertainty," IEEE Transactions on Automatic Control, vol. 46, no. 7, pp. 1084-1089, 2001.
[9] Y. C. Eldar, A. Ben-Tal, and A. Nemirovski, "Linear minimax regret estimation of deterministic parameters with bounded data uncertainties," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2177-2188, 2008.
[10] G. Carlsson, "Topology and data," Bull. Amer. Math. Soc. (N.S.), vol. 46, no. 2, pp. 255-308, 2009.
[11] P. Niyogi, S. Smale, and S. Weinberger, "A topological view of unsupervised learning from noisy data," SIAM Journal of Computing, vol. 40, no. 3, pp. 646-663, 2011.
[12] S. M. Khansari-Zadeh and A. Billard, "Learning stable non-linear dynamical systems with Gaussian mixture models," IEEE Transactions on Robotics, vol. 27, no. 5, pp. 943-957, 2011.
[13] M. Rosenblatt, "Remarks on some nonparametric estimates of a density function," The Annals of Mathematical Statistics, vol. 27, no. 3, pp. 832-837, 1956.
[14] E. Parzen, "On estimation of a probability density function and mode," Annals of Mathematical Statistics, vol. 33, pp. 1065-1076, 1962.
[15] T. Cacoullos, "Estimation of a multivariate density," Annals of the Institute of Statistical Mathematics, vol. 18, pp. 179-189, 1966.
[16] H. Edelsbrunner, D. Letscher, and A. Zomorodian, "Topological persistence and simplification," Discrete Comput. Geom., vol. 28, no. 4, pp. 511-533, 2002.
[17] A. Hatcher, Algebraic Topology. Cambridge University Press, 2002.
[18] P. Niyogi, S. Smale, and S. Weinberger, "Finding the homology of submanifolds with high confidence from random samples," Discrete Comput. Geom., vol. 39, no. 1-3, pp. 419-441, 2008.
[19] A. Tausz, M. Vejdemo-Johansson, and H. Adams, "JavaPlex: A software package for computing persistent topological invariants." Software, 2011.
[20] KTH Racing, Formula Student Team, KTH Royal Institute of Technology, Stockholm, Sweden.
[21] A. Billard, "GMM/GMR 2.0." Software.
Timely Object Recognition
Sergey Karayev
UC Berkeley
Tobias Baumgartner
RWTH Aachen University
Mario Fritz
MPI for Informatics
Trevor Darrell
UC Berkeley
Abstract
In a large visual multi-class detection framework, the timeliness of results can be
crucial. Our method for timely multi-class detection aims to give the best possible
performance at any single point after a start time; it is terminated at a deadline
time. Toward this goal, we formulate a dynamic, closed-loop policy that infers the
contents of the image in order to decide which detector to deploy next. In contrast
to previous work, our method significantly diverges from the predominant greedy
strategies, and is able to learn to take actions with deferred values. We evaluate our
method with a novel timeliness measure, computed as the area under an Average
Precision vs. Time curve. Experiments are conducted on the PASCAL VOC object
detection dataset. If execution is stopped when only half the detectors have been
run, our method obtains 66% better AP than a random ordering, and 14% better
performance than an intelligent baseline. On the timeliness measure, our method
obtains at least 11% better performance. Our method is easily extensible, as it
treats detectors and classifiers as black boxes and learns from execution traces
using reinforcement learning.
1 Introduction
In real-world applications of visual object recognition, performance is time-sensitive. In robotics,
a small finite amount of processing power per unit time is all that is available for robust object
detection, if the robot is to usefully interact with humans. In large-scale detection systems, such as
image search, results need to be obtained quickly per image as the number of items to process is
constantly growing. In such cases, an acceptable answer at a reasonable time may be more valuable
than the best answer given too late.
A hypothetical system for vision-based advertising presents a case study: companies pay money to
have their products detected in images on the internet. The system has different values (in terms of
cost per click) and accuracies for different classes of objects, and the queue of unprocessed images
varies in size. The detection strategy to maximize profit in such an environment has to exploit every
inter-object context signal available to it, because there is not enough time to run detection for all
classes.
What matters in the real world is timeliness, and either not all images can be processed or not all
classes can be evaluated in a detection task. Yet the conventional approach to evaluating visual
recognition does not consider efficiency, and evaluates performance independently across classes.
We argue that the key to tackling problems of dynamic recognition resource allocation is to start
asking a new question: What is the best performance we can get on a budget?
Taking the task of object detection, we propose a new timeliness measure of performance vs. time
(shown in Figure 1). We present a method that treats different detectors and classifiers as black
boxes, and uses reinforcement learning to learn a dynamic policy for selecting actions to achieve the
highest performance under this evaluation.
Specifically, we run scene context and object class detectors over the whole image sequentially,
using the results of detection obtained so far to select the next actions. Evaluating on the PASCAL
VOC dataset and evaluation regime, we are able to obtain better performance than all baselines when there is less time available than is needed to exhaustively run all detectors.

Figure 1: A sample trace of our method. At each time step beginning at t = 0, potential actions are considered according to their predicted value, and the maximizing action is picked. The selected action is performed and returns observations. Different actions return different observations: a detector returns a list of detections, while a scene context action simply returns its computed feature. The belief model of our system is updated with the observations, which influences the selection of the next action. The final evaluation of a detection episode is the area of the AP vs. Time curve between given start and end times. The value of an action is the expected result of final evaluation if the action is taken and the policy continues to be followed, which allows actions without an immediate benefit to be scheduled.
2 Recognition Problems and Related Work
Formally, we deal with a dataset of images D, where each image I contains zero or more objects. Each object is labeled with exactly one category label $k \in \{1, \ldots, K\}$.
The multi-class, multi-label classification problem asks whether I contains at least one object of class k. We write the ground truth for an image as $C = \{C_1, \ldots, C_K\}$, where $C_k \in \{0, 1\}$ is set to 1 if an object of class k is present.
The detection problem is to output a list of bounding boxes (sub-images defined by four coordinates), each with a real-valued confidence that it encloses a single instance of an object of class k, for each k. The answer for a single class k is given by an algorithm detect(I, k), which outputs a list of sub-image bounding boxes B and their associated confidences.
Performance is evaluated by plotting precision vs. recall across dataset D (by progressively lowering the confidence threshold for a positive detection). The area under the curve yields the Average Precision (AP) metric, which has become the standard evaluation for recognition performance on challenging datasets in vision [1]. A common measure of a correct detection is the PASCAL overlap: two bounding boxes are considered to match if they have the same class label and the ratio of their intersection to their union is at least $\frac{1}{2}$.
To highlight the hierarchical structure of these problems, we note that the confidences for each sub-image $b \in B$ may be given by classify(b, k), and, more saliently for our setup, a correct answer to the detection problem also answers the classification problem.
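For reference, the PASCAL overlap criterion is the intersection-over-union of the two boxes; a minimal sketch of ours, with boxes encoded as (x1, y1, x2, y2) corners (an assumed convention):

```python
def pascal_overlap(box_a, box_b):
    """Intersection-over-union of two boxes; a detection is correct under
    the PASCAL criterion if IoU >= 1/2 and the class labels match."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```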
Multi-class performance is evaluated by averaging the individual per-class AP values. In a specialized system such as the advertising case study from section 1, the metric generalizes to a weighted
average, with the weights set by the values of the classes.
2.1 Related Work
Object detection The best recent performance has come from detectors that use gradient-based
features to represent objects as either a collection of local patches or as object-sized windows [2, 3].
Classifiers are then used to distinguish between featurizations of a given class and all other possible
contents of an image window. Window proposal is most often done exhaustively over the image
space, as a ?sliding window?.
For state-of-the-art performance, the object-sized window models are augmented with parts [4],
and the bag-of-visual-words models employ non-linear classifiers [5]. We employ the widely used
Deformable Part Model detector [4] in our evaluation.
Using context The most common source of context for detection is the scene or other non-detector
cues; the most common scene-level feature is the GIST [6] of the image. We use this source of scene
context in our evaluation.
Inter-object context has also been shown to improve detection [7]. In a standard evaluation setup,
inter-object context plays a role only in post-filtering, once all detectors have been run. In contrast,
our work leverages inter-object context in the action-planning loop.
A critical summary of the main approaches to using context for object and scene recognition is given
in [8]. For the commonly used PASCAL VOC dataset [1], GIST and other sources of context are
quantitatively explored in [9].
Efficiency through cascades An early success in efficient object detection of a single class uses
simple, fast features to build up a cascade of classifiers, which then considers image regions in
a sliding window regime [10]. Most recently, cyclic optimization has been applied to optimize
cascades with respect to feature computation cost as well as classifier performance [11].
Cascades are not dynamic policies: they cannot change the order of execution based on observations
obtained during execution, which is our goal.
Anytime and active classification This surprisingly little-explored line of work in vision is closest to our approach. A recent application to the problem of visual detection picks features with
maximum value of information in a Hough-voting framework [12]. There has also been work on
active classification [13] and active sensing [14], in which intermediate results are considered in
order to decide on the next classification step. Most commonly, the scheduling in these approaches
is greedy with respect to some manual quantity such as expected information gain. In contrast, we
learn policies that take actions without any immediate reward.
3
Multi-class Recognition Policy
Our goal is a multi-class recognition policy ? that takes an image I and outputs a list of multi-class
detection results by running detector and global scene actions sequentially.
The policy repeatedly selects an action ai ? A, executes it, receiving observations oi , and then
selects the next action. The set of actions A can include both classifiers and detectors: anything that
would be useful for inferring the contents of the image.
Each action $a_i$ has an expected cost $c(a_i)$ of execution. Depending on the setting, the cost can be defined in terms of algorithmic runtime analysis, an idealized property such as number of flops, or simply the empirical runtime on specific hardware. We take the empirical approach: every executed action advances t, the time into episode, by its runtime.
As shown in Figure 1, the system is given two times: the setup time $T_s$ and deadline $T_d$. We want to obtain the best possible answer if stopped at any given time between the setup time and the deadline. A single-number metric that corresponds to this objective is the area captured under the curve between the start and deadline bounds, normalized by the total area. We evaluate policies by this more robust metric and not simply by the final performance at deadline time, for the same reason that Average Precision is used instead of a fixed Precision vs. Recall point in the conventional evaluations.
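A sketch of this timeliness metric, under the assumption that the AP of the currently available results is recorded as a time-sorted, piecewise-constant trace of (time, AP) events (our own formulation):

```python
def ap_vs_time_area(ap_trace, t_start, t_deadline):
    """Normalized area under the AP vs. time curve on [t_start, t_deadline].
    ap_trace: sorted (t, ap) events; AP is constant between events and
    zero before the first event."""
    breaks = sorted({t_start, t_deadline} |
                    {t for t, _ in ap_trace if t_start < t < t_deadline})
    area, ap = 0.0, 0.0
    for t0, t1 in zip(breaks[:-1], breaks[1:]):
        for t, a in ap_trace:          # AP in effect at the segment start
            if t <= t0:
                ap = a
        area += ap * (t1 - t0)
    return area / (t_deadline - t_start)
```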
3.1 Sequential Execution
An open-loop policy, such as the common classifier cascade [10], takes actions in a sequence that does not depend on observations received from previous actions. In contrast, our goal is to learn a dynamic, or closed-loop, policy, which would exploit the signal in scene and inter-object context for a maximally efficient path through the actions.
We refer to the information available to the decision process as the state s. The state includes the current estimate of the distribution over class presence variables $P(C) = \{P(C_0), \ldots, P(C_K)\}$, where we write $P(C_k)$ to mean $P(C_k = 1)$ (class k is present in the image).
Additionally, the state records that an action $a_i$ has been taken by adding it to the initially empty set O and recording the resulting observations $o_i$. We refer to the current set of observations as $o = \{o_i \mid a_i \in O\}$. The state also keeps track of the time into episode t, and the setup and deadline times $T_s, T_d$.
A recognition episode takes an image I and proceeds from the initial state $s_0$ and action $a_0$ to the next pair $(s_1, a_1)$, and so on until $(s_J, a_J)$, where J is the last step of the process with $t \leq T_d$. At that point, the policy is terminated, and a new episode can begin on a new image.
The specific actions we consider in the following exposition are detector actions $a_{\mathrm{det}_i}$, where $\mathrm{det}_i$ is a detector for class $C_i$, and a scene-level context action $a_{\mathrm{gist}}$, which updates the probabilities of all classes. Although we avoid this in the exposition, note that our system easily handles multiple detector actions per class.
3.2 Selecting actions
As our goal is to pick actions dynamically, we want a function $Q(s, a) : S \times A \mapsto \mathbb{R}$, where S is the space of all possible states, to assign a value to a potential action $a \in A$ given the current state s of the decision process. We can then define the policy $\pi$ as simply taking the action with the maximum value:
$$\pi(s) = \operatorname{argmax}_{a_i \in A \setminus O} Q(s, a_i) \quad (1)$$
Although the action space A is manageable, the space of possible states S is intractable, and we must use function approximation to represent Q(s, a): a common technique in reinforcement learning [15]. We featurize the state-action pair and assume linear structure:
$$Q_\theta(s, a) = \theta^\top \phi(s, a) \quad (2)$$
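In code, the policy reduces to a dot product per candidate action; the featurize helper and the optional exploration parameter in this sketch of ours anticipate the $\epsilon$-greedy training scheme described in the next subsection.

```python
import numpy as np

def select_action(theta, state, actions, taken, featurize, epsilon=0.0):
    """pi(s): among untaken actions, maximize Q_theta(s, a) = theta . phi(s, a);
    with probability epsilon pick uniformly at random instead."""
    candidates = [a for a in actions if a not in taken]
    if epsilon > 0.0 and np.random.rand() < epsilon:
        return candidates[np.random.randint(len(candidates))]
    values = [theta @ featurize(state, a, actions) for a in candidates]
    return candidates[int(np.argmax(values))]
```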
The policy's performance at time t is determined by all detections that are part of the set of observations $o_j$ at the last state $s_j$ before t. Recall that detector actions return lists of detection hypotheses. Therefore, the final AP vs. Time evaluation of an episode is a function $\mathrm{eval}(h, T_s, T_d)$ of the history of execution $h = s_0, s_1, \ldots, s_J$. It is precisely the normalized area under the AP vs. Time curve between $T_s$ and $T_d$, as determined by the detections in $o_j$ for all steps j in the episode.
Note from Figure 3b that this evaluation function is additive per action, as each action a generates observations that may raise or lower the mean AP of the results so far ($\Delta\mathrm{ap}$) and takes a certain time ($\Delta t$). We can accordingly represent the final evaluation $\mathrm{eval}(h, T_s, T_d)$ in terms of individual action rewards: $\sum_{j=0}^{J} R(s_j, a_j)$.
Specifically, as shown in Figure 3b, we define the reward of an action a as
$$R(s_j, a) = \Delta\mathrm{ap}\left(t_{jT} - \frac{1}{2}\Delta t\right) \quad (3)$$
where $t_{jT}$ is the time left until $T_d$ at state $s_j$, and $\Delta t$ and $\Delta\mathrm{ap}$ are the time taken and AP change produced by the action a. (We do not account for $T_s$ here for clarity of exposition.)
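Equation (3) credits the AP change for the time remaining after the action plus half the action's own duration, i.e. the area the action adds under the AP vs. Time curve; as a one-line sketch:

```python
def action_reward(delta_ap, delta_t, time_to_deadline):
    """Reward of Eq. (3): R(s_j, a) = delta_ap * (t_jT - delta_t / 2)."""
    return delta_ap * (time_to_deadline - 0.5 * delta_t)
```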
3.3
Learning the policy
The expected value of the final evaluation can be written recursively in terms of the value function:
Q? (sj , a) = Esj+1 [R(sj , a) + ?Q? (sj+1 , ?(sj+1 ))]
(4)
where ? ? [0, 1] is the discount value.
With ? = 0, the value function is determined entirely by the immediate reward, and so only completely greedy policies can be learned. With ? = 1, the value function is determined by the correct
expected rewards to the end of the episode. However, a lower value of ? mitigates the effects of
increasing uncertainty regarding the state transitions over long episodes. We set this meta-parameter
of our approach through cross-validation, and find that a mid-level value (0.4) works best.
While we can't directly compute the expectation in (4), we can sample it by running actual episodes
to gather ⟨s, a, r, s'⟩ samples, where r is the reward obtained by taking action a in state s, and s'
is the following state.
We then learn the optimal policy by repeatedly gathering samples with the current policy, minimizing
the error between the discounted reward to the end of the episode as predicted by our current Q(s_j, a)
and the actual values gathered, and updating the policy with the resulting weights.
To ensure sufficient exploration of the state space, we implement ε-greedy action selection during
training: with a probability ε that decreases with each training iteration, a random action is selected
instead of following the policy. During test time, ε is set to 0.05.
To prevent overfitting to the training data, we use L2-regularized regression. We run 15 iterations
of accumulating samples by running 350 episodes, starting with a baseline policy which will be
described in section 4, and cross-validating the regularization parameter at each iteration. Samples
are not thrown away between iterations.
With pre-computed detections on the PASCAL VOC 2007 dataset, the training procedure takes
about 4 hours on an 8-core Xeon E5620 machine.
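One training round can be summarized as fitting the weights to the gathered returns; the sketch below is our illustration with hypothetical names (scikit-learn's Ridge stands in for the L2-regularized regression), and it assumes each sample already stores the discounted reward to the end of its episode:

import numpy as np
from sklearn.linear_model import Ridge

def fit_policy_weights(samples, featurize, alpha=1.0):
    # samples: (state, action, discounted_return) triples gathered by running episodes.
    X = np.array([featurize(s, a) for (s, a, _) in samples])
    y = np.array([g for (_, _, g) in samples])  # discounted reward to episode end
    model = Ridge(alpha=alpha).fit(X, y)        # L2-regularized least squares
    return model.coef_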
3.4
Feature representation
Our policy is at its base determined by a linear function of the features of the state:
π(s) = argmax_{a_i ∈ A\O} θ^T φ(s, a_i).    (5)
We include the following quantities as features φ(s, a):
- P(C_a): the prior probability of the class that corresponds to the detector of action a (omitted for the scene-context action).
- P(C_0|o) . . . P(C_K|o): the probabilities for all classes, conditioned on the current set of observations.
- H(C_0|o) . . . H(C_K|o): the entropies for all classes, conditioned on the current set of observations.
Additionally, we include the mean and maximum of [H(C_0|o) . . . H(C_K|o)], and 4 time features
that represent the times until start and deadline, for a total of F = 1 + 2K + 6 features.
We note that this setup is commonly used to solve Markov Decision Processes [15]. There are two
related limitations of MDPs when it comes to most systems of interesting complexity, however: the
state has to be functionally approximated instead of exhaustively enumerated; and some aspects of
the state are not observed, making the problem a Partially Observed MDP (POMDP), for which
exact solution methods are intractable for all but rather small problems [16]. Our initial solution to
the problem of partial observability is to include features corresponding to our level of uncertainty
into the feature representation, as in the technique of augmented MDPs [17].
To formulate learning the policy as a single regression problem, we represent the features in block
form, where φ(s, a) is a vector of size F|A|, with all values set to 0 except for the F-sized block
corresponding to a.
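A sketch of this block encoding (our illustration): the F-dimensional features of (s, a) occupy only the block owned by action a, so a single weight vector θ effectively holds per-action weights:

import numpy as np

def block_features(phi_sa, action_index, num_actions):
    # Embed phi(s, a) into the F*|A| block vector; all other blocks stay zero.
    f = len(phi_sa)
    x = np.zeros(f * num_actions)
    x[action_index * f:(action_index + 1) * f] = phi_sa
    return x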
[Figure 2: two weight matrices, panels (a) Greedy and (b) Reinforcement Learning; rows correspond to the GIST action and the detector actions, columns to the features P(C_a), P(C|o), H(C|o), and time.]
Figure 2: Learned policy weights θ (best viewed in color: red corresponds to positive, blue to
negative values). The first row corresponds to the scene-level action, which does not generate detections itself but only helps reduce uncertainty about the contents of the image. Note that in the
greedy learning case, this action is learned to never be taken, but it is shown to be useful in the
reinforcement learning case.
3.5
Updating with observations
The bulk of our feature representation is formed by the probabilities of individual class occurrence, conditioned on the observations so far: P(C_0|o) . . . P(C_K|o). This allows the action-value function
to learn correlations between the presence of different classes, and so the policy can look for the most
probable classes given the observations.
However, higher-order co-occurrences are not well represented in this form. Additionally, updating
P(C_i|o) presents choices regarding independence assumptions between the classes. We evaluate
two approaches for updating probabilities: direct and MRF.
In the direct method, P(C_i|o) = score(C_i) if o includes the observations for class C_i, and
P(C_i|o) = P(C_i) otherwise. This means that an observation of class i does not directly influence the estimated probability of any class but C_i.
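A minimal sketch of the direct rule (hypothetical names; the scores come from the probabilistic classifiers described below):

def direct_update(priors, scores, observed_classes):
    # Direct method: P(C_i|o) = score(C_i) once class i has been observed, else the prior P(C_i).
    return [scores[i] if i in observed_classes else priors[i]
            for i in range(len(priors))]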
The MRF approach employs a pairwise fully-connected Markov Random Field (MRF), as shown in
Figure 1, with the observation nodes set to score(C_i) appropriately, or considered unobserved.
The graphical model structure is set as fully-connected, but some classes almost never co-occur
in our dataset. Accordingly, the edge weights are learned with L1 regularization, which yields a
sparse structure [18]. All parameters of the model are trained on fully-observed data, and Loopy
sparse structure [18]. All parameters of the model are trained on fully-observed data, and Loopy
Belief Propagation inference is implemented with an open-source graphical model package [19].
An implementation detail: score(C_i) for a_{det_i} is obtained by training a probabilistic classifier on
the list of detections, featurized by the top few confidence scores and the total number of detections.
Similarly, score(C_i) for a_gist is obtained by training probabilistic classifiers on the GIST feature,
for all classes.
As an illustration, we visualize the learned weights on these features in Figure 2, reshaped such that
each row shows the weights learned for an action, with the top row representing the scene-context
action and the next 20 rows corresponding to the PASCAL VOC class detector actions.
4
Evaluation
We evaluate our system on the multi-class, multi-label detection task, as previously described. We
evaluate on a popular detection challenge task: the PASCAL VOC 2007 dataset [1]. This dataset
exhibits a rather modest amount of class co-occurrence: the "person" class is highly likely to occur,
and less than 10% of the images have more than two classes.
We learn weights on the training and validation sets, and run our policy on all images in the testing
set. The final evaluation pools all detections up to a certain time, and computes their multi-class AP
per image, averaging over all images. This is done for different times to plot the AP vs. Time curve
over the whole dataset. Our method of averaging per-image performance follows [20].
For the detector actions, we use one-vs-all cascaded deformable part-model detectors on a HOG
featurization of the image [21], with linear classification of the list of detections as described in
the previous section. There are 20 classes in the PASCAL challenge task, so there are 20 detector
actions. Running a detector on a PASCAL image takes about 1 second.
We test three different settings of the start and deadline times. In the first one, the start time is
immediate and execution is cut off at 20 seconds, which is enough time to run all actions. In the
second one, execution is cut off after only 10 seconds. Lastly, we measure performance between 5
seconds and 15 seconds. These operating points show how our method behaves when deployed in
different conditions. The results are given in rows of Table 1.
[Figure 3: panel (a) plots AP vs. Time; panel (b) illustrates the reward of eq. (3) as the area Δap(t_jT - (1/2)Δt) between T_s and T_d.]
Figure 3: (a) AP vs. Time curves for Random, Oracle, the Fixed Order baseline, and our best-performing policy. (b) Graphically representing our reward function, as described in Section 3.2.
We establish the first baseline for our system by selecting actions randomly at each step. As shown
in Figure 3a, the Random policy results in a roughly linear gain of AP vs. time. This is expected:
the detectors are capable of obtaining a certain level of performance; if half the detectors are run,
the expected performance level is half of the maximum level.
To establish an upper bound on performance, we plot the Oracle policy, obtained by re-ordering the
actions at the end of each detection episode in the order of AP gains they produced.
We consider another baseline: selecting actions in a fixed order based on the value they bring to the
AP vs. Time evaluation, which is roughly proportional to their occurrence probability. We refer to
this as Fixed Order.
Then there are instantiations of our method, as described in the previous section: RL w/ Direct
inference and RL w/ MRF inference. As the MRF model consistently outperformed Direct by a
small margin, we report results for that model only.
In Figure 3a, we can see that due to the dataset bias, the fixed-order policy performs well at first, as
the person class is disproportionately likely to be in the image, but is significantly overtaken by our
model as execution goes on and more rare classes have to be detected.
Lastly, we include an additional scene-level GIST feature that updates the posterior probabilities of
all classes. This is considered one action, and takes about 0.3 seconds. This setting always uses the
MRF model to properly update the class probabilities with GIST observations. This brings another
small boost in performance. The results are shown in Table 1.
Visualizing the learned weights in Figure 2, we note that the GIST action is learned to never be taken
in the greedy (γ = 0) setting, but is learned to be taken with a higher value of γ. It is additionally
informative to consider the action trajectories of different policies in Figure 4.
Figure 4: Visualizing the action trajectories of different policies. Action selection traces are plotted
in orange over many episodes; the sizes of the blue circles correspond to the increase in AP obtained
by the action. We see that the Random policy selects actions and obtains rewards randomly, while
the Oracle policy obtains all rewards in the first few actions. The Fixed Order policy selects actions
in a static optimal order. Our policy does not stick to a static order but selects actions dynamically to
maximize the rewards obtained early on.
Table 1: The areas under the AP vs. Time curve for different experimental conditions.

Bounds    Random   Fixed Order   RL      RL w/ GIST   Oracle
(0,20)    0.250    0.342         0.378   0.382        0.488
(0,10)    0.119    0.240         0.266   0.267        0.464
(5,15)    0.257    0.362         0.418   0.420        0.530

5
Conclusion
We presented a method for learning ?closed-loop? policies for multi-class object recognition, given
existing object detectors and classifiers and a metric to optimize. The method learns the optimal
policy using reinforcement learning, by observing execution traces in training. If detection on an
image is cut off after only half the detectors have been run, our method does 66% better than a
random ordering, and 14% better than an intelligent baseline. In particular, our method learns to
take action with no intermediate reward in order to improve the overall performance of the system.
As always with reinforcement learning problems, defining the reward function requires some manual
work. Here, we derive it for the novel detection AP vs. Time evaluation that we suggest is useful
for evaluating efficiency in recognition. Although computation devoted to scheduling actions is less
significant than the computation due to running the actions, the next research direction is to explicitly
consider this decision-making cost; the same goes for feature computation costs. Additionally, it is
interesting to consider actions defined not just by object category but also by spatial region. The
code for our method is available1 .
Acknowledgments
This research was made with Government support under and awarded by DoD, Air Force Office of
Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32
CFR 168a.
1
http://sergeykarayev.com/work/timely/
References
[1] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL VOC Challenge. http://www.pascal-network.org/challenges/VOC/, 2010.
[2] N. Dalal and B. Triggs. Histograms of Oriented Gradients for Human Detection. In CVPR, pages 886-893, 2005.
[3] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. IJCV, 60(2):91-110, November 2004.
[4] Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32(9):1627-1645, September 2010.
[5] Andrea Vedaldi, Varun Gulshan, Manik Varma, and Andrew Zisserman. Multiple kernels for object detection. In ICCV, pages 606-613, September 2009.
[6] Aude Oliva and Antonio Torralba. Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope. IJCV, 42(3):145-175, 2001.
[7] Antonio Torralba, Kevin P. Murphy, and William T. Freeman. Contextual Models for Object Detection Using Boosted Random Fields. MIT CSAIL Technical Report, 2004.
[8] Carolina Galleguillos and Serge Belongie. Context based object categorization: A critical survey. Computer Vision and Image Understanding, 114(6):712-722, June 2010.
[9] Santosh K. Divvala, Derek Hoiem, James H. Hays, Alexei A. Efros, and Martial Hebert. An empirical study of context in object detection. In CVPR, pages 1271-1278, June 2009.
[10] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
[11] Minmin Chen, Zhixiang (Eddie) Xu, Kilian Q. Weinberger, Olivier Chapelle, and Dor Kedem. Classifier Cascade for Minimizing Feature Evaluation Cost. In AISTATS, 2012.
[12] Sudheendra Vijayanarasimhan and Ashish Kapoor. Visual Recognition and Detection Under Bounded Computational Resources. In CVPR, pages 1006-1013, 2010.
[13] Tianshi Gao and Daphne Koller. Active Classification based on Value of Classifier. In NIPS, 2011.
[14] Shipeng Yu, Balaji Krishnapuram, Romer Rosales, and R. Bharat Rao. Active Sensing. In AISTATS, pages 639-646, 2009.
[15] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[16] Nicholas Roy and Geoffrey Gordon. Exponential Family PCA for Belief Compression in POMDPs. In NIPS, 2002.
[17] Cody Kwok and Dieter Fox. Reinforcement Learning for Sensing Strategies. In IROS, 2004.
[18] Su-In Lee, Varun Ganapathi, and Daphne Koller. Efficient Structure Learning of Markov Networks using L1-Regularization. In NIPS, 2006.
[19] Ariel Jaimovich and Ian McGraw. FastInf: An Efficient Approximate Inference Library. Journal of Machine Learning Research, 11:1733-1736, 2010.
[20] Chaitanya Desai, Deva Ramanan, and Charless Fowlkes. Discriminative models for multi-class object layout. In ICCV, pages 229-236, September 2009.
[21] Pedro F. Felzenszwalb, Ross B. Girshick, and David McAllester. Cascade object detection with deformable part models. In CVPR, pages 2241-2248. IEEE, June 2010.
Scaled Gradients on Grassmann Manifolds
for Matrix Completion
Thanh T. Ngo and Yousef Saad
Department of Computer Science and Engineering
University of Minnesota, Twin Cities
Minneapolis, MN 55455
[email protected], [email protected]
Abstract
This paper describes gradient methods based on a scaled metric on the Grassmann
manifold for low-rank matrix completion. The proposed methods significantly
improve canonical gradient methods, especially on ill-conditioned matrices, while
maintaining established global convegence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled
gradient descent procedure is also established. The proposed conjugate gradient
method based on the scaled gradient outperforms several existing algorithms for
matrix completion and is competitive with recently proposed methods.
1
Introduction
Let A ∈ R^{m×n} be a rank-r matrix, where r ≪ m, n. The matrix completion problem is to reconstruct A given a subset of entries of A. This problem has attracted much attention recently
[8, 14, 13, 18, 21] because of its broad applications, e.g., in recommender systems, structure from
motion, and multitask learning (see e.g. [19, 9, 2]).
1.1
Related work
Let Ω = {(i, j) | A_ij is observed}. We define P_Ω(A) ∈ R^{m×n} to be the projection of A onto the
observed entries Ω: P_Ω(A)_ij = A_ij if (i, j) ∈ Ω and P_Ω(A)_ij = 0 otherwise. If the rank is
unknown and there is no noise, the problem can be formulated as:
Minimize rank(X) subject to P_Ω(X) = P_Ω(A).    (1)
Rank minimization is NP-hard and so work has been done to solve a convex relaxation of it by
approximating the rank by the nuclear norm. Under some conditions, the solution of the relaxed
problem can be shown to be the exact solution of the rank minimization problem with overwhelming
probability [8, 18]. Usually, algorithms to minimize the nuclear norm iteratively use the Singular
Value Decomposition (SVD), specifically the singular value thresholding operator [7, 15, 17], which
makes them expensive.
If the rank is known, we can formulate the matrix completion problem as follows:
Find matrix X to minimize ||P_Ω(X) - P_Ω(A)||_F subject to rank(X) = r.    (2)
Keshavan et al. [14] have proved that exact recovery can be obtained with high probability by solving a non-convex optimization problem. A number of algorithms based on non-convex formulation
use the framework of optimization on matrix manifolds [14, 22, 6]. Keshavan et al. [14] propose
a steepest descent procedure on the product of Grassmann manifolds of r-dimensional subspaces.
Vandereycken [22] discusses a conjugate gradient algorithm on the Riemann manifold of rank-r matrices. Boumal and Absil [6] consider a trust region method on the Grassmann manifold. Although
they do not solve an optimization problem on the matrix manifold, Wei et al. [23] perform a low rank
matrix factorization based on a successive over-relaxation iteration. Also, Srebro and Jaakkola [21]
discuss SVD-EM, one of the early fixed-rank methods using truncated singular value decomposition
iteratively. Dai et al. [10] recently propose an interesting approach that does not use the Frobenius
norm of the residual as the objective function but instead uses the consistency between the current
estimate of the column space (or row space) and the observed entries. Guaranteed performance for
this method has been established for rank-1 matrices.
In this paper, we will focus on the case when the rank r is known and solve problem (2). In fact,
even when the rank is unknown, the sparse matrix which consists of observed entries can give us a
very good approximation of the rank based on its singular spectrum [14]. Also, a few values of the
rank can be used and the best one is selected. Moreover, the singular spectrum is revealed during
the iterations, so many fixed rank methods can also be adapted to find the rank of the matrix.
1.2
Our contribution
OptSpace [14] is an efficient algorithm for low-rank matrix completion with global convergence and
exact recovery guarantees. We propose using a non-canonical metric on the Grassmann manifold to
improve OptSpace while maintaining its appealing properties. The non-canonical metric introduces
a scaling factor to the gradient of the objective function which can be interpreted as an adaptive
preconditioner for the matrix completion problem. The gradient descent procedure using the scaled
gradient is related to a form of subspace iteration for matrix completion. Each iteration of the
subspace iteration is inexpensive and the procedure converges very rapidly. The connection between
the two methods leads to some improvements and to efficient implementations for both of them.
Throughout the paper, A_Ω will be a shorthand for P_Ω(A), and qf(U) is the Q factor in the QR
factorization of U, which gives an orthonormal basis for span(U). Also, P_Ω̄(·) denotes the projection
onto the complement Ω̄ of Ω.
2
Subspace iteration for incomplete matrices
We begin with a form of subspace iteration for matrix completion depicted in Algorithm 1. If the
Algorithm 1 SUBSPACE ITERATION FOR INCOMPLETE MATRICES.
Input: Matrix A_Ω, Ω, and the rank r.
Output: Left and right dominant subspaces U and V and associated singular values.
1: [U_0, Σ_0, V_0] = svd(A_Ω, r), S_0 = Σ_0                        // Initialize U, V and Σ
2: for i = 0, 1, 2, ... do
3:    X_{i+1} = P_Ω̄(U_i S_i V_i^T) + A_Ω                          // Obtain new estimate of A
4:    U_{i+1} = X_{i+1} V_i;  V_{i+1} = X_{i+1}^T U_{i+1}          // Update subspaces
5:    U_{i+1} = qf(U_{i+1});  V_{i+1} = qf(V_{i+1})                // Re-orthogonalize bases
6:    S_{i+1} = U_{i+1}^T X_{i+1} V_{i+1}                          // Compute new S for next estimate of A
7:    if condition then
8:       // Diagonalize S to obtain current estimates of singular vectors and values
9:       [R_U, Σ_{i+1}, R_V] = svd(S_{i+1}); U_{i+1} = U_{i+1} R_U; V_{i+1} = V_{i+1} R_V; S_{i+1} = Σ_{i+1}
10:   end if
11: end for
matrix A is fully observed, U and V can be randomly initialized, line 3 is not needed, and in lines
4 and 6 we use A instead of X_{i+1} to update the subspaces. In this case, we have the classical two-sided subspace iteration for singular value decomposition. Lines 6-9 correspond to a Rayleigh-Ritz
projection to obtain current approximations of singular vectors and singular values. It is known that
if the initial columns of U and V are not orthogonal to any of the first r left and right singular vectors
of A respectively, the algorithm converges to the dominant subspaces of A [20, Theorem 5.1].
Back to the case when the matrix A is not fully observed, the basic idea of Algorithm 1 is to use
an approximation of A in each iteration to update the subspaces U and V and then from the new U
and V , we can obtain a better approximation of A for the next iteration. Line 3 is to compute a new
estimate of A by replacing all entries of U_i S_i V_i^T at the known positions by the true values in A.
The update in line 6 is to get the new S_{i+1} based on the recently computed subspaces. Diagonalizing
S_{i+1} (lines 7-10) is optional for matrix completion. This step provides current approximations
of the singular values, which could be useful for several purposes such as regularization or a
convergence test. This comes with very little additional overhead, since S_{i+1} is a small r × r matrix.
Each iteration of Algorithm 1 can be seen as an approximation of an iteration of SVD-EM, where a
few matrix multiplications are used to update U and V instead of using a truncated SVD to compute
the dominant subspaces of X_{i+1}. Recall that computing an SVD, e.g. by a Lanczos type procedure,
requires several, possibly a large number of, matrix multiplications of this type.
We now discuss efficient implementations of Algorithm 1 and modifications to speed up its convergence. First, the explicit computation of X_{i+1} in line 3 is not needed. Let X̂_i = U_i S_i V_i^T. Then
X_{i+1} = P_Ω̄(U_i S_i V_i^T) + A_Ω = X̂_i + E_i, where E_i = P_Ω(A - X̂_i) is a sparse matrix of errors at
known entries which can be computed efficiently by exploiting the structure of X̂_i. Assume that each
S_i is not singular (the non-singularity of S_i will be discussed in Section 4). Then if we post-multiply
the update of U in line 4 by S_i^{-1}, the subspace remains the same, and the update becomes:
U_{i+1} = X_{i+1} V_i S_i^{-1} = (X̂_i + E_i) V_i S_i^{-1} = U_i + E_i V_i S_i^{-1},    (3)
The update of V can also be efficiently implemented. Here, we make a slight change, namely
V_{i+1} = X_{i+1}^T U_i (U_i instead of U_{i+1}). We observe that the convergence speed remains roughly the
same (when A is fully observed, the algorithm is a slower version of subspace iteration where the
convergence rate is halved). With this change, we can derive an update to V that is similar to (3):
V_{i+1} = V_i + E_i^T U_i S_i^{-T},    (4)
We will point out in Section 3 that the updating terms E_i V_i S_i^{-1} and E_i^T U_i S_i^{-T} are related to the
gradients of a matrix completion objective function on the Grassmann manifold. As a result, to
improve the convergence speed, we can add an adaptive step size t_i to the process, as follows:
U_{i+1} = U_i + t_i E_i V_i S_i^{-1}   and   V_{i+1} = V_i + t_i E_i^T U_i S_i^{-T}.
This is equivalent to using X̂_i + t_i E_i as the estimate of A in each iteration. The step size can be
computed using a heuristic adapted from [23]. Initially, t is set to some initial value t_0 (t_0 = 1 in
our experiments). If the error ||E_i||_F decreases compared to the previous step, t is increased by a
constant factor. Conversely, if the error increases, indicating that the step is too big, t is reset to t = t_0.
The matrix S_{i+1} can be computed efficiently by exploiting the low-rank structures and the sparsity:
S_{i+1} = (U_{i+1}^T U_i) S_i (V_i^T V_{i+1}) + t_i U_{i+1}^T E_i V_{i+1}    (5)
There are also other ways to obtain S_{i+1} once U_{i+1} and V_{i+1} are determined to improve the current
approximation of A. For example, we can solve the following quadratic program [14]:
S_{i+1} = argmin_S ||P_Ω(A - U_{i+1} S V_{i+1}^T)||_F^2    (6)
We summarize the discussion in Algorithm 2. A sufficiently small error ||E_i||_F can be used as a
Algorithm 2 GENERIC SUBSPACE ITERATION FOR INCOMPLETE MATRICES.
Input: Matrix A_Ω, Ω, and number r.
Output: Left and right dominant subspaces U and V and associated singular values.
1: Initialize orthonormal matrices U_0 ∈ R^{m×r} and V_0 ∈ R^{n×r}.
2: for i = 0, 1, 2, ... do
3:    Compute E_i and an appropriate step size t_i
4:    U_{i+1} = U_i + t_i E_i V_i S_i^{-1} and V_{i+1} = V_i + t_i E_i^T U_i S_i^{-T}
5:    Orthonormalize U_{i+1} and V_{i+1}
6:    Find S_{i+1} such that P_Ω(U_{i+1} S_{i+1} V_{i+1}^T) is close to A_Ω (e.g. via (5), (6)).
7: end for
stopping criterion. Algorithm 1 can be shown to be equivalent to the LMaFit algorithm proposed in
[23]. The authors in [23] also obtain results on local convergence of LMaFit. We will pursue a
different approach here. The updates (3) and (4) are reminiscent of the gradient descent steps for
minimizing matrix completion error on the Grassmann manifold that is introduced in [14] and the
next section discusses the connection to optimization on the Grassmann manifold.
3
Optimization on the Grassmann manifold
In this section, we show that using a non-canonical Riemann metric on the Grassmann manifold,
the gradient of the same objective function in [14] is of a form similar to (3) and (4). Based on this,
improvements to the gradient descent algorithms can be made and exact recovery results similar
to those of [14] can be maintained. The readers are referred to [1, 11] for details on optimization
frameworks on matrix manifolds.
3.1
Gradients on the Grassmann manifold for matrix completion problem
Let G(m, r) be the Grassmann manifold in which each point corresponds to a subspace of dimension
r in Rm . One of the results of [14], is that under a few assumptions (to be addressed in Section 4),
one can obtain with high probability the exact matrix A by minimizing a regularized version of the
function F : G(m, r) × G(n, r) → R defined below:
F(U, V) = min_{S ∈ R^{r×r}} F(U, S, V),    (7)
where F(U, S, V) = (1/2)||P_Ω(A - U S V^T)||_F^2, and U ∈ R^{m×r} and V ∈ R^{n×r} are orthonormal
matrices. Here, we abuse the notation by denoting by U and V both orthonormal matrices as well
as the points on the Grassmann manifold which they span. Note that F only depends on the subspaces spanned by the matrices U and V. The function F(U, V) can be easily evaluated by solving
the quadratic minimization problem in the form of (6). If G(m, r) is endowed with the canonical
inner product ⟨W, W'⟩ = Tr(W^T W'), where W and W' are tangent vectors of G(m, r) at U (i.e.
W, W' ∈ R^{m×r} such that W^T U = 0 and W'^T U = 0), and similarly for G(n, r), the gradients of
F(U, V) on the product manifold are:
gradF_U(U, V) = (I - U U^T) P_Ω(U S V^T - A) V S^T    (8)
gradF_V(U, V) = (I - V V^T) P_Ω(U S V^T - A)^T U S.   (9)
In the above formulas, (I - U U^T) and (I - V V^T) are the projections of the derivatives P_Ω(U S V^T - A) V S^T and P_Ω(U S V^T - A)^T U S onto the tangent space of the manifold at (U, V). Notice that the
derivative terms are very similar to the updates in (3) and (4). The difference is in the scaling factors:
gradF_U and gradF_V use S^T and S, while those in Algorithm 2 use S^{-1} and S^{-T}.
Assume that S is a diagonal matrix, which can always be obtained by rotating U and V appropriately.
F(U, V) would change more rapidly when the columns of U and V corresponding to larger entries
of S are changed. The rate of change of F would be approximately proportional to S_ii^2 when the
i-th columns of U and V are changed; in other words, S^2 gives us approximate second-order
information of F at the current point (U, V). This suggests that the level set of F should be similar to
an "ellipse" with the shorter axes corresponding to the larger values of S. It is therefore compelling
to use a scaled metric on the Grassmann manifold.
Consider the inner product ⟨W, W'⟩_D = Tr(D W^T W'), where D ∈ R^{r×r} is a symmetric positive
definite matrix. We will derive the partial gradients of F on the Grassmann manifold endowed with
this scaled inner product. According to [11], gradF_U is the tangent vector of G(m, r) at U such that
Tr(F_U^T W) = ⟨gradF_U, W⟩_D,    (10)
for all tangent vectors W at U, where F_U is the partial derivative of F with respect to U. Recall
that the tangent vectors at U are those W's such that W^T U = 0. The solution of (10) with the
constraints W^T U = 0 and (gradF_U)^T U = 0 gives us the gradient based on the scaled metric,
which we will denote by grads F_U and grads F_V.
grads F_U(U, V) = (I - U U^T) F_U D^{-1} = (I - U U^T) P_Ω(U S V^T - A) V S D^{-1}.    (11)
grads F_V(U, V) = (I - V V^T) F_V D^{-1} = (I - V V^T) P_Ω(U S V^T - A)^T U S D^{-1}.   (12)
Notice the additional scaling D appearing in these scaled gradients. Now if we use D = S^2 (still
with the assumption that S is diagonal), as suggested by the arguments above on the approximate
shape of the level set of F, we will have grads F_U(U, V) = (I - U U^T) P_Ω(U S V^T - A) V S^{-1} and
grads F_V(U, V) = (I - V V^T) P_Ω(U S V^T - A)^T U S^{-1} (note that S depends on U and V).
If S is not diagonalized, we use S S^T and S^T S to derive grads F_U and grads F_V respectively, and the
scalings appear exactly as in (3) and (4):
grads F_U(U, V) = (I - U U^T) P_Ω(U S V^T - A) V S^{-1}     (13)
grads F_V(U, V) = (I - V V^T) P_Ω(U S V^T - A)^T U S^{-T}   (14)
This scaling can be interpreted as an adaptive preconditioning step similar to those that are popular
in the scientific computing literature [4]. As will be shown in our experiments, this scaled gradient
direction outperforms canonical gradient directions especially for ill-conditioned matrices.
The optimization framework on matrix manifolds allows us to define several elements of the manifold
in a flexible way. Here, we use the scaled-metric to obtain a good descent direction, while other
operations on the manifold can be based on the canonical metric which has simple and efficient
computational forms. The next two sections describe algorithms using scaled-gradients.
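A sketch of (13) and (14) (our illustration): given the sparse residual R = P_Ω(U S V^T - A), scale by S^{-1} (resp. S^{-T}) and project onto the tangent spaces:

import numpy as np

def scaled_gradients(R, U, S, V):
    # R = P_Omega(U S V^T - A), an m x n array that is zero off the observed set.
    Sinv = np.linalg.inv(S)
    GU = R @ V @ Sinv          # eq. (13) before projection
    GV = R.T @ U @ Sinv.T      # eq. (14) before projection
    GU -= U @ (U.T @ GU)       # apply (I - U U^T): tangent-space projection at U
    GV -= V @ (V.T @ GV)       # apply (I - V V^T): tangent-space projection at V
    return GU, GV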
3.2
Gradient descent algorithms on the Grassmann manifold
Gradient descent algorithms on matrix manifolds are based on the update
U_{i+1} = R(U_i + t_i W_i)    (15)
where W_i is the gradient-related search direction, t_i is the step size, and R(U) is a retraction on the
manifold which defines a projection of U onto the manifold [1]. We use R(U ) = span (U ) as the
retraction on the Grassmann manifold where span (U ) is represented by qf(U ), which is the Q factor
in the QR factorization of U . Optimization on the product of two Grassmann manifolds can be done
by treating each component as a coordinate component.
The step size t can be computed in several ways, e.g., by a simple back-tracking method to find the
point satisfying the Armijo condition [3]. Algorithm 3 is an outline of our gradient descent method
for matrix completion. We let grads F_U^{(i)} ≡ grads F_U(U_i, V_i) and grads F_V^{(i)} ≡ grads F_V(U_i, V_i). In
line 5, the exact S_{i+1} which realizes F(U_{i+1}, V_{i+1}) can be computed according to (6). A direct
method to solve (6) costs O(|Ω| r^4). Alternatively, S_{i+1} can be computed approximately and we
found that (5) is fast (O((|Ω| + m + n) r^2)) and gives the same convergence speed. If (5) fails
to yield good enough progress, we can always switch back to (6) and compute S_{i+1} exactly. The
subspace iteration and LMaFit can be seen as relaxed versions of this gradient descent procedure.
The next section goes further and describes the conjugate gradient iteration.
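Since P_Ω(U S V^T) is linear in S, the exact S of (6) is an ordinary least-squares problem in the r^2 entries of S; a sketch of the O(|Ω| r^4) direct method mentioned above (our illustration):

import numpy as np

def solve_S_exact(vals, rows, cols, U, V):
    # Row k of M is the outer product of U[rows[k]] and V[cols[k]], flattened,
    # so that M @ vec(S) reproduces (U S V^T) at the observed entries.
    M = np.einsum('ki,kj->kij', U[rows], V[cols]).reshape(len(rows), -1)
    s, *_ = np.linalg.lstsq(M, vals, rcond=None)
    r = U.shape[1]
    return s.reshape(r, r)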
Algorithm 3 GRADIENT DESCENT WITH SCALED-GRADIENT ON THE GRASSMANN MANIFOLD.
Input: Matrix A_Ω, Ω, and number r.
Output: U and V which minimize F(U, V), and S which realizes F(U, V).
1: Initialize orthonormal matrices U_0 and V_0.
2: for i = 0, 1, 2, ... do
3:    Compute grads F_U^{(i)} and grads F_V^{(i)} according to (13) and (14).
4:    Find an appropriate step size t_i and compute
      (U_{i+1}, V_{i+1}) = (qf(U_i - t_i grads F_U^{(i)}), qf(V_i - t_i grads F_V^{(i)}))
5:    Compute S_{i+1} according to (6) (exact) or (5) (approximate).
6: end for
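Line 4 of Algorithm 3 can be implemented with a simple back-tracking search; a sketch using the Armijo condition and the qf retraction (β, σ, and the iteration cap are our illustrative choices):

import numpy as np

def armijo_step(F, U, V, GU, GV, t0=1.0, beta=0.5, sigma=1e-4, max_tries=30):
    # Shrink t until the retracted point decreases F sufficiently (Armijo condition).
    f0 = F(U, V)
    g2 = np.sum(GU * GU) + np.sum(GV * GV)  # squared norm of the search direction
    t = t0
    for _ in range(max_tries):
        U1 = np.linalg.qr(U - t * GU)[0]    # retraction: Q factor of the QR factorization
        V1 = np.linalg.qr(V - t * GV)[0]
        if F(U1, V1) <= f0 - sigma * t * g2:
            break
        t *= beta
    return U1, V1, t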
3.3
Conjugate gradient method on the Grassmann manifold
In this section, we describe the conjugate gradient (CG) method on the Grassmann manifold based
on the scaled gradients to solve the matrix completion problem. The main additional ingredient we
need is vector transport which is used to transport the old search direction to the current point on the
manifold. The transported search direction is then combined with the scaled gradient at the current
point, e.g. by Polak-Ribiere formula (see [11]), to derive the new search direction. After this, a line
search procedure is performed to find the appropriate step size along this search direction.
Vector transport can be defined using the Riemann connection, which in turn is defined based on the
Riemann metric [1]. As mentioned at the end of Section 3.1, we will use the canonical metric to
derive vector transport when considering the natural quotient manifold structure of the Grassmann
manifold. The tangent W' at U will be transported to U + W as T_{U+W}(W'), where T_U(W') =
(I - U U^T) W'. Algorithm 4 is a sketch of the resulting conjugate gradient procedure.
Algorithm 4 CONJUGATE GRADIENT WITH SCALED-GRADIENT ON THE GRASSMANN MANIFOLD.
Input: Matrix A_Ω, Ω, and number r.
Output: U and V which minimize F(U, V), and S which realizes F(U, V).
1: Initialize orthonormal matrices U_0 and V_0.
2: Compute (η_0, ξ_0) = (-grads F_U^{(0)}, -grads F_V^{(0)}).
3: for i = 0, 1, 2, ... do
4:    Compute a step size t_i and compute (U_{i+1}, V_{i+1}) = (qf(U_i + t_i η_i), qf(V_i + t_i ξ_i))
5:    Compute β_{i+1} (Polak-Ribière) and set
      (η_{i+1}, ξ_{i+1}) = (-grads F_U^{(i+1)} + β_{i+1} T_{U_{i+1}}(η_i), -grads F_V^{(i+1)} + β_{i+1} T_{V_{i+1}}(ξ_i))
6:    Compute S_{i+1} according to (6) or (5).
7: end for
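The transport used in line 5 of Algorithm 4 is a single tangent-space projection; a sketch (our illustration) combining it with the update of the search direction:

import numpy as np

def transport(U_new, W):
    # T_U(W) = (I - U U^T) W: carry the old direction to the tangent space at the new point.
    return W - U_new @ (U_new.T @ W)

def next_direction(grad_new, eta_old, U_new, beta):
    # Line 5 of Algorithm 4: eta_{i+1} = -grad + beta * T_{U_{i+1}}(eta_i).
    return -grad_new + beta * transport(U_new, eta_old)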
4
Convergence and exact recovery of scaled-gradient descent methods
Let A = U_* Σ_* V_*^T be the singular value decomposition of A, where U_* ∈ R^{m×r}, V_* ∈ R^{n×r} and
Σ_* ∈ R^{r×r}. Let us also denote by z = (U, V) a point on G(m, r) × G(n, r). Clearly, z_* = (U_*, V_*)
is a minimum of F. Assume that A is incoherent [14]: A has bounded entries and the minimum
singular value of A is bounded away from 0. Let κ(A) be the condition number of A. It is shown
that, if the number of observed entries is of order O(max{κ(A)^2 n log n, κ(A)^6 n}), then, with high
probability, F is well approximated by a parabola and z_* is the unique stationary point of F in a
sufficiently small neighborhood of z_* ([14, Lemma 6.4 & 6.5]). From these observations, given an
initial point that is sufficiently close to z_*, a gradient descent procedure on F (with an additional
regularization term to keep the intermediate points incoherent) converges to z_* and exact recovery
is obtained. The singular value decomposition of a trimmed version of the observed matrix A_Ω can
give us the initial point that ensures convergence. The readers are referred to [14] for details.
From [14], let G(U, V) = Σ_{i=1}^{m} G_1(||U^{(i)}||^2 / C_inc) + Σ_{i=1}^{n} G_1(||V^{(i)}||^2 / C_inc), where U^{(i)}
denotes the i-th row of U (and similarly for V), G_1(x) = 0 if x ≤ 1
and G_1(x) = e^{(x-1)^2} - 1 otherwise; C_inc is a constant depending on the incoherence assumptions.
We consider the regularized version of F: F̃(U, V) = F(U, V) + ρ G(U, V), where the weight ρ is chosen
appropriately so that U and V remain incoherent during the execution of the algorithm. We can see
that z_* is also the minimum of F̃. We will now show that the scaled gradients of F̃ are well-defined
during the iterations, that they are indeed descent directions of F̃, and that they vanish only at z_*. As a result,
the scaled-gradient-based methods can inherit all the convergence results in [14].
First, S must be non-singular during the iterations for the scaled gradients to be well-defined. As a
corollary of Lemma 6.4 in [14], the extreme singular values σ_min and σ_max of any intermediate S are bounded by
the extreme singular values σ_min^* and σ_max^* of Σ_*: σ_max ≤ 2σ_max^* and σ_min ≥ (1/2)σ_min^*. The second
inequality implies that S is well-conditioned during the iterations.
The scaled gradient is a descent direction of F̃, as a direct result of the fact that it is indeed the gradient of F̃ based on a non-canonical metric. Moreover, by Lemma 6.5 in [14],
||grad F̃(z)||^2 ≥ C n ε^2 (σ_min^*)^4 d(z, z_*)^2 for some constant C, where ||·|| and d(·, ·) are the canonical
norm and distance on the Grassmann manifold, respectively. Based on this, a similar lower bound on
||grads F̃|| can be derived. Let D_1 = S S^T and D_2 = S^T S be the scaling matrices. Then,
||grads F̃(z)||^2 = ||grad F̃_U(z) D_1^{-1}||_F^2 + ||grad F̃_V(z) D_2^{-1}||_F^2
               ≥ σ_max^{-2} (||grad F̃_U(z)||_F^2 + ||grad F̃_V(z)||_F^2)
               ≥ (2σ_max^*)^{-2} ||grad F̃(z)||^2
               ≥ (2σ_max^*)^{-2} C n ε^2 (σ_min^*)^4 d(z, z_*)^2 = C (σ_min^*)^4 (2σ_max^*)^{-2} n ε^2 d(z, z_*)^2.
Therefore, the scaled gradients vanish only at z_*, which means the scaled-gradient descent procedure
must converge to z_*, which is the exact solution [3].
5
Experiments and results
The proposed algorithms were implemented in Matlab with some mex-routines to perform matrix
multiplications with sparse masks. For synthetic data, we consider two cases: (1) fully random
low-rank matrices, A = randn(m, r) * randn(r, n) (in Matlab notation), whose singular values
tend to be roughly the same; (2) random low-rank matrices with chosen singular values, obtained by letting
U = qf(randn(m, r)), V = qf(randn(n, r)), and A = U S V^T where S is a diagonal matrix with
chosen singular values. The initializations of all methods are based on the SVD of A_Ω.
First, we illustrate the improvement of scaled gradients over canonical gradients for steepest descent
and conjugate gradient methods on 5000 × 5000 matrices with rank 5 (Figure 1). Note that Canon-Grass-Steep is OptSpace with our implementation. In this experiment, S_i is obtained exactly using
(6). The time needed for each iteration is roughly the same for all methods so we only present the
results in terms of iteration counts. We can see that there are some small improvements for the fully
random case (Figure 1a) since the singular values are roughly the same. The improvement is more
substantial for matrices with larger condition numbers (Figure 1b).
[Figure 1: Log-RMSE vs. iteration count for Canon-Grass-Steep, Canon-Grass-CG, Scaled-Grass-Steep, and Scaled-Grass-CG on 5000×5000, rank-5 matrices with 1.0% observed entries. Panel (a): singular values [4774, 4914, 4979, 5055, 5146]; panel (b): singular values [1000, 2000, 3000, 4000, 5000].]
Figure 1: Log-RMSE for fully random matrix (a) and random matrix with chosen spectrum (b).
Now, we compare the relaxed version of the scaled conjugate gradient which uses (5) to compute S_i
(ScGrass-CG) to LMaFit [23], Riemann-CG [22], RTRMC2 [6] (trust region method with second
order information), SVP [12] and GROUSE [5] (Figure 2). These methods are also implemented in
Matlab with mex-routines similar to ours, except for GROUSE, which is entirely in Matlab (indeed,
GROUSE does not use sparse matrix multiplication as the other methods do). The subspace iteration
method and the relaxed version of scaled steepest descent converge similarly to LMaFit, so we omit
them in the graph. Note that each iteration of GROUSE in the graph corresponds to one pass over
the matrix. It does not have exactly the same meaning as one iteration of other methods and is
much slower with its current implementation. We use the best step sizes that we found for SVP
and GROUSE. In terms of iteration counts, we can see that for the fully random case (upper row),
RTRMC2 is the best while ScGrass-CG and Riemann-CG converge reasonably fast. However, each
iteration of RTRMC2 is slower, so in terms of time, ScGrass-CG and Riemann-CG are the fastest in
our experiments. When the condition number of the matrix is higher, ScGrass-CG converges fastest
both in terms of iteration counts and execution time.
Finally, we test the algorithms on Jester-1 and MovieLens-100K datasets which are assumed to
be low-rank matrices with noise (SVP and GROUSE are not tested because their step sizes need
to be appropriately chosen). Similarly to previous work, for the Jester dataset we randomly select 4000 users and randomly withhold 2 ratings for each user for testing. For the MovieLens
dataset, we use the common dataset prepared by [16], and keep 50% for training and 50% for
testing. We run 100 different randomizations of Jester and 10 randomizations of MovieLens and
average the results. We stop all methods early, when the change of RMSE is less than 10^{-4}, to
avoid overfitting. All methods stop well before one minute. The Normalized Mean Absolute Errors
(NMAEs) [13] are reported in Table 1. ScGrass-CG is the relaxed scaled CG method and ScGrass-CG-Reg is the exact scaled CG method using a spectral-regularization version of F proposed in
[Figure 2: four panels of Log-RMSE for 10000×10000, rank-10 matrices with 0.5% observed entries, comparing GROUSE, SVP, LMaFit, RTRMC2, Riemann-CG, and ScGrass-CG against iteration count (left column) and time in seconds (right column). The upper row uses singular values [9612, 9717, 9806, 9920, 9987, 10113, 10128, 10226, 10248, 10348]; the lower row uses singular values [1000, 2000, ..., 10000].]
Figure 2: Log-RMSE. Upper row is fully random, lower row is random with chosen singular values.
Rank   ScGrass-CG   ScGrass-CG-Reg   LMaFit   Riemann-CG   RTRMC2
5      0.1588       0.1588           0.1588   0.1591       0.1588
7      0.1584       0.1584           0.1581   0.1584       0.1583
5      0.1808       0.1758           0.1828   0.1781       0.1884
7      0.1832       0.1787           0.1836   0.1817       0.2298

Table 1: NMAE on Jester dataset (first 2 rows) and MovieLens 100K (last 2 rows). NMAEs for a random guesser
are 0.33 on Jester and 0.37 on MovieLens 100K.
[13]: F̃(U, V) = min_S (1/2)(||P_Ω(U S V^T - A)||_F^2 + λ||S||_F^2). All methods perform similarly and
demonstrate overfitting when the rank is 7 for MovieLens. We observe that ScGrass-CG-Reg suffers the
least from overfitting thanks to its regularization. This shows the importance of regularization for
noisy matrices and motivates future work in this direction.
6
Conclusion and future work
The gradients obtained from a scaled metric on the Grassmann manifold can result in improved
convergence of gradient methods on matrix manifolds for matrix completion while maintaining
good global convergence and exact recovery guarantees. We have established a connection between
scaled gradient methods and subspace iteration method for matrix completion. The relaxed versions
of the proposed gradient methods, adapted from the subspace iteration, are faster than previously
discussed algorithms, sometimes much faster depending on the conditionining of the data matrix.
In the future, we will investigate if these relaxed versions achieve similar performance guarantees.
We are also interested in exploring ways to regularize the relaxed versions to deal with noisy data.
The convergence condition of OptSpace depends on κ(A)^6, and weakening this dependency for the
proposed algorithms is also an interesting future direction.
Acknowledgments
This work was supported by NSF grants DMS-0810938 and DMR-0940218.
References
[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2008.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 17-24, 2007.
[3] L. Armijo. Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics, 16(1):1-3, 1966.
[4] J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel. Adaptively preconditioned GMRES algorithms. SIAM J. Sci. Comput., 20(1):243-269, December 1998.
[5] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Proceedings of Allerton, September 2010.
[6] N. Boumal and P.-A. Absil. RTRMC: A Riemannian trust-region method for low-rank matrix completion. In NIPS, 2011.
[7] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
[8] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion, 2009.
[9] P. Chen and D. Suter. Recovering the Missing Components in a Large Noisy Low-Rank Matrix: Application to SFM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1051-1063, 2004.
[10] W. Dai, E. Kerman, and O. Milenkovic. A geometric approach to low-rank matrix completion. IEEE Transactions on Information Theory, 58(1):237-247, 2012.
[11] A. Edelman, T. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20:303-353, 1998.
[12] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, pages 937-945, 2010.
[13] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 952-960, 2009.
[14] R. H. Keshavan, S. Oh, and A. Montanari. Matrix completion from a few entries. CoRR, abs/0901.3150, 2009.
[15] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program., 128(1-2):321-353, 2011.
[16] B. Marlin. Collaborative filtering: A machine learning perspective, 2004.
[17] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res., 11:2287-2322, August 2010.
[18] B. Recht. A simpler approach to matrix completion. CoRR, abs/0910.0651, 2009.
[19] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning (ICML), pages 713-719. ACM, 2005.
[20] Y. Saad. Numerical Methods for Large Eigenvalue Problems, classics edition. SIAM, Philadelphia, PA, 2011.
[21] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In 20th International Conference on Machine Learning, pages 720-727. AAAI Press, 2003.
[22] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. Technical report, Mathematics Section, École Polytechnique Fédérale de Lausanne, 2011.
[23] Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion using a non-linear successive over-relaxation algorithm. In CAAM Technical Report. Rice University, 2010.
Unsupervised Template Learning for
Fine-Grained Object Recognition
Shulin Yang
University of Washington, Seattle, WA 98195
[email protected]
Jue Wang
Adobe ATL Labs, Seattle, WA 98103
[email protected]
Liefeng Bo
ISTC-PC Intel labs, Seattle, WA 98195
[email protected]
Linda Shapiro
University of Washington, Seattle, WA 98195
[email protected]
Abstract
Fine-grained recognition refers to a subordinate level of recognition, such as recognizing different species of animals and plants. It differs from recognition of
basic categories, such as humans, tables, and computers, in that there are global
similarities in shape and structure shared across different categories, and the differences are in the details of object parts. We suggest that the key to identifying
the fine-grained differences lies in finding the right alignment of image regions
that contain the same object parts. We propose a template model for the purpose, which captures common shape patterns of object parts, as well as the co-occurrence relation of the shape patterns. Once the image regions are aligned,
extracted features are used for classification. Learning of the template model is
efficient, and the recognition results we achieve significantly outperform the state-of-the-art algorithms.
1
Introduction
Object recognition is a major focus of research in computer vision and machine learning. In the last
decade, most of the existing work has been focused on basic recognition tasks: distinguishing different categories of objects, such as table, computer and human. Recently, there is an increasing trend
to work on subordinate-level or fine-grained recognition that categorizes similar objects, such as
different types of birds or dogs, into their subcategories. The subordinate-level recognition problem
differs from the basic-level tasks in that the object differences are more subtle. Fine-grained recognition is generally more difficult than basic-level recognition for both humans and computers, but
it will be widely useful if successful in applications such as fisheries (fish recognition), agriculture
(farm animal recognition), health care (food recognition), and others.
Cognitive research study has suggested that basic-level recognition is based on comparing the shape
of the objects and their parts, whereas subordinate-level recognition is based on comparing appearance details of certain object parts [1]. This suggests that finding the right correspondence of object
parts is of great help in recognizing fine-grained differences. For basic-level recognition tasks, spatial pyramid matching [2] is a popular choice that aligns object parts by partitioning the whole image
into multiple-level spatial cells. However, spatial pyramid matching may not be the best choice for
fine-grained object recognition, since falsely aligned object parts can lead to inaccurate comparisons,
as shown in Figure 1.
This work is intended to alleviate the limitations of spatial pyramid matching. Our key observation is that in a fine-grained task, different object categories share commonality in their shape or
structure, and the alignment of object parts can be greatly improved by discovering such common
Figure 1: Region alignment by spatial pyramid matching and our approach. Spatial pyramid matching partitions the whole image into regions, without considering visual appearance. A 4 × 4 partition
leads to misalignment of parts of the birds while a coarse partition (i.e., 2 × 2) includes irrelevant features. Our approach aims to align the image regions containing the same object parts (red squares).
shape patterns. For example, bird images from different species may have similar shape patterns in
their beaks, tails, feet or bodies. The commonality usually is a part of the global shape, and can be
observed in bird images across different species and in different poses. This motivates us to decompose a fine-grained object recognition problem into two sub-problems: 1) aligning image regions
that contain the same object part and 2) extracting image features within the aligned image regions.
To this end, we propose a template model to align object parts. In our model, a template represents
a shape pattern, and the relationship between two shape patterns is captured by the relationship between templates, which reflects the probability of their co-occurrence in the same image. This model
is learned using an alternating algorithm, which iterates between detecting aligned image regions,
and updating the template model. Kernel descriptor features [3, 4] are then extracted from image
regions aligned by the learned templates.
Our model is evaluated on two benchmark datasets: the Caltech-UCSD Bird200 and the Stanford
Dogs. Our experimental results suggest that the proposed template model is capable of detecting
image regions that correspond to meaningful object parts, and our template-based algorithm outperforms the state-of-the-art fine-grained object recognition algorithms in terms of accuracy.
2
Related Work
An increasing number of papers have focused on fine-grained object recognition in recent years
[5, 6, 1, 7, 8, 9]. In [5], multiple kernel learning is used to combine different types of features
and serves as a baseline fine-grained recognition algorithm, and human help is used to discover
useful attributes. In [9], a random forest is proposed for fine-grained object recognition that uses
different depths of the tree to capture dense spatial information. In [6], a multi-cue combination
is used to build discriminative compound words from primitive cues learned independently from
training images. In [10], bagging is used to select discriminative ones from the randomly generated
templates. In [11], image regions are considered as discriminative attributes and CRF is used to
learn the attributes on training set with human in the loop. Pose pooling [12] adapted Poselets [13]
to fine-grained recognition problems and learned different poses from fully annotated data. Though
deformable parts model [14] is powerful for object detection, it might be insufficient to capture the
flexibility and variability in fine-grained tasks considered here [15].
3
Unsupervised Learning of Template Model
This section provides an overview of our fine-grained object recognition approach. We discuss the
framework of our template-based object recognition, describe our template model, and propose an
alternating algorithm for learning model parameters.
3.1
Template-Based Fine-Grained Object Recognition
Over the last decades, computer vision researchers have done a lot of work in designing effective
and efficient patch-level features for object recognition [16, 17, 18, 19, 20, 21, 22]. SIFT is one
Figure 2: The framework for fine-grained recognition: the recognition pipeline goes from left to
right. In the training stage, a template model is learned from training images using Algorithm 1.
In the recognition stage, the learned templates are applied to each test image, resulting in aligned
image regions. Then image-level feature vectors are extracted as the concatenation of features of all
aligned regions. Finally, a linear SVM is used for recognition.
of the most successful features, allowing an image or object to be represented as a bag of SIFT
features [16]. However, the ability of such methods is somewhat limited. Patch-level features are
descriptive only within spatial context constraints. For example, a cross shape can be a symbol
for the Red Cross, Christian religion, or Swiss Army products, depending on the larger spatial
context of where it is detected. It is hard to interpret the meaning of patch-level features without
considering such spatial contexts. This is even more important for a fine-grained recognition task
since common features can be shared by instances from both the same and different object classes.
Spatial pyramid models [2, 20, 23] align sub-images/parts that are spatially close by partitioning the
whole images into multi-level spatial cells. However the alignments produced by the spatial pyramid
are not necessarily correct, since no displacements are allowed in the model (Figure 1).
Here, we use a template model to find correctly-aligned regions from different images, so that comparisons between them are more meaningful. A template represents one type of common shape
pattern of an object part, while an object part can be represented by several different templates.
Certain shape patterns of two object parts (for instance, a head facing the left and a tail pointing to
the right) can frequently be observed in the same image. Our template model is designed to capture
both properties of templates and their relationships among templates. Model parameters are learned
from a collection of unlabeled images in an unsupervised manner. See sections 3.2 and 3.3 for more
details.
Once the templates and their relationship are learned, the fine-grained differences can be aligned
based on these quantities. The framework of our template based fine-grained object recognition is
illustrated in Figure 2. In the learning stage, Algorithm 1 is used to find the templates. In the recognition stage (from left to right in Figure 2), aligned image regions are extracted from each image
using our template detection algorithm. Color-based, normalized color-based, gradient-based, and
LBP-based kernel descriptors followed by EMK [4] are then applied to generate feature representations for each region. The image-level feature is the concatenation of feature representations of all
detected regions from the corresponding image. Finally, a linear SVM [24] is used for recognition.
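As a rough illustration of this pipeline, the sketch below (ours, not the authors' code) assumes per-region features have already been produced by template detection; LinearSVC stands in for the linear SVM of [24], and all names and shapes are illustrative.

```python
# Minimal sketch of the recognition stage, assuming precomputed region features.
import numpy as np
from sklearn.svm import LinearSVC

def image_feature(region_features):
    # The image-level feature is the concatenation of the per-region
    # feature vectors, one per detected template region.
    return np.concatenate(region_features)

def train_recognizer(train_regions, labels):
    # train_regions: list over images; each entry is a list of region vectors.
    X = np.stack([image_feature(r) for r in train_regions])
    return LinearSVC().fit(X, labels)
```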
3.2
Template Model
We start by defining a template model that represents the common shape patterns of object parts
and their relationships. A template is an entity that contains features that will match image features for region detection. Let M = {T, W} be a model that contains a group of templates
$T = \{T_1, T_2, ..., T_K\}$ and their co-occurrence relationships $W = \{w_{11}, w_{12}, ..., w_{KK}\}$, where K
is the number of templates, and wij is between 0 and 1. When wij = 0, the two templates Ti and Tj
have no co-occurrence relationship.
When a template model is matched to a given image, not all templates within the model are necessarily used. This is because different templates can be associated with the same object part, but
one part only occurs at most once in an image. Our model captures this intuition by making the
templates inactive that do not match images very well. To model appearance properties of templates
and their relationships, the score function between templates and a given image I t should capture
three aspects: 1) fitness, which computes the similarity of the selected templates and image regions
that are most highly matched to them; 2) co-occurrence, which encourages selecting templates that
have a high chance of co-occurring in the same image; and 3) diversity, which gives preference to
having the selected templates match separated image regions.
Fitness: We define a matching score sf (Ti , xIi ) to measure the similarity between a template Ti and
an image region at location xIi in image I
$s^f(T_i, x_i^I) = 1 - \|T_i - R(x_i^I)\|^2, \quad \text{s.t. } |x_i^I - \bar{x}_i^I| \le \delta$  (1)
where $R(x_i^I)$ represents the features of the sub-image in I centered at the location $x_i^I$; $\bar{x}_i^I$ is an initial location associated with the template $T_i$ and $\delta$ is an upper bound of location variation. Both $x_i^I$ and $\bar{x}_i^I$ are measured by their relative location in image I. If $|x_i^I - \bar{x}_i^I| > \delta$, the location is too far from the initial location, and the score is set to zero.
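A small, self-contained rendering of this score (our sketch; δ = 24 pixels is the setting reported later in Section 4.1):

```python
import numpy as np

def match_score(T_i, R_x, x, x_init, delta=24):
    """s^f(T_i, x): 1 minus the squared feature distance, gated by the
    location constraint |x - x_init| <= delta from Eq. (1)."""
    if np.abs(np.asarray(x) - np.asarray(x_init)).max() > delta:
        return 0.0  # location drifted too far from the template's initial location
    return 1.0 - float(np.sum((np.asarray(T_i) - np.asarray(R_x)) ** 2))
```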
The features describing R(xIi ) should be able to capture common properties of object parts. Since
the same type of part from different objects usually share similar shapes, we introduce edge kernel
descriptors to capture this common statistic. We first run the Berkeley edge detector [25] to compute
the edge map of an image, and then treat it as a grayscale image and extract color kernel descriptors [3] over it. Using these descriptors, we compute sf (Ti , xIi ); the higher its value, the better is
the match.
Summing up the matching score sf (Ti , xIi ) for all templates that are used for image I, we obtain a
fitness term
$S^f(T, X^I, V^I) = \sum_{i=1}^{K} v_i^I s^f(T_i, x_i^I)$  (2)
where $V^I = \{v_1^I, ..., v_K^I\}$ represents the selected template subset for image I, $v_i^I = 1$ means that the template $T_i$ is used for image I, and $X^I = \{x_1^I, ..., x_K^I\}$ represents the locations of all templates on image I. The more templates that are used, the higher the score is.
Co-occurrence: With the observation that certain shape patterns of two or more object parts coexist
frequently in the same image, it is desired that templates that have a high chance of co-occurring
are selected together. For a given image, the co-occurrence term is used to encourage selecting two
templates together, which have a large relation parameter $w_{ij}$. Meanwhile, an L1 penalty term is used to ensure sparsity of the template relation.
$S^c(W, V^I) = \sum_{i=1}^{K} \sum_{j=1}^{K} v_i^I v_j^I w_{ij} - \lambda \sum_{i=1}^{K} \sum_{j=1}^{K} |w_{ij}|, \quad \text{s.t. } 0 \le w_{ij} \le 1$  (3)
Diversity: This term is used to enforce spatial relationship constraints on the locations of selected
templates. In particular, their locations should not be too close to each other, because we want the
learned templates to be diverse, so that they can cover a large range of image shape patterns. So this
term sums up a location penalty on the templates,
$S^d(X^I, V^I) = -\sum_{i=1}^{K} \sum_{j=1}^{K} v_i^I v_j^I d(x_i^I, x_j^I)$  (4)
where $d(x_i^I, x_j^I)$ is the location penalty function. We have $d(x_i^I, x_j^I) = \infty$ if $|x_i^I - x_j^I| < \gamma$ and $d(x_i^I, x_j^I) = 0$ otherwise; $\gamma$ is a distance parameter.
Summing up all three terms defined above: fitness, co-occurrence and diversity terms for all images
in the image set D, we have the overall score function between templates and images
$S(T, W, X, V, D) = \sum_{I \in D} \left( S^f(T, X^I, V^I) + S^c(W, V^I) + S^d(X^I, V^I) \right)$  (5)
where $V = \{V^1, V^2, ..., V^{|D|}\}$ are template indicators, $X = \{X^1, X^2, ..., X^{|D|}\}$ are template locations, and $|D|$ is the number of images in the set D. The templates and their relations are learned by
maximizing the score function S(T, W, X , V, D) on an image collection D.
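The per-image portion of this score can be written down compactly. The sketch below is our reading of Eqs. (2)-(4), assuming best-match scores and pairwise location distances are precomputed, and treating the i = j self-pair as excluded from the diversity penalty:

```python
import numpy as np

def image_score(match, v, W, dist, lam, gamma):
    """match[i]: s^f for template i at its chosen location; v: 0/1 selections;
    W: K x K relation weights; dist[i, j]: distance between chosen locations."""
    v = np.asarray(v, dtype=float)
    fitness = v @ np.asarray(match)              # Eq. (2)
    cooccur = v @ W @ v - lam * np.abs(W).sum()  # Eq. (3)
    pair = np.outer(v, v).astype(bool)
    np.fill_diagonal(pair, False)
    if (np.asarray(dist)[pair] < gamma).any():   # Eq. (4): infinite penalty
        return -np.inf
    return fitness + cooccur
```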
Algorithm 1 Template Model Learning
input: Image set D, maximum iteration maxiter, threshold ε
output: Template model M = {T, W}.
Initialize {T_1, T_2, ..., T_K} with training data; initialize w_ij = 0; iter = 0
for iter < maxiter do
  update X^I, V^I for all I ∈ D based on equation (6)
  update T by: T_i = Σ_{I∈D} v_i^I R(x_i^I) / Σ_{I∈D} v_i^I (as in (8))
  update W to optimize (9)
  if Σ_i |ΔT_i| < ε then
    break
  end if
  iter ← iter + 1
end for
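For concreteness, the following is a schematic NumPy rendering of Algorithm 1 under strong simplifications of our own: candidate regions are given as a fixed feature array, detection just picks each template's best-scoring candidate (ignoring co-occurrence and diversity), and the relation step uses the direct solution of Eq. (9) discussed later. It illustrates the alternation, not the paper's implementation.

```python
import numpy as np

def learn_templates(feats, T, lam=0.1, max_iter=20, tol=1e-3):
    """feats: (n_images, n_locations, dim) candidate region features.
    T: (K, dim) initial templates. Returns refined templates and relations W."""
    n = feats.shape[0]
    K = T.shape[0]
    W = np.zeros((K, K))
    for _ in range(max_iter):
        # Simplified detection: best location per template, all templates kept.
        scores = 1.0 - ((feats[:, None, :, :] - T[None, :, None, :]) ** 2).sum(-1)
        X = scores.argmax(axis=2)                   # (n, K) chosen locations
        V = np.ones((n, K))                         # template indicators
        # Template update, Eq. (8): average of the assigned region features.
        picked = feats[np.arange(n)[:, None], X]    # (n, K, dim)
        T_new = (V[..., None] * picked).sum(0) / np.maximum(V.sum(0), 1)[:, None]
        # Relation update, Eq. (9): each w_ij has a linear per-entry objective.
        W = ((V.T @ V) > lam * n).astype(float)
        done = np.abs(T_new - T).sum() < tol        # Algorithm 1 stopping rule
        T = T_new
        if done:
            break
    return T, W
```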
3.3
Template Learning
We use an alternating algorithm to optimize (5). The proposed algorithm iterates among three steps:
• updating X, V (template detection),
• updating T (template feature learning), and
• updating W (template relation learning).
Template detection: Given a template model {T, W}, the goal of template detection is to find the
template subset V and their locations X for all images to maximize equation (5). The second term in
$S^c$ in equation (3) is a constant given W. So maximizing (5) is reduced to maximizing the following term for each image I respectively:
$\max_{X^I, V^I} \sum_{i=1}^{K} v_i^I s^f(T_i, x_i^I) + \sum_{i=1}^{K} \sum_{j=1}^{K} v_i^I v_j^I \left( w_{ij} - d(x_i^I, x_j^I) \right)$  (6)
The above optimization problem is NP-hard, so a greedy approach is used: the algorithm starts
with an empty set, first calculates the scores for all templates, and then selects the template with
the largest score. Fixing the locations of all previously selected templates, the next template and its
location can be chosen in a similar manner. The procedure is repeated until the objective function (6) no longer increases.
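A sketch of that greedy step, under the simplifying assumption (ours) that each template's candidate location is fixed in advance, so only the subset is grown greedily:

```python
import numpy as np

def greedy_detect(match, W, dist, gamma):
    """Greedily add templates while the Eq. (6) objective improves.
    match[i]: match score at template i's best location; dist: K x K distances."""
    K = len(match)
    selected = []
    def gain(i):
        if any(dist[i, j] < gamma for j in selected):
            return -np.inf                  # diversity: infinite location penalty
        return match[i] + sum(W[i, j] + W[j, i] for j in selected)
    while True:
        rest = [i for i in range(K) if i not in selected]
        if not rest:
            break
        best = max(rest, key=gain)
        if gain(best) <= 0:                 # objective would no longer increase
            break
        selected.append(best)
    return selected
```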
Template feature learning: The goal of template feature learning is to optimize the templates T
given the relation parameters W and current template detection results V, X . When maximizing (5),
$S^d$ and $S^c$ are all constants given V, X and W. The optimal template $T_i$ can be found by maximizing
$\max_{T_i} \sum_{I \in D} v_i^I \left( 1 - \|T_i - R(x_i^I)\|^2 \right)$  (7)
which can be solved by the closed-form equation
$T_i = \sum_{I \in D} v_i^I R(x_i^I) \Big/ \sum_{I \in D} v_i^I$  (8)
Eq (8) means that the template Ti is updated by the average of the features of all sub-images in D
that are detected by the i-th template.
Template relation learning: The goal here is to assign values to the relation parameters W given
all other parameters (T, V and X ) for the purpose of maximizing equation (5). Since only W are
optimization parameters, $S^f$ and $S^d$ are both constants. Optimizing (5) is simplified as maximizing
$\max_{W} \sum_{i=1}^{K} \sum_{j=1}^{K} w_{ij} \sum_{I \in D} v_i^I v_j^I - \lambda |D| \sum_{i=1}^{K} \sum_{j=1}^{K} |w_{ij}|$  (9)
An L1 regularization solver [26] is used for optimizing this formula.
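Under the box constraint 0 ≤ w_ij ≤ 1 from Eq. (3), the objective in Eq. (9) is linear in each entry, with coefficient Σ_I v_i^I v_j^I − λ|D|, so each entry can also be set directly. This closed form is our own reading of the objective; the paper uses the general solver of [26].

```python
import numpy as np

def update_relations(V, lam):
    """V: (n_images, K) 0/1 template indicators over the image set.
    Sets w_ij = 1 exactly when its linear coefficient in Eq. (9) is positive."""
    n = V.shape[0]
    C = V.T @ V            # C[i, j] = sum over images of v_i * v_j
    return (C > lam * n).astype(float)
```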
Figure 3: Object parts (black squares) detected by learned templates T1–T5. Each line shows the parts found by one learned template. The sub-image within the black square has the highest matching score for a given image. Meaningful parts are successfully detected, such as heads, backs and tails.
The whole learning procedure is summarized in Algorithm 1. The algorithm starts by initializing K templates with various sizes and initial locations that are evenly spaced in an image. In each iteration, template detection, template feature learning, and template relation learning are alternated. The iteration continues until the total change of the templates $\{T_i\}_{i=1}^K$ is smaller than a threshold ε.
4
Experiments
We tested our model on two publicly available datasets: Caltech-UCSD Bird-200 and Stanford Dog.
These two datasets are the standard benchmarks to evaluate fine-grained object recognition algorithms. Our experiments suggest that the proposed template model is able to detect the meaningful
parts and outperforms the previous work in terms of accuracy.
4.1
Features and Settings
We use kernel descriptors (KDES) to capture low-level image statistics: color, shape and texture [3].
In particular, we use four types of kernel descriptors: color-based, normalized color-based, gradient-based, and local-binary-pattern-based descriptors¹. Color and normalized color kernel descriptors are extracted over RGB images, and gradient and shape kernel descriptors are extracted over grayscale images transformed from the original RGB images. Following the standard parameter setting, we compute kernel descriptors on 16 × 16 image patches over dense regular grids with spacing of 8 pixels. For template relation learning, we use a publicly available L1 regularization solver². All images are resized to be no larger than 300 × 300 with the height/width ratio preserved.
To learn the template model, we use 34 templates with different sizes. The template size is measured
by its ratio to the original image size, such as 1/2 or 1/3. Our model has 9 templates with size 1/2
and 25 with size 1/3. The initial locations of templates with each template size are evenly spaced
grid points in an image. We observe that the learning algorithm converges very fast and usually becomes stable around 15–20 iterations. The sparsity level parameter λ is set to 0.1. Other model parameters are δ = 24 and γ = 32 pixels. These parameters are optimized by performing cross-validation on the training set of the Bird dataset. The same parameter setting is then applied to the
¹ http://www.cs.washington.edu/ai/Mobile_Robotics/projects/kdes/
² http://www.di.ens.fr/~mschmidt/Software/L1General.html
Table 1: The table on the left shows the classification accuracies (%) obtained by templates with different sizes and numbers on a subset of the full dataset. The accuracy improves as the template number increases at the beginning, and becomes saturated when enough templates are used. With the best template number choices, combinations of templates with different sizes are tested. The table on the right shows the accuracies (%) achieved by different combinations on the full dataset. The combination of 9 templates with size 1/2 and 25 templates with size 1/3 performs best (selected using the training set).
Acc   T^1    T^{1/2}   T^{1/3}   T^{1/4}
1     46.1   39.6      33.2      32.1
4     46.1   46.8      42.9      37.5
9     46.1   50.7      41.8      40.4
16    46.1   50.7      43.9      40
25    46.1   48.9      44.3      40.4
36    46.1   47.5      44.3      40

Combination                                Acc
9T^{1/2}                                   27.1
T^1 + 9T^{1/2}                             27.4
9T^{1/2} + 25T^{1/3}                       28.2
T^1 + 9T^{1/2} + 25T^{1/3} + 25T^{1/4}     28.2
Table 2: Effect of the sparsity parameter λ: the best accuracy is achieved when λ = 0.1.
λ          0       0.001   0.005   0.01    0.05    0.1    0.5   1
Accuracy   48.57   48.93   49.28   49.29   49.64   50.7   50    48.57
Dog dataset. On each region detected by templates, we compute template-level features using EMK
features [4]. After obtaining these template-level features, we train a linear support vector machine
for fine-grained object recognition.
Notice that there is a slight difference between template detection in the learning phase and in the
recognition phase. In the learning phase, only a subset of templates are detected for each image.
This is because not all templates can be observed in all images, and each image usually contains
only a subset of all possible templates. But in the recognition phase, all templates are selected for
detection in order to avoid missing features.
4.2
Bird Recognition
Caltech-UCSD Bird-200 [8] is a commonly used dataset for evaluating fine-grained object recognition algorithms. The dataset contains 6033 images from 200 bird species in North America. In each
image, the bounding box of a bird is given. Following the standard setting [5], 15 images from each
species are used for training and the rest for testing.
Template learning: Figure 3 visualizes the rectangles/parts detected by the learned templates. The
feature in each template consists of a vector of real numbers. As can be seen, the learned templates
successfully find the meaningful parts of birds, though the appearances of these parts are very different. For examples, the head parts detected by T1 have quite different colors and textures, suggesting
the robustness of the proposed template model.
Sparsity parameter λ: We tested different values for the sparsity level parameter λ on a subset of 20 categories (from the training set) for efficiency. If λ = 0, there is no penalty on the relation parameters W, thus all weights w_ij are set to 1 when the template model is learned. If λ ≥ 1, the penalty on the relation parameters is large enough that all w_ij are set to 0 after learning. In both these cases, the template models are equivalent to a simplified model without the co-occurrence term in (3). If λ is a number between 0 and 1, test results in Table 2 show that the best accuracy is achieved when λ = 0.1.
Template size and number choices: We tested the effect of the number and size of the templates
on the recognition accuracy. All the results are obtained on a subset of 20 categories for efficiency.
When the template size is 1, the accuracy is the same with an arbitrary template number, because
template detection will return the same results. For templates whose size is smaller than 1, the results
obtained with different numbers of templates are shown in Table 1 left. Based on these results,
we selected a template number for each template size for further experiments: one template with
size 1, 9 templates with size 1/2, 25 templates with size 1/3, and 25 templates with size 1/4. The
results obtained by the combinations of templates with different sizes (each with its optimal template
number) on the full dataset are shown in Table 1 right. The highest accuracy is achieved by the
Table 3: Comparisons on Caltech-UCSD Bird-200. Our template model is compared to the recently
proposed fine-grained recognition algorithms. The performance is measured in terms of accuracy.
Method     MKL [5]   LLC [9]   Rand-forest [9]   Multi-cue [6]   KDES [3]   This work
Accuracy   19.0      18.0      19.2              22.4            26.4       28.2
Table 4: Comparisons on Stanford Dog Dataset. Our approach is compared to a baseline algorithm
in [27] and KDES with spatial pyramid. We give the results of the proposed template model with
two types of templates: edge templates and texture templates.
Methods        SIFT [27]   KDES [3]   Edge templates   Texture templates
Accuracy (%)   22.0        36.0       38.0             36.9
combination of 9 templates with size 1/2 and 25 templates with size 1/3. Our further experiments
suggest that adding more templates only slightly improves the recognition accuracy.
Running time: Our algorithm is efficient. With a non-optimized version of the algorithm, in the
training stage, each iteration takes 2–3 minutes to update. In the test stage, it takes 3–5 seconds
to process each image, including template detection, feature extraction and classification. This is
fast enough for an on-line recognition task.
Comparisons with the state-of-the-art algorithms: We compared our model with four recently
published algorithms for fine-grained object recognition: multiple kernel learning [5], random forest [9], LLC [9], and multi-cue [6] in Table 3. We also compared our model to KDES [3] with spatial
pyramid, a strong baseline in terms of accuracy.
We observe that KDES with spatial pyramid works well on this dataset, and the proposed template
model works even better. The template model achieves 28.2% accuracy, about 6 percentage points higher than the best results reported in previous work and about 2 percentage points higher than KDES with spatial
pyramid. This accuracy is comparable with the recently proposed pose pooling approach [12] where
labeled parts are used to train and test models; this is not required for our template model.
4.3
Dog Recognition
The Stanford Dogs dataset is another benchmark dataset for fine-grained image categorization recently introduced in [27]. The dataset contains 20,580 images of 120 breeds of dogs from around
the world. Bounding boxes of dogs are provided for all images in the dataset. This dataset is a
good complement to the Caltech-UCSD Bird200 due to more images in each category: around 200
images per class versus 30 images per class in Bird200. Following the standard setting [27], 100
images from each category are used for training and the rest for testing.
Comparisons with the state-of-the-art algorithms: We compared our model with a baseline algorithm [27] and KDES with spatial pyramid on this dataset. For the dog dataset, we also tried using the local binary pattern KDES to learn templates instead of the edge KDES, due to the relatively consistent textures in dog images. Our experiments show that template learning with the edge KDES works better than with the local binary pattern KDES, suggesting that edge information is a stable cue for learning templates. Notice that the accuracy achieved by our template model is 16 percentage points higher than the best published results so far.
5
Conclusion
We have proposed a template model for fine-grained object recognition. The template model learns a
group of templates by jointly considering fitness, co-occurrence and diversity between the templates
and images, and the learned templates are used to align image regions that contain the same object
parts. Our experiments show that the proposed template model has achieved higher accuracy than the
state-of-the-art fine-grained object recognition algorithms on the two standard benchmarks: Caltech-UCSD Bird-200 and Stanford Dogs. In the future, we plan to learn features that are suitable for
detecting object parts and incorporate the geometric information into the template relationships.
References
[1] Farrell, R., Oza, O., Zhang, N., Morariu, V., Darrell, T., Davis, L.: Birdlets: subordinate
categorization using volumetric primitives and pose-normalized appearance. ICCV (2011)
[2] Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for
recognizing natural scene categories. CVPR (2006)
[3] Bo, L., Ren, X., Fox, D.: Kernel Descriptors for Visual Recognition. NIPS (2010)
[4] Bo, L., Sminchisescu, C.: Efficient match kernel between sets of features for visual recognition. NIPS (2009)
[5] Branson, S., Wah, C., Babenko, B., Schroff, F., Welinder, P., Perona, P., Belongie, S.: Visual
recognition with humans in the loop. ECCV (2010)
[6] Khan, F., van de Weijer, J., Bagdanov, A., Vanrell, M.: Portmanteau vocabularies for multi-cue
image representations. NIPS (2011)
[7] Wah, C., Branson, S., Perona, P., Belongie, S.: Interactive localization and recognition of
fine-grained visual categories. ICCV (2011)
[8] Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., Perona, P.: Caltech-ucsd
birds 200. Technical Report CNS-TR-201, Caltech (2010)
[9] Yao, B., Khosla, A., Fei-Fei, L.: Combining randomization and discrimination for fine-grained
image categorization. CVPR (2011)
[10] Yao, B., Bradski, G., Fei-Fei, L.: A codebook-free and annotation-free approach for fine-grained image categorization. CVPR (2012)
[11] Duan, K., Parikh, D., Crandall, D., Grauman, K.: Discovering localized attributes for fine-grained recognition. CVPR (2012)
[12] Zhang, N., Farrell, R., Darrell, T.: Pose pooling kernels for sub-category recognition. CVPR
(2012)
[13] Bourdev, L., Malik, J.: Poselets: body part detectors trained using 3D human pose annotations.
ICCV (2009)
[14] Felzenszwalb, P., Girshick, R., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine
Intelligence 32 (2010)
[15] Parkhi, O., Vedaldi, A., Zisserman, A., Jawahar, C.: Cats and dogs. CVPR (2012)
[16] Lowe, D.: Distinctive image features from scale-invariant keypoints. IJCV 60 (2004)
[17] Lee, H., Battle, A., Raina, R., Ng, A.: Efficient sparse coding algorithms. NIPS (2007)
[18] Yang, J., Yu, K., Gong, Y., Huang, T.: Linear spatial pyramid matching using sparse coding
for image classification. CVPR (2009)
[19] Wang, J., Yang, J., Yu, K., Lv, F., Huang, T., Guo, Y.: Locality-constrained linear coding for
image classification. CVPR (2010)
[20] Boureau, Y., Bach, F., LeCun, Y., Ponce, J.: Learning mid-level features for recognition.
CVPR (2010)
[21] Coates, A., Ng, A.: The importance of encoding versus training with sparse coding and vector
quantization. ICML (2011)
[22] Yu, K., Lin, Y., Lafferty, J.: Learning image representations from the pixel level via hierarchical sparse coding. CVPR (2011)
[23] Boureau, Y., Ponce, J.: A theoretical analysis of feature pooling in visual recognition. ICML
(2010)
[24] Chang, C., Lin, C.: LIBSVM: a library for support vector machines. (2001)
[25] Maire, M., Arbelaez, P., Fowlkes, C., Malik, J.: Using contours to detect and localize junctions
in natural images. CVPR (2008)
[26] Schmidt, M., Fung, G., Rosales, R.: Optimization methods for L1 -regularization. UBC Technical Report (2009)
[27] Khosla, A., Jayadevaprakash, N., Yao, B., Fei-Fei, L.: Novel dataset for fine-grained image
categorization. First Workshop on Fine-Grained Visual Categorization, CVPR (2011)
How They Vote:
Issue-Adjusted Models of Legislative Behavior
Sean M. Gerrish?
Department of Computer Science
Princeton University
Princeton, NJ 08540
[email protected]
David M. Blei
Department of Computer Science
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
We develop a probabilistic model of legislative data that uses the text of the bills to
uncover lawmakers' positions on specific political issues. Our model can be used
to explore how a lawmaker's voting patterns deviate from what is expected and how
that deviation depends on what is being voted on. We derive approximate posterior
inference algorithms based on variational methods. Across 12 years of legislative
data, we demonstrate both improvement in heldout predictive performance and the
model's utility in interpreting an inherently multi-dimensional space.
1
Introduction
Legislative behavior centers around the votes made by lawmakers. Capturing regularity in these votes,
and characterizing patterns of legislative behavior, is one of the main goals of quantitative political
science. Voting behavior exhibits enough regularity that simple statistical models, particularly ideal
point models, easily capture the broad political structure of legislative bodies. However, some
lawmakers do not fit neatly into the assumptions made by these models. In this paper, we develop
a new model of legislative behavior that captures when and how lawmakers vote differently than
expected.
Ideal point models assume that lawmakers and bills are represented as points in a latent space. A
lawmaker's (stochastic) voting behavior is characterized by the relationship between her position in
this space and the bill's position [1, 2, 3]. Given the data of how each lawmaker votes on each bill
(known as a roll call), we can use ideal point models to infer the latent position of each lawmaker. In
U.S. politics, these inferred positions reveal the commonly-known political spectrum: right-wing
lawmakers are at one extreme, and left-wing lawmakers are at the other. Figure 1 illustrates example
inferences from an ideal point model.
But there are some votes that ideal point models fail to capture. For example, Ronald Paul, Republican
representative from Texas, and Dennis Kucinich, Democratic representative from Ohio, are poorly
modeled by ideal points because they diverge from the left-right spectrum on issues like foreign
policy. Because some lawmakers deviate from their party on certain issues, their positions on these
issues are not captured by ideal point models.
To this end, we develop the issue-adjusted ideal point model, a latent variable model of roll-call
data that accounts for the contents of the bills that lawmakers are voting on. The idea is that each
lawmaker has both a general position and a sparse set of position adjustments, one for each issue.
The votes on a bill depend on a lawmaker's position, adjusted for the bill's content. The text of the
bill encodes the issues it discusses. Our model can be used as an exploratory tool for identifying
Figure 1: Traditional ideal points separate Republicans (red) from Democrats (blue).
exceptional voting patterns of individual legislators, and it provides a richer description of lawmakers'
voting behavior than the models traditionally used in political science.
In the following sections, we develop our model and describe an approximate posterior inference
algorithm based on variational methods. We analyze six Congresses (12 years) of legislative data
from the United States Congress. We show that our model gives a better fit to legislative data and
provides an interesting exploratory tool for analyzing legislative behavior.
2
Exceptional issue voting
We first review ideal point models of legislative roll call data and discuss their limitations. We then
present a model that accounts for how legislators vote on specific issues.
Modeling politics with ideal points.
Ideal point models are based on item response theory, a statistical theory that models how members
of a population judge a set of items. Applied to voting records, one-dimensional ideal point models
place lawmakers on an interpretable political spectrum. These models are widely used in quantitative
political science [3, 4, 5].
One-dimensional ideal point models posit an ideal point $x_u \in \mathbb{R}$ for each lawmaker u. Each bill d is characterized by its polarity $a_d$ and its popularity $b_d$.¹ The probability that lawmaker u votes "Yes" on bill d is given by the logistic regression
$p(v_{ud} = \text{yes} \mid x_u, a_d, b_d) = \sigma(x_u a_d + b_d),$  (1)
where $\sigma(s) = \frac{\exp(s)}{1 + \exp(s)}$ is the logistic function.² When the popularity of a bill $b_d$ is high, nearly everyone votes "Yes"; when the popularity is low, nearly everyone votes "No". When the popularity is near zero, the probability that a lawmaker votes "Yes" depends on how her ideal point $x_u$ interacts with bill polarity $a_d$. The variables $a_d$, $b_d$, and $x_u$ are usually assigned standard normal priors [3].
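As a toy numerical check of Equation (1) (our illustration, not the authors' code):

```python
import numpy as np

def vote_prob(x_u, a_d, b_d):
    """P(v_ud = yes) under the one-dimensional ideal point model, Eq. (1)."""
    return 1.0 / (1.0 + np.exp(-(x_u * a_d + b_d)))

print(vote_prob(-1.5, a_d=2.0, b_d=0.0))  # ~0.047: left lawmaker opposes
print(vote_prob(+1.5, a_d=2.0, b_d=0.0))  # ~0.953: right lawmaker supports
print(vote_prob(-1.5, a_d=0.0, b_d=4.0))  # ~0.982: popular bill, nearly unanimous
```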
Given a matrix of votes, we can infer the ideal point of each lawmaker. We illustrate ideal points fit
to votes in the U.S. House of Representatives from 2009-2010 in Figure 1. The model has clearly
separated lawmakers by their political party (color) and provides an intuitive measure of their
political leanings.
Limitations of ideal point models. A one-dimensional ideal point model fit to the U.S. House from
2009-2010 correctly models 98% of lawmakers' votes on training data. But it only captures 83% of Baron Hill's (D-IN) votes and 80% of Ronald Paul's (R-TX) votes. Why is this?
The ideal point model assumes that lawmakers are ordered. Each bill d splits them at a cut point
$-b_d / a_d$. Lawmakers to one side of the cut point are more likely to support the bill, and lawmakers to
the other side are likely to reject it. For lawmakers like Paul and Hill, this assumption is too strong
because their voting behavior does not fit neatly into a single ordering. Their location among the
other lawmakers changes with different bills.
Lawmakers do not vote randomly, however. They vote consistently within individual areas of policy,
such as foreign policy and education. For example, Rep. Paul consistently votes against United States
involvement in foreign military engagements, a position that contrasts with other Republicans.
We refer to voting behavior like this as issue voting. An issue is any federal policy area, such as
"financial regulation," "foreign policy," "civil liberties," or "education," on which lawmakers are
expected to take positions. Lawmakers' positions on these issues often diverge from their traditional
left/right stances. The model we will develop captures these deviations. Some examples are illustrated
¹ These are sometimes called the discrimination and difficulty, respectively.
² Many ideal point models use a probit function instead [1, 3].
[Figure 2 plot: lawmakers' ideal points (top axis) and Taxation-adjusted ideal points (bottom axis), on a scale from −2 to 4, with individual lawmakers such as Dennis Kucinich, Eric Cantor, and Ronald Paul labeled.]
Figure 2: In a traditional ideal point model, lawmakers' ideal points are static (top line of each figure). In the issue-adjusted ideal point model, lawmakers' ideal points change when they vote on certain
issues, such as Taxation.
[Figure 3, left panel: top words from labeled topics.
Terrorism: terrorist, september, attack, nation, york, terrorist attack, hezbolah, national guard
Commemorations: nation, people, life, world, serve, percent, community, family
Transportation: transportation, minor, print, tax, land, guard, coast guard, substitute
Right panel: the graphical model of the issue-adjusted ideal point model, with plates over bills, words, issues (K), and lawmakers, and variables $a_d$, $b_d$, $\theta_d$, $w$, $v_{ud}$, $x_u$, $z_u$.]
Figure 3: Left: Top words from topics fit using labeled LDA [6]. Right: the issue-adjusted ideal point model, which models votes $v_{ud}$ from lawmakers and legislative items. Classic item response theory models votes v using $x_u$ and $a_d$, $b_d$. For our work, documents' issue vectors $\theta$ were estimated with a topic model (left of dashed line) using bills' words w and labeled topics. Expected issue vectors $E_q[\theta \mid w]$ are then treated as constants in the issue model (right of dashed line).
in Figure 2; Charles Djou is more similar to Republicans on Taxation (right) and more similar to
Democrats on Health (left), while Ronald Paul is more Republican-leaning on Health and less
extreme on Taxation. The model we will introduce uses lawmakers' votes and the text of bills to
model deviations like this, on a variety of issues. This allows us to take into account whether a bill
was about Taxation or Education (or both) when predicting a lawmaker's vote.
Issue-adjusted ideal points.
We now describe the issue-adjusted ideal point model, a new model of lawmaker behavior that takes
into account both the content of the bills and the voting patterns of the lawmakers. We build on the
ideal point model so that each lawmaker's ideal point can be adjusted for each issue.
Suppose that there are K issues in the political landscape. We will use the words wd of each bill d to
code it with a mixture $\theta_d$ of issues, where each element $\theta_{dk}$ corresponds to an issue; the components of $\theta_d$ are positive and sum to one. (These vectors will come from a topic model, which we describe below.) In our proposed model, each lawmaker is also associated with a K-vector $z_u \in \mathbb{R}^K$, which
describes how her ideal point changes for bills about each issue.
We use these variables in a model based on the traditional ideal point model of Equation 1. As above,
xu is the ideal point for lawmaker u and ad , bd are the polarity and popularity of bill d. In our model,
votes are modeled with a logistic regression
$p(v_{ud} \mid a_d, b_d, z_u, x_u, w_d) = \sigma\left( (z_u^\top E_q[\theta_d \mid w_d] + x_u) a_d + b_d \right),$  (2)
where we use an estimate $E_q[\theta_d \mid w_d]$ of the bill's issue vector from its words $w_d$ as described below. We put standard normal priors on the ideal points, polarity, and difficulty variables. We use Laplace priors for $z_u$: $p(z_{uk} \mid \lambda_1) \propto \exp(-\lambda_1 \|z_{uk}\|_1)$. This enforces a sparse penalty with MAP inference and a "nearly-sparse" penalty with Bayesian inference. See Figure 3 (right) for the graphical model.
To better understand the model, assume that bill d is only about Finance. This means that $\theta_d$ has a one in the Finance dimension and zero everywhere else. With a classic ideal point model, a lawmaker u's ideal point, $x_u$, gives his position on each issue, including Finance. With the issue-adjusted ideal
point model, his effective ideal point for Finance, xu + zu,Finance , gives his position on Finance. The
adjustment zu,Finance affects how lawmaker u feels about Finance alone. When zu,k = 0 for all u, k,
the model becomes the classic ideal point model.
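The same toy setup, extended to Equation (2); the issue index below is arbitrary and purely illustrative:

```python
import numpy as np

def issue_adjusted_vote_prob(x_u, z_u, theta_d, a_d, b_d):
    """Eq. (2): the ideal point is shifted by z_u^T theta_d before voting."""
    s = (z_u @ theta_d + x_u) * a_d + b_d
    return 1.0 / (1.0 + np.exp(-s))

K = 74                                 # number of issues, as in the paper
theta = np.zeros(K); theta[0] = 1.0    # a bill purely about one issue
z = np.zeros(K);     z[0] = -2.0       # the lawmaker's offset on that issue
# A lawmaker at x_u = 1 votes on this bill like a lawmaker at -1:
print(issue_adjusted_vote_prob(1.0, z, theta, a_d=2.0, b_d=0.0))  # ~0.119
```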
This model lets us inspect lawmakers' overall voting patterns by issue. Given a collection of votes
and a coding of bills to issues, posterior estimates of the ideal points and per-issue adjustments give
us a window into voting behavior that is not available to classic ideal point models.
Using Labeled LDA to associate bills with issues.
Equation 2 adjusts a lawmaker's ideal point by using the conditional expectation of a bill's thematic labels $\theta_d$ given its words $w_d$. We estimate this vector using labeled latent Dirichlet allocation
(LDA) [6]. Labeled LDA is a topic model, a bag-of-words model that assumes a set of themes for the
collection of bills and that each bill exhibits a mixture of those themes. The themes, called topics, are
distributions over a fixed vocabulary. In unsupervised LDA [7] they are learned from the data. In
labeled LDA, they are defined by using an existing tagging scheme. Each tag is associated with a
topic; its distribution is found by taking the empirical distribution of words for documents assigned
to that tag.³ This gives interpretable names (the tags) to the topics.
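A minimal stand-in for that construction, on toy data of our own (the real preprocessing also smooths the topics with two LDA iterations, per footnote 4):

```python
from collections import Counter

def labeled_topics(docs, tags):
    """docs: list of token lists; tags: list of tag sets, one per document.
    Each tag's topic is the empirical word distribution of its documents."""
    counts = {}
    for doc, doc_tags in zip(docs, tags):
        for tag in doc_tags:
            counts.setdefault(tag, Counter()).update(doc)
    return {tag: {w: n / sum(c.values()) for w, n in c.items()}
            for tag, c in counts.items()}

docs = [["tax", "land", "tax"], ["guard", "tax"]]
tags = [{"Taxation"}, {"Taxation", "Transportation"}]
print(labeled_topics(docs, tags)["Taxation"])
# {'tax': 0.6, 'land': 0.2, 'guard': 0.2}
```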
We used tags provided by the Congressional Research Service [8], which provides subject codes
for all bills passing through Congress. These subject codes describe the bills using phrases which
correspond to traditional issues, such as Civil rights and National security. Each bill may cover
multiple issues, so multiple codes may apply to each bill. (Many bills have more than twenty labels.)
We used the 74 most-frequent issue labels. Figure 3 (left) illustrates the top words from several of these labeled topics.⁴ We fit the issue vectors $E[\theta_d \mid w_d]$ as a preprocessing step. In the issue-adjusted ideal point model (Equation 2), $E[\theta_d]$ was treated as observed when estimating the posterior distribution $p(x_u, a_d, b_d, z_u \mid E[\theta_d \mid w_d], v_{ud})$. We summarize all 74 issue labels in §A.2.⁵
Related Work. Item response theory has been used for decades in political science [3, 4, 5]; see
Enelow and Hinich for a historical perspective [9] and Albert for Bayesian treatments of the model
[10]. Some political scientists have used higher-dimensional ideal points, where each legislator is
attached to a vector of ideal points $x_u \in \mathbb{R}^K$ and each bill polarization $a_d$ takes the same dimension K [11]. The probability of a lawmaker voting "Yes" is $\sigma(x_u^\top a_d + b_d)$. The principal component of
ideal points explains most of the variance and explains party affiliation. However, other dimensions
are not attached to issues, and interpreting beyond the principal component is painstaking [2].
Recent work in machine learning has provided joint models of legislative text and the bill-making
process. This includes using transcripts of U.S. Congressional floor debates to predict whether
speeches support or oppose pending legislation [12] and predicting whether a bill will survive
congressional committee by incorporating a number of features, including bill text [13]. Other work
has aimed to predict individual votes. Gerrish and Blei aimed to predict votes on bills which had not
yet received any votes [14]. Their model fits ad and bd using supervised topics, but the underlying
voting model was one-dimensional: it could not model individual votes better than a one-dimensional
ideal point model. Wang et al. created a Bayesian nonparametric model of votes and text over time
[15]. We note that these models have different purposes from ours, and neither addresses individuals'
affinity toward issues.
The issue-adjusted model is conceptually more similar to recent models for content recommendation.
Wang and Blei describe a method to recommend academic articles to individuals [16], and Agarwal
and Chen propose a model to match users to Web content [17]. Though they do not consider roll-call
data, these recommendation models also try to match user behavior with textual item content.
Footnote 3: Ramage et al. explore more sophisticated approaches [6], but we found this simplified version to work well.
Footnote 4: After defining topics, we performed two iterations of LDA with variational inference to smooth the topics.
Footnote 5: We refer to specific sections in the supplementary materials (appendix) as §A.#.
3 Posterior estimation
The central computational challenge in this model is to uncover lawmakers' issue preferences z_u by using their votes v and bills' issues θ_d. We do this by estimating the posterior distribution p(x, z, a, b | v, θ). Bayesian ideal point models are usually fit with Gibbs sampling [2, 3, 5, 18]. However, fast Gibbs samplers are unavailable for our model because the conditionals needed are not analytically computable. We estimate the posterior with variational Bayes.
In variational Bayes, we posit a family of distributions {q_η} over the latent variables that is likely to contain a distribution similar to the true posterior [19]. This variational family is indexed by parameters η, which are fit to minimize the KL divergence between the variational and true posteriors. Specifically, we let {q_η} be the family of fully factorized distributions

q(x, z, a, b | η) = ∏_U N(x_u | x̂_u, σ_x²) N(z_u | ẑ_u, σ_z²) ∏_D N(a_d | â_d, σ_a²) N(b_d | b̂_d, σ_b²),    (3)

where we parameterize the variational posterior with η = {(x̂_u, σ_x), (ẑ_u, σ_z), (â, σ_a), (b̂, σ_b)}. We assumed full factorization to make inference tractable. Though simpler than the true posterior, fitted variational distributions can be excellent proxies for it. The similarity between ideal points fit with variational inference and MCMC has been demonstrated in Gerrish and Blei [14].
Variational inference usually proceeds by optimizing the variational objective

L_η = E_{q_η}[log p(x, z, a, b, v, θ)] − E_{q_η}[log q_η(x, z, a, b)]    (4)

with gradient or coordinate ascent (this is equivalent to optimizing the KL divergence between q and the posterior). Optimizing this bound is challenging when the expectation is not analytical, which makes computing the exact gradient ∇_η L_η more difficult. We optimize this bound with stochastic gradient ascent [20, 21], approximating the gradient with samples from q_η:

∇_η L ≈ (1/M) Σ_{m=1}^{M} ∂ log q_η(y_m)/∂η (log p(y_m, v, θ) − log q_η(y_m)),    (5)

where y_m = (x_m, z_m, a_m, b_m) is a sample from q_η. The algorithm proceeds by following this stochastic gradient with decreasing step size; we provide further details in §A.1.
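A schematic of this estimator, with placeholder callbacks standing in for the model-specific terms (all names here are ours, not from the paper):

```python
import numpy as np

def noisy_gradient(eta, sample_q, grad_log_q, log_joint, log_q, M=64):
    """Monte Carlo estimate of Equation 5: average over samples y_m ~ q_eta
    of grad_eta log q(y_m) * (log p(y_m, v, theta) - log q(y_m))."""
    g = np.zeros_like(eta)
    for _ in range(M):
        y = sample_q(eta)
        g += grad_log_q(eta, y) * (log_joint(y) - log_q(eta, y))
    return g / M

def stochastic_vb(eta, sample_q, grad_log_q, log_joint, log_q,
                  iters=5000, kappa=0.6):
    """Robbins-Monro ascent with decreasing step size t^(-kappa)."""
    for t in range(1, iters + 1):
        eta = eta + t ** (-kappa) * noisy_gradient(
            eta, sample_q, grad_log_q, log_joint, log_q)
    return eta
```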
4 Analyzing twelve years of U.S. legislative history
We used our model to investigate twelve years of U.S. legislative history. We compare the posterior fit
with this model to the same data fit with traditional ideal points and validate the model quantitatively.
We then provide a closer look at the collection of issues, lawmakers, and bills and explore several
interesting results of the model.
4.1 Data and Experiment Setup
We studied U.S. Senate and House of Representatives roll-call votes from 1999 to 2010. This period spanned Congresses 106 to 111 and covered an historic period in recent U.S. politics, for the majority of which Republican President George W. Bush held office. Bush's inauguration and the attacks of September 11th, 2001 marked the first quarter of this period, followed by the wars in Iraq and Afghanistan. Congress became more partisan over this period, and Democratic President Obama was
inaugurated in January 2009.
We provide a more complete summary of statistics for our datasets in §A.3. For context, the median
session we considered had 540 lawmakers, 507 bills, and 201,061 votes in both the House and Senate.
Altogether, there were 865 unique lawmakers, 3,113 bills, and 1,208,709 votes.
Corpus preparation. For each congress, we considered only bills for which votes were explicitly
recorded in a roll-call. We ignored votes on bills for which text was unavailable. To fit the labeled
topic model to each bill, we removed stop words and grouped common phrases as n-grams. All bills
were downloaded from www.govtrack.us [22], a nonpartisan website which provides records
of U.S. Congressional voting. We fit the Senate and House separately for each two-year Congress
because lawmakers' strategies change at each session boundary.
Table 1: Average log-likelihood of heldout votes using six-fold cross validation. These results cover Congresses 106 to 111 (1999-2010) with regularization λ = 1. The issue-adjusted model yields higher heldout log-likelihood for all congresses in both chambers than a standard ideal point model. Perm. Issue shows results for the issue model fit after randomly permuting bills' issue labels.
Model         Congress:  106     107     108     109     110     111
Senate
  Ideal                 -0.209  -0.209  -0.182  -0.189  -0.206  -0.182
  Issue                 -0.208  -0.209  -0.181  -0.188  -0.205  -0.180
  Perm. Issue           -0.210  -0.210  -0.183  -0.203  -0.211  -0.186
House
  Ideal                 -0.168  -0.154  -0.096  -0.120  -0.090  -0.182
  Issue                 -0.166  -0.147  -0.093  -0.116  -0.087  -0.180
  Perm. Issue           -0.210  -0.211  -0.100  -0.123  -0.098  -0.187

4.2 Comparison of classic and exploratory ideal points
How do classic ideal points compare with issue-adjusted ideal points? We fit classic ideal points to the 111th House (2009 to 2010) to compare them with issue-adjusted ideal points x̂_u from the same period, using regularization λ = 1. The models' ideal points x̂_u were very similar, correlated at 0.998. While traditional ideal points cleanly separate Democrats and Republicans in this period, issue-adjusted ideal points provide an even cleaner break between the parties. Although the issue-adjusted model is able to use other parameters, the lawmakers' adjustments ẑ_u, to separate the parties better, the improvement is much greater than expected by chance (p < 0.001 using a permutation test).
4.3 Evaluation and significance
We first evaluate the issue-adjusted model by measuring how it can predict held out votes. (This is a measure of model fitness.) We used six-fold cross-validation. For each fold, we computed the average predictive log-likelihood log p(v_ud^test | v^train) = log p(v_ud^test | x̂_u, ẑ_u, â_d, b̂_d, E_q[θ_d | w]) of the test votes and averaged this across folds. We compared these with the ideal point model, evaluating the latter in the same way. We give implementation details of the model fit in §A.1.
Note that we cannot evaluate how well this model predicts votes on a heldout bill d. As with the ideal point model, our model cannot predict â_d, b̂_d without votes on d. Gerrish and Blei [14] accomplished this by predicting â_d and b̂_d using the document's text. (Combining these two models would be straightforward.)
Performance. We compared the issue-adjusted model's ability to represent heldout votes with the ideal point model. We fit the issue-adjusted model to both the House and Senate for Congresses 106 to 111 (1999-2010) with regularization λ = 1. For comparison we also fit an ideal point model to each of these congresses. In all Congresses and both chambers, the issue-adjusted model represents heldout votes with higher log-likelihood than an ideal point model. We show these results in Table 1.
Sensitivity to regularization. To measure sensitivity to parameters, we fit the issue-adjusted model to the 109th Congress (2005-2006) of the House and Senate for a range λ = 0.0001, …, 1000 of regularizations. We fixed variances σ_X², σ_Z², σ_A², σ_B² = exp(−5). The variational implementation generalized well for the entire range, with heldout log likelihood highest for 1 ≤ λ ≤ 10.
Permutation test. We used a permutation test to understand how the issue-adjusted model improves upon ideal point models. This test strengthens the argument that issues (and not some other model change, such as the increase in dimension) help to improve predictive performance. To do this test, we randomly permuted topic vectors' document labels to completely remove the relationship between topics and bills: (θ_1, …, θ_D) ↦ (θ_{π_i(1)}, …, θ_{π_i(D)}), for five permutations π_1, …, π_5. We then fit the issue model using these permuted document labels. As shown in Table 1, models fit with the original, unpermuted issues always formed better predictions than models fit with the permuted issues. From this, we draw the conclusion that issues indeed help the model to represent votes.
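In outline, the test looks like the sketch below; `fit_model` and `heldout_loglik` are placeholders for the full fitting and evaluation pipelines, and are our names:

```python
import numpy as np

def permutation_test(theta, fit_model, heldout_loglik, n_perms=5, seed=0):
    """Refit the issue-adjusted model with bills' issue vectors shuffled
    across documents, breaking the text-vote link, and score each fit."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_perms):
        perm = rng.permutation(len(theta))
        scores.append(heldout_loglik(fit_model(theta[perm])))
    return scores  # compare against the score of the unpermuted fit
```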
Figure 4: Ideal points x_u and issue-adjusted ideal points x_u + z_uk from the 111th House for the Finance issue. Republicans (red) saw more adjustment than Democrats (blue).
[Figure 5 plots: two panels of per-issue offsets ẑ_u,k, one for Ron Paul and one for Donald Young, over issues such as Human rights, International affairs, Crime and law enforcement, Health, Finance, and Racial and ethnic relations.]
Figure 5: Significant issue adjustments for exceptional lawmakers in Congress 111. Statistically significant issue adjustments are marked.
4.4 Analyzing issues, lawmakers, and bills
In this section we take a closer look at how issue adjustments improve on ideal points and demonstrate
how the issue-adjusted ideal point model can be used to analyze specific lawmakers. We focus on an
issue-adjusted model fit to all votes in the 111th House of Representatives (2009-2010).
We can measure the improvement by comparing the training likelihoods of votes in the issue-adjusted
and traditional ideal point models. The training log-likelihood of each vote is
J_ud = 1{v_ud = Yes} p − log(1 + exp(p)),    (6)

where p = (x̂_u + ẑ_u^T E_q[θ_d | w]) â_d + b̂_d is the log-odds of a vote under the issue-adjusted voting model. The corresponding log-likelihood I_ud under the ideal point model uses p = x̂_u â_d + b̂_d.
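The per-vote comparison can be sketched as follows. For brevity this sketch evaluates both log-odds with a single parameter set, whereas in practice each model is scored with its own fitted parameters; all names are ours:

```python
import numpy as np

def vote_loglik(is_yes, p):
    """Equation 6: log-likelihood of a vote given its log-odds p."""
    return is_yes * p - np.log1p(np.exp(p))

def per_vote_improvement(x, z, a, b, theta, votes):
    """J_ud - I_ud for each observed vote (u, d, is_yes)."""
    out = {}
    for u, d, is_yes in votes:
        p_issue = (x[u] + z[u] @ theta[d]) * a[d] + b[d]
        p_ideal = x[u] * a[d] + b[d]
        out[(u, d)] = vote_loglik(is_yes, p_issue) - vote_loglik(is_yes, p_ideal)
    return out
```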
4.4.1 Per-issue improvement
To inspect the improvement of issue k, for example, we take the sum of the improvement in log-likelihood weighted by each issue:

Imp_k = Σ_{v_ud} E_q[θ_dk | w] (J_ud − I_ud) / Σ_{v_ud} E_q[θ_dk | w].    (7)

A high value of Imp_k indicates that issue k is associated with an increase in log-likelihood, while a low value indicates that the issue saw a decrease in log-likelihood.
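Building on the per-vote improvements above, Imp_k is just a weighted average; in this sketch `theta[d][k]` plays the role of E_q[θ_dk | w]:

```python
def issue_improvement(improve, theta, votes, k):
    """Imp_k of Equation 7: improvement on the observed votes, weighted
    by how much each voted-on bill is about issue k."""
    num = sum(theta[d][k] * improve[(u, d)] for u, d, _ in votes)
    den = sum(theta[d][k] for u, d, _ in votes)
    return num / den
```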
Procedural issues such as Congressional sessions (in contrast to substantive issues) were among the
most-improved issues; they were also much more partisan. This is a result predicted by procedural
cartel theory [23, 24, 25, 26], which posits that lawmakers will be more polarized in procedural
votes (which describe how Congress will be run) than substantive votes (the issues discussed during
elections). A substantive issue which was better-predicted was Finance, which we illustrate in
Figure 4. Infrequent issues like Women and Religion were nearly unaffected by lawmakers' offsets. In §A.4, we illustrate Imp_k for all issues.
4.4.2 Per-lawmaker improvement
In the 111th House, the per-lawmaker improvement Imp_u = Σ_D (J_ud − I_ud) was invariably positive or negligible, because each lawmaker has many more parameters in the issue-adjusted model. Some of the most-improved lawmakers were Ron Paul and Donald Young.
We corrected lawmakers' issue adjustments to account for their left/right leaning and performed permutation tests as in §4.3 to find which of these corrected adjustments ẑ_uk were statistically significant at p < 0.05 (see supplementary section §A.5 for how we obtain ẑ_uk from z_uk and for details on the permutation test). We illustrate these issue adjustments for Paul and Young in Figure 5.
Ron Paul. Paul's offsets were extreme; he voted more conservatively than expected on Health, Human rights and International affairs. He voted more liberally on social issues such as Racial and ethnic relations. The issue-adjusted training accuracy of Paul's votes increased from 83.8% to 87.9% with issue offsets, placing him among the two most-improved lawmakers with this model.
The issue-adjusted improvement Imp_k (Equation 7), when restricted to Paul's votes, indicates significant improvement in International affairs and East Asia (he tends to vote against U.S. involvement in foreign countries); Congressional sessions; Human rights; and Special months (he tends to vote against recognition of special months and holidays). The model hurt performance related to Law, Racial and ethnic relations, and Business, none of which were statistically significant issues for Paul.
Donald Young. One of the most exceptional legislators in the 111th House was Alaska Republican Donald Young. Young stood out in a topic used frequently in House bills about naming local landmarks. Young voted against the majority of his party (and the House in general) on a series of largely symbolic bills and resolutions. In an Agriculture topic, Young voted (with only two other Republicans and against the majority of the House) not to commend "members of the Agri-business Development Teams of the National Guard [to] increase food production in war-torn countries." Young's divergent voting was also evident in a series of votes against naming various landmarks, such as post offices, in a topic about such symbolic votes. Notice that Young's ideal point is not particularly distinctive: using the ideal point alone, we would not recognize his unique voting behavior.
4.4.3 Per-bill improvement
Per-bill improvement Imp_d = Σ_U (J_ud − I_ud) decreased for some bills. The bill which decreased the most from the ideal point model in the 111th House was the Consolidated Land, Energy, and Aquatic Resources Act of 2010 (H.R. 3534). This bill had substantial weight in five issues, with most in Public lands and natural resources, Energy, and Land transfers, but its placement in many issues harmed our predictions. This effect (worse performance on bills about many issues) suggests that methods which represent bills more sparsely may perform better than the current model.
5 Discussion
Traditional models of roll call data cannot capture how individual lawmakers deviate from their latent
position on the political spectrum. In this paper, we developed a model that captures how lawmakers
vary, issue by issue, and used the text of the bills to attach specific votes to specific issues. We
demonstrated, across 12 years of legislative data, that this model better captures lawmaker behavior.
We also illustrated how to use the model as an exploratory tool of legislative data.
Future areas of work include incorporating external behavior by lawmakers. For example, lawmakers
make some (but not all) issue positions public. Many raise campaign funds from interest groups.
Matching these data to votes would help us to understand what drives lawmakers' positions.
Acknowledgments
We thank the reviewers for their helpful comments. David M. Blei is supported by ONR N00014-11-1-0651, NSF CAREER 0745520, AFOSR FA9550-09-1-0668, the Alfred P. Sloan foundation, and a grant from Google.
References
[1] Keith T. Poole and Howard Rosenthal. Patterns of congressional voting. American Journal of Political Science, 35(1):228-278, February 1991.
[2] Simon Jackman. Multidimensional analysis of roll call data via Bayesian simulation: Identification, estimation, inference, and model checking. Political Analysis, 9(3):227-241, 2001.
[3] Joshua Clinton, Simon Jackman, and Douglas Rivers. The statistical analysis of roll call data. American Political Science Review, 98(2):355-370, 2004.
[4] Keith T. Poole and Howard Rosenthal. A spatial model for legislative roll call analysis. American Journal of Political Science, pages 357-384, 1985.
[5] Andrew D. Martin and Kevin M. Quinn. Dynamic ideal point estimation via Markov chain Monte Carlo for the U.S. Supreme Court, 1953-1999. Political Analysis, 10:134-153, 2002.
[6] Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, 2009.
[7] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, pages 993-1022, 2003.
[8] Congressional Research Service. Available at http://www.loc.gov/crsinfo/, 2011.
[9] James M. Enelow and Melvin J. Hinich. The Spatial Theory of Voting: An Introduction. Cambridge University Press, New York, 1984.
[10] James Albert. Bayesian estimation of normal ogive item response curves using Gibbs sampling. Journal of Educational Statistics, 17:251-269, 1992.
[11] James J. Heckman and James M. Snyder. Linear probability models of the demand for attributes with an empirical application to estimating the preferences of legislators. RAND Journal of Economics, 27(0):142-189, 1996.
[12] Matt Thomas, Bo Pang, and Lillian Lee. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, 2006.
[13] Tae Yano, Noah A. Smith, and John D. Wilkerson. Textual predictors of bill survival in congressional committees. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics, pages 793-802, 2012.
[14] Sean Gerrish and David Blei. Predicting legislative roll calls from text. In Proceedings of the International Conference on Machine Learning, 2011.
[15] Eric Wang, Dehong Liu, Jorge Silva, David Dunson, and Lawrence Carin. Joint analysis of time-evolving binary matrices and associated documents. Advances in Neural Information Processing Systems, 23:2370-2378, 2010.
[16] Chong Wang and David M. Blei. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th International Conference on Knowledge Discovery and Data Mining, pages 448-456, 2011.
[17] Deepak Agarwal and Bee-Chung Chen. fLDA: Matrix factorization through latent Dirichlet allocation. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 91-100, 2010.
[18] Valen E. Johnson and James H. Albert. Ordinal Data Modeling. Springer-Verlag, New York, 1999.
[19] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Learning in Graphical Models, pages 183-233, 1999.
[20] Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3), September 1951.
[21] Leon Bottou and Yann Le Cun. Large scale online learning. In Advances in Neural Information Processing Systems, 2004.
[22] Govtrack.us, 2010. Civic Impulse LLC. Available at http://www.govtrack.us.
[23] Richard F. Fenno Jr. The Congress and America's Future. Prentice-Hall, Englewood Cliffs, NJ, 1965.
[24] Gary W. Cox and Mathew D. McCubbins. Legislative Leviathan. University of California Press, 1993.
[25] Gary W. Cox and Keith T. Poole. On measuring partisanship in roll-call voting: The U.S. House of Representatives, 1877-1999. American Journal of Political Science, 46(3):477-489, 2002.
[26] Gary W. Cox and Mathew D. McCubbins. Setting the Agenda: Responsible Party Government in the U.S. House of Representatives. Cambridge University Press, 2005.
Topology Constraints in Graphical Models
Marcelo Fiori
Universidad de la República, Uruguay
[email protected]
Pablo Musé
Universidad de la República, Uruguay
[email protected]
Guillermo Sapiro
Duke University
Durham, NC 27708
[email protected]
Abstract
Graphical models are a very useful tool to describe and understand natural phenomena, from gene expression to climate change and social interactions. The
topological structure of these graphs/networks is a fundamental part of the analysis, and in many cases the main goal of the study. However, little work has been
done on incorporating prior topological knowledge onto the estimation of the underlying graphical models from sample data. In this work we propose extensions
to the basic joint regression model for network estimation, which explicitly incorporate graph-topological constraints into the corresponding optimization approach. The first proposed extension includes an eigenvector centrality constraint,
thereby promoting this important prior topological property. The second developed extension promotes the formation of certain motifs, triangle-shaped ones in
particular, which are known to exist for example in genetic regulatory networks.
The presentation of the underlying formulations, which serve as examples of the
introduction of topological constraints in network estimation, is complemented
with examples in diverse datasets demonstrating the importance of incorporating
such critical prior knowledge.
1 Introduction
The estimation of the inverse of the covariance matrix (also referred to as precision matrix or concentration matrix) is a very important problem with applications in a number of fields, from biology
to social sciences, and is a fundamental step in the estimation of underlying data networks. The
covariance selection problem, as introduced by Dempster (1972), consists in identifying the zero pattern of the precision matrix. Let X = (X_1, …, X_p) be a p-dimensional multivariate normally distributed variable, X ~ N(0, Σ), and C = Σ^{-1} its concentration matrix. Then two coordinates X_i and X_j are conditionally independent given the other variables if and only if C(i, j) = 0 (Lauritzen, 1996). This property motivates the representation of the conditional dependency structure in terms of a graphical model G = (V, E), where the set of nodes V corresponds to the p coordinates and
the edges E represent conditional dependency. Note that the zero pattern of the G adjacency matrix
coincides with the zero pattern of the concentration matrix. Therefore, the estimation of this graph
G from k random samples of X is equivalent to the covariance selection problem. The estimation of
G using ℓ1 (sparsity-promoting) optimization techniques has become very popular in recent years.
This estimation problem becomes particularly interesting and hard at the same time when the number
of samples k is smaller than p. Several real life applications lie in this "small k, large p" setting. One
of the most studied examples, and indeed with great impact, is the inference of genetic regulatory
networks (GRN) from DNA microarray data, where typically the number p of genes is much larger
than the number k of experiments. Like in the vast majority of applications, these networks have
some very well known topological properties, such as sparsity (each node is connected with only a
few other nodes), scale-free behavior, and the presence of hubs (nodes connected with many other
vertices). All these properties are shared with many other real life networks like Internet, citation
networks, and social networks (Newman, 2010).
Genetic regulatory networks also contain a small set of recurring patterns called motifs. The systematic presence of these motifs was first discovered in Escherichia coli (Shen-Orr et al., 2002),
where it was found that the frequency of these patterns is much higher than in random networks, and
since then they have been identified in other organisms, from bacteria to yeast, plants and animals.
The topological analysis of networks is fundamental, and often the essence of the study. For example, the proper identification of hubs or motifs in GRN is crucial. Thus, the agreement of the
reconstructed topology with the original or expected one is critical. Sparsity has been successfully
exploited via ℓ1 penalization in order to obtain consistent estimators of the precision matrix, but
little work has been done with other graph-topological properties, often resulting in the estimation
of networks that lack critical known topological structures, and therefore do not look natural. Incorporating such topological knowledge in network estimation is the main goal of this work.
Eigenvector centrality (see Section 3 for the precise definition) is a well-known measure of the
importance and the connectivity of each node, and typical centrality distributions are known (or can
be estimated) for several types of networks. Therefore, we first propose to incorporate this structural
information into the optimization procedure for network estimation in order to control the topology
of the resulting network. This centrality constraint is useful when some prior information about the
graphical model is known, for example, in dynamic networks, where the topology information of
the past can be used; in networks which we know are similar to other previously studied graphs; or
in networks that model a physical phenomenon for which a certain structure is expected.
As mentioned, it has been observed that genetic regulatory networks are composed of a few geometric patterns, repeated several times. One of these motifs is the so-called feedforward loop, which is manifested as a triangle in the graph. Although it is thought that these important motifs may help
to understand more complex organisms, no effort has been made to include this prior information in
the network estimation problem. As a second example of the introduction of topological constraints,
we propose a simple modification to the ℓ1 penalty, weighting the edges according to their local
structure, in order to favor the appearance of these motifs in the estimated network.
Both extensions developed here are very flexible, and they can be combined with each other or with other extensions reported in the literature.
To recapitulate, we propose several contributions to the network estimation problem: we show the importance of adding topological constraints; we propose an extension to ℓ1 models in order to impose the eigenvector centrality; we show how to transfer topology from one graph to another; we show that even with the centrality estimated from the same data, the proposed extension outperforms the basic model; we present a weighting modification to the ℓ1 penalty favoring the appearance of motifs; as illustrative examples, we show how the proposed framework improves the edge and motif detection in the E. coli network, and how the approach is important as well in financial applications.
The rest of this paper is organized as follows. In Section 2 we describe the basic precision matrix
estimation models used in this work. In Section 3 we introduce the eigenvector centrality and describe how to impose it in graph estimation. We propose the weighting method for motifs estimation
in Section 4. Experimental results are presented in Section 5, and we conclude in Section 6.
2 Graphical Model Estimation
Let X be a k × p matrix containing k independent observations of X, and let us denote by X_i the i-th column of X. Two main families of approaches use sparsity constraints when inferring the structure of the precision matrix. The first one is based on the fact that the (i, j) element of Σ^{-1} is, up to a constant, the regression coefficient β_j^i in X_i = Σ_{l≠i} β_l^i X_l + ε_i, where ε_i is uncorrelated with {X_l | l ≠ i}. Following this property, the neighborhood selection technique by Meinshausen & Bühlmann (2006) consists in solving p independent ℓ1 regularized problems (Tibshirani, 1996):

arg min_{β^i : β_i^i = 0} (1/k) ||X_i − Xβ^i||² + λ ||β^i||_1,

where β^i is the vector of β_j^i's. While this is an asymptotically consistent estimator of the Σ^{-1} zero
pattern, β_j^i and β_i^j are not necessarily equal since they are estimated independently. Peng et al. (2009) propose a joint regression model which guarantees symmetry. This regression of the form X ≈ XB, with B sparse, symmetric, and with null diagonal, allows to control the topology of the graph defined by the non-zero pattern of B, as it will be later exploited in this work. Friedman
et al. (2010) also solve a symmetric version of the model by Meinshausen & Bühlmann (2006) and incorporate some structure penalties as the grouped lasso by Yuan & Lin (2006).
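As a reference point for the models below, the neighborhood selection step can be sketched with scikit-learn. This is our tooling choice, not the authors'; note also that sklearn's `alpha` scales the penalty slightly differently than the λ above:

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, lam):
    """Meinshausen-Buhlmann: one lasso per node; report edge (i, j) if
    either beta_j^i or beta_i^j is nonzero (the OR symmetrization)."""
    k, p = X.shape
    B = np.zeros((p, p))
    for i in range(p):
        rest = np.delete(np.arange(p), i)
        fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, rest], X[:, i])
        B[i, rest] = fit.coef_
    adj = (B != 0) | (B != 0).T
    np.fill_diagonal(adj, False)
    return adj
```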
Methods of the second family are based on a maximum likelihood (ML) estimator with an ℓ1 penalty (Yuan & Lin, 2007; Banerjee et al., 2008; Friedman et al., 2008). Specifically, if S denotes the empirical covariance matrix, the solution is the matrix Θ which solves the optimization problem

max_{Θ ≻ 0} log det Θ − tr(SΘ) − λ Σ_{i,j} |Θ_ij|.
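This ML estimator is available off the shelf, for instance via scikit-learn's graphical lasso; a minimal sketch under that tooling choice:

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def ml_precision(X, lam):
    """l1-penalized maximum-likelihood estimate of the precision matrix."""
    S = np.cov(X, rowvar=False)        # empirical covariance
    _, Theta = graphical_lasso(S, alpha=lam)
    return Theta
```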
An example of an extension to both models (the regression and ML approaches), and the first to explicitly consider additional classical network properties, is the work by Liu & Ihler (2011), which modifies the ℓ1 penalty to derive a non-convex optimization problem that favors scale-free networks. A completely different technique for network estimation is the use of the PC-Algorithm to infer acyclic graphs (Kalisch & Bühlmann, 2007). This method starts from a complete graph and recursively deletes edges according to conditional independence decisions. In this work, we use this technique to estimate the graph eigenvector centrality.
3 Eigenvector Centrality Model Extension
Node degree (the number of connections of a node) is the simplest algebraic property that can be defined over a graph, but it is very local as it only takes into account the neighborhood of the node.
A more global measure of the node importance is the so-called centrality, in any of its different
variants. In this work, we consider the eigenvector centrality, defined as the dominant eigenvector
(the one corresponding to the largest eigenvalue) of the corresponding network connectivity matrix.
The coordinates of this vector (which are all non-negative) are the corresponding centrality of each node, and provide a measure of the influence of the node in the network (Google's PageRank is a variant of this centrality measure). Distributions of the eigenvector centrality values are well known
for a number of graphs, including scale-free networks as the Internet and GRN (Newman, 2010).
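The centrality itself is cheap to compute, e.g. by power iteration; a minimal sketch:

```python
import numpy as np

def eigenvector_centrality(A, iters=500, tol=1e-10):
    """Dominant eigenvector of a nonnegative adjacency matrix; by the
    Perron-Frobenius theorem its entries can be taken nonnegative."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        w = w / np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return w
```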
In certain situations, we may have at our disposal an estimate of the centrality vector of the network
to infer. This may happen, for instance, because we already had preliminary data, or we know a network expected to be similar, or simply someone provided us with some partial information about the
graph structure. In those cases, we would like to make use of this important side information, both
to improve the overall network estimation and to guarantee that the inferred graph is consistent with
our prior topological knowledge. In what follows we propose an extension of the joint regression
model which is capable of controlling this topological property of the estimated graph.
To begin with, let us remark that as Σ is positive-semidefinite and symmetric, all its eigenvalues are non-negative, and thus so are the eigenvalues of Σ^{-1}. By virtue of the Perron-Frobenius Theorem, for any adjacency matrix A, the eigenvalue with largest absolute value is positive. Therefore for precision and graph connectivity matrices it holds that max_{||v||=1} |⟨Av, v⟩| = max_{||v||=1} ⟨Av, v⟩, and moreover, the eigenvector centrality is c = arg max_{||v||=1} ⟨Av, v⟩.
Suppose that we know an estimate of the centrality c ∈ R^p, and want the inferred network to have centrality close to it. We start from the basic joint regression model,

min_B ||X − XB||_F² + λ_1 ||B||_{ℓ1},  s.t. B symmetric, B_ii = 0 ∀ i,    (1)

and add the centrality penalty,

min_B ||X − XB||_F² + λ_1 ||B||_{ℓ1} − λ_2 ⟨Bc, c⟩,  s.t. B symmetric, B_ii = 0 ∀ i,    (2)

where ||·||_F is the Frobenius norm and ||B||_{ℓ1} = Σ_{i,j} |B_ij|. The minus sign is due to the minimization instead of maximization, and since the term ⟨Bc, c⟩ is linear, the problem is still convex.
Although B is intended to be a good estimation of the precision matrix (up to constants), formulations (1) or (2) do not guarantee that B will be positive-semidefinite, and therefore the leading
eigenvalue might not be positive. One way to address this is to add the positive-semidefinite constraint in the formulation, which keeps the problem convex. However, in all of our experiments with
model (2) the spectral radius turned out to be positive, so we decided to use this simpler formulation due to
the power of the available solvers.
Note that we are imposing the dominant eigenvector of the graph connectivity matrix A to a nonbinary matrix B. We have exhaustive empirical evidence that the leading eigenvector of the matrix
B obtained by solving (2), and the leading eigenvector corresponding to the resulting connectivity
matrix (the binarization of B) are very similar (see Section 5.1). In addition, based on Wolf &
Shashua (2005), this type of result can be proved theoretically (Zeitouni, 2012).
As shown in Section 5, when the correct centrality is imposed, our proposed model outperforms the
joint regression model, both in correct reconstructed edge rates and topology. This is still true when
we only have a noisy version of c. Even if we do not have prior information at all, and we estimate
the centrality from the data with a pre-run of the PC-Algorithm, we obtain improved results.
The model extension here presented is general, and the term ⟨Bc, c⟩ can be included in maximum likelihood based approaches like Banerjee et al. (2008); Friedman et al. (2008); Yuan & Lin (2007).
3.1 Implementation
Following Peng et al. (2009), the matrix optimization (2) can be cast as a classical vector ℓ1 penalty problem. The symmetry and null diagonal constraints are handled by considering only the upper triangular sub-matrix of B (excluding the diagonal), and forming a vector θ with its entries: θ = (B_12, B_13, …, B_(p−1)p). Let us consider a pk × 1 column vector y formed by concatenating all the columns of X. It is easy to find a pk × p(p−1)/2 matrix X_t such that ||X − XB||_F² = ||y − X_t θ||_2² (see Peng et al. (2009) for details), and trivially ||B||_{ℓ1} = 2||θ||_1. The new term in the cost function is ⟨Bc, c⟩, which is linear in B, thus there exists a vector C_t = C_t(c) such that ⟨Bc, c⟩ = ⟨C_t, θ⟩. The construction of C_t is similar to the construction of X_t. The optimization problem (2) then becomes

min_θ ||y − X_t θ||_2² + λ_1 ||θ||_1 − λ_2 ⟨C_t, θ⟩,

which can be efficiently solved using any modern ℓ1 optimization method (Wright et al., 2009).
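For moderate p, model (2) can also be prototyped directly in matrix form with CVXPY. This is our tooling choice for a sketch, not the authors' solver; at scale the vectorized formulation above is preferable:

```python
import cvxpy as cp
import numpy as np

def centrality_model(X, c, lam1, lam2):
    """Solve model (2): joint regression with an l1 penalty and a linear
    reward lam2 * <Bc, c> for matching the prior centrality c."""
    p = X.shape[1]
    B = cp.Variable((p, p), symmetric=True)
    centrality_term = cp.sum(cp.multiply(np.outer(c, c), B))  # <Bc, c>
    obj = (cp.sum_squares(X - X @ B)
           + lam1 * cp.sum(cp.abs(B))
           - lam2 * centrality_term)
    cp.Problem(cp.Minimize(obj), [cp.diag(B) == 0]).solve()
    return B.value
```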
4 Favoring Motifs in Graphical Models
One of the biggest challenges in bioinformatics is the estimation and understanding of genetic regulatory networks. It has been observed that the structure of these graphs is far from being random: the
transcription networks seem to be composed of a small set of regulation patterns that appear much more often than in random graphs. It is believed that each one of these patterns, called motifs, is responsible for certain specific regulatory functions. Three basic types of motifs are defined (Shen-Orr et al., 2002), the "feedforward loop" being one of the most significant. This motif involves three
genes: a regulator X which regulates Y, and a gene Z which is regulated by both X and Y. The
representation of these regulations in the network takes the form of a triangle with vertices X, Y, Z.
Although these triangles are very frequent in GRN, the common algorithms discussed in Section
2 seem to fail at producing them. As these models do not consider any topological structure, and
the total number of reconstructed triangles is usually much lower than in transcription networks, it
seems reasonable to help in the formation of these motifs by favoring the presence of triangles.
In order to move towards a better motif detection, we propose an iterative procedure based on the
joint regression model (1). After a first iteration of solving (1), a preliminary symmetric matrix B is
obtained. Recall that if A is a graph adjacency matrix, then A² counts the paths of length 2 between nodes. More specifically, the entry (i, j) of A² indicates how many paths of length 2 exist from node i to node j. Back to the graphical model estimation, this means that if the entry (B²)_ij ≠ 0 (a length-2 path exists between i and j), then by making B_ij ≠ 0 (if it is not already), at least one triangle is added. This suggests that by including weights in the ℓ1 penalization, proportionally decreasing with B², we are favoring those edges that, when added, form a new triangle.
Given the matrix B obtained in the preliminary iteration, we consider the cost matrix M such that M_ij = e^{−τ(B²)_ij}, τ being a positive parameter. This way, if (B²)_ij = 0 the weight does not affect the penalty, and if (B²)_ij ≠ 0, it favors motif detection. We then solve the optimization problem
min_B ||X − XB||_F² + λ_1 ||M ∘ B||_{ℓ1},    (3)

where M ∘ B is the pointwise matrix product.
The algorithm iterates between reconstructing the matrix B and updating the weight matrix M
(initialized as the identity matrix). Usually after two or three iterations the graph stabilizes.
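A sketch of the iteration follows. Two assumptions of ours: we compute B² on the support of B (one reasonable reading of the construction), and we initialize M with all ones so that the first pass coincides with model (1). `solve_weighted` stands in for a solver of the weighted problem (3), e.g. the CVXPY formulation above with `lam1 * cp.sum(cp.abs(cp.multiply(M, B)))`:

```python
import numpy as np

def motif_weights(B, tau=1.0):
    """M_ij = exp(-tau * (B^2)_ij): edges closing a length-2 path
    (i.e., forming a triangle) are penalized less."""
    support = (B != 0).astype(float)
    return np.exp(-tau * (support @ support))

def iterate_motifs(X, solve_weighted, n_iter=3, tau=1.0):
    """Alternate between solving the weighted problem (3) and updating M."""
    p = X.shape[1]
    M = np.ones((p, p))
    B = None
    for _ in range(n_iter):
        B = solve_weighted(X, M)
        M = motif_weights(B, tau)
    return B
```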
5 Experimental Results
In this section we present numerical and graphical results for the proposed models, and compare
them with the original joint regression one.
As discussed in the introduction, there is evidence that most real life networks present scale-free behavior. Therefore, when considering simulated results for validation, we use the model by Barabási
& Albert (1999) to generate graphs with this property. Namely, we start from a random graph with 4
nodes and add one node at a time, randomly connected to one of the existing nodes. The probability
of connecting the new node to the node i is proportional to the current degree of node i.
Given a graph with adjacency matrix A, we simulate the data X as follows (Liu & Ihler, 2011): let D be a diagonal matrix containing the degree of node i in the entry D_ii, and consider the matrix L = εD − A with ε > 1 so that L is positive definite. We then define the concentration matrix Ω = Λ^{1/2} L Λ^{1/2}, where Λ is the diagonal matrix of L^{-1} (used to normalize the diagonal of Σ = Ω^{-1}). Gaussian data X is then simulated with distribution N(0, Σ). For each algorithm, the parameters are set such that the resulting graph has the same number of edges as the original one. As the total number of edges is then fixed, the false positive (FP) rate can be deduced from the true positive (TP) rate. We therefore report the TP rate only, since it is enough to compare the different performances.
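A sketch of this simulation pipeline, assuming networkx for the Barabasi-Albert graph (our tooling choice):

```python
import numpy as np
import networkx as nx

def simulate_gaussian_data(p, k, eps=1.5, seed=0):
    """Scale-free graph, then Gaussian samples whose precision matrix has
    the graph's support and a unit-diagonal covariance (Section 5 setup)."""
    A = nx.to_numpy_array(nx.barabasi_albert_graph(p, 1, seed=seed))
    D = np.diag(A.sum(axis=1))
    L = eps * D - A                            # positive definite for eps > 1
    Lam = np.diag(np.diag(np.linalg.inv(L)))   # diagonal of L^{-1}
    Omega = np.sqrt(Lam) @ L @ np.sqrt(Lam)    # normalized concentration
    Sigma = np.linalg.inv(Omega)
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=k)
    return A, X
```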
5.1 Including Actual Centrality
In this first experiment we show how our model (2) is able to correctly incorporate the prior centrality
information, resulting in a more accurate inferred graph, both in detected edges and in topology.
The graph of the example in Figure 1 contains 20 nodes. We generated 10 samples and inferred the
graph with the joint regression model and with the proposed model (2) using the correct centrality.
Figure 1: Comparison of networks estimated with the simple joint model (1) (middle) and with model (2)
(right) using the eigenvector centrality. Original graph on left.
The following more comprehensive test shows the improvement with respect to the basic joint model
(1) when the correct centrality is included. For a fixed value of p = 80, and for each value of k from
30 to 50, we made 50 runs generating scale-free graphs and simulating data X. From these data
we estimated the network with the joint regression model with and without the centrality prior. The
TP edge rates in Figure 2(a) are averaged over the 50 runs, and count the correctly detected edges
over the (fixed) total number of edges in the network. In addition, Figure 2(b) shows a ROC curve.
We generated 300 networks and constructed a ROC curve for each one by varying λ_1, and we then
averaged all the 300 curves. As expected, the incorporation of the known topological property helps
in the correct estimation of the graph.
[Figure 2 plots: (a) True positive rates for different sample sizes on networks with 80 nodes. (b) Edge detection ROC curve for networks with p = 80 nodes and k = 50.]
Figure 2: Performance comparison of models (2) and (1). In blue (dashed), the standard joint model (1), and in black the proposed model with centrality (2). In thin lines, curves corresponding to 95% confidence intervals.
Following the previous discussion, Figure 3 shows the inner product ⟨v_B, v_C⟩ for several runs of model (2), where v_B is the leading eigenvector of the obtained matrix B, C is the resulting connectivity matrix (the binarized version of B), and v_C its leading eigenvector.
[Figure 3 plot: inner product vs. run number.]
Figure 3: Inner product ⟨v_C, v_B⟩ for 200 runs.
[Figure 4 plot: TP edge rate vs. k.]
Figure 4: True positive edge rates for different sample sizes on a network with 100 nodes. Dashed, the joint model (1), dotted, the PC-Algorithm, and solid the model (2) with centrality estimated from data.
5.2 Imposing Centrality Estimated from Data
The previous section shows how the performance of the joint regression model (1) can be improved
by incorporating the centrality, when this topology information is available. However, when this
vector is unknown, it can be estimated from the data using an independent algorithm, and then incorporated into the optimization in model (2). We use the PC-Algorithm to estimate the centrality
(by computing the dominant eigenvector of the resulting graph), and then we impose it as the vector
c in model (2). It turns out that even with a technique not specialized for centrality estimation, this
combination outperforms both the joint model (1) and the PC-Algorithm.
We compare the three mentioned models on networks with p = 100 nodes for several values of k,
ranging from 20 to 70. For each value of k, we randomly generated ten networks and simulated
data X. We then reconstructed the graph using the three techniques and averaged the edge rate over
the ten runs. The parameter λ_2 was obtained via cross validation. Figure 4 shows how the model
imposing centrality can improve the other ones without any external information.
5.3 Transferring Centrality
In several situations, one may have some information about the topology of the graph to infer,
mainly based on other data/graphs known to be similar. For instance, dynamic networks are a good
example where one may have some (maybe abundant) old data from the network at a past time T_1, some (maybe scarce) new data at time T_2, and know that the network topology is similar at the different times. This may be the case of financial, climate, or any time-series data. Outside of temporally varying networks, this topological transference may be useful when we have two graphs of the same kind (say biological networks), which are expected to share some properties, and lots
of data is available for the first network but very few samples for the second network are known.
We would like to transfer our inferred centrality-based topological knowledge from the first network into the second one, thereby improving the network estimation from limited data.
For these examples, we have an unknown graph G_1 corresponding to a k_1 × p data matrix X_1, which we assume is enough to reasonably estimate G_1, and an unknown graph G_2 with a k_2 × p data matrix X_2 (with k_2 ≪ k_1). Using X_2 only might not be enough to obtain a proper estimate of G_2, and considering the whole data together (concatenation of X_1 and X_2) might be an artificial mixture or too strong and lead to basically reconstructing G_1. What we really want to do is to transfer some high-level structure of G_1 into G_2, e.g., just the underlying centrality of G_1 is transferred to G_2.
In what follows, we show the comparison of inferring the network G_2 using only the data X_2 in the joint model (1); the concatenation of X_1 and X_2 in the joint model (1); and finally the centrality estimated from X_1, imposed in model (2), along with data X_2. We fixed the network size to p = 100 and the size of data for G_1 to k_1 = 200. Given a graph G_1, we construct G_2 by randomly changing a certain number of edges (32 and 36 edges in Figure 5). For k_2 from 35 to 60, we generate data X_2, and we then infer G_2 with the methods described above. We averaged over 10 runs.
As can be observed in Figure 5, the performance of the model including the centrality estimated from X_1 is better than the performance of the classical model, both when using just the data X_2 and the concatenated data X_1|X_2. Therefore, we can discard the old data X_1 and keep only the structure (centrality) and still be able to infer a more accurate version of G_2.
Figure 5: True positive edge rate when estimating the network G_2 vs. amount of data. In blue, the basic joint model using only X_2, in red using the concatenation of X_1 and X_2, and in black the model (2) using only X_2 with centrality estimated from X_1 as prior. (a) G_1/G_2 differ in 32 edges. (b) G_1/G_2 differ in 36 edges.
5.4 Experiments on Real Data
5.4.1 International Stock Market Data
The stock market is a very complicated system, with lots of time-dependent underlying relationships.
In this example we show how the centrality constraint can help to understand these relationships with limited data in times of crisis and times of stability.
We use the daily closing values (ρ_k) of some relevant stock market indices from U.S., Canada, Australia, Japan, Hong Kong, U.K., Germany, France, Italy, Switzerland, Netherlands, Austria, Spain, Belgium, Finland, Portugal, Ireland, and Greece. We consider 2 time periods containing a crisis, 5/2007-5/2009 and 5/2009-5/2012, each of which was divided into a "pre-crisis" period, and two more sets (training and testing) covering the actual crisis period. We also consider the relatively stable period 6/1997-6/1999, where the division into these three subsets was made arbitrarily. Using as data the return between two consecutive trading days, defined as 100 log(ρ_k / ρ_{k−1}), we first learned
the centrality from the ?pre-crisis? period, and we then learned three models with the training sets:
a classical least-squares regression (LS), the joint regression model (1), and the centrality model (2)
with the estimated eigenvector. For each learned model B we computed the "prediction" accuracy
‖Xtest − Xtest B‖²_F in order to evaluate whether the inclusion of the topology improves the estimation. The results are presented in Table 1, illustrating how the topology helps to infer a better model,
both in stable and highly changing periods. Additionally, Figure 6 shows a graph learned with the
model (2) using the 2009-2012 training data. The discovered relationships make sense, and we can
easily identify geographic or socio-economic connections.
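As an illustration of this evaluation pipeline (a sketch under our reading of the text, not the authors' code), the returns and the reported error measure can be computed as follows; `prices` is assumed to be a days × indices array of closing values ρk, and B the regression matrix of a learned model.

```python
import numpy as np

def log_returns(prices):
    """Daily returns 100 * log(rho_k / rho_{k-1}) for a (days x indices) array."""
    return 100.0 * np.diff(np.log(prices), axis=0)

def prediction_error(X_test, B):
    """The reported measure ||X_test - X_test B||_F^2."""
    residual = X_test - X_test @ B
    return float(np.sum(residual ** 2))
```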
Figure 6: Countries network learned with the centrality model. (Nodes: UK, US, IR, FN, CA, SW,
FR, GE, SP, GR, IT, BE, PO, NE, AT, HK, JP, AU.)

Table 1: Mean square error (×10⁻³) for the different models.

            97-99   07-09   09-12
LS           2.7     3.5    14.4
Model (1)    2.5     0.9     4.0
Model (2)    1.9     0.6     2.4

5.4.2 Motif Detection in Escherichia Coli
In this section and the following one, we use as base graph the actual genetic regulation network
of E. coli. This graph contains ≈ 400 nodes, but for practical reasons we selected the sub-graph of
all nodes with degree > 1. This sub-graph GE contains 186 nodes and 40 feedforward loop motifs.
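For reference, a feedforward loop is an ordered triple x→y→z together with the shortcut edge x→z. The motif counts used below can be computed from a binary directed adjacency matrix with the following sketch (our illustration, assuming no self-loops):

```python
import numpy as np

def count_feedforward_loops(A):
    """Count feedforward loops in a directed 0/1 adjacency matrix
    (A[x, y] = 1 iff x -> y, no self-loops assumed).
    (A @ A)[x, z] counts length-2 paths x -> y -> z; multiplying
    elementwise by A keeps only triples with the shortcut edge x -> z."""
    return int(((A @ A) * A).sum())
```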
For the number of samples k varying from 30 to 120, we simulated data X from GE and reconstructed the graph using the joint model (1) and the iterative method (3). We then compared the
resulting networks to the original one, both in true positive edge rate (recall that this analysis is sufficient since the total number of edges is kept constant) and in the number of motifs correctly detected.
The numerical results are shown in Figure 7, where it can be seen that model (3) correctly detects
more motifs, with a better TP vs FP motif rate, and without detriment to the true positive edge rate.
5.4.3 Centrality + Motif Detection
The simplicity of the proposed models allows us to combine them with other existing network estimation extensions. We now show the performance of the two models presented here combined
(centrality and motif constraints), tested on the Escherichia coli network.
Figure 7: Comparison of model (1) (dashed) with proposed model (3) (solid) for the E. coli network. Left:
TP edge rate. Middle: TP motif rate (motifs correctly detected over the total number of motifs in GE ). Right:
Positive predictive value (motifs correctly detected over the total number of motifs in the inferred graph).
We first estimate the centrality from the data, as in Section 5.2. Let us assume that we know which
are the two most central nodes (genes).¹ This information can be used to modify the centrality
value for these two nodes, by replacing them with the two highest centrality values typical of scale-free networks (Newman, 2010). For the fixed network GE, we simulated data of different sizes
k and reconstructed the graph with the model (1) and with the combination of models (2) and (3).
Again, we compared the TP edge rates, the percentage of motifs detected, and the TP/FP motif rate.
Numerical results are shown in Figure 8, where it can be seen that, in addition to the motif detection
improvement, now the edge rate is also better. Figure 9 shows the obtained graphs for a specific run.
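A minimal sketch of the centrality-adjustment step described above (our illustration; the index positions and replacement values are hypothetical):

```python
import numpy as np

def inject_known_hubs(centrality, hub_idx, hub_values):
    """Overwrite the centrality of known hub genes (here crp and fnr)
    with the two largest values expected in a scale-free network,
    then renormalize to unit norm. hub_idx/hub_values are hypothetical."""
    c = centrality.copy()
    c[hub_idx] = hub_values
    return c / np.linalg.norm(c)
```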
Figure 8: Comparison of model (1) (dashed) with the combination of models (2) and (3) (solid) for the E. coli
network. The combination of the proposed extensions is capable of detecting more motifs while also improving
the accuracy of the detected edges. Left: TP edge rate. Middle: TP motif rate. Right: Positive predictive value.
Figure 9: Comparison of graphs for the E. coli network with k = 80. Original network, inferred with model (1)
and with the combination of (2) and (3). Note how the combined model is able to better capture the underlying
network topology, as quantitatively shown in Figure 8. Correctly detected motifs are highlighted.
6 Conclusions and Future Work
We proposed two extensions to ℓ1-penalized models for precision matrix (network) estimation. The
first one incorporates topological information into the optimization, allowing control of the graph
centrality. We showed how this model is able to capture the imposed structure when the centrality
is provided as prior information, and we also showed how it can improve the performance of the
basic joint regression model even when no such external information is available. The second extension
favors the appearance of triangles, allowing better detection of motifs in genetic regulatory networks.
We combined both models for a better estimation of the Escherichia coli GRN.
There are several other graph-topological properties that may provide important information, making it interesting to study which kinds of structure can be added to the optimization problem. An
algorithm for estimating the centrality with high precision directly from the data would be a great
complement to the methods presented here. It is also important to find a model which exploits all
the prior information about GRNs, including other motifs not explored in this work. Finally, the
exploitation of the methods developed here for ℓ1-graphs is the subject of future research.

¹ In this case, it is well known that crp is the most central node, followed by fnr.
Acknowledgements
Work partially supported by ANII (Uruguay), ONR, NSF, NGA, DARPA, and AFOSR.
References
Banerjee, O., El Ghaoui, L., and d'Aspremont, A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research,
9:485–516, 2008.
Barabási, A. and Albert, R. Emergence of scaling in random networks. Science, 286(5439):509–512,
1999.
Dempster, A. Covariance selection. Biometrics, 28(1):157–175, 1972.
Friedman, J., Hastie, T., and Tibshirani, R. Sparse inverse covariance estimation with the graphical
lasso. Biostatistics, 9(3):432–441, July 2008.
Friedman, J., Hastie, T., and Tibshirani, R. Applications of the lasso and grouped lasso to the
estimation of sparse graphical models. Technical report, 2010.
Kalisch, M. and Bühlmann, P. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. Journal of Machine Learning Research, 8:613–636, 2007.
Lauritzen, S. Graphical Models. Clarendon Press, Oxford, 1996.
Liu, Q. and Ihler, A. Learning scale free networks by reweighted ℓ1 regularization. AI & Statistics,
15:40–48, April 2011.
Meinshausen, N. and Bühlmann, P. High-dimensional graphs and variable selection with the Lasso.
The Annals of Statistics, 34(3):1436–1462, June 2006.
Newman, M. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010.
Peng, J., Wang, P., Zhou, N., and Zhu, J. Partial correlation estimation by joint sparse regression
models. Journal of the American Statistical Association, 104(486):735–746, June 2009.
Shen-Orr, S., Milo, R., Mangan, S., and Alon, U. Network motifs in the transcriptional regulation
network of Escherichia coli. Nature Genetics, 31(1):64–68, May 2002.
Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society, Series B, 58:267–288, 1996.
Wolf, L. and Shashua, A. Feature selection for unsupervised and supervised inference: The emergence of sparsity in a weight-based approach. Journal of Machine Learning Research, 6:1855–1887, 2005.
Wright, S., Nowak, R., and Figueiredo, M. Sparse reconstruction by separable approximation. IEEE
Transactions on Signal Processing, 57(7):2479–2493, 2009.
Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal
of the Royal Statistical Society: Series B, 68(1):49–67, 2006.
Yuan, M. and Lin, Y. Model selection and estimation in the Gaussian graphical model. Biometrika,
94(1):19–35, February 2007.
Zeitouni, O. Personal communication, 2012.
4,107 | 4,717 |
Searching for objects driven by context
Bogdan Alexe
BIWI
ETH Zurich
Nicolas Heess
Gatsby Unit
UCL
Yee Whye Teh
Department of Statistics
University of Oxford
Vittorio Ferrari
School of Informatics
University of Edinburgh
Abstract
The dominant visual search paradigm for object class detection is sliding windows. Although simple and effective, it is also wasteful, unnatural and rigidly
hardwired. We propose strategies to search for objects which intelligently explore the space of windows by making sequential observations at locations decided
based on previous observations. Our strategies adapt to the class being searched
and to the content of a particular test image, exploiting context as the statistical
relation between the appearance of a window and its location relative to the object, as observed in the training set. In addition to being more elegant than sliding
windows, we demonstrate experimentally on the PASCAL VOC 2010 dataset that
our strategies evaluate two orders of magnitude fewer windows while achieving
higher object detection performance.
1
Introduction
Object class detection is a central problem in computer vision. Among the broad palette of approaches [2, 22, 31], most state-of-the-art detectors rely on the sliding window paradigm [7, 8, 12,
15, 30, 31]. A classifier is trained to decide whether a window contains an instance of the target
class and is used at test time to score all windows in an image over a regular grid in location and
scale. The local maxima of the score function are returned as the detections. Despite its popularity,
the sliding window paradigm seems wasteful and unnatural. Cognitive science research [24] measuring eye-tracks has shown that humans search for objects in a very different way, by successively
exploring a small number of promising locations, rapidly converging on the object of interest. This
process decides where to look next based on the context gathered in previous observations (fixation
points). As opposed to sliding-windows, this search scheme adapts to the image content and the
class being searched.
In this paper we propose strategies to search for objects in images which have these crucial characteristics. Each strategy is specific to an object class and intelligently explores the space of windows
by making sequential observations at locations decided based on previous observations. Figure 1
illustrates the key intuition by applying an ideal strategy to search for cars in a test image. The strategy might start at window w1 , which is a patch of sky. The strategy has learned from the training
data that cars are typically below the sky, so it decides to try a window below w1 , e.g. moving to
window w2 . As w2 covers a patch of road, and the strategy has learned that cars are frequently found
on roads, it continues to search the road region, e.g. moving to w3 . As w3 contains the right end of
a car, the strategy moves to the left to w4 , completing the search.
Given a set of training images of a class with ground-truth object locations, our method learns a
strategy to localize objects of that class by sequentially evaluating windows. To achieve this it
models the statistical relation between the position and appearance of windows in the training images
to their relative position wrt to the ground-truth (sec. 2 and 3). In addition to being more elegant than
sliding windows, the proposed technique offers practical advantages. It greatly reduces the number
of observed windows, and therefore the number of times a window classifier is evaluated (potentially
very expensive [15, 30]). Moreover, it naturally exploits context information to avoid evaluating the
classifier on large portions of an image which might contain cluttered areas. This leads to lower
Figure 1: Searching for a car driven by context. An ideal search strategy moves through the sequence of
windows w1 to w4 . See main text.
false-positive rates, and therefore higher object detection performance than sliding windows, despite
evaluating fewer windows. Finally, our method makes no assumption on the form of the window
classifier and therefore can be applied on top of any classifier (e.g. [7, 8, 12, 15, 30, 31]).
In sec. 5 we report experiments on the highly challenging PASCAL VOC 2010 dataset, using the
popular deformable part model of [12] as the window classifier. The experiments demonstrate that
our learned strategies perform better in terms of object detection accuracy than sliding windows,
while greatly reducing the number of classifier evaluations by a factor of 250× (100 vs 25000
in [12]). Moreover, we outperform two recent methods to reduce the number of classifier evaluations [1, 29] as they evaluate about 1000 windows while losing detection accuracy compared to
sliding windows. To our knowledge, this is the first method capable of saving window evaluations
while at the same time improving detection accuracy.
Related work. Several works try to reduce the number of windows evaluated in the traditional
sliding-window paradigm. Lampert et al. [20] proposed a branch-and-bound scheme to find the
highest scored window while evaluating the classifier as few times as possible. However, it is restricted to classifiers for which a good upper bound on a set of windows exists. Other works [15, 30]
first run a linear classifier over all windows, and then evaluate a complex non-linear kernel only on
a few highly scored windows. The recent approaches [1, 29] evaluate the classifier only on a small
number of windows likely to cover objects rather than backgrounds. The authors of [11, 26] propose
a complementary tactic: to reduce the cost of evaluating one window, but stay in the sliding-window
paradigm. Their techniques are specific to the window classifier [12], as they exploit its exact form
(e.g. parts [11], two resolutions [26]).
Context has been used by [6, 8, 16, 28] to improve object detection. They employ background-to-object context to avoid out-of-context false-positive detections [16, 28], or reason about the spatial
relations between multiple objects [6, 8]. All these methods use context as an additional cue on top
of individual object detectors, whereas in our approach context drives the search for an object in the
image, determining the sequence of windows where the classifier is evaluated.
Numerous works propose saliency detectors [1, 13, 17, 18] which try to find interesting regions in an
image corresponding to objects of any class. These are often inspired by human eye movements [9,
19]. Our goal instead is to devise a search strategy specific to one particular class, that can exploit
the relation between context appearance and the position of instances of that class in training images.
Closest to our work are techniques that consider vision with sequential fixations as a task-oriented
learning problem [4, 5, 21, 25]. Analog to our work, [5] reduces the number of window classifier
evaluations, avoiding the wasteful sliding window scheme. However, it only considers the output
of the window classifier and therefore cannot exploit context. Our search instead is driven by the
relation between the appearance of a window and the relative location of the object, as learned
from annotated training images. This has the added benefit of improving object detection accuracy compared to sliding windows. Importantly, to our knowledge, no previous approach has been
demonstrated on a dataset of difficulty similar to PASCAL VOC.
The rest of the paper is organized as follows. Sec. 2 gives an overview of our new method to localize
objects, followed by a detailed presentation in sec. 3. In sec. 4 we discuss the most important
implementation issues and conclude with experiments in sec. 5.
Figure 2: Displacement vector. (a) Three windows wl in a training image and their displacement vectors dl.
(b) A test image. Applying dl to the current observation window wt results in the translated windows wt ⊕ dl.
2 Overview of our method
Our method detects an object in a test image with a sequential process, by evaluating one window
yt at each time step t. Over time, it gradually integrates these local observations into a global
estimate of the object location in the image. At each time step, it actively decides which window
to evaluate next based on all past observations, trying to acquire observations that will improve
the global location estimate. This decision process is learned from a set of images labeled with
ground-truth bounding-boxes on all instances of the object class. The key driving force here is the
statistical dependency between the position/appearance of a window and the ground-truth location
of the object (e.g. cars are often on roads; boats are often below sky). Our method first finds training
windows similar in position/appearance to the current window yt in the test image. Then, each
such training window votes for a possible object location in the test image through its displacement
vector relative to the ground-truth object (fig. 2). At each time step these votes are accumulated into
a probabilistic map of possible object locations (fig. 3). The maps are then integrated over time and
used to decide which window to evaluate next (sec. 3.1).
The behavior of our decision process is controlled by the weights of the various features in the
similarity measure used to compare windows in the test image to training windows. We adapt these
weights to each class by optimizing the accuracy with which the strategy localizes training object
instances in a single time step (sec. 3.3).
The process involves comparing high-dimensional appearance descriptors between a test window
yt and hundreds of thousand training windows. We greatly reduce the cost of these comparisons by
embedding the descriptors in a lower-dimensional Hamming space using [14] (sec. 4).
3 Context-driven search
In this section we describe our method in detail. Given a test image x, it sequentially collects a fixed
number T of observations yt for windows wt before making a final detection decision. At each time
step t the next observation window is chosen based on all past observations. Thus, we try to solve
two tightly connected problems:
(1) At each time step t < T , given past observations y1 . . . yt obtained for windows w1 . . . wt we
need to actively choose the window wt+1 where to make the next observation yt+1 . In section 3.1
we formalize this in terms of a mapping π^S from past observations to the next observation window.
We refer to this mapping as our search policy (sec. 3.1).
(2) At the last time step t = T , given all observations y1 . . . yT , w1 . . . wT , we need to make a final
decision about the object location. We refer to this mapping as our output policy π^O (sec. 3.2).
The two problems are tied since the observations made at time steps 1 . . . T affect our ability to
detect the object. Hence, we want to pick a search policy π^S that chooses windows leading to
observations that enable the output policy π^O to make a good detection decision. In sec. 3.1 and 3.2
we explain how to tackle these problems individually. We then discuss how the parameters of the
search policy can be adapted to a particular class to optimize detection accuracy (sec. 3.3).
In the following we assume that a window wt = (xt , y t , st ) is defined by its x, y location and scale
s. In any given image x there is a fixed set of windows from a dense grid in x, y and scale space
that depends on the image size and the aspect ratio of the class under consideration (see sec. 4).
An observation consists of J feature vectors fjt which describe a window yt = (f1t , . . . , fJt ). Sec. 4
details the specific grid and window features we use.
3.1 Search policy
The search policy π^S determines the choice of the next observation window given the observation
history at time step t. We formalize this in terms of a mapping wt+1 = π^S(w1, y1, ..., wt, yt)
from past observations to the next observation window. This mapping is based on a conditional
distribution M^t(w | w1, y1, ..., wt, yt; θ) over all possible candidate observation locations w in
the test image given the past observation windows. θ are the parameters of M. The mapping
chooses the window with highest probability in M as the next observation window

wt+1 = π^S(w1, y1, ..., wt, yt) = argmax_w M^t(w | w1, y1, ..., wt, yt; θ).    (1)
The conditional distribution M^t is parameterized in terms of a set of probabilistic vote maps
m(w | wt, yt). These maps are obtained independently at each time step and can be seen as distributions over windows w, given the information about the image from that time step only. In the
following we explain how the individual vote maps are obtained at a time step t, and then describe
how they are integrated over time to form the full conditional distribution M^t.
Vote maps. Individual vote maps are represented in a non-parametric manner which enables us
to capture complex dependencies between observations and object locations. As usual in object
detection, our method is given a set of training images labeled with ground-truth bounding-boxes
on object instances of the class. We sample a large number L of windows wl uniformly from
all training images, and we store their positions wl = (xl, yl, sl), the associated feature vectors yl,
as well as the displacement vectors dl that record the location of the ground-truth object relative to
a window. Each window in a training image can use this to vote for the relative position dl where it
expects the object to be. Given the current observation yt for image window wt in the test image,
the distribution over object positions is then given by the spatial distribution of these votes
m̃(w; wt, yt, θ) = Σ_{l=1}^{L} KF(yt, yl; θF) · KS(w, wt ⊕ dl; θS).    (2)
Here, KF is a kernel that measures the similarity between the features describing two windows and
is used to weight each vote; KS is a spatial smoothing kernel; θS, θF are the kernel parameters; the
operator ⊕ translates a window wt by the displacement vector dl (after appropriately rescaling it to
compensate for the potentially different sizes of the training and test images).

The summation over all L training windows is computationally expensive. In practice we truncate
it and consider only the Z training windows most similar to the current observation window yt.
Hence, eq. (2) can be seen as a soft nearest-neighbor estimator (NN, fig. 3).¹
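The following sketch illustrates the truncated version of eq. (2), normalized as in eq. (4) below (our illustration, not the authors' code): `K_F` and `K_S` are the kernels passed in as callables, `train_disp` stores the displacement vectors dl, and only the Z nearest neighbors are used.

```python
import numpy as np

def translate(win, disp):
    """Shift a window (x, y, s) by a stored displacement (dx, dy, ds)."""
    return (win[0] + disp[0], win[1] + disp[1], win[2] + disp[2])

def vote_map(query_feat, query_win, train_feats, train_disp,
             grid, K_F, K_S, Z=10):
    """Truncated eq. (2), normalized over the grid as in eq. (4)."""
    sims = np.array([K_F(query_feat, f) for f in train_feats])
    votes = np.zeros(len(grid))
    for l in np.argsort(-sims)[:Z]:            # Z most similar training windows
        target = translate(query_win, train_disp[l])   # w_t (+) d_l
        votes += sims[l] * np.array([K_S(w, target) for w in grid])
    return votes / votes.sum()
```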
Different choices for KF and KS are conceivable. For KF we use an exponential function on
distances computed separately for each type j of feature vector fj describing a window (see sec. 4)
KF(yt, yl; θF) = exp{ −Σ_{j=1}^{J} θF^j dj(fjt, fjl) },    (3)
with dj(·, ·) being the distance between two feature vectors of type j. The scalar parameters θF^j
weight the contributions of the various feature types. For KS we use the area of intersection divided
by the area of union [10]. This choice of KS has no parameters θS, but forms with free parameters
are also possible, e.g. a Gaussian kernel.
¹ In all experiments we use Z = 10.
Figure 3: Search policy during one time step. The next observation window (green) is chosen as the highest
probability window in the current vote map M^t. Next we retrieve the Z most similar training windows according to KF (NN arrow, for "nearest neighbors"). Each NN votes through its displacement vector (yellow arrow)
pointing to the ground-truth object (cyan). This leads to the vote map m specific to this time step t + 1 (eq. (2)
and (4)), which is finally integrated into the updated whole map M^{t+1} (eq. (6)). The cycle then repeats in t + 2.
Integrating vote maps over time. Normalizing the vote map m̃(w; wt, yt, θ) in eq. (2) at a time
step t yields a conditional distribution over candidate observation locations given the observation yt
at window wt:

m(w | wt, yt, θ) = m̃(w; wt, yt, θ) / Σ_{w'} m̃(w'; wt, yt, θ).    (4)
In order to obtain M^t(w | w1, y1, ..., wt, yt; θ) we integrate the normalized vote maps over all past
time steps 1 ... t using an exponentially decaying mixture

M^t(w | w1, y1, ..., wt, yt; θ) = Σ_{t'=1}^{t} a(t, t') m(w | wt', yt', θ),    (5)

where a(t, t') = γ(1 − γ)^{t−t'} for t' > 1 and a(t, 1) = (1 − γ)^{t−1} for some constant 0 < γ ≤ 1.²
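As a quick consistency check (our own derivation, not stated in the paper), these decay weights sum to one, so M^t remains a normalized distribution:

```latex
\sum_{t'=1}^{t} a(t,t')
  = (1-\gamma)^{t-1} + \sum_{t'=2}^{t} \gamma (1-\gamma)^{t-t'}
  = (1-\gamma)^{t-1} + \gamma \sum_{k=0}^{t-2} (1-\gamma)^{k}
  = (1-\gamma)^{t-1} + \bigl(1 - (1-\gamma)^{t-1}\bigr)
  = 1 .
```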
Defining M^t as in eq. (5) has two appealing interpretations. Firstly, we can see the full vote map
M^t at time step t as a sufficient statistic of past observations which is updated recursively

M^{t+1}(w | w1, y1, ..., wt+1, yt+1) = γ m(w | wt+1, yt+1) + (1 − γ) M^t(w | w1, y1, ..., wt, yt),    (6)
so that we only need to store the latest full vote map. Even though the next observation window is
chosen deterministically (eq. (1)), by deriving it from the probabilistic vote map M^t and updating
this map over time we are effectively maintaining an estimate of the uncertainty about which are
good candidate windows to visit in the next step. Secondly, M^t should not be seen as a posterior
distribution over actual object locations. It should be seen as a policy to propose windows that
should be visited in the future. Each past observation independently proposes candidate observation
locations which are later visited if they accumulate enough support over time. The exponential
decay ensures that the influence of observations made a long time ago gets gradually forgotten and
therefore encourages exploration. This makes particular sense in combination with the output policy
that we employ (see sec. 3.2) for which it is sufficient to have visited the correct object location once
over the course of the whole search history.
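The recursive update of eq. (6) together with the window selection of eq. (1) reduces to a few lines; this is a sketch of that bookkeeping (our illustration):

```python
import numpy as np

def update_map(M_prev, m_new, gamma=0.5):
    """Eq. (6): new votes enter with weight gamma; older evidence
    decays geometrically with factor (1 - gamma)."""
    return gamma * m_new + (1.0 - gamma) * M_prev

def next_window(M, grid):
    """Eq. (1): the next observation window is the argmax of the map."""
    return grid[int(np.argmax(M))]
```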
3.2 Output policy
After obtaining T observations y1 . . . yT for T different windows w1 . . . wT in the test image, our
strategy must output a single window which it believes to be the most likely to contain an object of
the class of interest. As our strategy is designed to visit good candidate windows over the course
² In the experiments we set γ = 0.5.
of its search, we simply output as the final detection the visited window that has the highest score
according to a window classifier c trained beforehand for that class [12] (see sec. 4)
wout = argmax_t c(wt)    (7)

3.3 Learning weights
Our search policy involves the feature weights θF = {θF^j} in the window similarity kernel (eq. (3)),
which need to be set for each class. Directly maximizing the detection rate after T steps is difficult
for several reasons: the detection rate is piecewise constant and non-continuous w.r.t. the parameters
θF; the search is a sequential decision process where windows selected at different time steps depend
on each other; and the policy is non-differentiable with respect to θF (due to the max in eq. (1)). We
therefore use an approximate learning procedure that iteratively optimizes the one-step detection
performance of the stochastic vote maps M^t (eq. (5)).
Given a training set of images with ground-truth object bounding-boxes, we partition it into
two equal-sized disjoint subsets. The first subset provides the L training windows for the nonparametric representation of m in eq. (2). On the second subset we run a stochastic version
of our search strategy in which we sample the next observation window according to wt+1 ∼
M^t(· | w1, y1, ..., wt, yt; θ) (instead of taking the max). Running the strategy once on the b-th
training image produces a sample sequence of windows and associated observations
h̃ = (w̃1, ỹ1, ..., w̃T, ỹT). Given this sequence we then improve the following objective

J(θF; h̃) = Σ_{t=1}^{T} Σ_w M^t(w | w̃1, ỹ1, ..., w̃t, ỹt; θ) · KS(w, wGT^b)    (8)

by updating the parameters by the gradient of J: θF,new = θF,old + η ∇θF J(θF,old; h̃). We denote
by wGT^b the ground-truth object location. At the beginning of learning we initialize all θF^j to 1
and then perform several hundred updates as described, cycling through the training images. Each
update involves re-running the strategy on a training image to obtain a sample history h̃.³
The objective (8) tries to maximize the overlap KS with the ground-truth bounding-box weighted
by M^t, hence encouraging the policy to choose for the next step windows that are likely to lie on the
object to be detected. While it leads to good results, our learning procedure is only an approximation.
In particular, it optimizes the weights to maximize only the single-step performance.⁴
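The overall learning loop then looks as follows (a sketch, not the authors' code; `run_search` and `grad_objective` are injected placeholders for the stochastic search under the current policy and the gradient of eq. (8)):

```python
import numpy as np

def learn_weights(images, run_search, grad_objective, n_updates=500, lr=0.01):
    """Approximate learning loop of sec. 3.3. run_search samples a search
    history with the stochastic policy; grad_objective returns the gradient
    of the one-step objective (eq. (8)) w.r.t. the feature weights."""
    theta = np.ones(3)                      # one weight per feature type (J = 3)
    for it in range(n_updates):
        image = images[it % len(images)]    # cycle through the training images
        history = run_search(image, theta)
        theta = theta + lr * grad_objective(history, theta)
    return theta
```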
4 Implementation details
Window classifier. As window classifier we choose the popular multiscale deformable part model
of [12]. This model includes one root HOG filter [7] plus several part HOG filters with their associated deformation models. The score of a window at location (x, y, s) is a weighted sum of the score
of the root filter, the part filters and a deformation cost measuring the deviation of the part from its
mean training position. The work [12] also defines a multiscale image grid which forms the fixed
set of windows observable by our method (sec. 3). Note how all windows in this grid have the same
aspect-ratio, as there is a separate window classifier per object viewpoint [12].
Window features. The kernel KF used for computing the similarity between two windows in
eq. (3) involves J = 3 feature types: (j=1) f1 is the (x, y, s) location normalized by image size; (j=2)
f2 is a histogram of oriented gradients (HOG) [7]; (j=3) f3 is the score of the window classifier c [12].
Their corresponding distance functions dj are: (j=1) d1 = 1 − KS(f1t, f1l), i.e. one minus the intersection-over-union of the two windows; (j=2) d2 is the normalized Hamming distance between binary string
³ In practice we introduce an additional parameter β not present in eq. (5): we use
M̃^t(w | w̃1, ỹ1, ..., w̃t, ỹt; θ) = 1/Z(β, h̃, θ) · M^t(w | w̃1, ỹ1, ..., w̃t, ỹt; θ)^β. β acts as an inverse temperature and interpolates between a uniform policy (for β → 0) and a policy that always selects the highest
probability window as in eq. (1) (for β → ∞). We use Z = 100, L = 100000.
⁴ The current procedure can be seen as an approximation to stochastic gradient ascent in the full sequential
objective J(θ) = Σ_{t=1}^{T} Σ_{h^{t−1}} p^M(h^{t−1}) Σ_w M(w | h^{t−1}) r(w), where h^{t−1} = (w1, y1, ..., wt, yt),
p^M(h^t) is the distribution over observation sequences of length t resulting from the stochastic policy M, and
r(w) = KS(w, wGT^b). The approximation arises as we ignore changes in the distribution over the observation
sequences that are induced by the changes in parameters, i.e. we assume ∂p^M/∂θ = 0.
Table 1: Object detection on PASCAL VOC10. For each method we show the average number of windows
evaluated per image (#win), the detection rate (DR) and the mean average precision (mAP) over all 54
class-viewpoint combinations. See main text for details.
        Sliding Window [12]   Our     Random Chance   Objectness [1]   Selective Search [29]
mAP     0.266                 0.293   0.070           0.259            0.261
DR      0.372                 0.409   0.124           0.366            0.370
#win    25000                 100     100             1046             408
representations of the HOG (see below); (j=3) d3 = |c(wt) − c(wl)| is the absolute difference in
the window classifier score. This is a measure of appearance similarity from the viewpoint of the
classifier. It encourages the search policy to continue to probe nearby windows after one observation
hits an instance of the class.
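A sketch of the three distances (our illustration; for d1 we represent windows by corner coordinates (x1, y1, x2, y2) rather than the paper's (x, y, s) triple, and the binary HOG codes are assumed to be packed uint8 arrays):

```python
import numpy as np

def d1(a, b):
    """1 - intersection-over-union for windows given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return 1.0 - inter / union if union > 0 else 1.0

def d2(code_a, code_b):
    """Normalized Hamming distance between packed binary HOG codes (uint8)."""
    return np.unpackbits(np.bitwise_xor(code_a, code_b)).mean()

def d3(score_a, score_b):
    """Absolute difference of window classifier scores."""
    return abs(score_a - score_b)
```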
Rapid window similarity. We embed the window appearance features (HOG) in a Hamming
space of dimension 128 using [14], thus going from 49600 bits to just 128. This has two advantages. First, it reduces the memory footprint for storing the appearance descriptors for all training
windows of a class to the point where they all fit in memory at once. Second, it greatly speeds up
the computation of the similarity KF between two windows, from about 500,000 per second for
the original HOG to 65 million per second for the Hamming embedded version (i.e. 130× faster).
This speedup is very useful as the number of training windows is typically very large (from a few
hundred thousand up to a million, depending on the class in our experiments).
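The speedup comes from replacing floating-point descriptor comparisons with XOR-and-popcount on 128-bit codes. A vectorized NumPy sketch (our illustration; the 128 bits are stored as 16 uint8 bytes per window):

```python
import numpy as np

# Per-byte popcount lookup table (256 entries).
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming_to_all(query, codes):
    """Hamming distances between one 128-bit query code and many training
    codes, packed as 16 uint8 bytes per window (shapes (16,) and (n, 16)).
    XOR marks the differing bits; the table counts them per byte."""
    return POPCOUNT[np.bitwise_xor(codes, query)].sum(axis=1)
```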
5 Experiments and conclusions
Dataset and protocol. We evaluate the ability of our learned strategies to detect objects on the very
popular and highly challenging PASCAL VOC 2010 dataset [10], which contains object instances
from 20 classes (e.g. car, sheep, motorbike) annotated by bounding-boxes and viewpoint labels
(e.g. left, front). The objects appear in cluttered backgrounds and vary greatly in location, scale,
appearance, viewpoint and illumination (fig. 4). The dataset is composed of three subsets train, val
and test. We train our method on all class-viewpoint combinations for which at least 20 training
images are available, for a total of 54 combinations (each is referred to as a class from now on). As
bounding-box annotations for the test subset are not available, we use val as the test set. For each
class the test set consists of all images that contain an instance of that class (positive images) and
an equal number of randomly sampled negative images. We quantify performance by three factors:
(mAP) mean average precision over all classes. This summarizes the behavior of a method over
the whole test set as in the standard VOC protocol [10]; (DR) detection rate as the percentage of
correctly localized objects over the positive test images only. This makes sense as our method returns
exactly one window per image (sec. 3.2); (#win) the number of window classifier [12] evaluations.
Baselines and competitors. We compare our method to two baselines (Sliding Window and Random Chance). Sliding Window is the standard sliding-window scheme of [12], which scores about
25000 windows on a multiscale image grid (for an average VOC image). Random Chance scores
100 randomly sampled windows on the same grid using the same classifier [12].
We also compare to two recent methods designed to reduce the number of classifier evaluations by
proposing a limited number of candidate windows likely to cover all objects in the image (Objectness [1] and Selective Search [29]).⁵ We apply the detection procedures described in the respective
works (sec. 6.1 in [1] and sec. 5.4 in [29]).
Results. Table 1 reports performance for all methods we compare. As a reference, Sliding Window [12] reaches a good detection accuracy, but at the price of evaluating many windows (25000).
Random Chance fails entirely and achieves a very low detection accuracy, showing that an intelligent search strategy is necessary when evaluating very few windows (100). The two competing
methods [1, 29] exhibit a trade-off: they evaluate fewer windows than Sliding Window, but at the
price of losing some detection accuracy (confirming what reported in [1, 29]).
⁵ We use the implementations provided by the respective authors, available at
www.vision.ee.ethz.ch/~calvin/objectness/ and disi.unitn.it/~uijlings/homepage/pmwiki.php?n=Main.Software
(Panels, left to right and top to bottom: motorbike-left, horse-right-rear, car-right; boat-rear, cow-left, car-right; chair-frontal, boat-right, tvmonitor-frontal.)
Figure 4: Qualitative results on PASCAL VOC10. Nine example images along with the ground-truth
(cyan), output of our strategy (green) and of Sliding Window (yellow). First row: examples where both methods
correctly detect the object. For the car-right example our strategy outputs exactly the same window as Sliding
Window. Second row: examples where our method succeeds but Sliding Window fails, because it avoids evaluating cluttered areas where the window classifier [12] produces false positives. Third row: examples where
our strategy fails to localize the object. Although when this happens typically Sliding Windows fails too (first
two columns), in some rare cases only our strategy fails (third column).
Our method performs best, as it achieves higher detection performance than Sliding Window (+3.7%
DR, +2.7% mAP), while at the same time greatly reducing the number of evaluated windows (250×
fewer). Overall, our method evaluates only as many windows as Random Chance (100). This is 4–10× fewer than both [1, 29], which were designed for this purpose. The fact that our method achieves
higher detection accuracy than Sliding Window might seem surprising at first, as it evaluates a subset
of its windows. The reason is that our method exploits context to avoid evaluating large portions
of the image, which often contain highly cluttered areas where the window classifier [12] risks
producing false-positives (fig. 4).
Computational efficiency and generality. While our method greatly reduces the number of windows evaluated, it introduces two overheads: (1) nearest neighbor lookup takes between 2.5s and
5.7s, depending on the class (as the number L of training windows varies, see sec. 4); (2) updating
the vote map takes 0.26s. All timings are totals over the 100 time steps to detect one class in one test
image on a single core of an Intel Core i7 3.4GHz CPU. Our total detection time for an average class
(5s) is moderately shorter that scoring all windows on the grid (8s), as [12] is already very efficient.
Importantly, our method is general in that it can be applied on top of any window classifier. For
an expensive classifier, the overhead becomes negligible and we can achieve a substantial speedup.
To prove this point we use an intersection kernel SVM [23] on a 3-level spatial pyramid of dense
SURF [3] bag-of-words. Note how similar classifiers are used in several object detectors [15, 27, 30].
On an average image containing 25000 windows, sliding window takes 92s to run, whereas our
method takes only 8s, hence achieving an 11× speedup (at no loss of mAP).
Conclusions and future work. We have proposed a novel object detection technique to replace
sliding window with an intelligent search strategy which exploits context to reduce the number of
window evaluations and improve detection performance. Our method is general and can be used on
top of any window classifier.
As future work we plan to consider richer policy parameterizations and also to improve our learning
procedure to optimize the full objective, i.e. the expected detection performance after T time steps,
framing it, for instance, as a reinforcement learning problem similar to [5, 25].
Acknowledgments. NH and YWT acknowledge funding from the European Community's 7th Framework
Programme (FP7/2007-2013) under grant agreement no. 270327 and from the Gatsby Charitable Foundation.
8
References
[1] B. Alexe, T. Deselaers, and V. Ferrari. What is an object? In CVPR, 2010.
[2] J. Arpit, R. Saiprasad, and M. Anurag. Multi-stage contour based detection of deformable objects. In
ECCV, 2008.
[3] H. Bay, A. Ess, T. Tuytelaars, and L. van Gool. SURF: Speeded up robust features. CVIU, 110(3):346–359, 2008.
[4] L. Bazzani, N. de Freitas, H. Larochelle, V. Murino, and J. Ting. Learning attentional policies for tracking
and recognition in video with deep networks. In ICML, 2011.
[5] N. J. Butko and J. R. Movellan. Optimal scanning for faster object detection. In CVPR, 2009.
[6] M. Choi, J. Lim, A. Torralba, and A. Willsky. Exploiting hierarchical context on a large database of object
categories. In CVPR, 2010.
[7] N. Dalal and B Triggs. Histogram of Oriented Gradients for Human Detection. In CVPR, 2005.
[8] C Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. In ICCV,
2009.
[9] W. Einhauser and P. Konig. Does luminance-contrast contribute to saliency map for overt visual attention.
European Journal of Neuroscience, 5(17):1089–1097, 2003.
[10] M. Everingham et al. The PASCAL Visual Object Classes Challenge 2010 Results, 2010.
[11] P. Felzenszwalb, R. Girshick, and D. McAllester. Cascade object detection with deformable part models.
In CVPR, 2010.
[12] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part based models. IEEE Trans. on PAMI, 32(9):1627–1645, 2010.
[13] D. Gao and N. Vasconcelos. Bottom-up saliency is a discriminant process. In ICCV, 2007.
[14] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In
CVPR, 2011.
[15] H. Harzallah, F. Jurie, and C. Schmid. Combining efficient object localization and image classification.
In ICCV, 2009.
[16] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In ECCV, 2008.
[17] X. Hou and L. Zhang. Saliency detection: A spectral residual approach. In CVPR, 2007.
[18] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE
Trans. on PAMI, 20(11):1254–1259, 1998.
[19] G. Krieger, I. Rentschler, G. Hauske, K. Schill, and C. Zetzsche. Object and scene analysis by saccadic
eye-movements: an investigation with higher-order statistics. Spatial Vision, 2(16):201–214, 2000.
[20] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization by
efficient subwindow search. In CVPR, 2008.
[21] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann
machine. In NIPS, 2010.
[22] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit
shape model. In Workshop on Statistical Learning in Computer Vision, ECCV, May 2004.
[23] S. Maji, A. Berg, and J. Malik. Classification using intersection kernel support vector machines is efficient.
In CVPR, 2008.
[24] J. Najemnik and W. S. Geisler. Optimal eye movement strategies in visual search. Nature, 434:381–391,
2005.
[25] L. Paletta, G. Fritz, and C. Seifert. Q-learning of sequential attention for visual object recognition from
informative local descriptors. In ICML, 2005.
[26] M. Pedersoli, A. Vedaldi, and J. Gonzales. A coarse-to-fine approach for fast deformable object detection.
In CVPR, 2011.
[27] A. Prest, C. Leistner, J. Civera, C. Schmid, and V. Ferrari. Learning object class detectors from weakly
annotated video. In CVPR, 2012.
[28] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In ICCV,
2007.
[29] K. Van de Sande, J. Uijlings, T. Gevers, and A. Smeulders. Segmentation as selective search for object
recognition. In ICCV, 2011.
[30] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In ICCV,
2009.
[31] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
4,108 | 4,718 |
Analog readout for optical reservoir computers
A. Smerieri¹, F. Duport¹, Y. Paquot¹, B. Schrauwen², M. Haelterman¹, S. Massar³
¹ Service OPERA-photonique, Université Libre de Bruxelles (U.L.B.), 50 Avenue F. D. Roosevelt, CP 194/5, B-1050 Bruxelles, Belgium
² Department of Electronics and Information Systems (ELIS), Ghent University, Sint-Pietersnieuwstraat 41, 9000 Ghent, Belgium
³ Laboratoire d'Information Quantique (LIQ), Université Libre de Bruxelles (U.L.B.), 50 Avenue F. D. Roosevelt, CP 225, B-1050 Bruxelles, Belgium
Abstract
Reservoir computing is a new, powerful and flexible machine learning technique that is easily implemented in hardware. Recently, by using a timemultiplexed architecture, hardware reservoir computers have reached performance comparable to digital implementations. Operating speeds allowing for real time information operation have been reached using optoelectronic systems. At present the main performance bottleneck is the readout
layer which uses slow, digital postprocessing. We have designed an analog
readout suitable for time-multiplexed optoelectronic reservoir computers,
capable of working in real time. The readout has been built and tested experimentally on a standard benchmark task. Its performance is better than
non-reservoir methods, with ample room for further improvement. The
present work thereby overcomes one of the major limitations for the future
development of hardware reservoir computers.
1 Introduction
The term "reservoir computing" encompasses a range of similar machine learning techniques,
independently introduced by H. Jaeger [1] and by W. Maass [2]. While these techniques
differ in implementation details, they share the same core idea: that one can leverage the
dynamics of a recurrent nonlinear network to perform computation on a time dependent
signal without having to train the network itself. This is done simply by adding an external,
generally linear readout layer and training it instead. The result is a powerful system that
can outperform other techniques on a range of tasks (see for example the ones reported
in [3, 4]), and is significantly easier to train than recurrent neural networks. Furthermore
it can be quite easily implemented in hardware [5, 6, 7], although it is only recently that
hardware implementations with performance comparable to digital implementations have
been reported [8, 9, 10].
One great advantage of this technique is that it places almost no requirements on the
structure of the recurrent nonlinear network. The topology of the network, as well as
the characteristics of the nonlinear nodes, are left to the user. The only requirements are
that the network should be of sufficiently high dimensionality, and that it should have
suitable rich dynamics. The last requirement essentially means that the dynamics allows
the exploration of a large number of network states when new inputs come in, while at
the same time retaining for a finite time information on the previous inputs [11]. For this
reason, the reservoir computers appearing in literature use widely different nonlinear units,
see for example [1, 2, 5, 12] and in particular the time multiplexing architecture proposed
in [7, 8, 9, 10].
Optical reservoir computers are particularly promising, as they can provide an alternative
path to optical computing. They could leverage the inherent high speeds and parallelism
granted by optics, without the need for strong nonlinear interaction needed to mimic traditional electronic components. Very recently, optoelectronic reservoir computers have been
demonstrated by different research teams [10, 9], conjugating good computational performances with the promise of very high operating speeds. However, one major drawback in
these experiments, as well as all preceding ones, was the absence of readout mechanisms:
reservoir states were collected on a computer and post-processed digitally, severely limiting
the processing speeds obtained and hence the applicability.
An analog readout for experimental reservoirs would remove this major bottleneck, as
pointed out in [13]. The modular characteristics of reservoir computing imply that hardware reservoirs and readouts can be optimized independently and in parallel. Moreover,
an analog readout opens the possibility of feeding back the output of the reservoir into the
reservoir itself, which in turn allows the use of different training techniques [14] and to apply
reservoir computing to new categories of tasks, such as pattern generation [15, 16].
In this paper we present a proposal for the readout mechanism for opto-electronic reservoirs,
using an optoelectronic intensity modulator. The design that we propose will drastically
cut down their operation time, specially in the case of long input sequences. Our proposal
is suited to optoelectronic or all-optical reservoirs, but the concept can be easily extended
to any experimental time-multiplexed reservoir computer. The mechanism has been tested
experimentally using the experimental reservoir reported in [10], and compared to a digital
readout. Although the results are preliminary, they are promising: while not as good as
those reported in [10], they are however already better than non-reservoir methods for the
same task [16].
2 Reservoir computing and time multiplexing
2.1 Principles of Reservoir Computing
The main component of a reservoir computer (RC) is a recurrent network of nonlinear
elements, usually called "nodes" or "neurons". The system typically works in discrete time,
and the state of each node at each time step is a function of the input value at that time
step and of the states of neighboring nodes at the previous time step. The network output
is generated by a readout layer - a set of linear nodes that provide a linear combination of
the instantaneous node states with fixed coefficients.
The equation that describes the evolution of the reservoir computer is
$$x_i(n) = f\Big(\alpha\, m_i\, u(n) + \beta \sum_{j=1}^{N} w_{ij}\, x_j(n-1)\Big) \qquad (1)$$
where $x_i(n)$ is the state of the $i$-th node at discrete time $n$, $N$ is the total number of nodes,
$u(n)$ is the reservoir input at time $n$, $m_i$ and $w_{ij}$ are the connection coefficients that describe
the network topology, $\alpha$ and $\beta$ are two parameters that regulate the network's dynamics,
and $f$ is a nonlinear function. One generally tunes $\alpha$ and $\beta$ to have favorable dynamics
when the input to be treated is injected in the reservoir. The network output $y(n)$ is then
constructed using a set of readout weights $W_i$ and a bias weight $W_b$, as
constructed using a set of readout weights Wi and a bias weight Wb , as
$$y(n) = \sum_{i=1}^{N} W_i\, x_i(n) + W_b \qquad (2)$$
Training a reservoir computer only involves the readout layer, and consists in finding the
best set of readout weights Wi and bias Wb that minimize the error between the desired
output and the actual network output. Unlike conventional recurrent neural networks, the
Figure 1: Scheme of the experimental setup, including the optoelectronic reservoir ("Input"
and "Reservoir" layers) and the analog readout ("Output" layer). The red and green parts
represent respectively the optical and electronic components. "AWG": Arbitrary waveform
generator. "M-Z": LiNbO3 Mach-Zehnder modulator. "FPD": Feedback photodiode. "AMP":
Amplifier. "Scope": NI PXI acquisition card.
strength of connections mi and wij are left untouched. As the output layer is made only of
linear units, given the full set of reservoir states xi (n) for all the time steps n, the training
procedure is a basic, regularized linear regression.
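To make the two training ingredients concrete, the sketch below (a minimal Python/NumPy example; the network size, the sine nonlinearity, the dynamics parameters, and the toy delayed-input task are all illustrative choices, not those of the experiment) simulates equation 1 and fits the readout of equation 2 by ridge regression.

import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 1000                               # nodes, time steps (illustrative)
m = rng.choice([-0.1, 0.1], size=N)           # input mask m_i
W_net = rng.normal(size=(N, N)) / np.sqrt(N)  # connection coefficients w_ij
alpha, beta = 1.0, 0.9                        # dynamics parameters (illustrative)

u = rng.uniform(-1, 1, size=T)                # input sequence u(n)
x = np.zeros((T, N))                          # reservoir states x_i(n)
for n in range(1, T):
    x[n] = np.sin(alpha * m * u[n] + beta * W_net @ x[n - 1])  # eq. (1), f = sin

y_target = np.roll(u, 1)                      # toy task: reproduce the delayed input
X = np.hstack([x, np.ones((T, 1))])           # constant column plays the role of W_b
lam = 1e-4                                    # ridge regularization (illustrative)
W = np.linalg.solve(X.T @ X + lam * np.eye(N + 1), X.T @ y_target)
y = X @ W                                     # eq. (2)
print("train NMSE:", np.mean((y - y_target) ** 2) / np.var(y_target))

The sine nonlinearity is chosen here because it mirrors the transfer function of the Mach-Zehnder modulator used in the hardware described below.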
2.2 Time multiplexing
The number of nodes in a reservoir computer determines an upper limit to the reservoir
performance [17]; this can be an obstacle when designing physical implementations of RCs,
which should contain a high number of interconnected nonlinear units. A solution to this
problem proposed in [7, 8], is time multiplexing: the xi (n) are computed one by one by
a single nonlinear element, which receives a combination of the input u(n) and a previous
state xj (n ? 1). In addition an input mask mi is applied to the input u(n), to enrich the
reservoir dynamics. The value of xi (n) is then stored in a delay line to be used at a later
time step n + 1. The interaction between different neurons can be provided by either having
a slow nonlinear element which couples state xi to the previous states xi?1 , xi?2 , ... [8], or
by using an instantaneous nonlinear element and desynchronizing the input with respect to
the delay line [10].
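A minimal sketch of the time-multiplexed variant with an instantaneous nonlinearity follows; it is a loose software emulation, not the hardware scheme itself. A single nonlinear map is applied serially, and a delay line one node shorter than the input period stands in for the desynchronization of [10], so that states from different positions in the loop interact. All sizes and constants are illustrative.

import numpy as np
from collections import deque

rng = np.random.default_rng(1)
N, T = 50, 200
mask = rng.choice([-0.1, 0.1], size=N)        # input mask m_i
alpha, beta = 1.0, 0.9

u = rng.uniform(-1, 1, size=T)
line = deque(np.zeros(N - 1), maxlen=N - 1)   # delay line slightly shorter than one loop
states = np.zeros((T, N))
for n in range(T):
    for i in range(N):
        # single instantaneous nonlinear element; the shortened line couples
        # node i to a neighboring node computed one loop earlier
        x_new = np.sin(alpha * mask[i] * u[n] + beta * line[0])
        line.append(x_new)                    # the oldest stored state is pushed out
        states[n, i] = x_new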
2.3 Hardware RC with digital readout
The hardware reservoir computer we use in the present work is identical to the one reported
in [10] (see also [9]). It uses the time-multiplexing with desynchronisation technique described in the previous paragraph. We give a brief description of the experimental system,
represented in the left part of Figure 1. It uses a LiNbO3 Mach-Zehnder (MZ) modulator,
operating on a constant power 1560 nm laser, as the nonlinear component. A MZ modulator
is a voltage controlled optoelectronic device; the amount of light that it transmits is a sine
function of the voltage applied to it. The resulting state $x_i(n)$ is encoded in a light intensity
level at the MZ output. It is then stored in a spool of optical fiber, acting as delay line of
duration T = 8.5 µs, while all the subsequent states $x_i(n)$ are being computed by the MZ
modulator. When a state $x_i(n)$ reaches the end of the fiber spool it is converted into a
voltage by a photodiode.

The input $u(n)$ is multiplied by the input mask $m_i$ and encoded in a voltage level by an
Arbitrary Waveform Generator (AWG). The two voltages corresponding to the state $x_i(n)$
at the end of the fiber spool and the input $m_i u(n)$ are added, amplified, and the resulting
voltage is used to drive the MZ modulator, thereby producing the state $x_j(n+1)$, and so
on for all values of $n$.
In the experiment reported in [10] a portion of the light coming out of the MZ is deviated
to a second photodiode (not shown in Figure 1), that converts it into a voltage and sends
it to a digital oscilloscope. The Mach-Zehnder output can be represented as "steps" of light
intensities of duration θ (see Figure 2a), each one representing the value of a single node
state $x_i$ at discrete time $n$. The value of each $x_i(n)$ is recovered by taking an average of the
measured voltage for each state at each time step. The optimal readout weights $W_i$ and bias
$W_b$ are then calculated on a computer from a subset (training set) of the recorded states,
using ridge regression [18], and the output $y(n)$ is then calculated using equation 2 for all
the states collected. The performance of the reservoir is then calculated by comparing the
reservoir output $y(n)$ with the desired output $\hat{y}(n)$.
3 Analog readout
Readout scheme
Developing an analog readout for the reservoir computer described in section 2 means designing a device that multiplies the reservoir states shown in Figure 2a by the readout
weights $W_i$, and that sums them together in such a way that the reservoir output $y(n)$
can be retrieved directly from its output. However, this is not straightforward to do, since
obtaining good performance requires positive and negative readout weights $W_i$. In optical
implementations [10, 9] the states $x_i$ are encoded as light intensities, which are always positive, so they cannot be subtracted one from another. Moreover, the summation over the
states must include only the values of $x_i$ pertaining to the same discrete time step $n$ and reject all other values. This is difficult in time-multiplexed reservoirs, where the states $x_N(n)$
and $x_1(n+1)$ follow seamlessly.
Here we show how to resolve both difficulties using the scheme depicted in the right panel of
Figure 1. Reservoir states encoded as light intensities in the optical reservoir computer and
represented in Figure 2a are fed to the input of a second MZ modulator with two outputs.
A second function generator governs the bias of the second Mach-Zehnder, providing the
modulation voltage V (t). The modulation voltage controls how much of the input light
passing through the readout Mach-Zehnder is sent to each output, keeping constant the
sum of the two output intensities. The two outputs are connected to the two inputs of
a balanced photodiode, which in turn gives as output a voltage level proportional to the
difference of the light intensities received at its two inputs1 . This allows us to multiply the
reservoir states by both positive and negative weights.
The time average of the output voltage of the photodiode is obtained by using a capacitor.
The characteristic time of the analog integrator τ is proportional to the capacity C.² The
role of this time scale is to include in the readout output all the pertinent contributions and
exclude the others. The final output of the reservoir is the voltage across the capacitor at
the end of each discretized time n.
What follows is a detailed description of the readout design.
Multiplication by arbitrary weights
The multiplication of the reservoir states by arbitrary weights, positive or negative, is realized by the second MZ modulator followed by the balanced photodiode. The modulation
voltage $V(t)$ that drives the second Mach-Zehnder is piecewise constant, with a step duration
equal to the duration θ of the reservoir states; transitions in voltages and in reservoir
states are synchronized. The modulation voltage is also a periodic function of period θN,
so that each reservoir state $x_i(n)$ is paired with a voltage level $V_i$ that doesn't depend on
$n$. The light intensities $O_1(t)$ and $O_2(t)$ at the two outputs of the Mach-Zehnder modulator
are
$$O_1(t) = I(t)\,\frac{1 + \cos\big((V(t) + V_{bias})\,\pi/V_\pi + \phi\big)}{2}, \qquad O_2(t) = I(t)\,\frac{1 - \cos\big((V(t) + V_{bias})\,\pi/V_\pi + \phi\big)}{2}, \qquad (3)$$
¹ A balanced photodiode consists of two photodiodes which convert the two light intensities into two electric currents, followed by an electronic circuit which produces as output a voltage proportional to the difference of the two currents.
² In the case where the impedance of the coaxial cable R = 50 Ω is matched with the output impedance of the photodiode, we have τ = RC.
where $I(t)$ is the light intensity coming from the reservoir, $V_{bias}$ is a constant voltage that
drives the modulator, $\phi$ is an arbitrary, constant phase value, and $V_\pi$ is the half-wave
voltage of the modulator. Neglecting the effect of any bandpass filter in the photodiode,
and choosing Vbias appropriately, the output P (t) from the photodiode can be written as
$$P(t) = G\,\big(O_1(t) - O_2(t)\big) = I(t)\, G \sin\Big(\frac{V(t)\,\pi}{V_\pi}\Big) = I(t)\, W(t) \qquad (4)$$
with $G$ a constant gain factor. In other words, by setting the right bias and driving the
modulator with a voltage $V(t)$, we multiply the signal $I(t)$ by an arbitrary coefficient $W(t)$.
Note that, if $V(t)$ is piecewise constant, then $W(t)$ is as well. This allows us to achieve the
multiplication of the states $x_i(n)$, encoded in the light intensity $I(t)$, by the weights $W_i$,
just by choosing the right voltage $V(t)$, as shown in Figure 2b.
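The mapping of equation 4 is easy to invert when programming the weights. A sketch follows; the gain G = 1 is an assumption, while the half-wave voltage matches the value quoted later for the readout modulator. The clipping step makes explicit that only weights in [−G, G] are realizable.

import numpy as np

G, V_pi = 1.0, 5.9         # assumed gain; half-wave voltage of the readout modulator

def weight_to_voltage(w):
    """Drive voltage realizing weight w under W = G*sin(V*pi/V_pi) (eq. 4)."""
    w = np.clip(w, -G, G)  # only weights in [-G, G] are realizable
    return (V_pi / np.pi) * np.arcsin(w / G)

def voltage_to_weight(V):
    return G * np.sin(V * np.pi / V_pi)

w = np.array([-0.8, -0.2, 0.0, 0.5, 0.95])
print(np.allclose(voltage_to_weight(weight_to_voltage(w)), w))  # round trip: True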
Summation of weighted states
To achieve the summation over all the states pertaining to the same discrete time step n,
which according to equation 2 will give us the reservoir output minus the bias Wb , we use
the capacitor at the right side of the Output layer in Figure 1. The capacitor provides the
integration of the photodiode output given by eq. 4 with an exponential kernel and time
constant τ. If τ is significantly less than the amount of time θN needed for the system to
process all the nodes relative to a single time step, we can minimize the crosstalk between
node states relative to different time steps.
Let us consider the input I(t) of the readout, and let t = 0 be the instant where the state of
the first node for a given discrete time step n begins to be encoded in I(t) . Using equation
4, we can write the voltage $Q(t)$ on the capacitor at time $\theta N$ as
$$Q(\theta N) = Q(0)\, e^{-\theta N/\tau} + \int_0^{\theta N} I(s)\, W(s)\, e^{-(\theta N - s)/\tau}\, ds \qquad (5)$$
For $0 < t < \theta N$, we have
$$I(t) = x_i(n), \quad W(t) = w_i, \quad \text{for } \theta(i-1) < t < \theta i \qquad (6)$$
Integrating equation 5 yields
$$Q(\theta N) = Q(0)\, e^{-\theta N/\tau} + \sum_{i=1}^{N} x_i(n)\, \eta_i\, w_i, \qquad \eta_i = e^{-\theta(N-i)/\tau}\,\big(1 - e^{-\theta/\tau}\big)\,\tau \qquad (7)$$
Equation 7 shows that, at time $\theta N$, the voltage on the capacitor is a linear combination of
the reservoir states for the discrete time $n$, with node-dependent coefficients $\eta_i w_i$, plus a
residual of the voltage at time 0, multiplied by an extinction coefficient $e^{-\theta N/\tau}$. At time $2\theta N$
the voltage on the capacitor would be a linear combination of the states for discrete time
$n+1$, multiplied by the same coefficients, plus a residual of the voltage at time $\theta N$, and so
on for all values of $n$ and corresponding multiples of $\theta N$.
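Equation 7 can be checked numerically: a direct Riemann-sum evaluation of equation 5 with piecewise-constant I(t)W(t) reproduces the coefficients η_i. The values of θ, τ, N, and the random states and weights below are illustrative.

import numpy as np

N, theta = 10, 130e-9
tau = 0.2 * N * theta                        # integrator time constant (illustrative)
i = np.arange(1, N + 1)
eta = np.exp(-theta * (N - i) / tau) * (1 - np.exp(-theta / tau)) * tau  # eq. (7)

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 1.5, size=N)            # states x_i(n) for one time step
w = rng.uniform(0.5, 1.5, size=N)            # readout weights w_i

# evaluate eq. (5) with Q(0) = 0 and I(t)W(t) constant on each node slot
t = np.linspace(0.0, theta * N, 200001)
node = np.minimum((t / theta).astype(int), N - 1)
integrand = x[node] * w[node] * np.exp(-(theta * N - t) / tau)
Q = np.sum(integrand) * (t[1] - t[0])        # Riemann sum of the integral

print(np.allclose(Q, np.sum(x * eta * w), rtol=1e-3))  # matches eq. (7)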
A simple procedure would encode the weights $w_i = W_i/\eta_i$ onto the voltage $V(t)$ that drives the
modulator, provide an external, constant bias $W_b$, and have the output $y(n)$ of the reservoir,
defined by equation 2, effectively encoded on the capacitor. This simple procedure would
however be unsatisfactory because unavoidably some of the $\eta_i$ would be very small, and
therefore the $w_i$ would be large, spanning several orders of magnitude. This is undesirable,
as it requires a very precise control of the modulation voltage $V(t)$ in order to recreate all
the $w_i$ values, leaving the system vulnerable to noise and to any non-ideal behavior of the
modulator itself.
[Figure 2: three measured traces, panels a–c; vertical axes Voltage (V), horizontal axis Time (µs).]
Figure 2: a) Reservoir output I(t). The gray line represents the output as measured by
a photodiode and an oscilloscope. We indicated for reference the time θ = 130 ns used to
process a single node and the duration θN = 8.36 µs of the whole set of states. b) Output
P(t) of the balanced photodiode (see equation 4), with the trace of panel a) as input, before
integration. c) Voltage Q(t) on the capacitor for the same input (see equation 5). The
integration time τ is indicated for reference. The black dots indicate the values at the end
of each discretized time n, taken as the output y(n) of the analog readout.
To mitigate this, we adapt the training algorithm based on ridge regression to our case. We
redefine the reservoir states as $\tilde{x}_i(n) = \eta_i\, x_i(n)$; we then calculate the weights $\tilde{w}_i$ that, applied
to the states $\tilde{x}_i$, give the best approximation to the desired output $\hat{y}(n)$. The advantage here
is that ridge regression keeps the norm of the weight vector to a minimum; by redefining
the states, we can take the $\eta_i$ into account without having big values of $w_i$ that force us to
be extremely precise in generating the readout weights.
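A sketch of this adapted training step follows (shapes, the regularization value, and the stand-in data are illustrative): ridge regression is performed on the rescaled states, so the resulting weights can be programmed directly on the readout modulator.

import numpy as np

def train_analog_readout(states, eta, y_target, lam=1e-4):
    """Ridge regression on eta-rescaled states (plus a bias column).

    states: (T, N) reservoir states x_i(n); eta: (N,) coefficients of eq. (7).
    Returns the weights w~_i to program on the modulator and the bias W_b.
    """
    X = states * eta                     # x~_i(n) = eta_i * x_i(n)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y_target)
    return w[:-1], w[-1]                 # modulator weights, external bias

# usage with random stand-in data
rng = np.random.default_rng(3)
T, N = 500, 64
states = rng.normal(size=(T, N))
eta = np.exp(-np.arange(N)[::-1] * 0.05) * 0.01   # stand-in attenuation profile
y = states @ rng.normal(size=N)
w_tilde, W_b = train_analog_readout(states, eta, y)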
A sample trace of the voltage on the capacitor is shown in Figure 2c.
Hardware implementation
To implement the analog readout, we started from the experimental architecture described
in Section 2, and we added the components depicted in the right part of Figure 1. For the
weight multiplication, we used a second Mach-Zehnder modulator (Photline model MXDOLN-10 with bandwidth in excess of 10 GHz and $V_\pi = 5.9$ V), driven by a Tabor 2074 Arbitrary
Waveform Generator (maximum sampling rate 200 MSamples/s). The two outputs of the
modulator were fed into a balanced photodiode (Terahertz technologies model 527 InGaAs
balanced photodiode, bandwidth set to 125MHz, response set to 1000V/W), whose output was read by the National Instruments PXI digital acquisition card (sampling rate 200
MSamples/s).
In most of the experimental results described here, the capacitor at the end of the circuit
was simulated and not physically inserted into the circuit: this allowed us to quickly cycle
in our experiments through different values of ? without taking apart the circuit every
time. The external bias Wb to the output, introduced in equation 2, was also provided
after the readout. The reasoning behind these choices is that both these implementations
are straightforward, while the use of a modulator and a balanced photodiode as a weight
generator is more complex: we chose to focus on the latter issue for now, as our goal is to
validate the proposed architecture.
4 Results
As a benchmark for our analog readout, we use a wireless channel equalization task, introduced in 1994 [19] to test adaptive bilinear filtering and subsequently used by Jaeger [16] to
show the capabilities of reservoir computing. This task is becoming a standard benchmark
task in the reservoir computing community, and has been used for example in [20]. It consists in recovering a sequence of symbols transmitted along a wireless channel, in presence
of multiple reflections, noise and nonlinear distortion; a more detailed description of the
task can be found in the Appendix. The performance of the reservoir is usually measured
in Symbol Error Rate (SER), i.e. the rate of misinterpreted symbols, as a function of the
amount of noise in the wireless channel.
[Figure 3: three panels of SER (log scale) versus input noise [dB] (left, middle) and versus τ/θN (right).]
Figure 3: Performance of the analog readout. Left: Performance as a function of the input
SNR, for a reservoir of 28 nodes, with τ/θN = 0.18. Middle: Performance for the same
task, for a reservoir of 64 nodes, τ/θN = 0.18. Right: Performance as a function of the
ratio τ/θN, at constant input noise level (28 dB SNR) for a reservoir of 64 nodes. The
performance is measured in Symbol Error Rate (SER). Blue triangles: reservoir with digital
readout. Red squares: reservoir with ideal analog readout. Black circles: reservoir with
experimental analog readout (simulated capacitor). Purple stars in the left panel: reservoir
where a physical capacitor has been used.
Figure 3 shows the performance of the experimental setup of [10] for a network of 28 nodes
and one of 64 nodes, for different amounts of noise. For each noise level, three quantities
are presented. The first is the performance of the reservoir with a digital readout (blue
triangles), identical to the one used in [10]. The second is the performance of a simulated,
ideal analog readout, which takes into account the effect of the ?i coefficients introduced in
PN
equation 7, but no other imperfection. It produces as output the discrete sum ?b + i=1 ?i ?i
(red squares). This is, roughly speaking, the goal performance for our experimental readout.
The third and most important is the performance of the reservoir as calculated on real data
taken from the analog reservoir with the analog output, with the effect of the continuous
capacitive integration computed in simulation (black circles).
As can be seen from the figure, the performance of the analog readout is fairly close to its
ideal value, although it is significantly worse than the performance of the digital readout.
However, it is already better than the non-reservoir methods reported in [19] and used by
Jaeger as benchmarks in [16]. It can also handle higher signal-to-noise ratios. As expected,
networks with more nodes have better performance; it should be noted, however, that in
experimental reservoirs the number of nodes cannot be raised over a certain threshold.
The reason is that the total loop time θN is determined by the experimental hardware
(specifically, the length of the delay line); as N increases, the duration θ of each node must
decrease. This leaves the experiment vulnerable to noise and bandpass effects, which may
lead, for example, to an incorrect discretization of the $x_i(n)$ values, and an overall worse
performance.
We did test our readout with a 70nF capacitor, with a network of 28 nodes, to prove that the
physical implementation of our concept is feasible: the performance of this setup is shown
in the left panel of Figure 3. The results are comparable to those obtained in simulation,
even if, at low levels of noise in the input, the performance of the physical setup is slightly
worse.
The rightmost panel of Figure 3 shows the effects of the choice of the capacitor at the end
of the circuit, and therefore of the value of τ. The plot represents the performance at 28
dB SNR for a network of 64 nodes, for different values of the ratio τ/θN, obtained by
averaging the results of 10 tests. It is clear that the choice of τ has a complicated effect on
the readout performance; however, some general rules may be inferred. Too small values
of τ mean that the contribution from the very first nodes is vanishingly small, effectively
decreasing the reservoir dimensionality, which has a strong impact on the performance both
of the ideal and the experimental reservoir. On the other hand, larger values of τ impact
the performance of the experimental readout, as the residual term in equation 7 gets larger.
A compromise value of τ/θN = 0.222 seems to give the best result, corresponding in our
case to a capacity of about 70 nF.
5 Discussion
To our knowledge, the system presented here is the first analog readout for an experimental
reservoir computer. While the results presented here are preliminary, and there is much
optimization of experimental parameters to be done, the system already outperforms non-reservoir methods. We expect to extend easily this approach to different tasks, already
studied in [9, 10], including a spoken digit recognition task on a standard dataset [22].
Further performance improvements can reasonably be expected from fine-tuning of the training parameters: for instance the amount of regularization in the ridge regression procedure,
that here is left constant at 1×10⁻⁴, should be tuned for best performance. Adaptive training
algorithms, such as the ones mentioned in [21], could also take into account nonidealities in
the readout components. Moreover the choice of τ, as Figure 3 shows, is not obvious and a
more extensive investigation could lead to better performance.
The architecture proposed here is simple and quite straightforward to realize; it can be
added at the output of any preexisting time multiplexing reservoir with minimal effort. The
capacitor at the end of the circuit could be substituted with an active electronic circuit
performing the summation of the incoming signal before resetting itself. This would eliminate the problem of residual voltages, and allow better performance at the cost of increased
complexity of the readout.
The main interest of the analog readout is that it allows optoelectronic reservoir computers
to fully leverage their main characteristic, which is the speed of operation. Indeed, removing
the need for slow, offline postprocessing is indicated in [13] as one of the major challenges
in the field. Once the training is finished, optoelectronic reservoirs can process millions of
nonlinear nodes per second [10]; however, in the case of a digital readout, the node states
must be recovered and postprocessed to obtain the reservoir outputs. It takes around 1.6
seconds for the digital readout in our setup to retrieve and digitize the states generated by a
9000 symbol input sequence. The analog readout removes the need for postprocessing, and
can work at a rate of about 8.5 µs per input symbol, five orders of magnitude faster than
the electronic reservoir reported in [8].
Finally, having an analog readout opens the possibility of feedback - using the output of the
reservoir as input or part of an input for the successive time steps. This opens the way for
different tasks to be performed [15] or different training techniques to be employed [14].
Appendix: Nonlinear Channel Equalization task
What follows is a detailed description of the channel equalization task. The goal is to
reconstruct a sequence $d(n)$ of symbols taken from $\{-3, -1, 1, 3\}$. The symbols in $d(n)$ are
mixed together in a new sequence $q(n)$ given by
$$q(n) = 0.08\,d(n+2) - 0.12\,d(n+1) + d(n) + 0.18\,d(n-1) - 0.1\,d(n-2) + 0.091\,d(n-3) - 0.05\,d(n-4) + 0.04\,d(n-5) + 0.03\,d(n-6) + 0.01\,d(n-7) \qquad (8)$$
which models a wireless signal reaching a receiver through different paths with different
traveling times. A noisy, distorted version u(n) of the mixed signal q(n), simulating the
nonlinearities and the noise sources in the receiver, is created by having $u(n) = q(n) +
0.036\,q(n)^2 - 0.011\,q(n)^3 + \nu(n)$, where $\nu(n)$ is an i.i.d. Gaussian noise with zero mean,
adjusted in power to yield signal-to-noise ratios ranging from 12 to 32 dB. The sequence
$u(n)$ is then fed to the reservoir as an input; the output of the readout $R(n)$ is rounded off to
the closest value among $\{-3, -1, 1, 3\}$, and then compared to the desired symbol $d(n)$. The
performance is usually measured in Symbol Error Rate (SER), i.e. the rate of misinterpreted
symbols.
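For reference, a sketch generating this task and scoring an output follows (the SNR convention, noise power relative to the mixed signal q, is one common reading of the setup; the reservoir itself is omitted).

import numpy as np

def make_channel_data(T, snr_db, rng):
    """Input u(n) and target d(n) for the channel equalization task (eq. 8)."""
    d = rng.choice([-3, -1, 1, 3], size=T + 9)        # padded symbol stream
    taps = [0.08, -0.12, 1.0, 0.18, -0.1, 0.091, -0.05, 0.04, 0.03, 0.01]
    shifts = range(2, -8, -1)                         # d(n+2) down to d(n-7)
    q = sum(c * d[7 + s: 7 + s + T] for s, c in zip(shifts, taps))
    noise = rng.normal(0.0, np.sqrt(np.var(q) / 10 ** (snr_db / 10)), size=T)
    u = q + 0.036 * q**2 - 0.011 * q**3 + noise       # nonlinear distortion + noise
    return u, d[7: 7 + T]                             # d(n) aligned with u(n)

def symbol_error_rate(y, d):
    symbols = np.array([-3, -1, 1, 3])
    decided = symbols[np.argmin(np.abs(y[:, None] - symbols), axis=1)]
    return float(np.mean(decided != d))

u, d = make_channel_data(9000, snr_db=28, rng=np.random.default_rng(0))
print(symbol_error_rate(d.astype(float), d))          # perfect readout: SER = 0.0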
Acknowledgements
This research was supported by the Interuniversity Attraction Poles program of the Belgian Science Policy Office, under grant IAP P7-35 "photonics@be" and by the Fonds de la
Recherche Scientifique FRS-FNRS.
References
[1] Jaeger, H. The "echo state" approach to analysing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Center for Information Technology, 2001.
[2] Maass, W., Natschlager, T., and Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.
[3] Schrauwen, B., Verstraeten, D., and Van Campenhout, J. An overview of reservoir computing: theory, applications and implementations. In Proceedings of the 15th European Symposium on Artificial Neural Networks, pages 471–482, 2007.
[4] Lukosevicius, M. and Jaeger, H. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149, 2009.
[5] Fernando, C. and Sojakka, S. Pattern recognition in a bucket. Advances in Artificial Life, pages 588–597, 2003.
[6] Schurmann, F., Meier, K., and Schemmel, J. Edge of chaos computation in mixed-mode VLSI - a hard liquid. In Proc. of NIPS. MIT Press, 2005.
[7] Paquot, Y., Dambre, J., Schrauwen, B., Haelterman, M., and Massar, S. Reservoir computing: a photonic neural network for information processing. Volume 7728, page 77280B. SPIE, 2010.
[8] Appeltant, L., Soriano, M. C., Van der Sande, G., Danckaert, G., Massar, S., Dambre, J., Schrauwen, B., Mirasso, C. R., and Fischer, I. Information processing using a single dynamical node as complex system. Nature Communications, 2:468, 2011.
[9] Larger, L., Soriano, M. C., Brunner, D., Appeltant, L., Gutierrez, J. M., Pesquera, L., Mirasso, C. R., and Fischer, I. Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing. Optics Express, 20(3):3241, 2012.
[10] Paquot, Y., Duport, F., Smerieri, A., Dambre, J., Schrauwen, B., Haelterman, M., and Massar, S. Optoelectronic reservoir computing. Scientific Reports, 2:287, January 2012.
[11] Legenstein, R. and Maass, W. What makes a dynamical system computationally powerful? In Simon Haykin, José C. Principe, Terrence J. Sejnowski, and John McWhirter, editors, New Directions in Statistical Signal Processing: From Systems to Brain. MIT Press, 2005.
[12] Vandoorne, K., Fiers, M., Verstraeten, D., Schrauwen, B., Dambre, J., and Bienstman, P. Photonic reservoir computing: A new approach to optical information processing. In 2010 12th International Conference on Transparent Optical Networks, pages 1–4. IEEE, 2010.
[13] Woods, D. and Naughton, T. J. Optical computing: Photonic neural networks. Nature Physics, 8(4):257–259, April 2012.
[14] Sussillo, D. and Abbott, L. F. Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4):544–557, 2009.
[15] Jaeger, H., Lukosevicius, M., Popovici, D., and Siewert, U. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 20(3):335–352, 2007.
[16] Jaeger, H. and Haas, H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78–80, 2004.
[17] Verstraeten, D., Dambre, J., Dutoit, X., and Schrauwen, B. Memory versus non-linearity in reservoirs. In The 2010 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2010.
[18] Wyffels, F. and Schrauwen, B. Stable output feedback in reservoir computing using ridge regression. Artificial Neural Networks - ICANN, pages 808–817, 2008.
[19] Mathews, V. J. Adaptive algorithms for bilinear filtering. Proceedings of SPIE, 2296(1):317–327, 1994.
[20] Rodan, A. and Tino, P. Minimum complexity echo state network. IEEE Transactions on Neural Networks, 22(1):131–144, January 2011.
[21] Legenstein, R., Chase, S. M., Schwartz, A. B., and Maass, W. A reward-modulated Hebbian learning rule can explain experimentally observed network reorganization in a brain control task. The Journal of Neuroscience, 30(25):8400–8410, 2010.
[22] Texas Instruments-Developed 46-Word Speaker-Dependent Isolated Word Corpus (TI46), September 1991, NIST Speech Disc 7-1.1 (1 disc), 1991.
4,109 | 4,719 |
Emergence of Object-Selective Features in
Unsupervised Feature Learning
Adam Coates, Andrej Karpathy, Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{acoates,karpathy,ang}@cs.stanford.edu
Abstract
Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images. Much progress has been made in
this direction, but in most cases it is still standard to use a large amount of labeled
data in order to construct detectors sensitive to object classes or other complex
patterns in the data. In this paper, we aim to test the hypothesis that unsupervised
feature learning methods, provided with only unlabeled data, can learn high-level,
invariant features that are sensitive to commonly-occurring objects. Though a
handful of prior results suggest that this is possible when each object class accounts for a large fraction of the data (as in many labeled datasets), it is unclear
whether something similar can be accomplished when dealing with completely
unlabeled data. A major obstacle to this test, however, is scale: we cannot expect
to succeed with small datasets or with small numbers of learned features. Here,
we propose a large-scale feature learning system that enables us to carry out this
experiment, learning 150,000 features from tens of millions of unlabeled images.
Based on two scalable clustering algorithms (K-means and agglomerative clustering), we find that our simple system can discover features sensitive to a commonly
occurring object class (human faces) and can also combine these into detectors invariant to significant global distortions like large translations and scale.
1 Introduction
Many algorithms are now available to learn hierarchical features from unlabeled image data. There
is some evidence that these algorithms are able to learn useful high-level features without labels, yet
in practice it is still common to train such features from labeled datasets (but ignoring the labels), and
to ultimately use a supervised learning algorithm to learn to detect more complex patterns that the
unsupervised learning algorithm is unable to find on its own. Thus, an interesting open question is
whether unsupervised feature learning algorithms are able to construct features, without the benefit
of supervision, that can identify high-level concepts like frequently-occurring object classes. It is
already known that this can be achieved when the dataset is sufficiently restricted that object classes
are clearly defined (typically closely cropped images) and occur very frequently [13, 21, 22]. In this
work we aim to test whether unsupervised learning algorithms can achieve a similar result without
any supervision at all.
The setting we consider is a challenging one. We have harvested a dataset of 1.4 million image
thumbnails from YouTube and extracted roughly 57 million 32-by-32 pixel patches at random locations and scales. These patches are very different from those found in labeled datasets like CIFAR10 [9]. The overwhelming majority of patches in our dataset appear to be random clutter. In the
cases where such a patch contains an identifiable object, it may well be scaled, arbitrarily cropped,
or uncentered. As a result, it is very unclear where an "object class" begins or ends in this type of
patch dataset, and less clear that a completely unsupervised learning algorithm could manage to
create "object-selective" features able to distinguish an object from the wide variety of clutter without
some other type of supervision.
In order to have some hope of success, we can identify several key properties that our learning
algorithm should likely have. First, since identifiable objects show up very rarely, it is clear that
we are obliged to train from extremely large datasets. We have no way of controlling how often
a particular object shows up and thus enough data must be used to ensure that an object class is
seen many times?often enough that it cannot be disregarded as random clutter. Second, we are
also likely to need a very large number of features. Training too few features will cause us to
?under-fit? the distribution, forcing the learning algorithm to ignore rare events like objects. Finally,
as is already common in feature learning work, we should aim to build features that incorporate
invariance so that features respond not just to a specific pattern (e.g., an object at a single location
and scale), but to a range of patterns that collectively belong to the same object class (e.g., the same
object seen at many locations and scales). Unfortunately, these desiderata are difficult to achieve at
once: current methods for building invariant hierarchies of features are difficult to scale up to train
many thousands of features from our 57 million patch dataset on our cluster of 30 machines.
In this paper, we will propose a highly scalable combination of clustering algorithms for learning
selective and invariant features that are capable of tackling this size of problem. Surprisingly, we
find that despite the simplicity of these algorithms we are nevertheless able to discover high-level
features sensitive to the most commonly occurring object class present in our dataset: human faces.
In fact, we find that these features are better face detectors than a linear filter trained from labeled
data, achieving up to 86% AUC compared to 77% on labeled validation data. Thus, our results emphasize that not only can unsupervised learning algorithms discover object-selective features with
no labeled data, but that such features can potentially perform better than basic supervised detectors
due to their deep architecture. Though our approach is based on fast clustering algorithms (K-means
and agglomerative clustering), its basic behavior is essentially similar to existing methods for building invariant feature hierarchies, suggesting that other popular feature learning methods currently
available may also be able to achieve such results if run at large enough scale. Indeed, recent work
with a more sophisticated (but vastly more expensive) feature-learning algorithm appears to achieve
similar results [11] when presented with full-frame images.
We will begin with a description of our algorithms for learning selective and invariant features, and
explain their relationship to existing systems. We will then move on to presenting our experimental
results. Related results and methods to our own will be reviewed briefly before concluding.
2 Algorithm
Our system is built on two separate learning modules: (i) an algorithm to learn selective features
(linear filters that respond to a specific input pattern), and (ii) an algorithm to combine the selective
features into invariant features (that respond to a spectrum of gradually changing patterns). We
will refer to these features as ?simple cells? and ?complex cells? respectively, in analogy to previous
work and to biological cells with (very loosely) related response properties. Following other popular
systems [14, 12, 6, 5] we will then use these two algorithms to build alternating layers of simple cell
and complex cell features.
2.1 Learning Selective Features (Simple Cells)
The first module in our learning system trains a bank of linear filters to represent our selective
"simple cell" features. For this purpose we use the K-means-like method used by [2], which has
previously been used for large-scale feature learning.
The algorithm is given a set of input vectors $x^{(i)} \in \mathbb{R}^n$, $i = 1, \ldots, m$. These vectors are preprocessed by removing the mean and normalizing each example, then performing PCA whitening.
We then learn a dictionary $D \in \mathbb{R}^{n \times d}$ of linear filters as in [2] by alternating optimization over
filters $D$ and "cluster assignments" $C$:
$$\min_{D,C} \sum_i \|D C^{(i)} - x^{(i)}\|_2^2 \quad \text{subject to } \|D^{(j)}\|_2 = 1 \ \forall j, \ \text{and } \|C^{(i)}\|_0 \le 1 \ \forall i.$$
Here the constraint $\|C^{(i)}\|_0 \le 1$ means that the vectors $C^{(i)}$, $i = 1, \ldots, m$ are allowed to contain
only a single non-zero, but the non-zero value is otherwise unconstrained. Given the linear filters
$D$, we then define the responses of the learned simple cell features as $s^{(i)} = g(a^{(i)})$ where $a^{(i)} =
D^\top x^{(i)}$ and $g(\cdot)$ is a nonlinear activation function. In our experiments we will typically use $g(a) =
|a|$ for the first layer of simple cells, and $g(a) = a$ for the second.¹
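A compact sketch of this alternating optimization follows (our reading of the K-means variant of [2]; whitening is assumed to have been done already, and the sizes in the usage lines are toy values). The assignment step picks the best single dictionary column per example with an unconstrained coefficient; the update step re-estimates and renormalizes the columns.

import numpy as np

def learn_simple_cells(X, d, iters=10, seed=0):
    """Minimize sum_i ||D C^(i) - x^(i)||^2 with ||C^(i)||_0 <= 1, ||D^(j)||_2 = 1.

    X: (m, n) whitened examples in rows. Returns D with unit-norm columns, (n, d).
    """
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[1], d))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = X @ D                                  # a^(i) = D^T x^(i), rows of A
        j = np.argmax(np.abs(A), axis=1)           # best single column per example
        c = A[np.arange(len(X)), j]                # optimal non-zero coefficient
        D_new = np.zeros_like(D)
        np.add.at(D_new.T, j, c[:, None] * X)      # column j accumulates sum_i c_i x^(i)
        norms = np.linalg.norm(D_new, axis=0)
        used = norms > 1e-10
        D[:, used] = D_new[:, used] / norms[used]  # renormalize; idle columns kept
    return D

X = np.random.default_rng(1).normal(size=(1000, 64))  # stand-in whitened patches
D = learn_simple_cells(X, d=128)
s = np.abs(X @ D)                                     # simple cell responses, g = |.|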
2.2 Learning Invariant Features (Complex Cells)
To construct invariant complex cell features a common approach is to create "pooling units" that
combine the responses of lower-level simple cells. In this work, we use max-pooling units [14, 13].
Specifically, given a vector of simple cell responses $s^{(i)}$, we will train complex cell features whose
responses are given by:
$$c_j^{(i)} = \max_{k \in G_j} s_k^{(i)}$$
where $G_j$ is a set that specifies which simple cells the $j$-th complex cell should pool over. Thus, the
complex cell $c_j$ is an invariant feature that responds significantly to any of the patterns represented
by simple cells in its group.
Each group $G_j$ should specify a set of simple cells that are, in some sense, similar to one another.
In convolutional neural networks [12], for instance, each group is hard-coded to include translated
copies of the same filter, resulting in complex cell responses $c_j$ that are invariant to small translations.
Some algorithms [6, 3] fix the groups $G_j$ ahead of time then optimize the simple cell filters $D$
so that the simple cells in each group share a particular form of statistical dependence. In our
system, we will use linear correlation of simple cell responses as our similarity metric, $E[a_k a_l]$, and
construct groups $G_j$ that combine similar features according to this metric. Computing the similarity
directly would normally require us to estimate the correlations from data, but since the inputs $x^{(i)}$
are whitened we can instead compute the similarity directly from the filter weights:
$$E[a_k a_l] = E[D^{(k)\top} x^{(i)} x^{(i)\top} D^{(l)}] = D^{(k)\top} D^{(l)}.$$
For convenience in the following, we will actually use the dissimilarity between features, defined as
$d(k, l) = \|D^{(k)} - D^{(l)}\|_2 = \sqrt{2 - 2E[a_k a_l]}$.
To construct the groups $G$, we will use a version of single-link agglomerative clustering to combine
sets of features that have low dissimilarity according to $d(k, l)$.² To construct a single group $G_0$ we
begin by choosing a random simple cell filter, say $D^{(k)}$, as the first member. We then search for
candidate cells to be added to the group by computing $d(k, l)$ for each simple cell filter $D^{(l)}$ and add
$D^{(l)}$ to the group if $d(k, l)$ is less than some limit $\tau$. The algorithm then continues to expand $G_0$ by
adding any additional simple cells that are closer than $\tau$ to any one of the simple cells already in the
group. This procedure continues until there are no more cells to be added, or until the diameter of
the group (the dissimilarity between the two furthest cells in the group) reaches a limit $\Delta$.³
This procedure can be executed, quite rapidly, in parallel for a large number of randomly chosen
simple cells to act as the "seed" cell, thus allowing us to train many complex cells at once. Compared to the simple cell learning procedure, the computational cost is extremely small even for our
rudimentary implementation. In practice, we often generate many groups (e.g., several thousand)
and then keep only a random subset of the largest groups. This ensures that we do not end up with
many groups that pool over very few simple cells (and hence yield complex cells $c_j$ that are not
especially invariant).
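A sketch of growing one pooling group by this single-link rule follows (for the first layer one would substitute the sign-invariant distance of footnote 2; here the plain distance is used, and τ, Δ take the footnote-3 values). The random filters in the usage lines are stand-ins.

import numpy as np

def grow_group(D, seed, tau=0.3, delta=1.5):
    """Single-link expansion of one complex-cell pooling group.

    D: (n, d) unit-norm filters in columns; returns indices pooled with `seed`.
    """
    dist = lambda k, l: np.linalg.norm(D[:, k] - D[:, l])
    group, frontier = [seed], [seed]
    while frontier:
        k = frontier.pop()
        for l in range(D.shape[1]):
            if l in group or dist(k, l) >= tau:
                continue
            # adding l must keep the group diameter (furthest pair) below delta
            if all(dist(l, g) < delta for g in group):
                group.append(l)
                frontier.append(l)
    return group

# many groups from random seeds; keep only a subset of the largest ones
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 500)); D /= np.linalg.norm(D, axis=0)
groups = [grow_group(D, s) for s in rng.integers(0, 500, size=200)]
largest = sorted(groups, key=len, reverse=True)[:32]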
2.3 Algorithm Behavior
Though it seems plausible that pooling simple cells with similar-looking filters according to d(k, l)
as above should give us some form of invariant feature, it may not yet be clear why this form of
¹ This allows us to train roughly half as many simple cell features for the first layer.
² Since the first layer uses $g(a) = |a|$, we actually use $d(k, l) = \min\{\|D^{(k)} - D^{(l)}\|_2, \|D^{(k)} + D^{(l)}\|_2\}$ to account for $-D^{(l)}$ and $+D^{(l)}$ being essentially the same feature.
³ We use $\tau = 0.3$ for the first layer of complex cells and $\tau = 1.0$ for the second layer. These were chosen by examining the typical distance between a filter $D^{(k)}$ and its nearest neighbor. We use $\Delta = 1.5 > \sqrt{2}$ so that a complex cell group may include orthogonal filters but cannot grow without limit.
invariance is desirable. To explain, we will consider a simple "toy" data distribution where the
behavior of these algorithms is more clear. Specifically, we will generate three heavy-tailed random
variables $X, Y, Z$ according to:
$$\lambda_1, \lambda_2 \sim L(0, \sigma), \qquad e_1, e_2, e_3 \sim N(0, 1), \qquad X = e_1 \lambda_1, \quad Y = e_2 \lambda_1, \quad Z = e_3 \lambda_2.$$
Here, $\lambda_1, \lambda_2$ are scale parameters sampled independently from a Laplace distribution, and $e_1, e_2, e_3$
are sampled independently from a unit Gaussian. The result is that $Z$ is independent of both $X$ and
$Y$, but $X$ and $Y$ are not independent due to their shared scale parameter $\lambda_1$ [6]. An isocontour of
the density of this distribution is shown in Figure 1a.
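Sampling this toy model makes the dependence structure visible: X and Y are uncorrelated, yet their energies are strongly correlated through the shared scale, while Z is independent of both. A quick check (sample size and the unit Laplace scale are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
m = 100000
lam1 = rng.laplace(0.0, 1.0, size=m)   # shared scale for X and Y
lam2 = rng.laplace(0.0, 1.0, size=m)   # independent scale for Z
e = rng.normal(size=(3, m))
X, Y, Z = e[0] * lam1, e[1] * lam1, e[2] * lam2

print(np.corrcoef(X, Y)[0, 1])         # ~ 0: X and Y are uncorrelated
print(np.corrcoef(X**2, Y**2)[0, 1])   # clearly positive: energies correlate
print(np.corrcoef(X**2, Z**2)[0, 1])   # ~ 0: Z is independent of X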
Other popular algorithms [6, 5, 3] for learning complex-cell features are designed to identify X and
Y as features to be pooled together due to the correlation in their energies (scales). One empirical
motivation for this kind of invariance comes from natural images: if we have three simple-cell filter
responses $a_1 = D^{(1)\top} x$, $a_2 = D^{(2)\top} x$, $a_3 = D^{(3)\top} x$, where $D^{(1)}$ and $D^{(2)}$ are Gabor filters in
quadrature phase, but $D^{(3)}$ is a Gabor filter at a different orientation, then the responses $a_1, a_2, a_3$
will tend to have a distribution very similar to the model of $X, Y, Z$ above [7]. By pooling together
the responses of $a_1$ and $a_2$ a complex cell is able to detect an edge of fixed orientation invariant
to small translations. This model also makes sense for higher-level invariances where $X$ and $Y$ do
not merely represent responses of linear filters on image patches but feature responses in a deep
network. Indeed, the $X$–$Y$ plane in Figure 1a is referred to as an "invariant subspace" [8].
Our combination of simple cell and complex cell learning algorithms above tend to learn this same
type of invariance. After whitening and normalization, the data points X, Y, Z drawn from the
distribution above will lie (roughly) on a sphere. The density of these data points is pictured in
Figure 1b, where it can be seen that the highest density areas are in a "belt" in the $X$–$Y$ plane and
at the poles along the $Z$ axis with a low-density region in between. Application of our K-means
clustering method to this data results in centroids shown as * marks in Figure 1b. From this picture
it is clear what a subsequent application of our single-link clustering algorithm will do: it will try to
string together the centroids around the "belt" that forms the invariant subspace and avoid connecting
them to the (distant) centroids at the poles. Max-pooling over the responses of these filters will result
in a complex cell that responds consistently to points in the $X$–$Y$ plane, but not in the $Z$ direction;
that is, we end up with an invariant feature detector very similar to those constructed by existing
methods. Figure 1c depicts this result, along with visualizations of the hypothetical Gabor filters
$D^{(1)}, D^{(2)}, D^{(3)}$ described above that might correspond to the learned centroids.
Figure 1: (a) An isocontour of a sparse probability distribution over variables X, Y, and Z. (See text
for details.) (b) A visualization of the spherical density obtained from the distribution in (a) after
normalization. Red areas are high density and dark blue areas are low density. Centroids learned
by K-means from this data are shown on the surface of the sphere as * marks. (c) A pooling unit
identified by applying single-link clustering to the centroids (black links join pooled filters). (See
text.)
2.4
Feature Hierarchy
Now that we have defined our simple and complex cell learning algorithms, we can use them to train
alternating layers of selective and invariant features. We will train 4 layers total, 2 of each type. The
architecture we use is pictured in Figure 2a.
Figure 2: (a) Cross-section of network architecture used for experiments. Full layer sizes are shown
at right. (b) Randomly selected 128-by-96 images from our dataset.
Our first layer of simple cell features are locally connected to 16 non-overlapping 8-by-8 pixel
patches within the 32-by-32 pixel image. These features are trained by building a dataset of 8-by8 patches and passing them to our simple cell learning procedure to train 6400 first-layer filters
D ? <64?6400 . We apply our complex cell learning procedure to this bank of filters to find 128
pooling groups G1 , G2 , . . . , G128 . Using these results, we can extract our simple cell and complex
cell features from each 8-by-8 pixel subpatch of the 32-by-32 image. Specifically, the linear filters D are used to extract the first layer simple cell responses s_i^(p) = g(D^(i)⊤ x^(p)), where x^(p), p = 1, .., 16 are the 16 subpatches of the 32-by-32 image. We then compute the complex cell feature responses c_j^(p) = max_{k∈Gj} s_k^(p) for each patch.

Once complete, we have an array of 128-by-4-by-4 = 2048 complex cell responses c representing each 32-by-32 image. These responses are then used to form a new dataset from which to learn a second layer of simple cells with K-means. In our experiments we train 150,000 second layer simple cells. We denote the second layer of learned filters as D̂, and the second layer simple cell responses as ŝ = D̂⊤ c. Applying again our complex cell learning procedure to D̂, we obtain pooling groups Ĝ and complex cells ĉ defined analogously.
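Putting the two layers together, the forward pass implied by these definitions can be sketched as below. We assume trained filters D, D̂ and pooling groups G, Ĝ are given as arrays and index lists; preprocessing (whitening, normalization) is omitted, and the second layer is taken to be linear per ŝ = D̂⊤ c. The sketch is ours, not the authors' code.

    import numpy as np

    def simple_responses(D, x):
        # s_i = g(D_i^T x) with g(a) = |a| for the first layer
        return np.abs(D.T @ x)

    def complex_responses(groups, s):
        # c_j = max over the pooling group G_j
        return np.array([s[g].max() for g in groups])

    def forward(image32, D, groups, D2, groups2):
        # image32: 32x32 patch, split into 16 non-overlapping 8x8 subpatches
        c = []
        for r in range(0, 32, 8):
            for q in range(0, 32, 8):
                x = image32[r:r + 8, q:q + 8].reshape(64)
                c.append(complex_responses(groups, simple_responses(D, x)))
        c = np.concatenate(c)                 # 128 * 16 = 2048 values
        s2 = D2.T @ c                         # second-layer simple cells (linear here)
        c2 = complex_responses(groups2, s2)   # second-layer complex cells
        return s2, c2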
3
Experiments
As described above, we ran our algorithm on patches harvested from YouTube thumbnails downloaded from the web. Specifically, we downloaded the thumbnails for over 1.4 million YouTube
videos4 , some of which are shown in Figure 2b. These images were downsampled to 128-by-96
pixels and converted to grayscale. We cropped 57 million randomly selected 32-by-32 pixel patches
from these images to form our unlabeled training set. No supervision was used?thus most patches
contain partial views of objects or clutter at differing scales. We ran our algorithm on these images
using a cluster of 30 machines over 3 days; virtually all of the time was spent training the 150,000
second-layer features.5 We will now visualize these features and check whether any of them have
learned to identify an object class.
3.1
Low-Level Simple and Complex Cell Visualizations
We visualize the learned low-level filters D and pooling groups G to verify that they are, in fact,
similar to those learned by other well-known algorithms. It is already known that our K-means-based algorithm learns simple-cell-like filters (e.g., edge-like features, as well as spots, curves) as
shown in Figure 3a.
To visualize the learned complex cells we inspect the simple cell filters that belong to each of the
pooling groups. The filters for several pooling groups are shown in Figure 3b. As expected, the filters
cover a spectrum of similar image structures. Though many pairs of filters are extremely similar6 ,
4
We cannot select videos at random, so we query videos under each YouTube category ("Pets & Animals", "Science & Technology", etc.) along with a date (e.g., "January 2001").
5
Though this is a fairly long run, we note that 1 iteration of K-means is cheaper than a single batch gradient step for most other methods able to learn high-level invariant features. We expect that these experiments would be impossible to perform in a reasonable amount of time on our cluster with another algorithm.
6
Some filters have reversed polarity due to our use of absolute-value rectification during training of the first layer.
there are also other pairs that differ significantly yet are included in the group due to the single-link clustering method. Note that some of our groups are composed of similar edges at differing
locations, and thus appear to have learned translation invariance as expected.
3.2
Higher-Level Simple and Complex Cells
Finally, we inspect the learned higher layer simple cell and complex cell features, ŝ and ĉ, particularly to see whether any of them are selective for an object class. The most commonly occurring object in these video thumbnails is human faces (even though we estimate that much less than 0.1% of patches contain a well-framed face). Thus we search through our learned features for cells that are selective for human faces at varying locations and scales. To locate such features we use a dataset of labeled images: several hundred thousand non-face images as well as tens of thousands of known face images from the "Labeled Faces in the Wild" (LFW) dataset [4].7
To test whether any of the ŝ simple cell features are selective for faces, we use each feature by itself as a "detector" on the labeled dataset: we compute the area under the precision-recall curve (AUC) obtained when each feature's response ŝ_i is used as a simple classifier. Indeed, it turns out that there are a handful of high-level features that tend to be good detectors for faces. The precision-recall curves for the best 5 detectors are shown in Figure 3c (top curves); the best of these achieves 86% AUC. We visualize 16 of the simple cell features identified by this procedure8 in Figure 4a along with a sampling of the image patches that activate the first of these cells strongly. There it can be seen that these simple cells are selective for faces located at particular locations and scales. Within each group the faces differ slightly due to the learned invariance provided by the complex cells in the lower layer (and thus the mean of each group of images is blurry).
Figure 3: (a) First layer simple cell filters learned by K-means. (b) Sets of simple cell filters belonging to three pooling groups learned by our complex cell training algorithm. (c) Precision-Recall
curves showing selectivity for human faces of 5 low-level simple cells trained from a full 32-by-32
patch (red curves, bottom) versus 5 higher-level simple cells (green curves, top). Performance of the
best linear filter found by SVM from labeled data is also shown (black dotted curve, middle).
It may appear that this result could be obtained by applying our simple cell learning procedure
directly to full 32-by-32 images without any attempts at incorporating local invariance. That is,
rather than training D (the first-layer filters) from 8-by-8 patches, we could try to train D directly
from the 32-by-32 images. This turns out not to be successful. The lower curves in Figure 3c are the
precision-recall curves for the best 5 simple cells found in this way. Clearly the higher-level features
are dramatically better detectors than simple cells built directly from pixels9 (only 64% AUC).
7
Our positive face samples include the entire set of labeled faces, plus randomly scaled and translated copies.
8
We visualize the higher-level features by averaging together the 100 unlabeled images from our YouTube dataset that elicit the strongest activation.
9
These simple cells were trained by applying K-means to normalized, whitened 32-by-32 pixel patches from a smaller unlabeled set known to have a higher concentration of faces. Due to this, a handful of centroids look roughly like face exemplars and act as simple "template matchers". When trained on the full dataset (which contains far fewer faces), K-means learns only edge and arc features which perform much worse (about 45% AUC).
    Cell type                     AUC
    Best 32-by-32 simple cell     64%
    Best in ŝ                     86%
    Best in ĉ                     80%
    Supervised linear SVM         77%
Table 1: Area under PR curve for different cells on our face detection validation set. Only the SVM uses labeled data.
Figure 4: Visualizations. (a) A collection of patches from our unlabeled dataset that maximally activate one of the high-level simple cells from ŝ. (b) The mean of the top stimuli for a handful of face-selective cells in ŝ. (c) Visualization of the face-selective cells that belong to one of the complex cells in ĉ discovered by the single-link clustering algorithm applied to D̂. (d) A collection of unlabeled patches that elicit a strong response from the complex cell visualized in (c); virtually all are faces, at a variety of scales and positions. Compare to (a).
As a second control experiment we train a linear SVM from half of the labeled data using only
pixels as input (contrast-normalized and whitened). The PR curve for this linear classifier is shown
in Figure 3c as a black dotted line. There we see that the supervised linear classifier is significantly
better (77% AUC) than the 32-by-32 linear simple cells. On the other hand, it does not perform as
well as the higher level simple cells learned by our system even though it is likely the best possible
linear detector.
Finally, we inspect the higher-level complex cells learned by applying the same agglomerative clustering procedure to the higher-level simple cell filters. Due to the invariance introduced at the lower layers, two simple cells that detect faces at slightly different locations or scales will often have very similar filter weights, and thus we expect our algorithm to find and combine these simple cells into higher-level invariant feature cells.
To visualize our higher-level complex cell features ĉ, we can simply look at visualizations for all of the simple cells in each of the groups Ĝ. These visualizations show us the set of patches that strongly activate each simple cell, and hence also activate the complex cell. The results of such a visualization for one group that was found to contain only face-selective cells is shown in Figure 4c. There it can be seen that this single "complex cell" selects for faces at multiple positions and scales. A sampling of image patches collected from the unlabeled data that strongly activate the corresponding complex cell are shown in Figure 4d. We see that the complex cell detects many faces but at a much wider variety of positions and scales compared to the simple cells, demonstrating that even "higher level" invariances are being captured, including scale invariance. Benchmarked on our labeled set, this complex cell achieves 80.0% AUC, somewhat worse than the very best simple cells, but still in the top 10 performing cells in the entire network. Interestingly, the qualitative results in Figure 4d are excellent, and we believe these images represent an even greater range of variations than those in the labeled set. Thus the 80% AUC number may somewhat under-rate the quality of these features.
These results suggest that the basic notions of invariance and selectivity that underpin popular feature learning algorithms may be sufficient to discover the kinds of high-level features that we desire, possibly including whole object classes robust to local and global variations. Indeed, using simple implementations of selective and invariant features closely related to existing algorithms, we have found that it is possible to build features with high selectivity for a coherent, commonly occurring object class. Though human faces occur only very rarely in our very large dataset, it is clear that the complex cell visualized in Figure 4d is adept at spotting them amongst tens of millions of images. The enabler for these results is the scalability of the algorithms we have employed, suggesting that other systems can likely achieve similar results to the ones shown here if their computational limitations are overcome.
7
4
Related Work
The method that we have proposed has close connections to a wide array of prior work. For instance,
the basic notions of selectivity and invariance that drive our system can be identified in many other
algorithms: Group sparse coding methods [3] and Topographic ICA [6, 7] build invariances by
pooling simple cells that lie in an invariant subspace, identified by strong scale correlations between
cell responses. The advantage of this criterion is that it can determine which features to pool together
even when the simple cell filters are orthogonal (where they would be too far apart for our algorithm
to recognize their relationship). Our results suggest that while this type of invariance is very useful,
there exist simple ways of achieving a similar effect.
Our approach is also connected with methods that attempt to model the geometric (e.g., manifold)
structure of the input space. For instance, Contractive Auto-Encoders [16, 15], Local Coordinate
Coding [20], and Locality-constrained Linear Coding [19] learn sparse linear filters while attempting
to model the manifold structure staked out by these filters (sometimes termed "anchor points").
One interpretation of our method, suggested by Figure 1b, is that with extremely overcomplete
dictionaries it is possible to use trivial distance calculations to identify neighboring points on the
manifold. This in turn allows us to construct features invariant to shifts along the manifold with
little effort. [1] use similar intuitions to propose a clustering method similar to our approach.
One of our key results, the unsupervised discovery of features selective for human faces, is fairly unique (though seen recently in the extremely large system of [11]). Results of this kind have
appeared previously in restricted settings. For instance, [13] trained Deep Belief Network models
that decomposed object classes like faces and cars into parts using a probabilistic max-pooling to
gain translation invariance. Similarly, [21] has shown results of a similar flavor on the Caltech
recognition datasets. [22] showed that a probabilistic model (with some hand-coded geometric
knowledge) can recover clusters containing 20 known object class silhouettes from outlines in the
LabelMe dataset. Other authors have shown the ability to discover detailed manifold structure (e.g.,
as seen in the results of embedding algorithms [18, 17]) when trained in similarly restricted settings.
The structure that these methods discover, however, is far more apparent when we are using labeled,
tightly cropped images. Even if we do not use the labels themselves the labeled examples are, by
construction, highly clustered: faces will be separated from other objects because there are no partial
faces or random clutter. In our dataset, no supervision is used except to probe the representation post
hoc.
Finally, we note the recent, extensive findings of Le et al. [11]. In that work an extremely large 9-layer neural network based on a TICA-like learning algorithm [10, 6] is also capable of identifying
a wide variety of object classes (including cats and upper-bodies of people) seen in YouTube videos.
Our results complement this work in several key ways. First, by training on smaller randomly
cropped patches, we show that object-selectivity may still be obtained even when objects are almost
never framed properly within the image, ruling out this bias as the source of object-selectivity. Second, we have shown that the key concepts (sparse selective filters and invariant-subspace pooling)
used in their system can also be implemented in a different way using scalable clustering algorithms, allowing us to achieve results reminiscent of theirs using a vastly smaller amount of computing power. (We used 240 cores, while their large-scale system is composed of 16,000 cores.) In
combination, these results point strongly to the conclusion that almost any highly scalable implementation of existing feature-learning concepts is enough to discover these sophisticated high-level
representations.
5
Conclusions
In this paper we have presented a feature learning system composed of two highly scalable but otherwise very simple learning algorithms: K-means clustering to find sparse linear filters ("simple cells") and agglomerative clustering to stitch simple cells together into invariant features ("complex cells").
We showed that these two components are, in fact, capable of learning complicated high-level representations in large scale experiments on unlabeled images pulled from YouTube. Specifically, we
found that higher level simple cells could learn to detect human faces without any supervision at
all, and that our complex-cell learning procedure combined these into even higher-level invariances.
These results indicate that we are apparently equipped with many of the key principles needed to
achieve such results and that a critical remaining puzzle is how to scale up our algorithms to the
sizes needed to capture more object classes and even more sophisticated invariances.
8
References
[1] Y. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for image recognition. In 13th International Conference on Computer Vision, pages 2651–2658, 2011.
[2] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, pages 921–928, 2011.
[3] P. Garrigues and B. Olshausen. Group sparse coding with a laplacian scale mixture prior. In Advances in Neural Information Processing Systems 23, pages 676–684, 2010.
[4] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[5] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[6] A. Hyvärinen, P. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[7] A. Hyvärinen, J. Hurri, and P. Hoyer. Natural Image Statistics. Springer-Verlag, 2009.
[8] T. Kohonen. Emergence of invariant-feature detectors in self-organization. In M. Palaniswami et al., editors, Computational Intelligence, A Dynamic System Perspective, pages 17–31. IEEE Press, New York, 1995.
[9] A. Krizhevsky. Learning multiple layers of features from Tiny Images. Master's thesis, Dept. of Comp. Sci., University of Toronto, 2009.
[10] Q. Le, A. Karpenko, J. Ngiam, and A. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. In Advances in Neural Information Processing Systems, 2011.
[11] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989.
[13] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning, pages 609–616, 2009.
[14] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2, 1999.
[15] S. Rifai, Y. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In Advances in Neural Information Processing, 2011.
[16] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In International Conference on Machine Learning, 2011.
[17] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
[18] L. van der Maaten and G. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, November 2008.
[19] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In Computer Vision and Pattern Recognition, pages 3360–3367, 2010.
[20] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems 22, pages 2223–2231, 2009.
[21] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011.
[22] L. Zhu, Y. Chen, A. Torralba, W. Freeman, and A. Yuille. Part and Appearance Sharing: Recursive Compositional Models for Multi-View Multi-Object Detection. In Computer Vision and Pattern Recognition, 2010.
Competitive Anti-Hebbian Learning of Invariants
Nicol N. Schraudolph
Computer Science & Engr. Dept.
University of California, San Diego
La Jolla, CA 92093-0114
Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
La Jolla, CA 92186-5800
[email protected]
[email protected]
Abstract
Although the detection of invariant structure in a given set of input patterns is
vital to many recognition tasks, connectionist learning rules tend to focus on
directions of high variance (principal components). The prediction paradigm is
often used to reconcile this dichotomy; here we suggest a more direct approach
to invariant learning based on an anti-Hebbian learning rule. An unsupervised
two-layer network implementing this method in a competitive setting learns to
extract coherent depth information from random-dot stereograms.
1
INTRODUCTION: LEARNING INVARIANT STRUCTURE
Many connectionist learning algorithms share with principal component analysis (Jolliffe,
1986) the strategy of extracting the directions of highest variance from the input. A single
Hebbian neuron, for instance, will come to encode the input's first principal component
(Oja and Karhunen, 1985); various forms of lateral interaction can be used to force a layer
of such nodes to differentiate and span the principal component subspace - cf. (Sanger,
1989; Kung, 1990; Leen, 1991), and others. The same type of representation also develops
in the hidden layer of backpropagation autoassociator networks (Baldi and Hornik, 1989).
However, the directions of highest variance need not always be those that yield the most
information, or - as the case may be - the information we are interested in (Intrator,
1991). In fact, it is sometimes desirable to extract the invariant structure of a stimulus
instead, learning to encode those aspects that vary the least. The problem, then, is how to
achieve this within a connectionist framework that is so closely tied to the maximization of
variance.
In (Földiák, 1991), spatial invariance is turned into a temporal feature by presenting transformation sequences within invariance classes as a stimulus. A built-in temporal smoothness
constraint enables Hebbian neurons to learn these transformations, and hence the invariance
classes. Although this is an efficient and neurobiologically attractive strategy it is limited
by its strong assumptions about the nature of the stimulus.
A more general approach is to make information about invariant structure available in the
error signal of a supervised network. The most popular way of doing this is to require the
network to predict the next patch of some structured input from the preceding context, as in
(Elman, 1990); the same prediction technique can be used across space as well as time. It is
also possible to explicitly derive an error signal from the mutual information between two
patches of structured input (Becker and Hinton, 1992), a technique which has been applied
to viewpoint-invariant object recognition (Zemel and Hinton, 1991).
2
METHODS
2.1
ANTI-HEBBIAN FEEDFORWARD LEARNING
In most formulations of the covariance learning rule it is quietly assumed that the learning
rate be positive. By reversing the sign of this constant in a recurrent autoassociator, Kohonen
constructed a "novelty filter" that learned to be insensitive to familiar features in its input
(Kohonen, 1989). More recently, such anti-Hebbian synapses have been used for lateral
decorrelation of feature detectors (Barlow and Földiák, 1989; Leen, 1991) as well as, in differential form, removal of temporal variations from the input (Mitchison, 1991).
We suggest that in certain cases the use of anti-Hebbian feedforward connections to learn
invariant structure may eliminate the need to bring in the heavy machinery of supervised
learning algorithms required by the prediction paradigm, with its associated lack of neurobiological plausibility. Specifically, this holds for linear problems, where the stimuli lie
near a hyperplane in the input space: the weight vector of an anti-Hebbian neuron will
move into a direction normal to that hyperplane, thus characterizing the invariant structure.
Of course a set of Hebbian feature detectors whose weight vectors span the hyperplane
would characterize the associated class of stimuli just as well. The anti-Hebbian learning
algorithm, however, provides a more efficient representation when the dimensionality of
the hyperplane is more than half that of the input space, since less normal vectors than
spanning vectors are required for unique characterization in this case. Since they remove
rather than extract the variance within a stimulus class, anti-Hebbian neurons also present
a very different output representation to subsequent layers.
Unfortunately it is not sufficient to simply negate the learning rate of a layer of Hebbian
feature detectors in order to turn them into working anti-Hebbian in variance detectors:
although such a change of sign does superficially achieve the intended effect, many of the
subtleties that make Hebb's rule work in practice do not survive the transformation. In what
follows we address some of the problems thus introduced.
Like the Hebb rule, anti-Hebbian learning requires weight normalization, in this case to
prevent weight vectors from collapsing to zero. Oja's active decay rule (Oja, 1982) is a
popular local approximation to explicit weight normalization:
    Δw = η(xy − wy²),    where y = w⊤x    (1)
Here the first term in parentheses represents the standard Hebb rule, while the second is the
active decay. Unfortunately, Oja's rule can not be used for weight growth in anti-Hebbian
neurons since it is unstable for negative learning rates (η < 0), as is evident from the
observation that the growth/decay term is proportional to w. In our experiments, explicit
L2-normalization of weight vectors was therefore used instead.
Hebbian feature detectors attain maximal activation for the class of stimuli they represent.
Since the weight vectors of anti-Hebbian invariance detectors are normal to the invariance
class they represent, membership in that class is signalled by a zero activation. In other
words, linear anti-Hebbian nodes signal violations of the constraints they encode rather
than compliance. While such an output representation can be highly desirable for some
applications 1, it is unsuitable for others, such as the classification of mixtures of invariants
described below.
We therefore use a symmetric activation function that responds maximally for a zero net
input, and decays towards zero for large net inputs. More specifically, we use Gaussian
activation functions, since these allow us to interpret the nodes' outputs as class membership
probabilities. Soft competition between nodes in a layer can then be implemented simply
by normalizing these probabilities (i.e. dividing each output by the sum of outputs in a
layer), then using them to scale weight changes (Nowlan, 1990).
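A minimal sketch of such a layer's forward pass; the Gaussian tuning width is a free parameter here (an assumption of the sketch), and the weight update itself is discussed in Section 2.2.

    import numpy as np

    def layer_outputs(W, x, width=1.0):
        # W: (n_nodes, n_inputs) with unit-norm rows. Gaussian activations peak
        # at zero net input, so a node signals membership in its invariance class.
        y = W @ x                             # net inputs
        p = np.exp(-0.5 * (y / width)**2)     # Gaussian activations
        r = p / p.sum()                       # soft competition (Nowlan, 1990)
        return y, p, r                        # r later scales each node's weight change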
2.2
AN ANTI-HEBBIAN OBJECTIVE FUNCTION
The magnitude of weight change in a Hebbian neuron is proportional to the cosine of
the angle between input and weight vectors. This means that nodes that best represent the
current input learn faster than those which are further away, thus encouraging differentiation
among weight vectors. Since anti-Hebbian weight vectors are normal to the hyperplanes
they represent, those that best encode a given stimulus will experience the least change in
weights. As a result, weight vectors will tend to clump together unless weight changes
are rescaled to counteract this deficiency. In our experiments, this is done by the soft
competition mechanism; here we present a more general framework towards this end.
A simple Hebbian neuron maximizes the variance of its output y through stochastic approximation by performing gradient ascent in ½y² (Oja and Karhunen, 1985):

    Δwi ∝ ∂/∂wi (½ y²) = y ∂y/∂wi = xi y    (2)
As seen above, it is not sufficient for an anti-Hebbian neuron to simply perform gradient
descent in the same function. Instead, an objective function whose derivative has inverse
magnitude to the above at every point is needed, as given by
    Δwi ∝ ∂/∂wi (½ log y²) = (1/y) ∂y/∂wi = xi / y    (3)
1
Consider the subsumption architecture of a hierarchical network in which higher layers only receive information that is not accounted for by earlier layers.
[Figure 1: curves of the candidate objective functions plotted over roughly y ∈ (−4, 4); see caption.]
Figure 1: Possible objective functions for anti-Hebbian learning (see text).
Unfortunately, the pole at y = 0 presents a severe problem for simple gradient descent methods: the near-infinite derivatives in its vicinity lead to catastrophically large step sizes.
More sophisticated optimization methods deal with this problem by explicitly controlling
the step size; for plain gradient descent we suggest reshaping the objective function at the
pole such that its partials never exceed the input in magnitude:
    Δwi ∝ ∂/∂wi ε log(y² + ε²) = 2ε xi y / (y² + ε²)    (4)

where ε > 0 is a free parameter determining at which point the logarithmic slope is
abandoned in favor of a quadratic function which forms an optimal trapping region for
simple gradient descent (Figure 1).
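A small numerical check of this bound (our sketch; ε = 0.3 is an arbitrary choice):

    import numpy as np

    def bounded_anti_hebbian_grad(x, y, eps):
        # Per Eq. (4): 2*eps*x*y/(y^2 + eps^2). Since |2*eps*y| <= y^2 + eps^2,
        # each component is bounded in magnitude by the corresponding |x_i|.
        return 2.0 * eps * x * y / (y**2 + eps**2)

    x = np.array([1.0, -2.0, 0.5])
    eps = 0.3
    for y in (1e-6, eps, 10.0):               # bound is tight at y = eps
        g = bounded_anti_hebbian_grad(x, y, eps)
        assert np.all(np.abs(g) <= np.abs(x) + 1e-12)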
3
RESULTS ON RANDOM-DOT STEREOGRAMS
In random-dot stereograms, stimuli of a given stereo disparity lie on a hyperplane whose
dimensionality is half that of the input space plus the disparity in pixels. This is easily
appreciated by considering that given, say, the left half-image and the disparity, one can
predict the right half-image except for the pixels shifted in at the edge. Thus stereo
disparities that are small compared to the receptive field width can be learned equally well
by Hebbian and anti-Hebbian algorithms; when the disparity approaches receptive field
width, however, anti-Hebbian neurons have a distinct advantage.
3.1
SINGLE LAYER NETWORK: LOCAL DISPARITY TUNING
Our training set consisted of stereo images of 5,000 frontoparallel strips at uniformly
random depth covered densely with Gaussian features of random location, width, polarity
and power. The images were discretized by integrating over pixel bins in order to allow for
sub-pixel disparity acuity. Figure 2 shows that a single cluster of five anti-Hebbian nodes
with soft competition develops near-perfect tuning curves for local stereo disparity after
10 sweeps through this training set. This disparity tuning is achieved by learning to have
corresponding weights (at the given disparity) be of equal magnitude but opposite sign, so
that any stimulus pattern at that disparity yields a zero net input and thus maximal response.
Figure 2: Sliding window average response of first-layer nodes after presentation of 50,000
stereograms as a function of stimulus disparity: strong disparity tuning is evident.
[Figure 3: left and right half-image inputs connect with random connectivity (5/7 per half-image) to two first-layer clusters of nodes with Gaussian nonlinearities, soft competition, and the anti-Hebbian learning rule; a fully connected second-layer cluster monitors both.]
Figure 3: Architecture of the network (see text).
Note, however, that this type of detector suffers from false positives: input patterns that
happen to yield near-zero net input even though they have a different stereo disparity.
Although the individual response of a tuned node to an input pattern of the wrong disparity
is therefore highly idiosyncratic, the sliding window average of each response with its 250
closest neighbors (with respect to disparity) shown in Figure 2 is far more well-behaved.
This indicates that the average activity over a number of patterns (in a "moving stereogram" paradigm), or, alternatively, over a population of nodes tuned to the same disparity, allows discrimination of disparities with sub-pixel accuracy.
3.2 TWO-LAYER NETWORK: COHERENT DISPARITY TUNING
In order to investigate the potential for hierarchical application of this architecture, it
was extended to two layers as shown in Figure 3. The two first-layer clusters with non-overlapping receptive fields extract local stereo disparity as before; their output is monitored
by a second-layer cluster. Note that there is no backpropagation of derivatives: all three
clusters use the same unsupervised learning algorithm.
This network was trained on coherent input, i.e. stimuli for which the stereo disparity was
identical across the receptive field boundary of first-layer clusters. As shown in Figure 4,
the second layer learns to preserve the first layer's disparity tuning for coherent patterns,
albeit in in somewhat degraded form. Each node in the second layer learns to pick out
exactly the two corresponding nodes in the first-layer clusters, again by giving them weights
of equal magnitude but opposite sign.
However, the second layer represents more than just a noisy copy of the first layer: it
meaningfully integrates coherence information from the two receptive fields. This can be
demonstrated by testing the trained network on non-coherent stimuli which exhibit a depth
discontinuity between the receptive fields of first-layer clusters. The overall response of
the second layer is tuned to the coherent stimuli it was trained on (Figure 5).
4
DISCUSSION
Although a negation of the learning rate introduces various problems to the Hebb rule,
feedforward anti-Hebbian networks can pick up invariant structure from the input. We
have demonstrated this in a competitive classification setting; other applications of this
framework are possible. We find the subsumption aspect of anti-Hebbian learning particularly intriguing: the real world is so rich in redundant data that a learning rule which can
adaptively ignore much of it must surely be an advantage. From this point of view, the
promising first experiments we have reported here use quite impoverished inputs; one of
our goals is therefore to extend this work towards real-world stimuli.
Acknowledgements
We would like to thank Geoffrey Hinton, Sue Becker, Tony Bell and Steve Nowlan for the
stimulating and helpful discussions we had. Special thanks to Sue Becker for permission
to use her random-dot stereogram generator early in our investigation. This work was
supported by a fellowship stipend from the McDonnell-Pew Center for Cognitive Neuroscience at San Diego to the first author, who also received a NIPS travel grant enabling him
to attend the conference.
Figure 4: Sliding window average response of second-layer nodes after presentation of
250,000 coherent stereograms as a function of stimulus disparity: disparity tuning is preserved in degraded form.
Figure 5: Sliding window average of total second-layer response to non-coherent input as
a function of stimulus discontinuity: second layer is tuned to coherent patterns.
References
Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis:
Learning from examples without local minima. Neural Networks, 2:53-58.
Barlow, H. B. and Földiák, P. (1989). Adaptation and decorrelation in the cortex. In Durbin, R. M., Miall, C., and Mitchison, G. J., editors, The Computing Neuron, chapter 4, pages 54-72. Addison-Wesley, Wokingham.
Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers
surfaces in random-dot stereograms. Nature, to appear.
Elman, J. (1990). Finding structure in time. Cognitive Science, 14: 179-211.
Földiák, P. (1991). Learning invariance from transformation sequences. Neural Computation, 3:194-200.
Intrator, N. (1991). Exploratory feature extraction in speech signals. In (Lippmann et al.,
1991), pages 241-247.
Jolliffe, I. (1986). Principal Component Analysis. Springer-Verlag, New York.
Kohonen, T. (1989). Self-Organization and Associative Memory. Springer-Verlag, Berlin,
3 edition.
Kung, S. Y. (1990). Neural networks for extracting constrained principal components.
submitted to IEEE Trans. Neural Networks.
Leen, T. K. (1991). Dynamics of learning in linear feature-discovery networks. Network,
2:85-105.
Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors (1991). Advances in Neural
Information Processing Systems, volume 3, Denver 1990. Morgan Kaufmann, San
Mateo.
Mitchison, G. (1991). Removing time variation with the anti-hebbian differential synapse.
Neural Computation, 3:312-320.
Nowlan, S. J. (1990). Maximum likelihood competitive learning. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems, volume 2, pages 574-582, Denver 1989. Morgan Kaufmann, San Mateo.
Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of
Mathematical Biology, 15:267-273.
Oja, E. and Karhunen, J. (1985). On stochastic approximation of the eigenvectors and
eigenvalues of the expectation of a random matrix. Journal Of Mathematical Analysis
and Applications, 106:69-84.
Sanger, T. D. (1989). Optimal unsupervised learning in a single-layer linear feedforward
neural network. Neural Networks, 2:459-473.
Zemel, R. S. and Hinton, G. E. (1991). Discovering viewpoint-invariant relationships that
characterize objects. In (Lippmann et al., 1991), pages 299-305.
Multiresolution analysis on the symmetric group
Risi Kondor and Walter Dempsey
Department of Statistics and Department of Computer Science
The University of Chicago
{risi,wdempsey}@uchicago.edu
Abstract
There is no generally accepted way to define wavelets on permutations. We address this issue by introducing the notion of coset based multiresolution analysis
(CMRA) on the symmetric group, find the corresponding wavelet functions, and
describe a fast wavelet transform for sparse signals. We discuss potential applications in ranking, sparse approximation, and multi-object tracking.
1
Introduction
A variety of problems in machine learning, from ranking to multi-object tracking, involve inference
over permutations. Invariably, the bottleneck in such problems is that the number of permutations
grows with n!, ruling out the possibility of representing generic functions or distributions over permutations explicitly, as soon as n exceeds about ten or twelve.
Recently, a number of authors have advocated approximations based on a type of generalized Fourier
transform [1][2][3][4][5][6]. On the group Sn of permutations of n objects, this takes the form

    f̂(ρ) = Σ_{σ∈Sn} f(σ) ρ(σ),    (1)
where ρ plays the role of frequency, while the ρ matrix-valued functions, called irreducible representations, are similar to the e^{−i2πkx/N} factors in ordinary Fourier analysis. It is possible to show that, just as in classical Fourier analysis, the f̂(ρ) Fourier matrices correspond to components of f at different levels of smoothness with respect to the underlying permutation topology [2][7]. Ordering the ρ's from smooth to rough as ρ1 ⪯ ρ2 ⪯ . . ., one is thus led to "band-limited" approximations of f via the nested sequence of spaces

    V_λ = { f ∈ R^{Sn} | f̂(ρ) = 0 for all ρ ≻ λ }.
While this framework is attractive mathematically, it suffers from the same disease as classical Fourier approximations, namely its inability to handle discontinuities with grace. In applications such as multi-object tracking this is a particularly serious issue, because each observation of the form "object i is at track j" introduces a new discontinuity into the assignment distribution, and the resulting Gibbs phenomenon makes it difficult to ensure even that f(σ) remains positive.
The time-honored solution is to use wavelets. However, in the absence of a natural dilation operator,
defining wavelets on a discrete space is not trivial. Recently, Gavish et al. defined an analog of Haar
wavelets on trees [8], while Coifman and Maggioni [9] and Hammond et al. [10] managed to define
wavelets on general graphs. In this paper we attempt to do the same on the much more structured
domain of permutations by introducing an altogether new notion of multiresolution analysis, which
we call coset-based multiresolution (CMRA).
[Figure 1: Multiresolution. The diagram shows the nested chain of scaling spaces ⋯ → V_{-3} → V_{-2} → V_{-1} → V_0, with a wavelet space W_{-1}, W_{-2}, W_{-3}, W_{-4} split off at each level; diagram omitted.]
2 Multiresolution analysis and the multiscale structure of S_n
The notion of multiresolution analysis on the real line was first formalized by Mallat [11]: a nested sequence of function spaces

⋯ ⊂ V_{-1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯

is said to constitute a multiresolution analysis (MRA) for L²(ℝ) if it satisfies the following axioms:

MRA1. ⋂_k V_k = {0},
MRA2. ⋃_k V_k = L²(ℝ) (taking the closure),
MRA3. for any f ∈ V_k and any m ∈ ℤ, the function f′(x) = f(x − m 2^{-k}) is also in V_k,
MRA4. for any f ∈ V_k, the function f′(x) = f(2x) is in V_{k+1}.

Setting V_{k+1} = V_k ⊕ W_k and starting with, say, V_ℓ, the process of moving up the chain of spaces can be thought of as splitting V_ℓ into a smoother part V_{ℓ-1} (called the scaling space) and a rougher part W_{ℓ-1} (called the wavelet space), and then repeating this process recursively for V_{ℓ-1}, V_{ℓ-2}, and so on (Figure 1).

To get an actual wavelet transform, one needs to define appropriate bases for the {V_i} and {W_i} spaces. In the simplest case, a single function φ, called the scaling function, is sufficient to generate an orthonormal basis for V_0, and a single function ψ, called the mother wavelet, generates an orthonormal basis for W_0. In this case, defining φ_{k,m}(x) = 2^{k/2} φ(2^k x − m) and ψ_{k,m}(x) = 2^{k/2} ψ(2^k x − m), we find that {φ_{k,m}}_{m∈ℤ} and {ψ_{k,m}}_{m∈ℤ} will be orthonormal bases for V_k and W_k, respectively. Moreover, {ψ_{k,m}}_{k,m∈ℤ} is an orthonormal basis for the whole of L²(ℝ). By the wavelet transform of f we mean its expansion in this basis.

The difficulty in defining multiresolution analysis on discrete spaces is that there is no natural analog of dilation, as required by Mallat's fourth axiom. However, in the specific case of the symmetric group, we do at least have a natural multiscale structure on our domain. Our goal in this paper is to find an analog of Mallat's axioms that can take advantage of this structure.
2.1 Two decompositions of ℝ^{S_n}
A permutation of n objects is a bijective mapping {1, 2, …, n} → {1, 2, …, n}. With respect to the natural notion of multiplication (σ_2 σ_1)(i) = σ_2(σ_1(i)), the n! different permutations of {1, …, n} form a group, called the symmetric group of degree n, which we denote S_n.

Our MRA on S_n is born of the tension between two different ways of carving up ℝ^{S_n} into orthogonal sums of subspaces: one corresponding to subdivision in "time", the other in "frequency". The first of these is easier to describe, since it is based on recursively partitioning S_n according to the hierarchy of sets

S_{i_1} = { σ ∈ S_n | σ(n) = i_1 },   i_1 ∈ {1, …, n},
S_{i_1,i_2} = { σ ∈ S_n | σ(n) = i_1, σ(n−1) = i_2 },   i_1 ≠ i_2, i_1, i_2 ∈ {1, …, n},

and so on, down to sets of the form S_{i_1…i_{n-1}}, which only have a single element. Intuitively, this tree of nested sets captures the way in which we zoom in on a particular permutation σ by first fixing σ(n), then σ(n−1), etc. (see Figure 2 in Appendix B in the Supplement). From the algebraic point of view, S_{i_1,…,i_k} is a so-called (left) S_{n-k}-coset

μ_{i_1,…,i_k} S_{n-k} := { μ_{i_1…i_k} τ | τ ∈ S_{n-k} },   (2)
where μ_{i_1…i_k} is a permutation mapping n ↦ i_1, …, n−k+1 ↦ i_k. This emphasizes that in some sense each S_{i_1,…,i_k} is just a "copy" of S_{n-k} inside S_n. The first important system of subspaces of ℝ^{S_n} for our purposes are the window spaces

𝕊_{i_1…i_k} = { f | supp(f) ⊆ S_{i_1…i_k} },   0 ≤ k ≤ n−1,   {i_1, …, i_k} ⊂ {1, …, n}.

Clearly, for any given k, ℝ^{S_n} = ⊕_{i_1,…,i_k} 𝕊_{i_1…i_k}.
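The nesting of the sets S_{i_1}, S_{i_1,i_2}, … is easy to materialize for small n. The sketch below (an illustration only, with permutations stored as 0-indexed tuples) groups S_n by the fixed values (σ(n), …, σ(n−k+1)) and confirms that each level-k cell is a copy of S_{n-k}.

# A small sketch: bucket S_n by (sigma(n), ..., sigma(n-k+1)), i.e. by which
# left S_{n-k}-coset mu_{i1...ik} S_{n-k} the permutation lies in.
import itertools
from collections import defaultdict

def coset_tree_level(n, k):
    buckets = defaultdict(list)
    for sigma in itertools.permutations(range(n)):
        key = tuple(sigma[n - 1 - j] for j in range(k))  # (sigma(n), sigma(n-1), ...)
        buckets[key].append(sigma)
    return buckets

buckets = coset_tree_level(4, 2)
print(len(buckets), len(next(iter(buckets.values()))))  # 12 cosets, each of size 2! = 2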
The second system of spaces is related to the behavior of functions under translation. In fact, there are two distinct ways in which a given f ∈ ℝ^{S_n} can be translated by some τ ∈ S_n: left-translation, f ↦ T_τ f, where (T_τ f)(σ) = f(τ⁻¹σ), and right-translation, f ↦ T_τ^R f, where (T_τ^R f)(σ) = f(στ⁻¹). For now we focus on the former.

We say that a space V ⊆ ℝ^{S_n} is a left S_n-module if it is invariant to left-translation in the sense that for any f ∈ V and τ ∈ S_n, T_τ f ∈ V. A fundamental result in representation theory tells us that if V is reducible in the sense that it has a proper subspace V_1 that is fixed by left-translation, then V = V_1 ⊕ V_2, where V_1 and V_2 are both (left S_n-)modules. In particular, ℝ^{S_n} is a (left S_n-)invariant space, therefore

ℝ^{S_n} = ⊕_{t∈T_n} M_t   (3)

for some set {M_t} of irreducible modules. This is our second important system of spaces.

To understand the interplay between modules and window spaces, observe that each coset μ_{i_1…i_k} S_{n-k} has an internal notion of left-translation

(T_τ^{i_1…i_k} f)(σ) = f(μ_{i_1…i_k} τ⁻¹ μ_{i_1…i_k}⁻¹ σ),   τ ∈ S_{n-k},   (4)
which fixes 𝕊_{i_1…i_k}. Therefore, 𝕊_{i_1…i_k} must be decomposable into a sum of irreducible S_{n-k}-modules,

𝕊_{i_1…i_k} = ⊕_{t∈T_{n-k}} M_t^{i_1…i_k}.   (5)

Furthermore, the modules of different window spaces can be defined in such a way that M_t^{i′_1,…,i′_k} = μ_{i′_1,…,i′_k} μ_{i_1…i_k}⁻¹ M_t^{i_1…i_k}. (Note that each M_t^{i_1…i_k} is an S_{n-k}-module in the sense of being invariant to the internal translation action (4), and this action depends on i_1…i_k.) Now, for any fixed t, the space U = ⊕_{i_1,…,i_k} M_t^{i_1…i_k} is fully S_n-invariant, and therefore we must also have U = ⊕_{α∈A} M_α, where the M_α are now irreducible S_n-modules. Whenever a relationship of this type holds between two sets of irreducible S_n- resp. S_{n-k}-modules, we say that the {M_α} modules are induced by {M_t^{i_1…i_k}}.
The situation is complicated by the fact that decompositions like (3) and (5) are not unique. In particular, there is no guarantee that the {M_α} induced modules will be amongst the modules featured in (3). However, there is a unique, so-called adapted system of modules, for which this issue does not arise. Specifically, if, as is usually done, we let the indexing set T_m be the set of Standard Young Tableaux (SYT) of size m (see Appendix A in the supplementary materials for the exact definition), such as

    1 3 5 6 7
t = 2 4           ∈ T_8,
    8

then the adapted modules at different levels of the coset tree are connected via

M_t^{i_1…i_k} = P_{i_1…i_k} [ ⊕_{t′∈t↑n} M_{t′} ],   ∀ t ∈ T_{n-k},   (6)

(here P_{i_1…i_k} denotes restriction to the window 𝕊_{i_1…i_k}; cf. (7) below), where t↑n := { t′ ∈ T_n | t′↓_{n-k} = t } and t′↓_{n-k} is the tableau that we get by removing the boxes containing n−k+1, …, n from t′. We also extend these relationships to sets in the obvious way: τ↓_{n-k} := { t′↓_{n-k} | t′ ∈ τ } and τ↑n := ⋃_{t∈τ} t↑n. We will give an explicit description of the adapted modules in Section 4. For now abstract relationships of the type (6) will suffice.
3 Coset based multiresolution analysis on S_n
Our guiding principle in defining an analog of Mallat's axioms for permutations is that the resulting multiresolution analysis should reflect the multiscale structure of the tree of cosets. At the same time, we also want the {V_k} spaces to be invariant to translation. Letting P be the projection operator

(P_{i_1…i_k} f)(σ) := f(σ) if σ ∈ μ_{i_1…i_k} S_{n-k}, and 0 otherwise,   (7)
we propose the following definition.

Definition 1 We say that a sequence of spaces V_0 ⊆ V_1 ⊆ ⋯ ⊆ V_{n-1} = ℝ^{S_n} forms a left-invariant coset based multiresolution analysis (L-CMRA) for S_n if

L1. for any f ∈ V_k and any σ ∈ S_n, we have T_σ f ∈ V_k,
L2. if f ∈ V_k, then P_{i_1…i_{k+1}} f ∈ V_{k+1}, for any i_1, …, i_{k+1}, and
L3. if g ∈ V_{k+1}, then for any i_1, …, i_{k+1} there is an f ∈ V_k such that P_{i_1…i_{k+1}} f = g.

Given any left-translation invariant space V_k, the unique V_{k+1} that satisfies axioms L1–L3 is V_{k+1} := ⊕_{i_1…i_{k+1}} P_{i_1…i_{k+1}} V_k. Applying this formula recursively, we find that

V_k = ⊕_{i_1…i_k} P_{i_1…i_k} V_0,   (8)
so V_0 determines the entire sequence of spaces V_0, V_1, …, V_{n-1}. In contrast to most classical MRAs, however, this relationship is not bidirectional: V_k does not determine V_0, …, V_{k-1}.

To gain a better understanding of L-CMRA, we exploit that (by axiom L1) each V_k is S_n-invariant, and is therefore a sum of irreducible S_n-modules. By the following proposition, if V_0 is a sum of adapted modules, then V_1, …, V_{n-1} are easy to describe.
Proposition 1 If {M_t}_{t∈T_n} are the adapted left S_n-modules of ℝ^{S_n}, and V_0 = ⊕_{t∈τ_0} M_t for some τ_0 ⊆ T_n, then

V_k = ⊕_{t∈τ_k} M_t,   W_k = ⊕_{t∈τ_{k+1}\τ_k} M_t,   where τ_k = τ_0↓_{n-k}↑n,   (9)

for any k ∈ {0, 1, …, n−1}.
Proof. By (6), P_{i_1…i_k}[⊕_{t′∈t↑n} M_{t′}] = M_t^{i_1…i_k}. Therefore, for any t′ ∈ (t↑n ∩ τ_0) there must be some f ∈ M_{t′} ⊆ V_0 such that for some i_1…i_k, P_{i_1…i_k} f ∈ M_t^{i_1…i_k} (and P_{i_1…i_k} f is non-zero). By Lemmas 1 and 2 in Appendix D, this implies that M_t^{i_1…i_k} ⊆ V_k for all i_1…i_k. On the other hand, from (6) it is also clear that if t′ ∉ τ_0, then M_t^{i_1…i_k} ∩ V_k = {0}. Therefore,

V_k = ⊕_{t∈τ_0↓_{n-k}} ⊕_{i_1…i_k} M_t^{i_1…i_k} = ⊕_{t″∈τ_0↓_{n-k}↑n} M_{t″}.

The expression for W_k follows from the general formula V_{k+1} = V_k ⊕ W_k.
Example 1 The simplest case of L-CMRA is when τ_0 = { 1 2 ⋯ n }. In this case, setting m = n−k, we find that τ_0↓_m = { 1 2 ⋯ m }, and τ_k = τ_0↓_m↑n is the set of all Young tableaux whose first row starts with the numbers 1, 2, …, m.

It so happens that M_{1 2 ⋯ m}^{i_1…i_k} is just the trivial invariant subspace of constant functions on μ_{i_1…i_k} S_{n-k}. Therefore, this instance of L-CMRA is an exact analog of Haar wavelets: V_k will consist of all functions that are constant on each left S_{n-k}-coset. Some more interesting examples of adapted L-CMRAs are described in Appendix C.
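Since V_k in this Haar-like case is simply the space of functions constant on each left S_{n-k}-coset, its orthogonal projection is a coset-wise average. A minimal sketch follows, assuming f is stored as a dict from permutation tuples to reals (zeros omitted).

# Projection of f onto V_k of Example 1: replace f by its mean over each left
# S_{n-k}-coset; f minus this projection then lies in the orthogonal complement.
import itertools
import math
from collections import defaultdict

def project_onto_Vk(f, n, k):
    sums = defaultdict(float)
    for sigma in itertools.permutations(range(n)):
        sums[tuple(sigma[n - 1 - j] for j in range(k))] += f.get(sigma, 0.0)
    size = math.factorial(n - k)  # every left S_{n-k}-coset has (n-k)! elements
    return {sigma: sums[tuple(sigma[n - 1 - j] for j in range(k))] / size
            for sigma in itertools.permutations(range(n))}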
When V0 cannot be written as a direct sum of adapted modules, the analysis becomes significantly
more complicated. Due to space limitations, we leave the discussion of this case to the Appendix.
3.1 Bi-invariant multiresolution analysis
The left-invariant multiresolution of Definition 1 is appropriate for problems like ranking, where we have a natural permutation invariance with respect to relabeling the objects to be ranked, but not the ranks themselves. In contrast, in problems like multi-object tracking, we want our V_0 ⊆ ⋯ ⊆ V_{n-1} hierarchy to be invariant on both the left and the right. This leads to the following definition.
Definition 2 We say that a sequence of spaces V_0 ⊆ V_1 ⊆ ⋯ ⊆ V_{n-1} = ℝ^{S_n} forms a bi-invariant coset based multiresolution analysis (Bi-CMRA) for S_n if

Bi1. for any f ∈ V_k and any σ ∈ S_n, we have T_σ f ∈ V_k and T_σ^R f ∈ V_k,
Bi2. if f ∈ V_{k-1}, then P_{i_1…i_k} f ∈ V_k, for any i_1, …, i_k; and
Bi3. V_k is the smallest subspace of ℝ^{S_n} satisfying Bi1 and Bi2.
Note that the third axiom had to be modified somewhat compared to Definition 1, but essentially it serves the same purpose as L3.

A subspace U that is invariant to both left- and right-translation (i.e., for any f ∈ U and any σ, τ ∈ S_n both T_σ f ∈ U and T_τ^R f ∈ U) is called a two-sided module. The main reason that Bi-CMRA is easier to describe than L-CMRA is that the irreducible two-sided modules in ℝ^{S_n}, called isotypic subspaces, are unique. In particular, the isotypics turn out to be

U_λ = ⊕_{t∈T_n : λ(t)=λ} M_t,   λ ∈ Λ_n,

where λ(t) is the vector (λ_1, …, λ_p) in which λ_i is the number of boxes in row i of t. For t to be a valid SYT, we must have λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_p ≥ 1, and Σ_{i=1}^p λ_i = n. We use Λ_n to denote the set of all such p-tuples, called integer partitions of n.
Bi-CMRA is a much more constrained framework than L-CMRA because (by axiom Bi1) each V_k space must be of the form V_k = ⊕_{λ∈κ_k} U_λ. It should come as no surprise that the way that κ_0 determines κ_1, …, κ_{n-1} is related to restriction and extension relationships between partitions. We write λ′ ⊑ λ if λ′_i ≤ λ_i for all i (assuming λ′ is padded with zeros to make it the same length as λ), and for m ≤ n, we define λ↓_m := { λ′ ∈ Λ_m | λ′ ⊑ λ } and λ↑n := { λ′ ∈ Λ_n | λ ⊑ λ′ }. Again, these operators are extended to sets of partitions by κ↓_m := ⋃_{λ∈κ} λ↓_m and κ↑n := ⋃_{λ∈κ} λ↑n. (See Figure 3 in Appendix B.)
Proposition 2 Given a set of partitions κ_0 ⊆ Λ_n, the corresponding Bi-CMRA comprises the spaces

V_k = ⊕_{λ∈κ_k} U_λ,   W_k = ⊕_{λ∈κ_{k+1}\κ_k} U_λ,   where κ_k = κ_0↓_{n-k}↑n.   (10)

Moreover, any system of spaces satisfying Definition 2 is of this form for some κ_0 ⊆ Λ_n.
Example 2 The simplest case of Bi-CMRA corresponds to taking κ_0 = {(n)}. In this case κ_0↓_{n-k} = {(n−k)}, and κ_k = { λ ∈ Λ_n | λ_1 ≥ n−k }. In Section 6 we discuss that V_k = ⊕_{λ∈κ_k} U_λ has a clear interpretation as the subspace of ℝ^{S_n} determined by up to k-th order interactions between elements of the set {1, …, n}.
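The index sets of Example 2 are easy to enumerate; the following is a small sketch, with partitions represented as non-increasing tuples.

# Enumerate kappa_k = { lambda in Lambda_n : lambda_1 >= n-k } of Example 2.
def partitions(n, maxpart=None):
    # Yields all integer partitions of n as non-increasing tuples.
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

n, k = 6, 2
kappa_k = [lam for lam in partitions(n) if lam[0] >= n - k]
print(kappa_k)  # [(6,), (5, 1), (4, 2), (4, 1, 1)]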
4 Wavelets
As mentioned in Section 2, to go from multiresolution analysis to orthogonal wavelets, one needs to define appropriate bases for the spaces V_0, W_0, W_1, …, W_{n-2}. This can be done via the close connection between irreducible modules and the {ρ_λ} irreducible representations (irreps) that we encountered in the context of the Fourier transform (1). As explained in Appendix A, each integer partition λ ∈ Λ_n has a corresponding irrep ρ_λ : S_n → ℝ^{d_λ×d_λ}; the rows and columns of the ρ_λ(σ) matrices are labeled by the set T_λ of standard Young tableaux of shape λ; and if the ρ_λ are defined according to Young's Orthogonal Representation (YOR), then for any t ∈ T_n and t′ ∈ T_{λ(t)}, the functions φ_{t′}(σ) = [ρ_{λ(t)}(σ)]_{t′,t} form a basis for the adapted module M_t. Thus, the orthonormal system of functions

φ_{t,t′}(σ) = √(d_λ/n!) [ρ_λ(σ)]_{t′,t},   t ∈ τ_0,  λ = λ(t),  t′ ∈ T_λ,   (11)
ψ_{t,t′}^k(σ) = √(d_λ/n!) [ρ_λ(σ)]_{t′,t},   t ∈ τ_{k+1}\τ_k,  λ = λ(t),  t′ ∈ T_λ,   (12)

seems to be a natural choice of scaling resp. wavelet functions for the L-CMRA of Proposition 1.
Similarly, we can take

φ_{t,t′}(σ) = √(d_λ/n!) [ρ_λ(σ)]_{t′,t},   λ ∈ κ_0,  t, t′ ∈ T_λ,   (13)
ψ_{t,t′}^k(σ) = √(d_λ/n!) [ρ_λ(σ)]_{t′,t},   λ ∈ κ_{k+1}\κ_k,  t, t′ ∈ T_λ,   (14)
as a basis for the Bi-CMRA of Proposition 2. Comparing with (1), we find that if we use these bases to compute the wavelet transform of a function, then the wavelet coefficients will just be rescaled versions of specific columns of the Fourier transform. From the computational point of view, this is encouraging, because there are well-known and practical fast Fourier transforms (FFTs) available for S_n [12][13]. On the other hand, it is also somewhat of a letdown, since it suggests that all that we have gained so far is a way to reinterpret parts of the Fourier transform as wavelet coefficients.

An even more serious concern is that the ψ_{t,t′}^k functions are not at all localized in the spatial domain, largely contradicting the very idea of wavelets. A solution to this dilemma emerges when we consider that since

τ_{k+1} \ τ_k = (τ_0↓_{n-k-1}↑n) \ (τ_0↓_{n-k}↑n) = ((τ_0↓_{n-k-1}↑_{n-k}) \ (τ_0↓_{n-k}))↑n,

each of the W_k wavelet spaces of Proposition 1 can be rewritten as

W_k = ⊕_{i_1…i_k} ⊕_{t∈ν_k} M_t^{i_1…i_k},   ν_k = (τ_0↓_{n-k-1}↑_{n-k}) \ (τ_0↓_{n-k}),   (15)
and similarly, the wavelet spaces of Proposition 2 can be rewritten as

W_k = ⊕_{i_1…i_k} ⊕_{λ∈κ̄_k} U_λ^{i_1…i_k},   κ̄_k = (κ_0↓_{n-k-1}↑_{n-k}) \ (κ_0↓_{n-k}),   (16)

where the U_λ^{i_1…i_k} are now the "local isotypics" U_λ^{i_1…i_k} := ⊕_{t∈T_λ} M_t^{i_1…i_k}. An orthonormal basis for the M_t^{i_1…i_k} spaces is provided by the local Fourier basis functions

ψ_{t,t′}^{i_1…i_k}(σ) := √(d_{λ(t)}/(n−k)!) [ρ_{λ(t)}(μ_{i_1…i_k}⁻¹ σ)]_{t′,t} if σ ∈ μ_{i_1…i_k} S_{n-k}, and 0 otherwise,   (17)
which are localized both in "frequency" and in "space". This basis also affirms the multiscale nature of our wavelet spaces, since projecting onto the wavelet functions ψ_{t_1,t′_1}^{i_1…i_k} of a specific shape, say, λ_1 = (n−k−2, 2), captures very similar information about functions in 𝕊_{i_1…i_k} as projecting onto the analogous ψ_{t_2,t′_2}^{j_1…j_{k′}} for functions in 𝕊_{j_1,…,j_{k′}}, if t_2 and t′_2 are of shape λ_2 = (n−k′−2, 2).

Taking (17) as our wavelet functions, we define the L-CMRA wavelet transform of a function f : S_n → ℝ as the collection of column vectors

w_f^φ(t) := (⟨f, φ_{t,t′}⟩)ᵀ_{t′∈T_{λ(t)}},   t ∈ τ_0,   (18)
w_f(t; i_1, …, i_k) := (⟨f, ψ_{t,t′}^{i_1…i_k}⟩)ᵀ_{t′∈T_{λ(t)}},   t ∈ ν_k,  {i_1, …, i_k} ⊂ {1, …, n},   (19)
where 0 ≤ k ≤ n−2, and ν_k is as in (15). Similarly, we define the Bi-CMRA wavelet transform of f as the collection of matrices

w_f^φ(λ) := (⟨f, φ_{t,t′}⟩)_{t,t′∈T_λ},   λ ∈ κ_0,   (20)
w_f(λ; i_1, …, i_k) := (⟨f, ψ_{t,t′}^{i_1…i_k}⟩)_{t,t′∈T_λ},   λ ∈ κ̄_k,  {i_1, …, i_k} ⊂ {1, …, n},   (21)

where 0 ≤ k ≤ n−2, and κ̄_k is as in (16).
4.1 Overcomplete wavelet bases
While the wavelet spaces W_0, …, W_{k-1} of Bi-CMRA are left- and right-invariant, the wavelets (17) still carry the mark of the coset tree, which is not a right-invariant object, since it branches in the specific order n, n−1, n−2, …. In contexts where wavelets are used as a means of promoting sparsity, this will bias us towards sparsity patterns that match the particular cosets featured in the coset tree. The only way to avoid this phenomenon is to span W_0, …, W_{k-1} with the overcomplete system of wavelets

ψ_{j_1…j_k, t, t′}^{i_1…i_k}(σ) := √(d_{λ(t)}/(n−k)!) [ρ_{λ(t)}(μ_{i_1…i_k}⁻¹ σ μ_{j_1…j_k})]_{t′,t} if σ ∈ μ_{i_1…i_k} S_{n-k} μ_{j_1…j_k}⁻¹, and 0 otherwise,

where now both {i_1, …, i_k} and {j_1, …, j_k} are allowed to run over all k-element subsets of {1, …, n}. While sacrificing orthogonality, such a basis is extremely well suited for sparse modeling in various applications.
5 Fast wavelet transforms
In the absence of fast wavelet transforms, multiresolution analysis would only be of theoretical
interest. Fortunately, our wavelet transforms naturally lend themselves to efficient recursive computation along branches of the coset tree. This is especially attractive when dealing with functions that
are sparse, since subtrees that only have zeros at their leaves can be eliminated from the transform
altogether.
function FastLCWT(f, τ, (i_1 … i_k)) {
  if k = n−1 then
    return(Scaling_τ(v(f)))
  end if
  v ← 0
  for each i_{k+1} ∉ {i_1 … i_k} do
    if P_{i_1…i_{k+1}} f ≠ 0 then
      v ← v + Φ_{i_{k+1}}(FastLCWT(f↓_{i_1…i_{k+1}}, τ↓_{n-k-1}, (i_1 … i_{k+1})))
    end if
  end for
  output Wavelet_{τ↓_{n-k-1}↑_{n-k}\τ}(v)
  return Scaling_τ(v) }

Algorithm 1: A high level description of a recursive algorithm that computes the wavelet transform (18)–(19). The function is called as FastLCWT(f, τ_0, ()). The symbol v stands for the collection of coefficient vectors {w_f(t; i_1 … i_k)}_{t ∈ τ↓_{n-k-1}↑_{n-k}}. The function Scaling selects the subset of these vectors that are scaling coefficients, whereas Wavelet selects the wavelet coefficients. f↓_{i_1…i_k} : S_{n-k} → ℝ is the restriction of f to μ_{i_1…i_k} S_{n-k}, i.e., f↓_{i_1…i_k}(τ) = f(μ_{i_1…i_k} τ).
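As a concrete, heavily simplified instance of this recursion, the sketch below implements the Haar special case of Example 1 on a sparse f, stored as a dict from permutation tuples to reals. Scaling coefficients are coset means; each wavelet coefficient records how a child coset's mean deviates from its parent's (a redundant encoding of W_k, since the deviations under one parent sum to zero); and empty sub-cosets are pruned, which is where the savings for sparse f come from. The function name and output format are our own, not the paper's.

def fast_haar_lcwt(f, n, prefix=(), wavelets=None):
    # Returns the mean of f over the coset selected by prefix = (sigma(n), sigma(n-1), ...)
    # and appends (prefix, child index, child mean - parent mean) triples to `wavelets`.
    if wavelets is None:
        wavelets = []
    k = len(prefix)
    support = [s for s in f if all(s[n - 1 - j] == prefix[j] for j in range(k))]
    if not support:               # empty sub-coset: prune the whole subtree
        return 0.0, wavelets
    if k == n - 1:                # leaf coset contains a single permutation
        return f[support[0]], wavelets
    child_means = {}
    for i in range(n):
        if i not in prefix:
            child_means[i], _ = fast_haar_lcwt(f, n, prefix + (i,), wavelets)
    # All child cosets have equal size, so the parent mean is the mean of child means.
    parent_mean = sum(child_means.values()) / (n - k)
    for i, m in child_means.items():
        if m != parent_mean:
            wavelets.append((prefix, i, m - parent_mean))
    return parent_mean, wavelets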
A very high level sketch of the resulting algorithm is given in Algorithm 1, while a more detailed description in terms of actual coefficient matrices is in Appendix E. Bi-CMRA would lead to a similar algorithm, which we omit for brevity. A key component of these algorithms is the function Φ_{i_{k+1}}, which serves to convert the coefficient vectors representing any g ∈ 𝕊_{i_1…i_{k+1}} in terms of the basis {ψ_{t,t′}^{i_1…i_{k+1}}}_{t,t′} to the coefficient vectors representing the same g in terms of {ψ_{t,t′}^{i_1…i_k}}_{t,t′}. While in general this can be a complicated and expensive linear transformation, due to the special properties of Young's orthogonal representation, in our case it reduces to

w_g(t; i_1 … i_k) = √(d_{λ′}(n−k)/d_λ) ρ_λ(⟦i_{k+1}, n−k⟧) [w_g(t′; i_1 … i_{k+1}) ↑^t],   (22)

where t′ = t↓_{n-k-1}; λ = λ(t); λ′ = λ(t′); ⟦i_{k+1}, n−k⟧ is a special permutation, called a contiguous cycle, that maps n−k to i_{k+1}; and ↑^t is a copy operation that promotes its argument to a d_λ-dimensional vector by

[w_g(t′; …) ↑^t]_{t″} = [w_g(t′; …)]_{t″↓_{n-k-1}} if t″↓_{n-k-1} ∈ T_{λ′}, and 0 otherwise.
Clausen's FFT [12] uses essentially the same elementary transformations to compute (1). However, whereas the FFT runs in O(n³ n!) operations, by working with the local wavelet functions (17) as opposed to (12) and (14), if f is sparse, Algorithm 1 needs only polynomial time.

Proposition 3 Given f : S_n → ℝ such that |supp(f)| ≤ q, and τ_0 ⊆ T_n, Algorithm 1 can compute the L-CMRA wavelet coefficients (18)–(19) in n²Nq scalar operations, where N = Σ_{t∈τ_1} d_{λ(t)}. The analogous Bi-CMRA transform runs in n²Mq time, where M = Σ_{λ∈κ_1} d_λ².

To estimate the N and M constants in this result, note that for partitions with λ_1 ≫ λ_2, λ_3, …, we have d_λ = O(n^{n−λ_1}). For example, d_{(n−1,1)} = n−1, d_{(n−2,2)} = n(n−3)/2, etc. The inverse wavelet transforms essentially follow the same computations in reverse and have similar complexity bounds.
6 Applications
There is a range of applied problems involving permutations that could benefit from the wavelets defined in this paper. In this section we mention just two potential applications.

6.1 Spectral analysis of ranking data
Given a distribution p over permutations, the matrix M_k of k-th order marginals is

[M_k]_{j_1…j_k; i_1…i_k} = p( σ(i_1) = j_1, …, σ(i_k) = j_k ) = Σ_{σ∈S_{i_1…i_k}^{j_1…j_k}} p(σ),

where S_{i_1…i_k}^{j_1…j_k} is the two-sided coset μ_{j_1…j_k} S_{n-k} μ_{i_1…i_k}⁻¹ := { μ_{j_1…j_k} τ μ_{i_1…i_k}⁻¹ | τ ∈ S_{n-k} }.

Clearly, these matrices satisfy a number of linear equations, and therefore are redundant. However, it can be shown that for some appropriate basis transformation matrix T_k,

M_k = T_kᵀ [ ⊕_{λ∈Λ_n : λ_1≥n−k} p̂(λ) ] T_k,

i.e., the Fourier matrices {p̂(λ)}_{λ : λ_1=n−k} capture exactly the "pure k-th order effects" in the distribution p. In the spectral analysis of rankings, as advocated, e.g., in [7], there is a lot of emphasis on projecting data to this space, Marg_k, but using an FFT this takes around O(n² n!) time. On the other hand, Marg_k is exactly the wavelet space W_{k-1} of the Bi-CMRA generated by κ_0 = {(n)} of Example 2. Therefore, when p is q-sparse, noting that d_{(n−1,1)} = n−1, by using the methods of the previous section, we can find its projection to each of these spaces in just O(n⁴q) time.
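For the k = 1 case, the marginals themselves are cheap to form directly from a sparse p; a sketch, with p stored as a dict from permutation tuples to probabilities, costing O(qn) in line with the sparsity argument above:

# First-order marginals M[j, i] = P(sigma(i) = j); the result is doubly stochastic.
import numpy as np

def first_order_marginals(p, n):
    M = np.zeros((n, n))
    for sigma, prob in p.items():
        for i in range(n):
            M[sigma[i], i] += prob
    return M

p = {(0, 1, 2): 0.5, (2, 1, 0): 0.5}
print(first_order_marginals(p, 3))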
6.2 Multi-object tracking
In multi-object tracking, as mentioned in the Introduction, the first few Fourier coefficients {p̂(λ)}_{λ∈κ} (w.r.t. the majorizing order on permutations) provide an optimal approximation to the assignment distribution p between targets and tracks in the face of a random noise process [2][1]. However, observing target i at track j will zero out p everywhere outside the coset μ_j S_{n-1} μ_i⁻¹, which is difficult for the Fourier approach to handle. In fact, by analogy with (7), denoting the operator that projects to the space of functions supported on this coset by P_{ij}, the new distribution will just be P_{ij} p. Thus, if we set κ_0 = κ, after any single observation, our distribution will lie in V_1 of the corresponding Bi-CMRA.

Unfortunately, after a second observation, p will fall in V_2, etc., leading to a combinatorial explosion in the size of the space needed to represent p. However, while each observation makes p less smooth, it also makes it more concentrated, suggesting that this problem is ideally suited to a sparse representation in terms of the overcomplete basis functions of Section 4.1. The important departure from the fast wavelet transforms of Section 5 is that now, to find the optimally sparse representation of p, we must allow branching to two-sided cosets of the form μ_{j_1…j_k} S_{n-k} μ_{i_1…i_k}⁻¹, which are no longer mutually disjoint.
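The conditioning step itself is one line for a sparse p. The sketch below applies P_{ij} and then renormalizes so that the result remains a distribution (the renormalization is our addition; P_{ij} alone is just the restriction).

# Condition a sparse assignment distribution p on "object i is at track j":
# keep only permutations with sigma(i) = j (support mu_j S_{n-1} mu_i^{-1}).
def condition_on_observation(p, i, j):
    q = {sigma: prob for sigma, prob in p.items() if sigma[i] == j}
    z = sum(q.values())
    return {sigma: prob / z for sigma, prob in q.items()} if z > 0 else {}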
7 Conclusions
Starting from the self-similar structure of the S_{n-k} coset tree, we developed a framework for wavelet analysis on the symmetric group. Our framework resembles Mallat's multiresolution analysis in its axiomatic foundations, yet is closer to continuous wavelet transforms in its invariance properties. It also has strong ties to the "separation of variables" technique of non-commutative FFTs [14]. In a certain special case we recover the analog of Haar wavelets on the coset tree. In general, wavelets can circumvent the rigidity of the Fourier approach when dealing with functions that are sparse and/or have discontinuities, and, in contrast to the O(n² n!) complexity of the best FFTs, for sparse functions and a reasonable choice of τ_0, our fast wavelet transform runs in O(nᵖ) time for some small p. Importantly, wavelets also provide a natural basis for sparse approximations, which have hitherto not been explored much in the context of permutations. Finally, much of our framework is applicable not just to the symmetric group, but to other finite groups as well.
References
[1] J. Huang, C. Guestrin, and L. Guibas. Fourier theoretic probabilistic inference over permutations. Journal of Machine Learning Research, 10:997–1070, 2009.
[2] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group. In Artificial Intelligence and Statistics (AISTATS), 2007.
[3] S. Jagabathula and D. Shah. Inferring rankings under constrained sensing. In Advances in Neural Information Processing Systems (NIPS), 2008.
[4] J. Huang, C. Guestrin, X. Jiang, and L. Guibas. Exploiting probabilistic independence for permutations. In Artificial Intelligence and Statistics (AISTATS), 2009.
[5] X. Jiang, J. Huang, and L. Guibas. Fourier-information duality in the identity management problem. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), Athens, Greece, September 2011.
[6] D. Rockmore, P. Kostelec, W. Hordijk, and P. F. Stadler. Fast Fourier transforms for fitness landscapes. Applied and Computational Harmonic Analysis, 12(1):57–76, 2002.
[7] P. Diaconis. Group Representations in Probability and Statistics. Institute of Mathematical Statistics, 1988.
[8] M. Gavish, B. Nadler, and R. R. Coifman. Multiscale wavelets on trees, graphs and high dimensional data: theory and applications to semi supervised learning. In International Conference on Machine Learning (ICML), 2010.
[9] R. R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21, 2006.
[10] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30:129–150, 2011.
[11] S. G. Mallat. A theory for multiresolution signal decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:674–693, 1989.
[12] M. Clausen. Fast generalized Fourier transforms. Theor. Comput. Sci., 67(1):55–63, 1989.
[13] D. Maslen and D. Rockmore. Generalized FFTs: a survey of some recent results. In Groups and Computation II, volume 28 of DIMACS Ser. Discrete Math. Theor. Comput. Sci., pages 183–287. AMS, Providence, RI, 1997.
[14] D. K. Maslen and D. N. Rockmore. Separation of variables and the computation of Fourier transforms on finite groups, I. Journal of the American Mathematical Society, 10:169–214, 1997.
Algorithms for Learning Markov Field Policies
Oliver Krömer, Jan Peters
Technische Universität Darmstadt
{oli,jan}@robot-learning.de
Abdeslam Boularias
Max Planck Institute for Intelligent Systems
[email protected]
Abstract
We use a graphical model for representing policies in Markov Decision Processes.
This new representation can easily incorporate domain knowledge in the form of
a state similarity graph that loosely indicates which states are supposed to have
similar optimal actions. A bias is then introduced into the policy search process
by sampling policies from a distribution that assigns high probabilities to policies
that agree with the provided state similarity graph, i.e. smoother policies. This
distribution corresponds to a Markov Random Field. We also present forward
and inverse reinforcement learning algorithms for learning such policy distributions. We illustrate the advantage of the proposed approach on two problems:
cart-balancing with swing-up, and teaching a robot to grasp unknown objects.
1 Introduction
Markov Decision Processes (MDP) provide a rich and elegant mathematical framework for solving
sequential decision-making problems. In practice, significant domain knowledge is often necessary
for finding a near-optimal policy in a reasonable amount of time. For example, one needs a suitable
set of basis functions, or features, to approximate the value functions in reinforcement learning and
the reward functions in inverse reinforcement learning. Designing value or reward features can itself
be a challenging problem. The features can be noisy, misspecified or insufficient, particularly in
certain complex robotic tasks such as grasping and manipulating objects. In this type of applications,
the features are mainly acquired through vision, which is inherently noisy. Many features are also
nontrivial, such as the features related to the shape of an object, used for calculating grasp stability.
In this paper, we show how to overcome the difficult problem of designing precise value or reward
features. We draw our inspiration from computer vision wherein similar problems have been efficiently solved using a family of graphical models known as Markov Random Fields (MRFs) (Kohli
et al., 2007; Munoz et al., 2009). We start by specifying a graph that loosely indicates which pairs of
states are supposed to have similar actions under an optimal policy. In an object manipulation task
for example, the states correspond to the points of contact between the robot hand and the object
surface. A state similarity graph can be created by sampling points on the surface of the object and
connecting each point to its k nearest neighbors using the geodesic or the Euclidean distance. The
adjacency matrix of this graph can be interpreted as the Gram matrix of a kernel that can be used
to approximate the optimal value function. Kernels have been widely used before in reinforcement
learning (Ormoneit & Sen, 1999), however, they were used for approximating the values of different
policies in a search for an optimal policy. Therefore, the kernels should span not only the optimal
value function, but also the values of intermediate policies.
In this paper, kernels will be used for a different purpose. We only require that the kernel spans the
value function of an optimal policy. Therefore, the value function of an optimal policy is assumed to
have a low approximation error, measured by the Bellman error, using that kernel. Subsequently, we
derive a distribution on policies, wherein the probability of a policy is proportional to its estimated
value, and inversely proportional to its Bellman error. In other terms, the Bellman error is used
as a surrogate function for measuring how close a policy is to an optimal one. We show that this
probability distribution is an MRF, and use a Markov chain Monte Carlo algorithm for sampling
policies from it. We also describe an apprenticeship learning algorithm based on the same principal.
A preliminary version of some parts of this work was presented in (Boularias et al., 2012).
2 Notations
Formally, a finite-horizon Markov Decision Process (MDP) is a tuple (S, A, T, R, H, γ), where S is a set of states and A is a set of actions, T is a transition function with T(s, a, s′) = P(s_{t+1} = s′ | s_t = s, a_t = a) for s, s′ ∈ S, a ∈ A, and R is a reward function where R(s, a) is the reward given for action a in state s. To ease notation and without loss of generality, we restrict our theoretical analysis to the case where rewards depend only on states, and denote by R an |S| × 1 vector. H is the planning horizon and γ ∈ [0, 1] is a discount factor. A deterministic policy π is a function that returns an action a = π(s) for each state s. T_π is defined as T_π(s, s′) = T(s, π(s), s′). We denote by π_{t:H} a non-stationary policy (π_t, π_{t+1}, …, π_H), where π_i is a policy at time-step i. The value of policy π_{t:H} is the expected sum of rewards received by following π_{t:H}, starting from a state s at time t: V_{π_{t:H}}(s) = Σ_{i=t}^{H} γ^{i−t} E_{s_i}[R(s_i) | s_t = s, T_{π_{t:i}}]. An optimal policy π*_{t:H} is one satisfying π*_{t:H} ∈ arg max_{π_{t:H}} V_{π_{t:H}}(s), ∀s ∈ S. Searching for an optimal policy is generally an iterative process with two phases: policy evaluation and policy improvement.

When the state space S is large or continuous, the value function V_{π_{t:H}} is approximated by a linear combination of n basis functions, or features. Let f_i be a |S||A| × 1 vector corresponding to the i-th basis function, and let F be the |S||A| × n matrix with columns f_i. Let Φ_{π_t} be an |S| × |S||A| action-selection matrix defined as Φ_{π_t}(s, (s, π_t(s))) = 1 and 0 otherwise. Then V_{π_{t:H}} = F_{π_t} w, where w is an n × 1 weight vector and F_{π_t} = Φ_{π_t} F. We define the Bellman error of two consecutive policies π_t and π_{t+1}, using the feature matrix F and the weights w_t, w_{t+1} ∈ ℝⁿ, as BE(F, w_{t:t+1}, π_{t:t+1}) = ‖F_{π_t} w_t − γ T_{π_t} F_{π_{t+1}} w_{t+1} − R‖₁. Similarly, we define the Bellman error of a distribution P on policies π_t and π_{t+1} as BE(F, w_{t:t+1}, P) = ‖E_{π_{t:t+1}∼P}[F_{π_t} w_t − γ T_{π_t} F_{π_{t+1}} w_{t+1}] − R‖₁. We also define the minimum Bellman error as BE*(F, π_{t:t+1}) = min_{w_{t:t+1}} BE(F, w_{t:t+1}, π_{t:t+1}) and the total Bellman error as BE(F, w_{0:H}, π_{0:H}) = Σ_{t=0}^{H−1} BE(F, w_{t:t+1}, π_{t:t+1}).
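A numerical sketch of these definitions on a small random MDP follows; all sizes and the random instance are our own choices for illustration.

# Bellman error BE(F, w_{t:t+1}, pi_{t:t+1}) on a toy MDP with state-only rewards;
# Phi_pi picks, for each state s, the row of F corresponding to (s, pi(s)).
import numpy as np

rng = np.random.default_rng(0)
S, A, n_feat, gamma = 5, 3, 4, 0.9

F = rng.normal(size=(S * A, n_feat))           # feature matrix, rows indexed by (s, a)
T = rng.random(size=(S, A, S)); T /= T.sum(axis=2, keepdims=True)
R = rng.random(size=S)

def F_pi(pi):                                  # F_pi = Phi_pi F, an |S| x n matrix
    return F[np.arange(S) * A + pi]

def T_pi(pi):                                  # T_pi(s, s') = T(s, pi(s), s')
    return T[np.arange(S), pi]

def bellman_error(pi_t, pi_t1, w_t, w_t1):
    residual = F_pi(pi_t) @ w_t - gamma * T_pi(pi_t) @ (F_pi(pi_t1) @ w_t1) - R
    return np.abs(residual).sum()              # L1 norm, as in the text

pi_t, pi_t1 = rng.integers(A, size=S), rng.integers(A, size=S)
w = rng.normal(size=n_feat)
print(bellman_error(pi_t, pi_t1, w, w))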
3 Markov Random Field Policies for Reinforcement Learning
We now present the reinforcement learning approach using the Bellman error as a structure penalty.
3.1 Structure penalty
Optimal policies of many real-world problems are structured and change smoothly over the state
space. Therefore, the optimal value function can often be approximated by simple features, compared to the value functions of arbitrary policies. We exploit this property and propose to indirectly
use these features, provided as domain knowledge, for accelerating the search for an optimal policy.
Specifically, we restrain the policy search to a set of policies that have a low estimated Bellman error
when their values are approximated using the provided features, knowing that the optimal policy has
a low Bellman error. Note that our approach is complementary to function approximation methods.
We only use the features for calculating Bellman errors, the value functions can be approximated by
using other methods, such as LSTD (Boyan, 2002).
Let K_π be the Gram matrix defined as K_π = Φ_π K Φ_πᵀ, where K = FFᵀ. Matrix K is the adjacency matrix of a graph that indicates which states and actions are similar under an optimal policy. Feature matrix F is not explicitly required, as only the matrix K will be used later. Therefore, the user needs only to provide a similarity measure between states, such as the Euclidean distance.

Let w_t, w_{t+1} ∈ ℝ^{|S|} and ε ∈ ℝ; if ‖E_{π_{t:t+1}∼P}[K_{π_t} w_t − γ T_{π_t} K_{π_{t+1}} w_{t+1}] − R‖₁ ≤ ε, then BE*(F, P) ≤ ε. This result is obtained by setting Fᵀ Φ_{π_t}ᵀ w_t and Fᵀ Φ_{π_{t+1}}ᵀ w_{t+1} as the weight vectors of the values of policies π_t and π_{t+1}. The condition above implies that the policy distribution P has a value function that can be approximated by using F. Enforcing this condition results in a bias favoring policies with a low Bellman error. Thus, we are interested in learning a distribution P(π_{0:H}) that satisfies this condition, while maximizing its expected value.
Distribution P can be decomposed using the chain rule as P(π_{0:H}) = P(π_H) Π_{t=0}^{H−1} P(π_t | π_{t+1:H}). We start by calculating a distribution over deterministic policies π_H that will be executed at the last time-step H. Then, for each step t ∈ {H−1, …, 0}, we calculate a distribution P(π_t | π_{t+1:H}) over deterministic policies π_t given policies π_{t+1:H} that we sample from P(π_{t+1:H}). In the following, we show how to calculate P(π_t | π_{t+1:H}).
3.2 Primal problem
Let η ∈ ℝ be a lower bound on the entropy of a distribution P on deterministic policies π_t, conditioned on π_{t+1:H}; η is used for tuning the exploration. Our problem can then be formulated as

max_P Σ_{s∈S} E_P[V^{π_{t:H}}](s),   subject to g₁(P) = 1, g₂(P) ≥ η, ‖g₃(P) − R‖₁ ≤ ε,   (1)

where

g₁(P) = Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}),   g₂(P) = − Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) log P(π_t|π_{t+1:H}),
g₃(P) = Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) [K_{π_t} w_t − γ T_{π_t} K_{π_{t+1}} w_{t+1}],   E_P[V^{π_{t:H}}] = Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) V^{π_{t:H}}.
The objective function in Equation 1 is linear and its constraints define a convex set. Therefore, the
optimal solution to Problem 1 can be found by solving its Lagrangian dual.
3.3 Dual problem
The Lagrangian dual is given by

L(P, λ, τ, θ) = Σ_{s∈S} E_P[V^{π_{t:H}}](s) − λ (g₁(P) − 1) + τ (g₂(P) − η) + θᵀ (g₃(P) − R) + ε‖θ‖₁,

where λ, τ ∈ ℝ and θ ∈ ℝ^{|S|}. We refer the reader to Dudik et al. (2004) for a detailed derivation.

∂L(P, λ, τ, θ) / ∂P(π_t|π_{t+1:H}) = Σ_{s∈S} V^{π_{t:H}}(s) + θᵀ[K_{π_t} w_t − γ T_{π_t} K_{π_{t+1}} w_{t+1}] − τ log P(π_t|π_{t+1:H}) − τ − λ.

By setting ∂L(P, λ, τ, θ) / ∂P(π_t|π_{t+1:H}) = 0 (Karush-Kuhn-Tucker condition), we get the solution

P(π_t|π_{t+1:H}) ∝ exp( (1/τ) [ Σ_{s∈S} V^{π_{t:H}}(s) + θᵀ(K_{π_t} w_t − γ T_{π_t} K_{π_{t+1}} w_{t+1}) ] ),

where the first term inside the brackets is the expected sum of rewards, the second is the smoothness term, and 1/τ acts as an exploration factor.

This distribution on joint actions is a Markov Random Field. In fact, the kernel K = FFᵀ is the adjacency matrix of a graph (S, E), where (s_i, s_j) ∈ E if and only if ∃a_i, a_j ∈ A : K((s_i, a_i), (s_j, a_j)) ≠ 0. The local Markov property is verified: ∀s_i ∈ S,

P(π_t(s_i) | π_{t+1:H}, {π_t(s_j) : s_j ∈ S, s_j ≠ s_i}) = P(π_t(s_i) | π_{t+1:H}, {π_t(s_j) : (s_i, s_j) ∈ E, s_j ≠ s_i}).

In other terms, the probability of selecting an action in a given state depends on the expected long term reward of the action, as well as on the selected actions in the neighboring states. Dependencies between neighboring states are due to the smoothness term in the distribution.
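Sampling from such a distribution can exploit exactly this property. Below is a sketch of a single-site Gibbs sweep, assuming the unnormalized log-probability decomposes as Σ_s Q[s, π(s)] + λ Σ_{(s,s′)∈E} K[s, π(s), s′, π(s′)] with a symmetric K; the arrays Q, K and the edge list E stand in for the quantities above and are assumptions of this sketch.

# One Gibbs sweep over a joint-action MRF: resample pi(s) given its neighbors.
import numpy as np

def gibbs_sweep(pi, Q, K, E, lam, rng):
    S, A = Q.shape
    nbrs = [[] for _ in range(S)]
    for s, sp in E:
        nbrs[s].append(sp); nbrs[sp].append(s)
    for s in range(S):
        logits = Q[s].copy()                    # expected-return term
        for sp in nbrs[s]:
            logits += lam * K[s, :, sp, pi[sp]]  # smoothness with neighbors' actions
        p = np.exp(logits - logits.max()); p /= p.sum()
        pi[s] = rng.choice(A, p=p)
    return pi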
3.4 Learning parameters
Our goal now is to learn the distribution P, which is parameterized by τ, θ, w_{t:t+1} and V^{π_{t:H}}. Given that the transition function T is unknown, we use samples D = {(s_t, a_t, r_t, s_{t+1})} for approximating the gradients of the parameters and the value function V^{π_{t:H}}. We also restrain K_{π_t} to states and actions that appear in the samples, and denote by T̂_{π_t} the empirical transition matrix of the sampled states. Since P(π_{0:H}) = P(π_H) Π_{t=0}^{H−1} P(π_t|π_{t+1:H}), then

P(π_{0:H}) ∝ exp( (1/τ) Σ_{t=0}^{H} [ Σ_{s∈D} V^{π_{t:H}}(s) + θ_tᵀ(K_{π_t} w_t − γ T̂_{π_t} K_{π_{t+1}} w_{t+1}) ] ).   (2)
The value function V^{π_{t:H}} is empirically calculated from the samples by using a standard value function approximation algorithm, such as LSTD (Boyan, 2002). Temperature τ determines the entropy of the distribution P; τ is initially set to a high value and gradually decreased over time as more samples are collected. One can use the same temperature for all time-steps within the same episode, or a different one for each step. Since the Lagrangian L is convex, parameters θ_t can be learned by a simple gradient descent. Algorithm 1 summarizes the principal steps of the proposed approach. The algorithm iterates between two main steps: (i) sampling and executing policies from Equation 2, and (ii) updating the value functions and the parameters θ_t using the samples. The weight vectors w_{0:H} are the ones that minimize the empirical Bellman error in samples D; they are also found by a gradient descent, wherein ∇_{w_{0:H}} BE(K, w_{0:H}, π_{0:H}) is estimated from D.
Algorithm 1 Episodic Policy Search with Markov Random Fields
Initialize the temperature τ with a large value, and θ_{0:H} with 0.
repeat
1. Sample policies π_{0:H} from P (Equation 2).
2. Discard policies π_{0:H} that have an empirical Bellman error higher than ε.
3. Execute π_{0:H} and collect D = {(s_t, a_t, r_t, s_{t+1})}.
4. Update the value functions V^{π_{t:H}} by using LSTD with D.
5. Find θ_{0:H} that minimizes the dual L by a gradient descent; ∇_θ L is estimated from D.
6. Decrease the temperature τ.
until τ ≤ τ_min
The main assumption behind this algorithm is that the kernel K approximates the optimal value function sufficiently well; what happens when this is not the case? The introduced bias will favor suboptimal policies. However, this problem can be solved by setting the threshold ε to a high value when the user is uncertain about the domain knowledge provided by K. Our experiments confirm that even a binary matrix K, corresponding to a k-NN graph, can yield an improved performance. This approach is straightforward to extend to handle samples of continuous states and actions, in which case a policy is represented by a vector π_t ∈ ℝ^N of continuous parameters (for instance, the center and the width of a Gaussian). Therefore, Equation 2 defines a distribution P(π_{0:H}). In our experiments, we use the Metropolis-Hastings algorithm for sampling π_{0:H} from P.
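A sketch of such a sampler follows, assuming a symmetric random-walk proposal and a user-supplied score(theta) that returns the bracketed term of Equation 2 (estimated value plus smoothness) for the policy with parameters theta; both names are our own.

# Metropolis-Hastings over continuous policy parameters, target exp(score/tau).
import numpy as np

def mh_policy_search(score, theta0, tau, steps, step_size, rng):
    theta, s = theta0, score(theta0)
    for _ in range(steps):
        prop = theta + step_size * rng.normal(size=theta.shape)
        sp = score(prop)
        # Accept with probability min(1, exp((sp - s) / tau)).
        if rng.random() < np.exp(min(0.0, (sp - s) / tau)):
            theta, s = prop, sp
    return theta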
4 Markov Random Field Policies for Apprenticeship Learning
We now derive a policy shaping approach for apprenticeship learning using Markov Random Fields.
4.1 Apprenticeship learning
The aim of apprenticeship learning is to find a policy π that is nearly as good as a policy π̂ demonstrated by an expert, i.e., V_π(s) ≥ V_π̂(s) − ε, ∀s ∈ S. Abbeel & Ng (2004) proposed to learn a reward function, assuming that the expert is optimal, and to use it to recover the expert's generalized policy. The process of learning a reward function is known as inverse reinforcement learning. The reward function is assumed to be a linear combination of m feature vectors φ_k with weights θ_k: ∀s ∈ S, R(s) = Σ_{k=1}^m θ_k φ_k(s). The expected discounted sum of feature φ_k, given policy π_{t:H} and starting from s, is defined as φ_k^{π_{t:H}}(s) = Σ_{i=t}^H γ^{i−t} E_{s_{t:H}}[φ_k(s_i) | s_t = s, T_{π_{t:i}}]. Using this definition, the expected return of a policy π can be written as a linear function of the feature expectations: V_{π_{t:H}}(s) = Σ_{k=1}^m θ_k φ_k^{π_{t:H}}(s). Since this problem is ill-posed, Ziebart et al. (2008) proposed to use maximum entropy regularization, while matching the expected return of the examples. This latter constraint can be satisfied by ensuring that ∀k, s : φ_k^π(s) = φ̂_k, where φ̂_k denotes the empirical expectation of feature φ_k calculated from the demonstration.
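Estimating φ̂_k from demonstrations is a discounted sum; a sketch, assuming each demonstration is a list of states and phi(s) returns the m-dimensional feature vector of s:

# Empirical feature expectations: discounted feature sums averaged over demos.
import numpy as np

def empirical_feature_expectations(trajectories, phi, gamma):
    totals = None
    for traj in trajectories:  # traj: list of visited states
        disc = sum(gamma ** t * phi(s) for t, s in enumerate(traj))
        totals = disc if totals is None else totals + disc
    return totals / len(trajectories)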
4.2 Structure matching
The classical framework of apprenticeship learning is based on designing features φ of the reward and learning corresponding weights θ. In practice, as we show in the experiments, it is often difficult to find an appropriate set of reward features. Moreover, the values of the reward features are usually obtained from empirical data and are subject to measurement errors. However, most real-world problems exhibit a structure wherein states that are close together tend to have the same optimal action. This information about the structure of the expert's policy can be used to partially overcome the problem of finding reward features. The structure is given by a kernel that measures similarities between states. Given an expert's policy π̂_{0:H} and feature matrix F, we are interested in finding a distribution P on policies π_{0:H} that has a Bellman error similar to that of the expert's policy. The following proposition states sufficient conditions for solving this problem.
Proposition 1. Let F be a feature matrix, K = FFᵀ, K_π = Φ_π K Φ_πᵀ. Let P be a distribution on policies π_t and π_{t+1} such that E_{π_{t:t+1}∼P}[K_{π_t}] = K_{π̂_t} and E_{π_{t:t+1}∼P}[γ T_{π_t} K_{π_{t+1}} T_{π_t}ᵀ] = γ T_{π̂_t} K_{π̂_{t+1}} T_{π̂_t}ᵀ; then BE*(F, π̂_{t:t+1}) = BE*(F, P).
Proof. We prove that BE*(F, P) ≤ BE*(F, π̂_{t:t+1}). The same argument can be used for proving that BE*(F, π̂_{t:t+1}) ≤ BE*(F, P). This proof borrows the orthogonality technique used for proving the Representer Theorem (Schölkopf et al., 2001). Let ŵ_t, ŵ_{t+1} ∈ ℝ^{|S|} be the weight vectors that minimize the Bellman error of the expert's policy, i.e. ‖Φ_{π̂_t} F ŵ_t − γ T_{π̂_t} Φ_{π̂_{t+1}} F ŵ_{t+1} − R‖₁ = BE*(F, π̂_{t:t+1}). Let us write ŵ_t = ŵ_t^∥ + ŵ_t^⊥, where ŵ_t^∥ is the projection of ŵ_t on the rows of Φ_{π̂_t} F, i.e. ∃ α̂_t ∈ ℝ^{|S|} : ŵ_t^∥ = Fᵀ Φ_{π̂_t}ᵀ α̂_t, and ŵ_t^⊥ is orthogonal to the rows of Φ_{π̂_t} F. Thus, Φ_{π̂_t} F ŵ_t = Φ_{π̂_t} F (ŵ_t^∥ + ŵ_t^⊥) = Φ_{π̂_t} F ŵ_t^∥ = K_{π̂_t} α̂_t. Similarly, one can show that γ T_{π̂_t} Φ_{π̂_{t+1}} F ŵ_{t+1} = γ T_{π̂_t} K_{π̂_{t+1}} T_{π̂_t}ᵀ α̂_{t+1}. Let w_t = Fᵀ Φ_{π_t}ᵀ α̂_t and w_{t+1} = Fᵀ Φ_{π_{t+1}}ᵀ T_{π_t}ᵀ α̂_{t+1}; then we have BE*(F, P) ≤ ‖E_{π_{t:t+1}∼P}[Φ_{π_t} F w_t − γ T_{π_t} Φ_{π_{t+1}} F w_{t+1}] − R‖₁ = ‖E_{π_{t:t+1}∼P}[K_{π_t} α̂_t − γ T_{π_t} K_{π_{t+1}} T_{π_t}ᵀ α̂_{t+1}] − R‖₁ = ‖K_{π̂_t} α̂_t − γ T_{π̂_t} K_{π̂_{t+1}} T_{π̂_t}ᵀ α̂_{t+1} − R‖₁ = BE*(F, π̂_{t:t+1}).
4.3 Problem statement
Our problem now is to find a distribution on deterministic policies P that satisfies the conditions stated in Proposition 1, in addition to the feature matching conditions φ_k^π(s) = φ̂_k. The conditions of Proposition 1 ensure that P assigns high probabilities to policies that have a structure similar to the expert's policy π̂. The feature matching constraints ensure that the expected value under P is the same as the value of the expert's policy. Given that there are infinitely many solutions to this problem, we select a distribution P that has maximal entropy (Ziebart et al., 2008):

max_P − Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) log P(π_t|π_{t+1:H}),

subject to

Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) = 1,   Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) φ^{π_{t:H}} = φ̂,
Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) K_{π_t} = K_{π̂_t},   γ T_{π̂_t} K_{π̂_{t+1}} T_{π̂_t}ᵀ = Σ_{π_t∈A^{|S|}} P(π_t|π_{t+1:H}) γ T_{π_t} K_{π_{t+1}} T_{π_t}ᵀ,

where φ^{π_{t:H}}(s, k) := φ_k^{π_{t:H}}(s) (defined in subsection 4.1). The objective function of this problem is concave and the constraints are linear. Note that the last three equalities are between matrices.
4.4 Solution
By setting the derivatives of the Lagrangian to zero (as in subsection 3.3), we derive the distribution

P(π_t|π_{t+1:H}) ∝ exp( Σ_k Σ_{s∈S} θ_{ks} φ_k^{π_{t:H}}(s) + Σ_{(s_i,s_j)∈S²} λ_{i,j} K_{π_t}(s_i, s_j) + γ Σ_{(s_i,s_j)∈S²} ω_{i,j} (T_{π_t} K_{π_{t+1}} T_{π_t}ᵀ)(s_i, s_j) ).

Again, this distribution is a Markov Random Field. The parameters θ, λ and ω are learned by maximizing the likelihood P(π̂_{t:H}) of the expert's policy π̂_{t:H}. The learned parameters can then be used for sampling policies that have the same expected value (from the second constraint), and the same Bellman error (from the last two constraints and Proposition 1), as the expert's policy. If kernel K is inaccurate, then the learned λ and ω will take low values to maximize the likelihood of the expert's policy. Hence, our approach will be reduced to MaxEnt IRL (Ziebart et al., 2008).

For simplicity, we consider an approximate solution with fewer parameters in our experiments, where each θ_{ks} is replaced by θ_k ∈ ℝ. This simplification is based on the fact that the reward function is independent of the initial state. We also replace λ_{i,j} by λ ∈ ℝ, and ω_{i,j} by ω ∈ ℝ.
For a sparse matrix K, one can create a corresponding graph (S, E), where (s_i, s_j) ∈ E if and only if ∃a_i, a_j ∈ A : K((s_i, a_i), (s_j, a_j)) ≠ 0 or ∃a_i, a_j ∈ A, (s′_i, s′_j) ∈ E : γ T(s_i, a_i, s′_i) T(s_j, a_j, s′_j) ≠ 0. Finally, the policy distribution can be rewritten as

P(π_t|π_{t+1:H}) ∝ exp( Σ_{s∈S} V̂^{π_{t:H}}(s) + λ Σ_{(s_i,s_j)∈E} K_{π_t}(s_i, s_j) + ωγ Σ_{(s_i,s_j)∈E} (T_{π_t} K_{π_{t+1}} T_{π_t}ᵀ)(s_i, s_j) ),   (3)

where V̂^{π_{t:H}}(s) = Σ_k θ_k φ_k(s) + γ Σ_{s′∈S} T_{π_t}(s, s′) V̂^{π_{t+1:H}}(s′).
The distribution given by Equation 3 is a Markov Random Field. The probability of choosing action a in a given state s depends on the expected value of (s, a) and the actions chosen in neighboring states. There is a clear similarity between this distribution of joint actions and the distribution of joint labels in Associative Markov Networks (AMN) (Taskar, 2004). In fact, the proposed framework generalizes AMN to sequential decision-making problems. Also, the MaxEnt method (Ziebart et al., 2008) can be derived from Equation 3 by setting λ = 0.
        γ = 0                γ ≠ 0
λ = 0   Logistic regression  MaxEnt IRL (Ziebart et al., 2008)
λ ≠ 0   AMN (Taskar, 2004)   AL-MRF

Table 1: Relation between Apprenticeship Learning with MRFs (AL-MRF) and other methods.
4.5 Learning procedure
In the learning phase, Equation 3 is used for finding parameters θ, λ and ω that maximize the likelihood of the expert's policy π̂. Since this likelihood function is concave, a global optimum can be found by using standard optimization methods, such as BFGS. A main drawback of our approach is the high computational cost of calculating the partition function of Equation 3, which is O(|A|^{|S|} |S|²). In practice, this problem can be addressed by using several possible tricks. For instance, we reuse the values calculated for a given policy π as the initial values of all the policies that differ from π in one state only. We also decompose the state space into a set of weakly connected components, and separately calculate the partition function of each component. One can also use recent efficient learning techniques for MRFs, such as (Krähenbühl & Koltun, 2011).
4.6 Planning procedure
Algorithm 2 describes a dynamic programming procedure for finding a policy (π*_0, π*_1, …, π*_H) that satisfies ∀t ∈ [0, H] : π*_t ∈ arg max_{π_t∈A^{|S|}} P(π_t | π*_{t+1:H}). The planning problem is reduced to a sequence of inference problems in Markov Random Fields. The inference problem itself can also be efficiently solved using techniques such as graph min-cut (Boykov et al., 1999), α-expansions and linear programming relaxation (Taskar, 2004). We use the α-expansions for our experiments.
Algorithm 2 Dynamic Programming for Markov Random Field Policies
∀(s, a) ∈ S × A : Q_{H+1}(s, a) = 0.
for t = H : 0 do
1. ∀(s, a) ∈ S × A : Q_t(s, a) = Σ_k θ_k φ_k(s) + γ Σ_{s′} T(s, a, s′) Q_{t+1}(s′, π*_{t+1}(s′))
2. Use an inference algorithm (such as the α-expansions) in the MRF defined on the graph (S, E) to label states with actions: the cost of labeling s with a is −Q_t(s, a), and the potential of (s_i, a_i, s_j, a_j) is λ K(s_i, a_i, s_j, a_j) + ωγ Σ_{(s′_i,s′_j)∈E} T(s_i, a_i, s′_i) T(s_j, a_j, s′_j) K_{π*_{t+1}}(s′_i, s′_j).
3. Denote by π*_t the labeling policy returned by the inference algorithm;
end for
Return the policy π* = (π*_0, π*_1, …, π*_H);
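As a stand-in for the α-expansion step, the sketch below uses iterated conditional modes (ICM), a much weaker but very short inference routine: each state is greedily relabeled to maximize its Q-value plus the pairwise agreement with its neighbors' current actions. This is only an illustration of step 2; the paper's experiments use α-expansions.

# ICM labeling of states with actions for a pairwise MRF.
import numpy as np

def icm_label(Q, pairwise, E, n_sweeps=10):
    # Q: |S| x |A| values; pairwise[s, a, sp, ap]: edge potential; E: list of edges.
    S, A = Q.shape
    pi = Q.argmax(axis=1)
    nbrs = [[] for _ in range(S)]
    for s, sp in E:
        nbrs[s].append(sp); nbrs[sp].append(s)
    for _ in range(n_sweeps):
        for s in range(S):
            scores = Q[s].copy()
            for sp in nbrs[s]:
                scores += pairwise[s, :, sp, pi[sp]]
            pi[s] = scores.argmax()
    return pi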
5 Experimental Results
We present experiments on two problems: learning to swing-up and balance an inverted pendulum
on a cart, and learning to grasp unknown objects.
5.1 Swing-up cart-balancing
The simulated swing-up cart-balancing system (Figure 1) consists of a 6 kg cart running on a 2 m
track and a freely-swinging 1 kg pendulum with mass attached to the cart with a 50 cm rod. The
state of the system is the position and velocity of the cart (x, x),
? as well as the angle and angular
?
velocity of the pendulum (?, ?). An action a ? R is a horizontal force applied to the cart. The
dynamics of the system are nonlinear. States and actions are continuous, but time is discretized to
steps of 0.1 s. The objective is to learn, in a series of 5s episodes, a policy that swings the pendulum
up and balances it in the inverted position. Since the pendulum falls down after hitting one of the two
track limits, the policy should also learn to maintain the cart in the middle of the track. Moreover,
the track has a nonuniform friction modeled as a force slowing down the cart. Part of the track has
a friction of 30 N, while the remaining part has no friction. This variant is more difficult than the
standard ones (Deisenroth & Rasmussen, 2011).
We consider parametric policies of the form π(x, ẋ, θ, θ̇) = Σ_i p_i q_i(x, ẋ, θ, θ̇), where p_i are real weights and q_i are basis functions corresponding to the signs of the angle and the angular velocity and an exponential function centered at the middle of the track. Moreover, we discretize the track into 10 segments, and use 10 binary basis functions for friction compensation, each of which is nonzero only in a particular segment. A reward of 1 is given for each step the pendulum is above the horizon.
Since the friction changes smoothly along the track (domain knowledge), we use the adjacency matrix of a nearest-neighbor graph as the MRF kernel K in Equation 2. Specifically, we set K(⟨x_i, ẋ_i, θ_i, θ̇_i, u_i⟩, ⟨x_j, ẋ_j, θ_j, θ̇_j, u_j⟩) = 1 iff |x_i − x_j| ≤ 0.2 m, θ_i θ_j ≥ 0, θ̇_i θ̇_j ≥ 0, and |u_i − u_j| ≤ 5 N; otherwise K is set to 0. Figure 1 shows the average reward per time-step of the learned policies as a function of the learning time. Our attempts to solve this variant using different policy gradient methods, e.g. (Kober & Peters, 2008), mainly resulted in poor policies. We report the values of the policies sampled with Metropolis-Hastings using Equation 2, and compare to the case where the policies are sampled solely according to their expected values, i.e. η_t = 0. The expected values are estimated from the samples. The results, averaged over 50 independent trials, show that the convergence is faster when the MRF is used. Moreover, the performance increases as the threshold set on the maximum Bellman error (ε) in Algorithm 1 is decreased. In fact, policies that change smoothly have a lower Bellman error, as their values can be better approximated with kernel K.
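Written out directly, the kernel above is a simple indicator function; the sketch below assumes a state-action pair is stored as a tuple (x, ẋ, θ, θ̇, u).

def cartpole_kernel(p, q):
    # p, q: state-action tuples (x, xdot, theta, thetadot, u).
    x1, _, th1, thd1, u1 = p
    x2, _, th2, thd2, u2 = q
    return int(abs(x1 - x2) <= 0.2       # cart positions within 0.2 m
               and th1 * th2 >= 0        # same sign of the angle
               and thd1 * thd2 >= 0      # same sign of angular velocity
               and abs(u1 - u2) <= 5.0)  # applied forces within 5 N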
[Figure 1: plot of average reward per time-step (y-axis, 0 to 0.9) against learning time in seconds (x-axis, 0 to 100), comparing Optimal, Metropolis-Hastings with MRF (BE < 0.6, BE < 1, BE < 2), and plain Metropolis-Hastings.]
Figure 1: Swing-up cart-balancing. The friction is nonuniform; the red area has a higher friction than the blue one. However, the friction changes only at one point of the track. Consequently, restraining the search to smooth policies yields faster convergence.
5.2
Precision grasps of unknown objects
From a high-level point of view, grasping an object can be seen as an MDP with three steps: reaching, preshaping, and grasping. At any step, the robot can either proceed to the next step or restart
from the beginning and get a reward of 0. At t = 0, the robot always starts from the same initial
state s0 , and the set of actions corresponds to the set of points on the surface of the object. Given
a grasping point, we set the approach direction to the surface normal vector. At t = 1, the state is
given by a surface point and an approach direction, and the set of actions corresponds to the set of
all possible hand orientations. At t = 2, the state is given by a surface point, an approach direction
and a hand orientation. There are two possible last actions, closing the fingers or restarting.
In this experiment, we are interested in learning to grasp objects from their handles. The reward of
each step depends on the current state. There is no reward at t = 0. The reward R1 defined at t = 1
is a function of the first three eigenvalues of the scatter matrix defined by the 3D coordinates of the
points inside a small ball centered on the selected point (Boularias et al., 2011). The reward R2 ,
defined at t = 2, is a function of collision features. We simulate the trajectories of 10 equidistant
points on each finger of a Barrett hand (a three-fingered gripper). The collision features are binary
variables indicating whether or not the corresponding finger points will make contact with the object.
Based on the domain knowledge that points that are close to each other should have the same action
(i.e. same approach direction and hand orientation), the kernel K is given by the k-nearest neighbors
graph, using the Euclidean distance and k = 6 in the state space of positions (or surface points), and
the angular distance, with k = 2 in the discretized state space of hand orientations. We also use a
quadratic kernel for learning R1 , and the Hamming distance between the feature vectors as a kernel
for learning R2 . We also use a single constant feature for all the edges.
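As an illustration, the nearest-neighbor constructions above could be assembled as follows; the array layouts and the cosine proxy for the angular distance are assumptions of this sketch rather than details of our implementation.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_state_kernels(points, orientations):
    # points: (n, 3) surface coordinates; k = 6 under Euclidean distance.
    K_pos = kneighbors_graph(points, n_neighbors=6,
                             mode='connectivity', include_self=False)
    # orientations: (m, d) hand-orientation vectors; k = 2 under an angular
    # distance, approximated here by the cosine metric on unit vectors.
    unit = orientations / np.linalg.norm(orientations, axis=1, keepdims=True)
    K_ori = kneighbors_graph(unit, n_neighbors=2, metric='cosine',
                             mode='connectivity', include_self=False)
    return K_pos, K_ori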
We used one object for training and provided six trajectories leading to a successful grasp from its handle. For testing, we compared our approach (Apprenticeship Learning with MRF) with MaxEnt IRL, AMN and Logistic Regression, which is equivalent to AMN without the graph structure. For AMN and Logistic Regression, only the reward R1 at time-step 1 is learned, since these are classification methods and do not consider subsequent rewards.
[Table 2: images of learned Q-values at t = 0, one column of objects per method: Regression, AMN, AL-MRF, MaxEnt IRL.]
Table 2: Learned Q-values at t = 0. Each point on an object corresponds to a reaching action. Blue indicates low values and red indicates high values. The black arrow indicates the approach direction in the optimal policy according to the learned reward function.
Table 2 shows the Q-values at t = 0 and the approach directions at optimal grasping points. AL-MRF improves over the other methods by generally giving high values to handle points only. The values of the other points are zeros because the optimal action at these points is to restart rather than to grasp. The confusion in the other methods comes from noised point coordinates and self-occlusions. More importantly, AL-MRF improves over AMN, a structured supervised learning technique, by considering the reward at t = 2 while making a decision at t = 1. This can be seen as a type of object recognition by functionality. Figure 2 shows the percentage of successful grasps using the objects in Table 2. A grasp is successful if it is located on a handle and the hand orientation is orthogonal to the handle and the approach direction.
[Figure 2: bar plot of the percentage of successful grasps (0 to 100) for Regression, AMN, MaxEnt IRL and AL-MRF.]
Figure 2: Percentage of grasps located on a handle with a correct approach direction and hand orientation.
6
Conclusion
Based on the observation that the value function of an optimal policy is often smooth and can be
approximated with a simple kernel, we introduced a general framework for incorporating this type
of domain knowledge in forward and inverse reinforcement learning. Our approach uses Markov
Random Fields for defining distributions on deterministic policies, and assigns high probabilities to
smooth policies. We also provided strong empirical evidence of the advantage of this approach.
Acknowledgement
This work was partly supported by the EU-FP7 grant 248273 (GeRT).
References
Abbeel, Pieter and Ng, Andrew Y. Apprenticeship Learning via Inverse Reinforcement Learning. In Proceedings of the Twenty-first International Conference on Machine Learning (ICML'04), pp. 1–8, 2004.
Boularias, Abdeslam, Krömer, Oliver, and Peters, Jan. Learning robot grasping from 3-D images with Markov Random Fields. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'11), pp. 1548–1553, 2011.
Boularias, Abdeslam, Krömer, Oliver, and Peters, Jan. Structured Apprenticeship Learning. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD'12), pp. 227–242, 2012.
Boyan, Justin A. Technical Update: Least-Squares Temporal Difference Learning. Machine Learning, 49:233–246, November 2002. ISSN 0885-6125.
Boykov, Yuri, Veksler, Olga, and Zabih, Ramin. Fast Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.
Deisenroth, Marc Peter and Rasmussen, Carl Edward. PILCO: A Model-Based and Data-Efficient Approach to Policy Search. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML'11), pp. 465–472, 2011.
Dudík, Miroslav, Phillips, Steven J., and Schapire, Robert E. Performance guarantees for regularized maximum entropy density estimation. In Proceedings of the 17th Annual Conference on Computational Learning Theory (COLT'04), pp. 472–486, 2004.
Kober, Jens and Peters, Jan. Policy search for motor primitives in robotics. In NIPS, pp. 849–856, 2008.
Kohli, Pushmeet, Kumar, Pawan, and Torr, Philip. P3 and beyond: Solving energies with higher order cliques. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'07), 2007.
Krähenbühl, Philipp and Koltun, Vladlen. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. In Advances in Neural Information Processing Systems 24, pp. 109–117, 2011.
Munoz, Daniel, Vandapel, Nicolas, and Hebert, Martial. Onboard contextual classification of 3-D point clouds with learned high-order Markov random fields. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA'09), 2009.
Ormoneit, Dirk and Sen, Saunak. Kernel-based reinforcement learning. In Machine Learning, pp. 161–178, 1999.
Schölkopf, Bernhard, Herbrich, Ralf, and Smola, Alex. A Generalized Representer Theorem. Computational Learning Theory, 2111:416–426, 2001.
Taskar, Ben. Learning Structured Prediction Models: A Large Margin Approach. PhD thesis, Stanford University, CA, USA, 2004.
Ziebart, B., Maas, A., Bagnell, A., and Dey, A. Maximum Entropy Inverse Reinforcement Learning. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI'08), pp. 1433–1438, 2008.
4,113 | 4,722 |
Assessing Blinding in Clinical Trials
Ognjen Arandjelović
Deakin University, Australia
Abstract
The interaction between the patient?s expected outcome of an intervention and
the inherent effects of that intervention can have extraordinary effects. Thus in
clinical trials an effort is made to conceal the nature of the administered intervention from the participants in the trial i.e. to blind it. Yet, in practice perfect
blinding is impossible to ensure or even verify. The current standard is to follow up
the trial with an auxiliary questionnaire, which allows trial participants to express
their belief concerning the assigned intervention and which is used to compute a
measure of the extent of blinding in the trial. If the estimated extent of blinding
exceeds a threshold the trial is deemed sufficiently blinded; otherwise, the trial
is deemed to have failed. In this paper we make several important contributions.
Firstly, we identify a series of fundamental problems of the aforesaid practice and
discuss them in context of the most commonly used blinding measures. Secondly,
motivated by the highlighted problems, we formulate a novel method for handling
imperfectly blinded trials. We too adopt a post-trial feedback questionnaire but interpret the collected data using an original approach, fundamentally different from
those previously proposed. Unlike previous approaches, ours is void of any ad hoc
free parameters, is robust to small changes in auxiliary data and is not predicated
on any strong assumptions used to interpret participants? feedback.
1
Introduction
Ultimately, the main aim of a clinical trial is straightforward: it is to examine and quantify the effectiveness of a treatment of interest. Effectiveness is evaluated relative to the effectiveness of a particular reference, the so-called control intervention. To ensure that the aforementioned comparison is
meaningful, it is of essential importance to ensure that any factors not inherently associated with the
two interventions (treatment and control) are normalized (controlled) between the two groups. This
ensures that the observed differential outcome truly is the effect of differing interventions rather than
any orthogonal, confounding variables. A related challenge is that of blinding. Blinding refers to the
concealment of the type of administered intervention from the individuals/patients participating in a
trial and its aim is to eliminate differential placebo effect between groups [10, 3, 11]. Although conceptually simple, the problem of blinding poses difficult challenges in a practical clinical setup. We
highlight two specific challenges which most strongly motivate the work of the present paper. The
first of these stems from the difficulty of ensuring that absolute blinding with respect to a particular
trial variable is achieved. The second challenge arises as a consequence of the fact that blinding can
only be attempted with respect to those variables of the trial which have been identified as revealing
of the treatment administered. Put differently, it is always possible that a particular variable which
can reveal the nature of the treatment to a trial participant is not identified by the trial designers
and thus that no blinding with respect to it is attempted or achieved. This is a ubiquitous problem,
present in every controlled trial, and one which can severely affect the trial?s outcome.
Given that it is both practically and in principle impossible to ensure perfect blinding, the practice
of post hoc assessment of the level of blinding achieved has been gaining popularity and general
acceptance by the clinical community. The key idea is to use a statistical model and the participants' responses to a generic post-trial questionnaire to quantify the participants' knowledge about
the administered intervention. While the statistical model used to this end has been a source of
disagreement between researchers, as discussed in detail in Sec 2, the general approach is shared
by different methods described in the literature. In this paper we argue that this common approach
suffers from several important limitations. Motivated by these, in the present work we propose a
novel statistical framework and use it to derive an original method for integrated trial assessment
which is experimentally shown to produce more meaningful and more clearly interpretable data.
Table 1: Notational convention for mathematical symbols adopted in this paper.

Symbol   Description
a        subscript specifying group assignment; a = C and a = T signify control and treatment groups
g        subscript specifying membership belief; g = − and g = + signify belief in control and treatment group memberships, g = 0 signifies uncertainty
Pag      proportion of participants who were assigned to group a and believe the membership to be g
Pa       proportion of participants who were assigned to group a
Pg       proportion of participants who believe their group membership to be g

2
Previous Work
In this section we describe the general methodology of auxiliary post-trial data collection, the two
most influential statistical models which use the aforesaid data to quantify the extent of blinding in
a trial, and discuss the key limitations of the existing approaches which motivate the work described
in the present paper.
2.1
Method 1: James's Blinding Index
At the heart of the so-called blinding index proposed by James et al. [7] is the observation that the
effect of a particular intervention is affected by the participant's perception of the effectiveness of
the intervention the participant believes was administered. For example, a control group member
who incorrectly believes to be a member of the treatment group may indeed experience positive
effects expected from the studied treatment. This is the extensively studied placebo effect [2, 9].
Auxiliary Data James et al. propose the use of a post-trial questionnaire to assess the level of
blinding in a trial. The participants are asked if they believe that they were assigned to the (i) control
or (ii) treatment groups, or (iii) if they are uncertain of their assignment (the "don't know" response).
Extensions of this scheme which attempt to harness more detailed information have also been used,
e.g. allowing the participants to quantify the strength of their belief.
Blinding Level The existing work on the assessment of trial blinding uses the collected auxiliary data to calculate a statistic referred to as the blinding index. For a 3-tier auxiliary questionnaire, James et al. [7] define their index as (our mathematical notation is summarized in Tab 1):

    θ1 = [1 + P0 + (1 − P0) κ] / 2                                        (1)

It can attain values in the interval [0, 1], higher values denoting an increasing level of blindness. Thus θ1 = 1 indicates perfect blinding and θ1 = 0 an unblinded trial. The statistic κ takes into account the distribution of participants who have a decisive belief regarding their assignment:

    κ = Σ_{a∈{C,T}} Σ_{g∈{+,−}} w_ag [Pag(1 − P0) − Pg(Pa − Pa0)] / (1 − P0)²        (2)
The constants w_ag are weighting coefficients whose effect is to scale the relative contributions of the correct and incorrect assignment guesses. To gain intuitive insight into the nature of θ1, consider the plot shown in Fig 1(a). It is readily apparent that θ1 is a concave function which attains its maximal value of 1 when (i) all participants are uncertain of their assignment or (ii) when all participants have an incorrect belief regarding their assignment.
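Equations (1)-(2) translate directly into code. The sketch below (with the symbols θ1 and κ as rendered above) assumes the proportions and weights are supplied as dictionaries keyed by (arm, belief), and that P0 < 1 so the normalization is well defined.

def james_index(P, w):
    # P[(a, g)]: proportion of all participants assigned to arm a in
    # {'C', 'T'} whose stated belief is g in {'-', '0', '+'};
    # w[(a, g)]: the weighting constants w_ag of Equation (2).
    P0 = P[('C', '0')] + P[('T', '0')]
    kappa = 0.0
    for a in ('C', 'T'):
        Pa = sum(P[(a, g)] for g in ('-', '0', '+'))
        for g in ('+', '-'):
            Pg = P[('C', g)] + P[('T', g)]
            kappa += w[(a, g)] * (P[(a, g)] * (1 - P0)
                                  - Pg * (Pa - P[(a, '0')])) / (1 - P0) ** 2
    return 0.5 * (1 + P0 + (1 - P0) * kappa)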
In comparison with the case of P0 = 1, the attainment of the maximal value θ1 = 1 for PT+ = PC− = 0 is more questionable. While it is tempting to reason that blinding must have been successful since no participant correctly guessed their assignment, it would be erroneous to do so. In particular, the consistency of the wrong belief amongst trial participants actually reveals unblinding, but with the participants' incorrect association of the unblinded factor with the corresponding group assignment. For example, the treatment may cause perceivable side effects (thus unblinding the participants) and the worsening of the condition of the treatment group participants. This observation could lead them to the conclusion that they were assigned to the control group.
[Figure 1: two surface plots, (a) and (b).]
Figure 1: Dependency of the blinding indexes (a) θ1 and (b) θ2′ on the proportions of "don't know" responses P0, and the correct assignment guesses PT+ and PC−. Note that although PT+ and PC− are independent variables, due to their symmetric contributions and for the purpose of easier visualization, in this plot it was taken that PT+ and PC− were always equal.
2.2
Method 2: Bang's Blinding Index
The blinding index θ1 places a lot of value on those participants who plead ignorance regarding their assignment status. Bang et al. argue that the non-decisive "don't know" response may not express a
true lack of knowledge but rather that it may be a conservative response born out of desire to appear
balanced in judgement [1]. Thus, they propose an alternative which instead most heavily weights the
contribution of decisive responses. Because decisive responses can be in either the positive or the
negative direction, the index is asymmetrical and can be applied separately to treatment and control
groups. For a 3-tier auxiliary questionnaire, the index for the treatment group is defined as:
    θ2′ = [2 PC− / (PC− + PT−) − 1] · (PT− + PT+) / (Σ_{g∈{−,0,+}} PTg)        (3)
The behaviour of θ2′ can be seen in Fig 1(b), which plots it against the proportions of indecisive responses and correct guesses. It is readily apparent that the plot has a form very different from that in Fig 1(a) showing the corresponding variation of θ1. Firstly, note that unlike θ1, the range of values for θ2 is [−0.5, 0.5]. The value of θ2 = 0 indicates perfect blinding, θ2 = 0.5 an unblinded trial and θ2 = −0.5 an unblinded trial with incorrect assignment association, as discussed in Sec 2.1.
As the plot shows, this index achieves its perfect blinding value only when P0 = 1. Unlike θ1, the case when PT+ = PC− = 0 does not necessarily result in perfect blinding. Also, PT+ = PC− = 1 and P0 = 0 deems the trial unblinded, as does PT+ = PC− = 0 and P0 = 0 but with the incorrect assignment association. Contrast this with the corresponding value of θ1.
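Equation (3), as written above, can likewise be computed directly; the control-group variant θ2″ is obtained by exchanging the roles of the two arms. The input format follows james_index, and the sketch should be read as illustrative.

def bang_index_treatment(P):
    # P[(a, g)] as in james_index above.
    decisive = P[('C', '-')] + P[('T', '-')]
    if decisive == 0:
        return 0.0  # degenerate case: no decisive control-belief responses
    PT = sum(P[('T', g)] for g in ('-', '0', '+'))
    return ((2 * P[('C', '-')] / decisive - 1)
            * (P[('T', '-')] + P[('T', '+')]) / PT)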
3
Limitations of the Current Best Standards
In the preceding sections we described two blinding indexes most widely used in practice to assess
the level of blinding in controlled clinical trials. To highlight and motivate the contribution of the
present work, we now analyze the limitations of the aforesaid approaches.
Adjustment of Free Parameters One of the most obvious difficulties encountered when applying
either of the described blinding indexes concerns the need to choose appropriate values for the free
parameters in Equations (2) and (3) in their general form. These are the weighting constants wag .
Recall that their purpose is to scale the relative contributions of different responses. Although not
without an intuitive appeal, a thorough analysis of this ad hoc approach reveals a series of problems,
both inherent and practical. Firstly, there is no objective underlying mechanism which would explain
why the contributions of different responses should be combined linearly at all. What is more, even
if linear combination is adopted, it is fundamentally the case that there is no principled method of
choosing the values of the weighting constants ? the lack of observable ?ground truth? means that
it is not possible to objectively compare the quality of different predictions. Lastly, the values of
"best" weighting constant ratios are likely to differ from trial to trial.
Interpretation of Participants' Feedback It is important to highlight that both the index of James et al. as well as that of Bang et al. use the same type of feedback data collected from the trial participants: the participants' stated belief regarding their trial group assignment and their degree of confidence. Where the two approaches differ is in the interpretation of the participants' answers. James et al. interpret the non-decisive, "don't know" response as indicative of true lack of knowledge regarding the nature of the intervention (treatment or control). If the trial participants are ignorant of their group assignment, it is assumed that they have indeed been blinded. Consequently, θ1 heavily relies on the proportion of the non-decisive participants. However, the "don't know" response may not truly represent lack of knowledge. Instead, this response may be seen as a conservative one, reflecting the participants' desire to appear balanced in judgement or indeed the response that the participants believe would please the trial administration staff. Thus, θ2′ mostly relies on the responses of those trial participants who did express belief regarding their group assignment. Blindness is measured by comparing the observed statistics of decisive responses with those expected from an ideal, fully blinded trial. However, this interpretation of participants' responses is readily criticized too. As Hemilä amongst others notes, because the participants' feedback is collected post
hoc it is possible that even a perfectly blinded subject becomes aware of the correct assignment by
virtue of observing the effects (or lack thereof) of the assigned intervention [5]. Considering the
same issue, Henneicke-von Zepelin [6] suggested that auxiliary data should be collected before or
shortly after the commencement of a trial. However, this is in most cases unsatisfactory as the participants would not have yet been exposed to any unblinded aspects of the trial. As we demonstrate
in the next section, the approach proposed in this paper entirely avoids this problem.
Sensitivity to Small Input Differences Both James et al. and Bang et al. establish the level of
blindness in a trial by computing a blinding index and then comparing it with a predefined threshold.
This hard thresholding whereby a trial is considered either sufficiently well blinded or not means
that the outcome of the blinding assessment can exhibit high sensitivity to small differences in
participants? responses. The response of a single individual may change the assessment outcome.
Yet, such binarization in some form is necessitated by the nature of the blinding indexes because
neither of the two described statistics has a clear practical interpretation in the clinical context. The
task of choosing the value of the aforesaid threshold suffers from much the same problems as the
task of selecting the values of the weighting constants, discussed previously: inherently, there is
no objective and meaningful way of defining the optimal threshold value, and the value actually
selected by the practitioner is likely to vary from trial to trial.
Inference Atomization The problem of high sensitivity to small input differences considered previously is but one of the consequences of the inference atomization. Specifically, observe that the
analysis of the trial outcome data is separated from the blinding assessment. Indeed, only if the
trial is deemed sufficiently well blinded does the analysis of actual trial data proceed. Thus, if the
blinding index falls short of the predetermined threshold, the data is effectively thrown away and
the trial needs to be repeated. On the other hand, if the blinding index exceeds the threshold, the
analysis of data is performed in the same manner regardless of the actual value of the index, that is,
regardless of whether it is just above the threshold or if it indicates perfect blinding.
The variety of problems that emerges from the atomization of different statistical aspects of a trial
is inherently rooted in the very nature of the framework adopted by James et al. and Bang et al. alike. As stated earlier, neither of the two indexes has a clear practical interpretation in the
clinical context. For example, neither tells the clinician the probability that a particular portion of
the participants were unblinded, nor the probability of a particular level of unblinding. Instead, from
the point of view of a clinician, the blinding index behaves like a black box which deems the trial
well blinded or not, with little additional insight.
4
Principled Approach to Controlled Clinical Trial Data Analysis
We now describe a principled method for inference from collected trial data.
4.1
Study Design and Outcome Model
As we demonstrated in the previous section, many of the problems of the approaches proposed by
James et al. and Bang et al. inherently stem from the underlying statistical model. Although our
approach uses the same type of participants' feedback data, our statistical model differs significantly
from that employed in previous works.
In the general case, the effectiveness of a particular intervention in a trial participant depends on
the inherent effects of the intervention, as well as the participant's expectations (conscious or not).
Thus, in the interpretation of trial results, we separately consider each population of participants
which share the same combination of the type of intervention and the expressed belief regarding this
group assignment. This is conceptually illustrated in Fig 2.
A key idea of the proposed method is that because the outcome of an intervention depends on both
the inherent effects of the intervention and the participants' expectations, the effectiveness should
be inferred in a like-for-like fashion. In other words, the response observed in, say, the sub-group
of participants assigned to the control group whose feedback professes belief in the control group
assignment should be compared with the response of only the sub-group of the treatment group who
equally professed belief in the control group assignment. Similarly, the "don't know" sub-groups
should be compared only with each other, as should the subgroups corresponding to the belief in the
treatment assignment. This idea is formalized next.
[Figure 2: schematic plot of probability density (y-axis) against outcome magnitude (x-axis).]
Figure 2: Conceptual illustration of the proposed statistical model for the 3-tier feedback questionnaire. Dotted and solid lines show respectively the probability density functions of the measured trial outcome across individuals in the three control and treatment sub-groups.
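Before the formalization, here is a minimal sketch of the like-for-like partitioning itself, assuming records of the form (arm, belief, outcome); the field names are our own.

from collections import defaultdict

def partition_subgroups(records):
    # records: iterable of (arm, belief, outcome) with arm in {'C', 'T'}
    # and belief in {'-', '0', '+'}; returns, for each belief, the pair of
    # matched control and treatment outcome lists.
    buckets = defaultdict(list)
    for arm, belief, outcome in records:
        buckets[(arm, belief)].append(outcome)
    return {g: (buckets[('C', g)], buckets[('T', g)])
            for g in ('-', '0', '+')}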
4.2
Inference
Consider two corresponding sub-groups, that is, sub-groups corresponding to different types of received intervention but the same response in the participants' feedback questionnaire. Furthermore, let the benefit of an intervention observed in a particular participant be expressed as a real number xag^(i). Thus, and without loss of generality, a greater xag^(i) indicates greater benefit. For example, x^(i) may represent the amount of fat loss in a fat loss trial, the reduction in blood plasma LDL in a statin trial, etc. Our goal is to infer p(Δx), that is, the probability density function over the difference Δx in the benefit observed across the two compared sub-groups.
Let DCg = {xCg^(1), . . . , xCg^(nCg)} be the trial outcome data collected from a control sub-group and DTg = {xTg^(1), . . . , xTg^(nTg)} that of the matching treatment sub-group. Then, if Dg = DCg ∪ DTg is the totality of all data of participants who believe they were assigned to the group g:

    p(Δx | Dg) = P(Dg | Δx) p(Δx) / p(Dg)                                  (4)
Modelling the response of each sub-group using a normal distribution

    xCg^(i) ~ N(mCg, σg)   and   xTg^(j) ~ N(mTg, σg)                      (5)
and remembering that for the underlying distributions it holds that mCg + Δx = mTg, allows us to further write

    p(Δx | Dg) ∝ p(Dg | Δx) = ∫_{mCg} ∫_{σg} p(Dg | Δx, mCg, σg) p(mCg) p(σg) dσg dmCg        (6)

where p(mCg) is a prior on the mean of the control sub-group and p(σg) a prior on the standard deviation within sub-groups. What Eq (6) expresses is the process of probability density function marginalization over nuisance variables mCg and σg. Since the values of these latent model variables are unknown, marginalization takes into account all of the possibilities and weights them in proportion to the supporting evidence.
When two corresponding sub-groups of participants are considered, for uninformed priors over mCg and σg, the posterior distribution of Δx is given by:

    p(Δx | Dg) ∝ cg^(−(nCg + nTg − 1)/2) = cg^(−(ng − 1)/2)                (7)
where constant scaling factors have been omitted for clarity, and

    cg = Σ_{i=1}^{nCg} (xCg^(i))² + Σ_{j=1}^{nTg} (xTg^(j) + Δx)²
         − [ Σ_{i=1}^{nCg} xCg^(i) + Σ_{j=1}^{nTg} (xTg^(j) + Δx) ]² / (nCg + nTg)        (8)
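Numerically, Equations (7)-(8) are easily evaluated on a grid of candidate effect sizes; the sketch below works in log space to avoid underflow and returns an unnormalized log-posterior.

import numpy as np

def log_posterior_pair(x_control, x_treatment, dx_grid):
    xc = np.asarray(x_control, dtype=float)
    xt = np.asarray(x_treatment, dtype=float)
    n = len(xc) + len(xt)
    logp = np.empty(len(dx_grid))
    for i, dx in enumerate(dx_grid):
        shifted = xt + dx
        # c_g of Equation (8):
        c = (np.sum(xc ** 2) + np.sum(shifted ** 2)
             - (np.sum(xc) + np.sum(shifted)) ** 2 / n)
        logp[i] = -(n - 1) / 2.0 * np.log(c)  # exponent of Equation (7)
    return logp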
Extending to the joint inference over the entire data corpus, the posterior can be computed simply as a product of all sub-group pair posteriors (up to a scaling constant):

    p(Δx | ∪g Dg) ∝ Πg p(Δx | Dg) ∝ Πg cg^(−(ng − 1)/2)                    (9)
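Equation (9) then amounts to summing the matched-pair log-posteriors over all non-degenerate sub-group pairs, with the MAP estimate read off as the grid maximizer; this sketch reuses log_posterior_pair and the output format of partition_subgroups from the sketches above.

import numpy as np

def map_effect(subgroups, dx_grid):
    # subgroups: {belief: (control_outcomes, treatment_outcomes)}.
    total = sum(log_posterior_pair(c, t, dx_grid)
                for c, t in subgroups.values() if len(c) and len(t))
    return dx_grid[int(np.argmax(total))]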
The estimate of the posterior distribution of Δx in Eq (9) is the best estimate that can be made using
the available data, and it is of the most interest to the clinician. However, as we will discuss in Sec 6,
both Eq (7) and (9) have significance in the interpretation of trial results and their joint consideration
can be used to reveal important additional information about the effectiveness of the treatment.
5
Experiments
Certain advantages of the proposed methodology over previous approaches are ipso facto inherent
in the theory, e.g. the absence of free parameters. Other claimed properties of the method, such as
its robustness to small input differences, are not immediately obvious. In this section we present the
results of a series of experiments which demonstrate the superiority of the proposed method.
5.1
Evaluation Methodology
In contrast to the methods of James et al. and Bang et al. which do not attempt to infer any objective
and measurable quantity, the proposed approach pools all available data (trial outcomes and auxiliary
questionnaire feedback) in an effort to evaluate robustly the effectiveness of the studied treatment.
This feature of our method allows us to directly evaluate its performance. Specifically, we employ
a computer-based simulation whereby data is first randomly (or rather pseudo-randomly) generated
using a statistical model with adjustable parameters, followed by the application of the proposed
method which is used to infer the said parameters. The values inferred by our method can then be
directly compared with their known true values.
Exp 1: Reference For our first experiment, we simulated a trial involving 200 individuals, half of
which were assigned to the control and half to the treatment group. For each of the groups, 60% of
the participants were taken to be in the "undecided" subgroups GC0 and GT0. The remaining 40%
of the participants was split between correct and incorrect guesses of the assigned intervention in
proportion 3 : 1. In this initial experiment we assume that all participants correctly disclosed their
belief regarding which group they were assigned to. Note that this assumption is done purely in the
process of generating data for the experiment; neither this nor any of the preceding information is
used by our method to analyze the outcome of the trial.
We set the differential effect of treatment to Δx = 0.1 and the standard deviation of variability within each of the assignment-response subgroups to σ− = σ0 = σ+ = 0.1. Relative to genuine lack of belief in either control or treatment group assignments, belief in control group assignment was set to exhibit a negative effect of magnitude 0.2 and that in treatment group assignment a positive effect of magnitude 0.2. Intervention outcomes were then generated by repeated random draws from the corresponding distributions. For example, the outcome associated with a participant in GC− was determined by a random draw from the normal distribution N(mC−, σ−).
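For concreteness, the generator just described can be sketched as follows; the function and variable names are our own, and honest disclosure of beliefs is assumed, as stated above.

import numpy as np

def simulate_trial(rng, dx=0.1, sigma=0.1, shift=0.2):
    # 100 participants per arm; 60% undecided, decisive guesses split 3:1
    # correct:incorrect; belief shifts the outcome mean by -shift/0/+shift
    # and assignment to the treatment arm adds the true effect dx.
    records = []
    for arm in ('C', 'T'):
        correct = '+' if arm == 'T' else '-'
        wrong = '-' if arm == 'T' else '+'
        for g in ['0'] * 60 + [correct] * 30 + [wrong] * 10:
            mean = {'-': -shift, '0': 0.0, '+': shift}[g]
            if arm == 'T':
                mean += dx
            records.append((arm, g, rng.normal(mean, sigma)))
    return records

records = simulate_trial(np.random.default_rng(0))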
The result of applying the proposed method is summarized in Fig 3, which plots the posteriors (bold lines) corresponding to the three subgroups matched by the patients' post-trial belief and the amalgamated posterior. The maximum a posteriori (MAP) value of the estimate of the differential effectiveness of the treatment is Δx* ≈ 0.107, which is close to the true value of Δx = 0.1. In comparison, when the differential effectiveness is estimated by subtracting the mean response of the control group from that of the treatment group, without the use of our matching sub-groups based statistical model, the estimate is Δx ≈ 0.141. Finally, the corresponding values of the blinding indices proposed by James et al. and Bang et al. are θ1 = 0.53 and θ2′ = θ2″ = 0.10. Notice that the former indicates a level of blinding roughly half way between a perfectly blinded and unblinded trial, while the latter deems the trial nearly perfectly blinded.
5.1.1
Exp 2: Conservative Distortion
We modify the baseline experiment by simulating conservative behavioural tendency of participants in a trial. This was achieved by randomly choosing individuals from decisive subgroups and
re-assigning them to their corresponding indecisive subgroup without changing their treatment's observed effectiveness. The probability of re-assignment was set to p_cons = 0.2.
As before, we applied the proposed method on the modified data and display the key results in
Fig 3. In addition to the new subgroup posteriors (dotted lines), for comparison in Fig 3(a) we
also show the three initial subgroup posteriors from Exp 1 (solid lines). The baseline (thick solid
line) and new (thin solid line) amalgamated posteriors are shown in Fig 3(b). Fig 3(b) also shows the
semi-amalgamated posterior obtained using only decisive subgroups which, by experimental design,
comprise data of only those individuals which honestly disclosed their belief of group assignment.
The new MAP value for the differential effectiveness using the amalgamated posterior can be seen to be Δx* ≈ 0.122 and that using the semi-amalgamated posterior Δx* ≈ 0.116. In Sec 6 we will
show how the difference in statistical features of sub-group posteriors can be used to select the most
reliable posteriors to amalgamate, as well as to reveal additional insight into the nature of the studied
treatment and the blinding in the trial.
[Figure 3: (a) sub-group posteriors; (b) full posteriors.]
Figure 3: Exp 2: (a) Posteriors for the differential effect of treatment computed using the data Dg of each experimental sub-group comprising control and treatment individuals matched by their feedback. (b) Posterior for the differential effect of treatment computed using all available data.
Exp 3: Asymmetric Progressive Unblinding Starting with the baseline setup, we simulate unblinding of previously undecided individuals of the treatment group. In other words, in each turn
we re-assign an individual from the subgroup GT0 to the subgroup GT+ and compute the novel distribution for Δx.
The robustness of our method is illustrated in Fig 4(a), which shows the MAP estimate of the effectiveness of the treatment after an increasing number of participants were unblinded. This estimate
only shows small random perturbations, with the corresponding standard deviation of 0.0054. The
plots in Fig 4(b) show the variation of the two blinding indexes throughout the experiment. As expected from the change in the participants' auxiliary data, both indexes change in value dramatically.
The index of James et al. decreases, while that of Bang et al. increases in absolute value, indicating
agreement on the lowered level of blinding.
[Figure 4: panels (a) and (b).]
Figure 4: Exp 3: (a) The MAP estimate of the treatment effectiveness as the participants assigned to the treatment group are progressively unblinded. (b) The values of the blinding indexes θ1 (blue line) and θ2′ (red line), computed at each step of the progressive unblinding of the participants assigned to the treatment group.
Exp 4: Symmetric Progressive Unblinding As in Exp 3 we start with the baseline setup and
simulate unblinding of previously undecided individuals of the treatment group. In each turn we
re-assign an individual from GT0 to GT+ and an individual from GC0 to GC−, and compute the novel distribution for Δx.
We illustrate the robustness of the method by plotting the MAP estimate of the effectiveness of the
treatment in Fig 5(a). As before, the estimate only shows small random perturbations, as expected in
any experiment with a stochastic nature and is to be contrasted with the plots in Fig 5(b) which show
the changes in the two blinding indexes throughout the experiment. Again, with the change in the
participants' auxiliary data, both indexes also change in value. It is insightful to observe that unlike
in Exp 3, in this instance the values of the two indexes do not exhibit agreement on the direction
of change of the level of blinding. This reflects the importance that the auxiliary data interpretation
plays in the methods of both James et al. and Bang et al.
[Figure 5: panels (a) and (b).]
Figure 5: Exp 4: (a) The MAP estimate of the treatment effectiveness as the participants assigned to both the treatment and the control groups are progressively unblinded. (b) The values of the blinding indexes θ1 (blue line) and θ2′ (red line), computed at each step of the progressive unblinding.
6
Discussion
Degenerate Cases One of the key ideas behind the present method is that it is meaningful to
compare only the sub-groups matched by their auxiliary responses. While a greater number of
subgroups may provide more precise auxiliary/blinding information, the introduced partitioning
of data decreases the statistical strength of each comparison of the corresponding sub-groups. In
an extreme case, a particular sub-group may be empty. In other words, it is possible that none of
the participants of the treatment or the control group expressed a particular belief regarding their
treatment assignment. Although this may appear as a problem at first, a more careful examination
of such cases reveals that this is not so.
Firstly, note that whenever at least one pair of matching sub-groups is non-empty, the proposed
method is able to compute a meaningful estimate of differential treatment effectiveness. In instances
when there are no non-empty matching sub-groups, the nature of degeneracy can provide useful insight to the clinician. The absence of individuals in GT+ may indicate that the participants assigned
to the treatment group have either been poorly blinded but misidentified the received treatment, or
that the treatment was vastly ineffective and was recognized as such by the participants assigned to
it. Similarly, the absence of individuals in GT− may indicate that the participants assigned to the
treatment group have either been poorly blinded and correctly identified the received treatment, or
that the treatment was obviously effective. In all cases, because degenerate data is trivial to recognize, the clinician is immediately made aware of the presence of a major flaw in the experimental
design. The cause of degeneration can then be determined using the knowledge of the administered
interventions, and the statistics of both auxiliary responses and trial outcomes.
Further Insight In Sec 4.2 we derived posteriors corresponding both to only a single pair of
corresponding sub-groups in Eq (7) and to the entirety of data, that is, all sub-groups in Eq (9).
While the latter of these is of primary interest, the clinician can derive further useful insight into the
nature of studied treatment by comparative examination of sub-group posteriors too.
The least interesting case is when the sub-group posteriors and the total posterior exhibit similar
characteristics (e.g. the location of the mode). However, consider the case when that is not so. For
example, let us say that the posterior corresponding to the two matching "don't know" subgroups has the mode near Δx ≈ 0 and the total posterior has a decidedly positive mode (with suitably small
standard deviations, to make the observation statistically significant). This could indicate that there
may be so-called "non-responders" in the treatment group, i.e. individuals which did not respond
positively to the treatment which in most people does produce a positive result [4, 8]. Similar
arguments can be made by considering differences between other sub-group posteriors. Ultimately,
the exact interpretation is in the hands of the clinicians who should use their insight into the nature
of the administered interventions to infer further information of this type.
7
Summary and Conclusions
This paper examined the problem of assessing the extent of blindness in a clinical trial. We demonstrated a series of fundamental flaws in blinding index based approaches and thus proposed a novel
framework. At the centre of our idea is that the comparison of the treatment and control groups
should be done in like-for-like fashion, giving rise to the partitioning of participants into sub-groups,
each sub-group sharing the same intervention and post-trial responses. A Bayesian framework was
used to interpret jointly the auxiliary and trial outcome data, giving the clinician a meaningful and
readily understandable end result. The effectiveness of our method was demonstrated empirically in
a simulation study, which showed its robustness in a variety of scenarios.
References
[1] H. Bang, L. Ni, and C. E. Davis. Assessment of blinding in clinical trials. Contemp Clin Trials, 25(2):143–156, 2004.
[2] H. K. Beecher. The powerful placebo. JAMA, 159(17):1602–1606, 1955.
[3] F. Benedetti, H. S. Mayberg, T. D. Wager, C. S. Stohler, and J.-K. Zubieta. Neurobiological mechanisms of the placebo effect. J Neurosci, 25(45):10390–10402, 2005.
[4] G. Costantino, F. Furfaro, A. Belvedere, A. Alibrandi, and W. Fries. Thiopurine treatment in inflammatory bowel disease: Response predictors, safety, and withdrawal in follow-up. J Crohns Colitis, 2011.
[5] H. Hemilä. Assessment of blinding may be inappropriate after the trial. Contemp Clin Trials, 26(4):512–514, 2005.
[6] H.-H. Henneicke-von Zepelin. Letter to the editor. Contemp Clin Trials, 26(4):512, 2005.
[7] K. E. James, D. A. Bloch, K. K. Lee, H. C. Kraemer, and R. K. Fuller. An index for assessing blindness in a multi-centre clinical trial: disulfiram for alcohol cessation – a VA cooperative study. Stat Med, 15(13):1421–1434, 1996.
[8] D. Karakitsos, J. Papanikolaou, A. Karabinis, R. Alalawi, M. Wachtel, C. Jumper, D. Alexopoulos, and P. Davlouros. Acute effect of sildenafil on central hemodynamics in mechanically ventilated patients with WHO group III pulmonary hypertension and right ventricular failure necessitating administration of dobutamine. Int J Cardiol, 2012.
[9] H. S. Mayberg, J. A. Silva, S. K. Brannan, J. L. Tekell, R. K. Mahurin, S. McGinnis, and P. A. Jerabek. The functional neuroanatomy of the placebo effect. Am J Psychiatry, 159:728–737, 2002.
[10] D. E. Moerman and W. B. Jonas. Deconstructing the placebo effect and finding the meaning response. Ann Intern Med, 136(6):471–476, 2002.
[11] G. H. Montgomery and I. Kirsch. Classical conditioning and the placebo effect. Pain, 72(1–2):107–113, 1997.
4,114 | 4,723 |
Density Propagation and
Improved Bounds on the Partition Function?
Stefano Ermon, Carla P. Gomes
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Ashish Sabharwal
IBM Watson Research Ctr.
Yorktown Heights
NY 10598, U.S.A.
Bart Selman
Dept. of Computer Science
Cornell University
Ithaca NY 14853, U.S.A.
Abstract
Given a probabilistic graphical model, its density of states is a distribution that,
for any likelihood value, gives the number of configurations with that probability. We introduce a novel message-passing algorithm called Density Propagation
(DP) for estimating this distribution. We show that DP is exact for tree-structured
graphical models and is, in general, a strict generalization of both sum-product and
max-product algorithms. Further, we use density of states and tree decomposition
to introduce a new family of upper and lower bounds on the partition function.
For any tree decomposition, the new upper bound based on finer-grained density
of state information is provably at least as tight as previously known bounds based
on convexity of the log-partition function, and strictly stronger if a general condition holds. We conclude with empirical evidence of improvement over convex
relaxations and mean-field based bounds.
1
Introduction
Associated with any undirected graphical model [1] is the so-called density of states, a term borrowed from statistical physics indicating a distribution that, for any likelihood value, gives the
number of configurations with that probability. The density of states plays an important role in
statistical physics because it provides a fine grained description of the system, and can be used to
efficiently compute many properties of interests, such as the partition function and its parameterized
version [2, 3]. It can be seen that computing the density of states is computationally intractable in
the worst case, since it subsumes a #-P complete problem (computing the partition function) and an
NP-hard one (MAP inference). All current approximate techniques estimating the density of states
are based on sampling, the most prominent being the Wang-Landau algorithm [3] and its improved
variants [2]. These methods have been shown to be very effective in practice. However, they do not
provide any guarantee on the quality of the results. Furthermore, they ignore the structure of the
underlying graphical model, effectively treating the energy function (which is proportional to the
negative log-likelihood of a configuration) as a black-box.
As a first step towards exploiting the structure of the graphical model when computing the density
of states, we propose an algorithm called Density Propagation (DP). The algorithm is based on
dynamic programming and can be conveniently expressed in terms of message passing on the graphical model. We show that Density Propagation computes the density of states exactly for any
tree-structured graphical model. It is closely related to the popular Sum-Product (Belief Propagation, BP) and Max-Product (MP) algorithms, and can be seen as a generalization of both. However,
it computes something much richer, namely the density of states, which contains information such
as the partition function and variable marginals. Although we do not work at the level of individual
configurations, Density Propagation allows us to reason in terms of groups of configurations
with the same probability (energy).
* Supported by NSF Expeditions in Computing award for Computational Sustainability (grant 0832782).
Being able to solve inference tasks for certain tractable classes of problems (e.g., trees) is important
because one can often decompose a complex problem into tractable subproblems (such as spanning
trees) [4], and the solutions to these simpler problems can be combined to recover useful properties
of the original graphical model [5, 6]. In this paper we show that by combining the additional
information given by the density of states, we can obtain a new family of upper and lower bounds on
the partition function. We prove that the new upper bound is always at least as tight as the one based
on the convexity of the log-partition function [4], and we provide a general condition where the
new bound is strictly tighter. Further, we illustrate empirically that the new upper bound improves
upon the convexity-based one on Ising grid and clique models, and that the new lower bound is
empirically slightly stronger than the one given by mean-field theory [4, 7].
2
Problem definition and setup
We consider a graphical model specified as a factor graph with N = |V | discrete random variables
x_i, i \in V, where x_i \in X_i. The global random vector x = \{x_s, s \in V\} takes value in the Cartesian product X = X_1 \times X_2 \times \cdots \times X_N, with cardinality D = |X| = \prod_{i=1}^{N} |X_i|. We consider a probability distribution over elements x \in X (called configurations)

p(x) = \frac{1}{Z} \prod_{\alpha \in I} \psi_\alpha(\{x\}_\alpha)    (1)

that factors into factors \psi_\alpha : \{x\}_\alpha \to R^+, where I is an index set and \{x\}_\alpha \subseteq V is the subset of variables the factor \psi_\alpha depends on, and Z is a normalization constant known as the partition function. The corresponding factor graph is a bipartite graph with vertex set V \cup I. In the factor graph, each variable node i \in V is connected with all the factors \alpha \in I that depend on i. Similarly, each factor node \alpha \in I is connected with all the variable nodes i \in \{x\}_\alpha. We denote the neighbors of i and \alpha by N(i) and N(\alpha), respectively.
We will also make use of the related exponential representation [8]. Let \phi be a collection of potential functions \{\phi_\alpha, \alpha \in I\}, defined over the index set I. Given an exponential parameter vector \theta = \{\theta_\alpha, \alpha \in I\}, the exponential family defined by \phi is the family of probability distributions over X defined as follows:

p(x, \theta) = \frac{1}{Z(\theta)} \exp(\theta \cdot \phi(x)) = \frac{1}{Z(\theta)} \exp\Big( \sum_{\alpha \in I} \theta_\alpha \phi_\alpha(\{x\}_\alpha) \Big)    (2)

where we assume p(x) = p(x, \bar{\theta}). Given an exponential family, we define the density of states [2] as the following distribution:

n(E, \theta) = \sum_{x \in X} \delta(E - \theta \cdot \phi(x))    (3)

where \delta(E - \theta \cdot \phi(x)) indicates a Dirac delta centered at \theta \cdot \phi(x). For any exponential parameter \theta, it holds that

\int_{-\infty}^{A} n(E, \theta) \, dE = |\{x \in X \mid \theta \cdot \phi(x) \le A\}|    and    \int_{-\infty}^{\infty} n(E, \theta) \, dE = |X|.

We will refer to the quantity \theta \cdot \phi(x) = \sum_{\alpha \in I} \theta_\alpha \phi_\alpha(\{x\}_\alpha) = \sum_{\alpha \in I} \log \psi_\alpha(\{x\}_\alpha) as the energy of a configuration x, although it has an additional minus sign with respect to the conventional energy in statistical physics.

3
Density Propagation
Since any propositional Satisfiability (SAT) instance can be efficiently encoded as a factor graph
(e.g., by defining a uniform probability measure over satisfying assignments), it is clear that computing the density of states is computationally intractable in the worst case, as a generalization of an
NP-Complete problem (satisfiability testing) and a #-P complete problem (model counting).
We show that the density of states can be computed efficiently1 for acyclic graphical models. We
provide a Dynamic Programming algorithm, which can also be interpreted as a message passing
algorithm on the factor graph, called Density Propagation (DP), which computes the density of
states exactly for acyclic graphical models.
1
Polynomial in the cardinality of the support, which could be exponential in N in the worst case.
3.1
Density propagation equations
Density Propagation works by exchanging messages from variable to factor nodes and vice versa. Unlike traditional message passing algorithms, where messages represent marginal probabilities (vectors of real numbers), for every x_i \in X_i a Density Propagation message m_{a \to i}(x_i) is a distribution (a "marginal" density of states), i.e. m_{a \to i}(x_i) = \sum_k c_k(a \to i, x_i) \, \delta_{E_k(a \to i, x_i)} is a sum of Dirac deltas.

At every iteration, messages are updated according to the following rules. The message from variable node i to factor node a is updated as follows:

m_{i \to a}(x_i) = \bigotimes_{b \in N(i) \setminus a} m_{b \to i}(x_i)    (4)

where \otimes is the convolution operator (commutative, associative and distributive). Intuitively, the convolution operation gives the distribution of the sum of (conditionally) independent random variables, in this case corresponding to distinct subtrees in a tree-structured graphical model. The message from factor a to variable i is updated as follows:

m_{a \to i}(x_i) = \sum_{\{x\}_\alpha \setminus i} \Big( \Big( \bigotimes_{j \in N(a) \setminus i} m_{j \to a}(x_j) \Big) \otimes \delta_{E_\alpha(\{x\}_\alpha)} \Big)    (5)

where \delta_{E_\alpha(\{x\}_\alpha)} is a Dirac delta function centered at E_\alpha(\{x\}_\alpha) = \log \psi_\alpha(\{x\}_\alpha).
For tree structured graphical models, Density Propagation converges after a finite number of iterations, independent of the initial condition, to the true density of states. Formally,

Theorem 1. For any variable i \in V and any initial condition, after a finite number of iterations,

\sum_{q \in X_i} \Big( \bigotimes_{b \in N(i)} m_{b \to i}(q) \Big)(E) = n(E, \bar{\theta}).
The proof is by induction on the size of the tree (omitted due to lack of space).
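To make the update rules concrete, here is a minimal sketch of Density Propagation on the smallest possible tree: two binary variables joined by a single pairwise factor. This is illustrative code rather than the authors' implementation; messages are dictionaries mapping energy values to counts, so the convolution in (4) becomes a dictionary convolution, and the factor weight w is an arbitrary choice.

```python
import math
from collections import defaultdict

def convolve(m1, m2):
    """Convolution of two discrete densities {energy: count} (the 'otimes' in eq. 4)."""
    out = defaultdict(float)
    for e1, c1 in m1.items():
        for e2, c2 in m2.items():
            out[e1 + e2] += c1 * c2
    return dict(out)

# Factor psi(x1, x2) = exp(w * 1[x1 == x2]); its energy is log psi.
w = 1.0
E = {(x1, x2): w * (x1 == x2) for x1 in (0, 1) for x2 in (0, 1)}

# Variable-to-factor message (eq. 4): x1 is a leaf, so the empty
# convolution product is a unit Dirac mass at energy 0.
m_1_to_f = {x1: {0.0: 1.0} for x1 in (0, 1)}

# Factor-to-variable message (eq. 5): sum over x1 of the incoming
# message convolved with the Dirac delta at the factor energy.
m_f_to_2 = {}
for x2 in (0, 1):
    msg = defaultdict(float)
    for x1 in (0, 1):
        for e, c in convolve(m_1_to_f[x1], {E[(x1, x2)]: 1.0}).items():
            msg[e] += c
    m_f_to_2[x2] = dict(msg)

# Theorem 1: summing the incoming message over the states of the root
# variable recovers the density of states of the whole model.
n = defaultdict(float)
for x2 in (0, 1):
    for e, c in m_f_to_2[x2].items():
        n[e] += c

print(dict(n))                                     # {0.0: 2.0, 1.0: 2.0}
print(sum(c * math.exp(e) for e, c in n.items()))  # the partition function Z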
3.1.1
Complexity and Approximation with Energy Bins
The most efficient message update schedule for tree structured models is a two-pass procedure where messages are first sent from the leaves to the root node, and then propagated backwards from the root to the leaves. However, as with other message-passing algorithms, for tree structured instances the algorithm will converge with either a sequential or a parallel update schedule, with any initial condition for the messages. Although DP requires the same number of message updates as BP and MP, DP updates are more expensive because they require the computation of convolutions. Specifically, each variable-to-factor update rule (4) requires (N - 2)L convolutions, where N is the number of neighbors of the variable node and L is the number of states in the random variable. Each factor-to-variable update rule (5) requires summation over N - 1 variables, each of size L, requiring O(L^N) convolutions. Using the Fast Fourier Transform (FFT), each convolution takes O(K log K), where K is the maximum number of non-zero entries in a message. In the worst case, the density of states can have an exponential number of non-zero entries (i.e., the finite number of possible energy values, which we will also refer to as "buckets"), for instance when potentials are set to logarithms of prime numbers, making every x \in X have a different probability. However, in many practical problems of interest (e.g., SAT/CSP models and certain grounded Markov Logic Networks [9]), the number of energy "buckets" is limited, e.g., bounded by the total number of constraints. For general graphical models, coarse-grain energy bins can be used, similar to the Wang-Landau algorithm [3], without losing much precision. Specifically, if we use bins of size \Delta/M, where each bin corresponds to configurations with energy in the interval [k\Delta/M, (k+1)\Delta/M), the energy estimated for each configuration through O(M) convolutions is at most an O(\Delta) additive value away from its true energy (as the quantization error introduced by energy binning is summed up across convolution steps). This also guarantees that the density of states with coarse-grain energy bins gives a constant factor approximation of the true partition function.
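For instance, with coarse-grain bins the convolution of two binned densities can be done with an off-the-shelf FFT routine. The sketch below uses scipy; the count arrays are made up for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

# Binned densities of states: entry k counts configurations whose energy
# lies in [k * delta, (k + 1) * delta); the arrays here are arbitrary.
n1 = np.array([2.0, 6.0, 6.0, 2.0])
n2 = np.array([8.0, 8.0])

# FFT-based convolution costs O(K log K) in the number of bins K.
n12 = fftconvolve(n1, n2)
print(np.round(n12, 6))   # counts of the summed energies, bin by bin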
3.1.2
Relationship with sum and max product algorithms
Density Propagation is closely related to traditional message passing algorithms such as BP (Belief Propagation, Sum-Product) and MP (Max-Product), since it is based on the same (conditional) independence assumptions. Specifically, as shown by the next theorem, both BP and MP can be seen as simplified versions of Density Propagation that consider only certain global statistics of the distributions represented by Density Propagation messages.
Theorem 2. With the same initial condition and message update schedule, at every iteration we can recover Belief Propagation and Max-Product marginals from Density Propagation messages.

Proof. Given a DP message m_{i \to j}(x_j) = \sum_k c_k(i \to j, x_j) \, \delta_{E_k(i \to j, x_j)}, the Max-Product algorithm corresponds to considering only the entry associated with the highest probability, i.e. \nu_{i \to j}(x_j) = f(m_{i \to j}(x_j)) \triangleq \max_k \{E_k(i \to j, x_j)\}. According to the DP updates in equations (4) and (5), the quantities \nu_{i \to j}(x_j) are updated as follows:

\nu_{i \to a}(x_i) = f\Big( \bigotimes_{b \in N(i) \setminus a} m_{b \to i}(x_i) \Big) = \sum_{b \in N(i) \setminus a} \nu_{b \to i}(x_i)

\nu_{a \to i}(x_i) = f\Big( \sum_{\{x\}_\alpha \setminus i} \Big( \bigotimes_{j \in N(a) \setminus i} m_{j \to a}(x_j) \Big) \otimes \delta_{E_\alpha(\{x\}_\alpha)} \Big) = \max_{\{x\}_\alpha \setminus i} \Big( \sum_{j \in N(a) \setminus i} \nu_{j \to a}(x_j) + E_\alpha(\{x\}_\alpha) \Big)

These results show that the quantities \nu_{i \to j}(x_j) are updated according to the Max-Product algorithm (with messages in log-scale). To see the relationship with BP, for every DP message m_{i \to j}(x_j), let us define

\tau_{i \to j}(x_j) = \| m_{i \to j}(x_j)(E) \exp(E) \|_1 = \int_R m_{i \to j}(x_j)(E) \exp(E) \, dE.

Notice that \tau_{i \to j}(x_j) would correspond to an unnormalized marginal probability, assuming that m_{i \to j}(x_j) is the density of states of the instance when variable j is clamped to value x_j. According to the DP updates in equations (4) and (5),

\tau_{i \to a}(x_i) = \| m_{i \to a}(x_i)(E) \exp(E) \|_1 = \Big\| \bigotimes_{b \in N(i) \setminus a} m_{b \to i}(x_i)(E) \exp(E) \Big\|_1 = \prod_{b \in N(i) \setminus a} \tau_{b \to i}(x_i)

\tau_{a \to i}(x_i) = \| m_{a \to i}(x_i)(E) \exp(E) \|_1 = \Big\| \sum_{\{x\}_\alpha \setminus i} \Big( \bigotimes_{j \in N(a) \setminus i} m_{j \to a}(x_j) \Big) \otimes \delta_{E_\alpha(\{x\}_\alpha)}(E) \exp(E) \Big\|_1 = \sum_{\{x\}_\alpha \setminus i} \psi_\alpha(\{x\}_\alpha) \prod_{j \in N(a) \setminus i} \tau_{j \to a}(x_j)

that is, we recover the BP updates for the \tau_{i \to j} quantities. Similarly, if we define temperature versions of the marginals \tau^T_{i \to j}(x_j) \triangleq \| m_{i \to j}(x_j)(E) \exp(E/T) \|_1, we recover the temperature versions of the Belief Propagation updates, similar to [10] and [11].
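In code, the two reductions amount to two one-line statistics of a DP message stored as an {energy: count} dictionary; the message below is hypothetical.

```python
import math

m = {0.0: 3.0, 1.0: 2.0, 2.5: 1.0}  # hypothetical DP message m_{i->j}(x_j)

nu = max(m)                                          # Max-Product statistic: max_k E_k
tau = sum(c * math.exp(e) for e, c in m.items())     # BP statistic ||m(E) exp(E)||_1
tau_T = lambda T: sum(c * math.exp(e / T) for e, c in m.items())  # temperature version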
As with other message passing algorithms, Density Propagation updates are well defined also for loopy graphical models, even though there is no guarantee of convergence or correctness [12]. The correspondence with BP and MP (Theorem 2) however still holds: if loopy BP converges, then the corresponding quantities \tau_{i \to j} computed from DP messages will converge as well, and to the same value (assuming the same initial condition and update schedule). Notice however that the convergence of the \tau_{i \to j} does not imply the convergence of the Density Propagation messages m_{i \to j} (e.g., in probability, law, or L_p). In fact, we have observed empirically that the situation where the \tau_{i \to j} converge but the m_{i \to j} do not converge (not even in distribution) is fairly common. It would be interesting to see if there is a variational interpretation for the Density Propagation equations, similar to [13]. Notice also that Junction Tree style algorithms could also be used in conjunction with DP updates for the messages, as an instance of the generalized distributive law [14].
4
Bounds on the density of states using tractable families
Using techniques such as Density Propagation, we can compute the density of states exactly for tractable families such as tree-structured graphical models. Let p(x, \bar{\theta}) be a general (intractable) probabilistic model of interest, and let \theta_i be a family of tractable parameters (e.g., corresponding to trees) such that \bar{\theta} is a convex combination of the \theta_i, as defined formally below and used previously by Wainwright et al. [5, 6]. See below (Figure 1) for an example of a possible decomposition of a 2 x 2 Ising model into 2 tractable distributions. By computing the partition function or MAP estimates for the tree structured subproblems, Wainwright et al. showed that one can recover useful information about the original intractable problem, for instance by exploiting convexity of the log-partition function log Z(\theta).

We present a way to exploit the decomposition idea to derive an upper bound on the density of states n(E, \bar{\theta}) of the original intractable model, despite the fact that the density of states is not a convex function of \bar{\theta}. The result below gives a point-by-point upper bound which, to the best of our knowledge, is the first bound of this kind for the density of states. In the following, with some abuse of notation, we denote by n(E, \bar{\theta}) = \sum_{x \in X} 1_{\{\bar{\theta} \cdot \phi(x) = E\}} the function giving the number of configurations with energy E (zero almost everywhere).
Theorem 3. Let \bar{\theta} = \sum_{i=1}^{n} \lambda_i \theta_i with \sum_{i=1}^{n} \lambda_i = 1, and let y_n = E - \sum_{i=1}^{n-1} y_i. Then

n(E, \bar{\theta}) \le \int_{R} \cdots \int_{R} \min_{i=1,\dots,n} \{ n(y_i, \lambda_i \theta_i) \} \, dy_1 \, dy_2 \cdots dy_{n-1}.

Proof. From the definition of the density of states and using 1_{\{\cdot\}} to denote the 0-1 indicator function,

n(E, \bar{\theta}) = \sum_{x \in X} 1_{\{\bar{\theta} \cdot \phi(x) = E\}} = \sum_{x \in X} 1_{\{(\sum_i \lambda_i \theta_i) \cdot \phi(x) = E\}}
= \sum_{x \in X} \int_{R} \cdots \int_{R} \Big( \prod_{i=1}^{n} 1_{\{\lambda_i \theta_i \cdot \phi(x) = y_i\}} \Big) \, dy_1 \, dy_2 \cdots dy_{n-1}    where y_n = E - \sum_{i=1}^{n-1} y_i
= \int_{R} \cdots \int_{R} \Big( \sum_{x \in X} \prod_{i=1}^{n} 1_{\{\lambda_i \theta_i \cdot \phi(x) = y_i\}} \Big) \, dy_1 \, dy_2 \cdots dy_{n-1}
= \int_{R} \cdots \int_{R} \Big( \sum_{x \in X} \min_{i=1,\dots,n} 1_{\{\lambda_i \theta_i \cdot \phi(x) = y_i\}} \Big) \, dy_1 \, dy_2 \cdots dy_{n-1}
\le \int_{R} \cdots \int_{R} \min_{i=1,\dots,n} \Big\{ \sum_{x \in X} 1_{\{\lambda_i \theta_i \cdot \phi(x) = y_i\}} \Big\} \, dy_1 \, dy_2 \cdots dy_{n-1}.

Observing that \sum_{x \in X} 1_{\{\lambda_i \theta_i \cdot \phi(x) = y_i\}} is precisely n(y_i, \lambda_i \theta_i) finishes the proof.

5
Bounds on the partition function using n-dimensional matching
The density of states n(E, \bar{\theta}) can be used to compute the partition function, since by definition Z(\bar{\theta}) = \| n(E, \bar{\theta}) \exp(E) \|_1. We can therefore get an upper bound on Z(\bar{\theta}) by integrating the point-by-point upper bound on n(E, \bar{\theta}) from Theorem 3. This bound can be tighter than the known bound [6] obtained by applying Jensen's inequality to the log-partition function (which is convex), given by \log Z(\bar{\theta}) \le \sum_i \lambda_i \log Z(\theta_i). For instance, consider a graphical model with weights that are large enough such that the density of states based sum defining Z(\bar{\theta}) is dominated by the contribution of the highest-energy bucket. As a concrete example, consider the decomposition in Figure 1. As the edge weight w (w = 2 in the figure) grows, the convexity-based bound will approximately equal the geometric average of 2\exp(6w) and 8\exp(2w), which is 4\exp(4w). On the other hand, the bound based on Theorem 3 will approximately equal \min\{2, 8\} \exp((2+6)w/2) = 2\exp(4w). In general, the latter bound will always be strictly better for large enough w unless the highest-energy bucket counts are identical across all \theta_i.

While this is already promising, we can, in fact, obtain a much tighter bound by taking into account the interactions between different energy levels across any parameter decomposition, e.g., by enforcing the fact that there are a total of |X| configurations. For compactness, in the following let us define y_i(x) = \exp(\theta_i \cdot \phi(x)) for any x \in X and i = 1, \dots, n. Then,

Z(\bar{\theta}) = \sum_{x \in X} \exp(\bar{\theta} \cdot \phi(x)) = \sum_{x \in X} \prod_i y_i(x)^{\lambda_i}.

Theorem 4. Let \Sigma be the (finite) set of all possible permutations of X. Given \sigma = (\sigma_1, \dots, \sigma_n) \in \Sigma^n, let Z(\bar{\theta}, \sigma) = \sum_{x \in X} \prod_i y_i(\sigma_i(x))^{\lambda_i}. Then,

\min_{\sigma \in \Sigma^n} Z(\bar{\theta}, \sigma) \le Z(\bar{\theta}) \le \max_{\sigma \in \Sigma^n} Z(\bar{\theta}, \sigma).    (6)
Algorithm 1 Greedy algorithm for the maximum matching (upper bound).
1: while there exists E such that n(E, \theta_i) > 0 do
2:    E_max(\theta_i) <- max_E {E | n(E, \theta_i) > 0}, for i = 1, ..., n
3:    c* <- min {n(E_max(\theta_1), \theta_1), ..., n(E_max(\theta_n), \theta_n)}
4:    u_b(\lambda_1 E_max(\theta_1) + ... + \lambda_n E_max(\theta_n), \theta_1, ..., \theta_n) <- c*
5:    n(E_max(\theta_i), \theta_i) <- n(E_max(\theta_i), \theta_i) - c*, for i = 1, ..., n
6: end while
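A direct transcription of Algorithm 1 follows, assuming each density n(E, \theta_i) is stored as a dictionary from energy to count (a sketch, not the authors' code); the example densities are the ones from the 2 x 2 Ising decomposition discussed in Section 6.

```python
import math
from collections import defaultdict

def greedy_max_matching(densities, lambdas):
    """densities: one {energy: count} dict per parameter theta_i.
    Returns u_b as a dict {lambda-weighted combined energy: count}."""
    dens = [dict(d) for d in densities]              # work on copies
    ub = defaultdict(float)
    while all(dens):                                 # some mass left everywhere
        emax = [max(d) for d in dens]                # highest non-empty buckets
        c = min(d[e] for d, e in zip(dens, emax))    # c* in step 3
        ub[sum(l * e for l, e in zip(lambdas, emax))] += c
        for d, e in zip(dens, emax):                 # remove the matched mass
            d[e] -= c
            if d[e] <= 0:
                del d[e]
    return dict(ub)

n1 = {6.0: 2, 4.0: 6, 2.0: 6, 0.0: 2}
n2 = {2.0: 8, 0.0: 8}
ub = greedy_max_matching([n1, n2], [0.5, 0.5])
print(sum(c * math.exp(e) for e, c in ub.items()))   # upper bound, ~248.01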
Proof. Let \sigma_I \in \Sigma^n denote a collection of n identity permutations. Then we have Z(\bar{\theta}) = Z(\bar{\theta}, \sigma_I), which proves the upper and lower bounds in equation (6).

We can think of \sigma \in \Sigma^n as an n-dimensional matching over the exponential size configuration space X. For any i, j, \sigma_i(x) matches with \sigma_j(x), and \sigma(x) gives the corresponding hyper-edge. If we define the weight of each hyper-edge in the matching graph as w(\sigma(x)) = \prod_i y_i(\sigma_i(x))^{\lambda_i}, then Z(\bar{\theta}, \sigma) = \sum_{x \in X} w(\sigma(x)) corresponds to the weight of the matching represented by \sigma. We can therefore think of the bounds in equation (6) as given by a maximum and a minimum matching, respectively. Intuitively, the maximum matching corresponds to the case where the configurations in the high energy buckets of the densities happen to be the same configuration (matching), so that their energies are summed up.
5.1
Upper bound
The maximum matching \max_\sigma Z(\bar{\theta}, \sigma) (i.e., the upper bound on the partition function) can be computed using Algorithm 1. Algorithm 1 returns a distribution u_b such that \int u_b(E) \, dE = |X| and \int u_b(E) \exp(E) \, dE = \max_\sigma Z(\bar{\theta}, \sigma). Notice however that u_b(E) is not a valid point-by-point upper bound on the density n(E, \bar{\theta}) of the original model.

Proposition 1. Algorithm 1 computes the maximum matching and its runtime is bounded by the total number of non-empty buckets \sum_i |\{E \mid n(E, \theta_i) > 0\}|.

Proof. The correctness of Algorithm 1 follows from observing that \exp(E_1 + E_2) + \exp(E_1' + E_2') \ge \exp(E_1 + E_2') + \exp(E_1' + E_2) when E_1 \ge E_1' and E_2 \ge E_2'. Intuitively, this means that for n = 2 parameters it is always optimal to connect the highest energy configurations, therefore the greedy method is optimal. This result can be generalized for n > 2 by induction. The runtime is proportional to the total number of buckets because we remove one bucket from at least one density at every iteration.

A key property of Algorithm 1 is that even though it defines a matching over an exponential number of configurations |X|, its runtime is proportional only to the total number of buckets, because it matches configurations in groups at the bucket level.
The following result shows that the value of the maximum matching is at least as tight as the
bound provided by the convexity of the log-partition function, which is used for example by Tree
Reweighted Belief Propagation (TRWBP) [6].
Theorem 5. For any parameter decomposition \sum_{i=1}^{n} \lambda_i \theta_i = \bar{\theta}, the upper bound given by the maximum matching in equation (6) and computed using Algorithm 1 is always at least as tight as the bound obtained using the convexity of the log-partition function.

Proof. The bound obtained by applying Jensen's inequality to the log-partition function (which is convex), given by \log Z(\bar{\theta}) \le \sum_i \lambda_i \log Z(\theta_i) [6], leads to the geometric average bound Z(\bar{\theta}) \le \prod_i (\sum_x y_i(x))^{\lambda_i}. Given any n permutations of the configurations \sigma_i : X \to X for i = 1, \dots, n (in particular, it holds for the one attaining the maximum matching value) we have

\sum_x \prod_i y_i(\sigma_i(x))^{\lambda_i} = \Big\| \prod_i y_i(\sigma_i(x))^{\lambda_i} \Big\|_1 \le \prod_i \big\| y_i(\sigma_i(x))^{\lambda_i} \big\|_{1/\lambda_i} = \prod_i \Big( \sum_x y_i(\sigma_i(x)) \Big)^{\lambda_i}

where we used the generalized Hölder inequality and the norm \| \cdot \| indicates a sum over X.
Algorithm 2 Greedy algorithm for the minimum matching with n = 2 parameters (lower bound).
1: while there exists E such that n(E, \theta_i) > 0 do
2:    E_max(\theta_1) <- max_E {E | n(E, \theta_1) > 0}; E_min(\theta_2) <- min_E {E | n(E, \theta_2) > 0}
3:    c* <- min {n(E_max(\theta_1), \theta_1), n(E_min(\theta_2), \theta_2)}
4:    l_b(\lambda_1 E_max(\theta_1) + \lambda_2 E_min(\theta_2), \theta_1, \theta_2) <- c*
5:    n(E_max(\theta_1), \theta_1) <- n(E_max(\theta_1), \theta_1) - c*; n(E_min(\theta_2), \theta_2) <- n(E_min(\theta_2), \theta_2) - c*
6: end while
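The n = 2 minimum matching is symmetric to the maximum matching: the highest bucket of the first density is paired with the lowest bucket of the second. A sketch in the same dictionary representation as above:

```python
import math
from collections import defaultdict

def greedy_min_matching(n1, n2, lambdas):
    d1, d2 = dict(n1), dict(n2)
    lb = defaultdict(float)
    while d1 and d2:
        e1, e2 = max(d1), min(d2)          # step 2
        c = min(d1[e1], d2[e2])            # step 3: c*
        lb[lambdas[0] * e1 + lambdas[1] * e2] += c
        for d, e in ((d1, e1), (d2, e2)):  # step 5: remove matched mass
            d[e] -= c
            if d[e] <= 0:
                del d[e]
    return dict(lb)

lb = greedy_min_matching({6.0: 2, 4.0: 6, 2.0: 6, 0.0: 2}, {2.0: 8, 0.0: 8}, [0.5, 0.5])
print(sum(c * math.exp(e) for e, c in lb.items()))   # lower bound, ~134.27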
5.2
Lower bound
We also provide Algorithm 2 to compute the minimum matching when there are n = 2 parameters.
The proof of correctness is similar to that for Proposition 1.
Proposition 2. For n = 2, Algorithm 2 computes the minimum matching and its runtime is bounded by the total number of non-empty buckets \sum_i |\{E \mid n(E, \theta_i) > 0\}|.

For the minimum matching case, the induction argument does not apply and the result does not extend to the case n > 2. For that case, we can obtain a weaker lower bound by applying the reverse generalized Hölder inequality [15], obtaining from a different perspective a bound previously derived in [16]. Specifically, let s_1, \dots, s_{n-1} < 0 and s_n be such that \sum_i 1/s_i = 1. We then have

\min_\sigma Z(\bar{\theta}, \sigma) = \sum_x \prod_i y_i(\sigma_{\min,i}(x))^{\lambda_i} = \Big\| \prod_i y_i(\sigma_{\min,i}(x))^{\lambda_i} \Big\|_1    (7)
\ge \prod_i \big\| y_i(\sigma_{\min,i}(x))^{\lambda_i} \big\|_{s_i} = \prod_i \Big( \sum_x y_i(\sigma_{\min,i}(x))^{s_i \lambda_i} \Big)^{1/s_i} = \prod_i \Big( \sum_x y_i(x)^{s_i \lambda_i} \Big)^{1/s_i}.    (8)

Notice this result cannot be applied if y_i(x) = 0, i.e. if there are factors assigning probability zero (hard constraints) in the probabilistic model.
6
Empirical evaluation
To evaluate the quality of the bounds, we consider an Ising model from statistical physics, where given a graph (V, E), single node variables x_s, s \in V, are Bernoulli distributed (x_s \in \{0, 1\}), and the global random vector is distributed according to

p(x, \theta) = \frac{1}{Z(\theta)} \exp\Big( \sum_{s \in V} \theta_s x_s + \sum_{(i,j) \in E} \theta_{ij} 1_{\{x_i = x_j\}} \Big).

Figure 1 shows a simple 2 x 2 grid Ising model with exponential parameter \bar{\theta} = [0, 0, 0, 0, 1, 1, 1, 1] (\theta_s = 0 and \theta_{ij} = 1) decomposed as the convex sum of two parameters \theta_1 and \theta_2 corresponding to tractable distributions, i.e. \bar{\theta} = (1/2)\theta_1 + (1/2)\theta_2. The corresponding partition function is Z(\bar{\theta}) = 2 + 12\exp(2) + 2\exp(4) \approx 199.86. In panels 1(d) and 1(e) we report the corresponding densities of states n(E, \theta_1) and n(E, \theta_2) as histograms. For instance, for the model corresponding to \theta_1 there are only two global configurations (all variables positive and all negative) that give an energy of 6. It can be seen from the densities reported that Z(\theta_1) = 2 + 6\exp(2) + 6\exp(4) + 2\exp(6) \approx 1180.8, while Z(\theta_2) = 8 + 8\exp(2) \approx 67.11. The corresponding geometric average (obtained from the convexity of the log-partition function) is \sqrt{Z(\theta_1)} \sqrt{Z(\theta_2)} \approx 281.50. In panels 1(f) and 1(c) we show u_b and l_b computed using Algorithms 1 and 2, i.e. the solutions to the maximum and minimum matching problems, respectively. For instance, for the maximum matching case the 2 configurations with energy 6 from n(E, \theta_1) are matched with 2 of the 8 with energy 2 from n(E, \theta_2), giving an energy 6/2 + 2/2 = 4. Notice that u_b and l_b are not valid bounds on the individual densities of states themselves, but they nonetheless provide upper and lower bounds on the partition function as shown in the figure: \approx 248.01 and \approx 134.27, respectively. The bound (8) given by the reverse Hölder inequality with s_1 = -1, s_2 = 1/2 is \approx 126.22, while the mean field lower bound [4, 7] is \approx 117.91. In this case, the additional information provided by the density leads to tighter upper and lower bounds on the partition function.

[Figure 1: Decomposition of a 2 x 2 Ising model, the densities obtained with the maximum and minimum matching algorithms, and the corresponding upper and lower bounds on Z(\bar{\theta}). Panels: (a) graph for \theta_1; (b) graph for \theta_2 (edge weights 2); (c) Z_ub = 2 + 6e + 6e^3 + 2e^4; (d) histogram n(E, \theta_1); (e) histogram n(E, \theta_2); (f) Z_lb = 2e + 12e^2 + 2e^3.]

In Figure 2 we report the upper bounds obtained for several types of Ising models (in all cases, \theta_s = 0, i.e., there is no external field). In the two left plots, we consider an N x N square Ising model, once with attractive interactions (\theta_{ij} \in [0, w]) and once with mixed interactions (\theta_{ij} \in [-w, w]). In the two right plots, we use a complete graph (a clique) with N = 15 vertices. For each model, we compute the upper bound given by TRWBP (with edge appearance probabilities \rho_e based on a subset of 10 randomly selected spanning trees) and the mean-field bound using the implementations in libDAI [17]. We then compute the bound based on the maximum matching using the same set of spanning trees. For the grid case, we also use a combination of 2 spanning trees and compute the corresponding lower bound based on the minimum matching (notice it is not possible to cover all the edges in a clique with only 2 spanning trees). For each bound, we report the relative error, defined as (log(bound) - log(Z)) / log(Z), where Z is the true partition function, computed using the junction tree method.

[Figure 2: Relative error of the upper bounds. Panels: (a) 15 x 15 grid, attractive; (b) 10 x 10 grid, mixed; (c) 15-clique, attractive; (d) 15-clique, mixed.]
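The numbers in the 2 x 2 example can be reproduced by brute-force enumeration. In the sketch below, reading Figure 1 as splitting the 4-cycle into a 3-edge spanning tree (theta_1) and the remaining edge (theta_2), both with doubled weights, is our assumption.

```python
import itertools, math

edges = [(0, 1), (1, 3), (3, 2), (2, 0)]       # the 4-cycle of the 2x2 grid
theta1 = {e: 2.0 for e in edges[:3]}           # 3 edges with weight 2
theta2 = {edges[3]: 2.0}                       # 1 edge with weight 2

def Z(theta):
    total = 0.0
    for x in itertools.product((0, 1), repeat=4):
        energy = sum(w for (i, j), w in theta.items() if x[i] == x[j])
        total += math.exp(energy)
    return total

theta_bar = {e: 1.0 for e in edges}            # (theta1 + theta2) / 2
print(Z(theta_bar))                            # ~199.86
print(Z(theta1), Z(theta2))                    # ~1180.8, ~67.11
print(math.sqrt(Z(theta1) * Z(theta2)))        # convexity bound, ~281.50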
In these experiments, both our upper and lower bounds improve over the ones obtained with TRWBP [6] and mean-field respectively. The lower bound based on minimum matching visually overlaps with the mean-field bound and is thus omitted from Figure 2. It is, however, strictly better, even
if by a small amount. Notice that we might be able to get a better bound by choosing a different set of parameters \theta_i (which may be suboptimal for TRW-BP). By optimizing the parameters s_i in the reverse Hölder bound (8) using numerical optimization (BFGS and BOBYQA [18]), we were always able to obtain a lower bound at least as good as the one given by mean field.
7
Conclusions
We presented Density Propagation, a novel message passing algorithm for computing the density of states while exploiting the structure of the underlying graphical model. We showed that Density Propagation computes the exact density for tree-structured graphical models and is a
generalization of both Belief Propagation and Max-Product algorithms. We introduced a new family
of bounds on the partition function based on n-dimensional matching and tree decomposition but
without relying on convexity. The additional information provided by the density of states leads,
both theoretically and empirically, to tighter bounds than known convexity-based ones.
References
[1] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[2] S. Ermon, C. Gomes, A. Sabharwal, and B. Selman. Accelerated adaptive Markov chain for partition function computation. Neural Information Processing Systems, 2011.
[3] F. Wang and D.P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Physical Review Letters, 86(10):2050-2053, 2001.
[4] M.J. Wainwright. Stochastic processes on graphs with cycles: geometric and variational approaches. PhD thesis, Massachusetts Institute of Technology, 2002.
[5] M. Wainwright, T. Jaakkola, and A. Willsky. Exact MAP estimates by (hyper)tree agreement. Advances in Neural Information Processing Systems, pages 833-840, 2003.
[6] M.J. Wainwright. Tree-reweighted belief propagation algorithms and approximate ML estimation via pseudo-moment matching. In AISTATS, 2003.
[7] G. Parisi and R. Shankar. Statistical field theory. Physics Today, 41:110, 1988.
[8] L.D. Brown. Fundamentals of statistical exponential families: with applications in statistical decision theory. Institute of Mathematical Statistics, 1986.
[9] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107-136, 2006.
[10] Y. Weiss, C. Yanover, and T. Meltzer. MAP estimation, linear programming and belief propagation with convex free energies. In Uncertainty in Artificial Intelligence, 2007.
[11] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. Information Theory, IEEE Transactions on, 56(12):6294-6316, 2010.
[12] K.P. Murphy, Y. Weiss, and M.I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 467-475. Morgan Kaufmann Publishers Inc., 1999.
[13] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236-239, 2003.
[14] S.M. Aji and R.J. McEliece. The generalized distributive law. Information Theory, IEEE Transactions on, 46(2):325-343, 2000.
[15] W.S. Cheung. Generalizations of Hölder's inequality. International Journal of Mathematics and Mathematical Sciences, 26:7-10, 2001.
[16] Qiang Liu and Alexander Ihler. Negative tree reweighted belief propagation. In Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), pages 332-339, Corvallis, Oregon, 2010. AUAI Press.
[17] J.M. Mooij. libDAI: A free and open source C++ library for discrete approximate inference in graphical models. The Journal of Machine Learning Research, 11:2169-2173, 2010.
[18] M.J.D. Powell. The BOBYQA algorithm for bound constrained optimization without derivatives. University of Cambridge Technical Report, 2009.
4,115 | 4,724 |
Adaptive Learning of Smoothing Functions:
Application to Electricity Load Forecasting
Amadou Ba
IBM Research - Ireland
Mulhuddart, Dublin 15
[email protected]
Mathieu Sinn
IBM Research - Ireland
Mulhuddart, Dublin 15
[email protected]
Yannig Goude
EDF R&D
Clamart, France
[email protected]
Pascal Pompey
IBM Research - Ireland
Mulhuddart, Dublin 15
[email protected]
Abstract
This paper proposes an efficient online learning algorithm to track the smoothing
functions of Additive Models. The key idea is to combine the linear representation of Additive Models with a Recursive Least Squares (RLS) filter. In order to
quickly track changes in the model and put more weight on recent data, the RLS
filter uses a forgetting factor which exponentially weights down observations by
the order of their arrival. The tracking behaviour is further enhanced by using an
adaptive forgetting factor which is updated based on the gradient of the a priori
errors. Using results from Lyapunov stability theory, upper bounds for the learning rate are analyzed. The proposed algorithm is applied to 5 years of electricity
load data provided by the French utility company Electricit?e de France (EDF).
Compared to state-of-the-art methods, it achieves a superior performance in terms
of model tracking and prediction accuracy.
1
Introduction
Additive Models are a class of nonparametric regression methods which have been the subject of
intensive theoretical research and found widespread applications in practice (see [1]). This considerable attention comes from the ability of Additive Models to represent non-linear associations
between covariates and response variables in an intuitive way, and the availability of efficient training methods. The fundamental assumption of Additive Models is that the effect of covariates on
the dependent variable follows an additive form. The separate effects are modeled by smoothing
splines, which can be learned using penalized least squares.
A particularly fruitful field for the application of Additive Models is the modeling and forecasting
of short term electricity load. There exists a vast body of literature on this subject, covering methods
from statistics (Seasonal ARIMA models [2, 3], Exponential Smoothing [4], regression models
[5, 6, 7]) and, more recently, also from machine learning [8, 9, 10]. Additive Models were applied,
with good results, to the nation-wide load in France [11] and to regional loads in Australia [12].
Besides electricity load, Additive Models have also been applied to natural gas demand [13].
Several methods have been proposed to track time-varying behaviour of the smoothing splines in
Additive Models. Hoover et al. [14] examine estimators based on locally weighted polynomials and
derive some of their asymptotic properties. In a similar vein, Eubank et al. [15] introduce a Bayesian
approach which can handle multiple responses. A componentwise smoothing spline is suggested by
Chiang et al. [16]. Fan and Zhang [17] propose a two-stage algorithm which first computes raw
estimates of the smoothing functions at different time points and then smoothes the estimates. A
comprehensive review can be found in [18]. A common feature of all these methods is that they
identify and estimate the time-varying behaviour a posteriori.
Adaptive learning of Additive Models in an online fashion is a relatively new topic. In [19], an
algorithm based on iterative QR decompositions is proposed, which yields promising results for the
French electricity load but also highlights the need for a forgetting factor to be more reactive, e.g., to
macroeconomic and meteorological changes, or varying consumer portfolios. Harvey and Koopman
[20] propose an adaptive learning method which is restricted to changing periodic patterns. Adaptive
methods of a similar type have been studied in the field of neural networks [21, 22].
The contributions of our paper are threefold: First, we introduce a new algorithm which combines
Additive Models with a Recursive Least Squares (RLS) filter to track time-varying behaviour of the
smoothing splines. Second, in order to enhance the tracking ability, we consider filters that include a
forgetting factor which can be either fixed, or updapted using a gradient descent approach [23]. The
basic idea is to decrease the forgetting factor (and hence increase the reactivity) in transient phases,
and increasing the forgetting factor (thus decreasing the variability) during stationary regimes. Using
results from Lyapunov stability theory [24], we provide a theoretical analysis of the learning rate in
the gradient descent approach. Third, we evaluate the proposed methodology on 5 years of electricity
load data provided by the French utility company Electricit?e de France (EDF). The results show that
the adaptive learning algorithm outperforms state-of-the-art methods in terms of model tracking
and prediction accuracy. Moreover, the experiments demonstrate that using an adaptive forgetting
factor stabilizes the algorithm and yields results comparable to those obtained by using the (a priori
unknown) optimal value for a fixed forgetting factor. Note that, in this paper, we do not compare our
proposed algorithm with existing online learning methods from the machine learning literature, such
as tracking of best experts (see [25] for an overview). The reason is that we are specifically interested
in adaptive versions of Additive Models, which have been shown to be particularly well-suited for
modeling and forecasting electricity load.
The remainder of the paper is organized as follows. Section 2 reviews the definition of Additive
Models and provides some background on the spline representation of smoothing functions. In Section 3 we present our adaptive learning algorithms which combine Additive Models with a Recursive
Least Squares (RLS) filter. We discuss different approaches for including forgetting factors and analyze the learning rate for the gradient descent method in the adaptive forgetting factor approach.
A case study with real electricity load data from EDF is presented in Section 4. An outlook on
problems for future research concludes the paper.
2
Additive Models
In this section we review Additive Models and provide background information on the spline representation of smoothing functions. Additive Models have the following form:

y_k = \sum_{i=1}^{I} f_i(x_k) + \epsilon_k.

In this formulation, x_k is a vector of covariates which can be either categorical or continuous, and y_k is the dependent variable, which is assumed to be continuous. The noise term \epsilon_k is assumed to be Gaussian, independent and identically distributed with mean zero and finite variance. The
functions fi are the transfer functions of the model, which can be of the following types: constant
(exactly one transfer function, representing the intercept of the model), categorical (evaluating to 0
or 1 depending on whether the covariates satisfy certain conditions), or continuous. The continuous
transfer functions can be either linear functions of covariates (representing simple linear trends), or
smoothing splines. Typically, smoothing splines depend on only 1-2 of the continuous covariates.
An interesting possibility is to combine smoothing splines with categorical conditions; in the context
of electricity load modeling this allows, e.g., for having different effects of the time of the day
depending on the day of the week.
In our experiments, we use 1- and 2-dimensional cubic B-splines, which allows us to write the smoothing splines in the following form:

f_i(x_k) = \beta_i^T b_i(x_k) = \sum_{j=1}^{J_i} \beta_{ij} b_{ij}(x_k),    (1)

where the \beta_{ij} are the spline coefficients and the b_{ij} are the spline basis functions which depend on 1 or 2 components of x_k. Note that the basis functions are defined by a (fixed) sequence of knot points, while the coefficients are used to fit the spline to the data (see [1] for details). The quantity J_i in equation (1) is the number of spline coefficients associated with the transfer function f_i. Now, let \beta denote the stacked vector containing the spline coefficients, and b(x_k) the stacked vector containing the spline basis functions of all the transfer functions. This allows us to write the Additive Model in the following linear form:

y_k = \beta^T b(x_k) + \epsilon_k.    (2)

2.1
Learning Additive Models
The linear representation of Additive Models in (2) is the starting point for efficient learning algorithms. Consider K samples (x_k, y_k), k = 1, \dots, K, of covariates and dependent variables. Then an estimate of the model coefficients \beta can be obtained by solving the following weighted penalized least squares problem:

\hat{\beta}_K = \arg\min_{\beta} \Big\{ (y_K - B_K \beta)^T \Lambda_K (y_K - B_K \beta) + \beta^T S_K \beta \Big\}.    (3)

Here y_K = (y_1, y_2, \dots, y_K)^T is the K x 1 vector containing all the dependent variables, and B_K is the matrix with the rows b(x_1)^T, b(x_2)^T, \dots, b(x_K)^T containing the evaluated spline basis functions. The matrix \Lambda_K puts different weights on the samples. In this paper, we consider two scenarios: \Lambda_K is the identity matrix (putting equal weight on the K regressors), or a diagonal matrix which puts exponentially decreasing weights on the samples, according to the order of their arrival (thus giving rise to the notion of forgetting factors). The different weighting schemes are discussed in more detail in Section 3. The matrix S_K in (3) introduces a penalizing term in order to avoid overfitting of the smoothing splines. In this paper, we use diagonal penalizers not depending on the sample size K:

S = diag(\gamma, \gamma, \dots, \gamma),    (4)

where \gamma > 0. Note that this penalizer shrinks the smoothing splines towards zero functions, and the strength of this effect is tuned by \gamma. As a well-known fact (see [1]), provided that the matrix (B_K^T \Lambda_K B_K + S) is non-singular, the above least squares problem has the closed-form solution

\hat{\beta}_K = (B_K^T \Lambda_K B_K + S)^{-1} B_K^T \Lambda_K y_K.    (5)
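For concreteness, here is a minimal sketch of the batch estimator (5) for a single one-dimensional smoothing spline, using scipy's B-spline evaluation. The symbols gamma (penalizer strength) and lam (forgetting factor) follow the notation above; all numerical values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, degree=3):
    """Evaluate all cubic B-spline basis functions b_j at the points x."""
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, degree)(x)
    return B

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

knots = np.r_[[0.0] * 4, np.linspace(0.0, 1.0, 8), [1.0] * 4]  # clamped cubic knots
B = bspline_design(x, knots)

gamma, lam, K = 1e-2, 0.99, len(y)
Lam = np.diag(lam ** np.arange(K - 1, -1, -1.0))   # weights lam^{K-1}, ..., lam, 1
S = gamma * np.eye(B.shape[1])                     # diagonal penalizer (4)
beta_hat = np.linalg.solve(B.T @ Lam @ B + S, B.T @ Lam @ y)   # closed form (5)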
3
Adaptive learning of smoothing functions
Equation (5) gives rise to an efficient batch learning algorithm for Additive Models. Next, we
propose an adaptive method which allows us to track changes in the smoothing functions in an
online fashion. The basic idea is to combine the linear representation of Additive Models in (2) with
classical Recursive Least Squares (RLS) filters. To improve the tracking behaviour, we introduce a
forgetting factor which puts more weight on recent samples. See Algorithm 1 for details. As starting values, we choose \hat{\beta}_0 equal to an initial estimate of \beta (e.g., obtained in previous experiments), or equal to a zero vector if no prior information is available. The initial precision matrix P_0 is set equal to the inverse of the penalizer S in (4). Anytime while the algorithm is running, the current estimate \hat{\beta}_k can be used to compute predictions for new given covariates.
Let us discuss the role of the forgetting factor \lambda in the adaptive learning algorithm. First, note that Algorithm 1 is equivalent to the solution of the weighted least squares problem in (5) with the weighting matrix \Lambda_K = diag(\lambda^{K-1}, \lambda^{K-2}, \dots, \lambda^2, \lambda, 1) and the penalizer S as defined in (4). If \lambda = 1, all samples are weighted equally. For \lambda < 1, samples are discounted exponentially according to the order of their arrival. In general, a smaller forgetting factor improves the tracking of temporal changes in the model coefficients \beta. This reduction of the bias typically comes at the cost of an increase of the variance. Therefore, finding the right balance between the forgetting factor \lambda and the strength \gamma of the penalizer in (4) is crucial for a good performance of the forecasting algorithm.
Algorithm 1 Adaptive learning (fixed forgetting factor)
1: Input: Initial estimate \hat{\beta}_0, forgetting factor \lambda \in (0, 1], penalizer strength \gamma > 0.
2: Compute the initial precision matrix P_0 = diag(\gamma^{-1}, \gamma^{-1}, \dots, \gamma^{-1}).
3: for k = 1, 2, \dots do
4:    Obtain new covariates x_k and dependent variable y_k.
5:    Compute the spline basis functions b_k = b(x_k).
6:    Compute the a priori error and the Kalman gain:

         \hat{\epsilon}_k = y_k - b_k^T \hat{\beta}_{k-1},
         g_k = \frac{P_{k-1} b_k}{\lambda + b_k^T P_{k-1} b_k}.

7:    Update the estimate and the precision matrix:

         \hat{\beta}_k = \hat{\beta}_{k-1} + g_k \hat{\epsilon}_k,
         P_k = \lambda^{-1} \big( P_{k-1} - g_k b_k^T P_{k-1} \big).

8: end for
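A minimal Python sketch of Algorithm 1; `basis` stands for the stacked spline basis evaluation b(x_k) and is assumed to be provided by the caller.

```python
import numpy as np

def rls_fixed(stream, basis, n_coef, lam=0.999, gamma=1e-2):
    """stream yields (x_k, y_k); basis(x) returns b(x_k) as a 1-D array.
    Yields the running estimate beta_hat_k and the a priori error."""
    beta = np.zeros(n_coef)             # beta_hat_0 (no prior information)
    P = np.eye(n_coef) / gamma          # P_0 = S^{-1}
    for x, y in stream:
        b = basis(x)
        err = y - b @ beta                        # a priori error
        g = P @ b / (lam + b @ P @ b)             # Kalman gain
        beta = beta + g * err                     # estimate update
        P = (P - np.outer(g, b @ P)) / lam        # precision matrix update
        yield beta, err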
Algorithm 2 Adaptive learning (adaptive forgetting factor)
1: Input: Initial estimate \hat{\beta}_0, initial forgetting factor \lambda_0 \in (0, 1], lower bound for the forgetting factor \lambda_{min} \in (0, 1], learning rate \eta > 0, penalizer strength \gamma > 0.
2: Same as Step 2 in Algorithm 1.
3: Set \psi_0 equal to a zero vector and \Psi_0 to the identity matrix.
4: for k = 1, 2, \dots do
5:    Same as Steps 4-6 in Algorithm 1, with \lambda_{k-1} instead of \lambda.
6:    Update the forgetting factor:

         \lambda_k = \lambda_{k-1} + \eta \, b_k^T \psi_{k-1} \, \hat{\epsilon}_k.

7:    If \lambda_k > 1, then set \lambda_k equal to 1. If \lambda_k < \lambda_{min}, then set \lambda_k equal to \lambda_{min}.
8:    Same as Step 7 in Algorithm 1, with \lambda_k instead of \lambda.
9:    Compute the updates (where I denotes the identity matrix):

         \Psi_k = \lambda_k^{-1} \big( I - g_k b_k^T \big) \Psi_{k-1} \big( I - b_k g_k^T \big) - \lambda_k^{-1} P_k + \lambda_k^{-1} g_k g_k^T,
         \psi_k = \big( I - g_k b_k^T \big) \psi_{k-1} + \Psi_k b_k \hat{\epsilon}_k.

10: end for
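And a sketch of the adaptive variant. The recursions for psi and Psi follow the reconstruction of Algorithm 2 above (the exact placement of the lambda^{-1} factors should be checked against [23]); the learning rate could additionally be clipped per step using the stability bound derived in Section 3.2.

```python
import numpy as np

def rls_adaptive(stream, basis, n_coef, lam0=0.999, lam_min=0.95,
                 eta=1e-4, gamma=1e-2):
    beta = np.zeros(n_coef)
    P = np.eye(n_coef) / gamma
    psi = np.zeros(n_coef)              # gradient of beta_hat w.r.t. lambda
    Psi = np.eye(n_coef)                # gradient of P w.r.t. lambda (step 3)
    lam = lam0
    I = np.eye(n_coef)
    for x, y in stream:
        b = basis(x)
        err = y - b @ beta
        g = P @ b / (lam + b @ P @ b)
        lam = float(np.clip(lam + eta * (b @ psi) * err, lam_min, 1.0))  # steps 6-7
        beta = beta + g * err
        P = (P - np.outer(g, b @ P)) / lam
        I_gb = I - np.outer(g, b)
        Psi = (I_gb @ Psi @ I_gb.T - P + np.outer(g, g)) / lam           # step 9
        psi = I_gb @ psi + Psi @ b * err
        yield beta, err, lam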
3.1
Adaptive forgetting factors
In this section we present a modification of Algorithm 1 which uses adaptive forgetting factors in order to improve the stability and the tracking behaviour. The basic idea is to choose a large forgetting factor during stationary regimes (when the a priori errors are small), and small forgetting factors during transient phases (when the a priori error is large). In this paper we adopt the gradient descent approach in [23] and update the forgetting factor according to the following formula:

\lambda_k = \lambda_{k-1} - \eta \, \frac{\partial E[\hat{\epsilon}_k^2]}{\partial \lambda_{k-1}}.

Searching in the direction of the partial derivative of E[\hat{\epsilon}_k^2] with respect to \lambda_{k-1} aims at minimizing the expected value of the a priori errors. The learning rate \eta > 0 determines the reactivity of the algorithm: if it is high, then the errors lead to large decreases of the forgetting factor, and vice versa. The details of the adaptive forgetting factor approach are given in Algorithm 2.

Note that \lambda_k is updated in an iterative fashion based on \psi_k (the gradient of the estimate \hat{\beta}_k with respect to \lambda_{k-1}), and on \Psi_k (the gradient of the precision matrix P_k with respect to \lambda_{k-1}).
3.2
Stability analysis
In the following, we apply results from Lyapunov stability theory to analyze the effect of the learning rate \eta. We show how to derive analytical bounds for \eta that guarantee stability of the algorithm.

Recall the definition of the a priori error, \hat{\epsilon}_k = y_k - b_k^T \hat{\beta}_{k-1}. As equilibrium point of our algorithm, we consider the ideal situation \hat{\epsilon}_k = 0. We choose the candidate Lyapunov function V(\hat{\epsilon}_k) = \hat{\epsilon}_k^2 / 2. Clearly, the following conditions are satisfied: if x = 0 then V(x) = 0; if x \neq 0 then V(x) > 0; and V(x) \to \infty as x \to \infty. Consider the discrete time derivative \Delta V(\hat{\epsilon}_k) = V(\hat{\epsilon}_{k+1}) - V(\hat{\epsilon}_k) of the candidate Lyapunov function. According to Lyapunov stability theory, if V(\hat{\epsilon}_k) > 0 and \Delta V(\hat{\epsilon}_k) < 0, then V(\hat{\epsilon}_k) converges to zero as k tends to infinity.

Let us analyze \Delta V(\hat{\epsilon}_k) more closely. Using the relation \hat{\epsilon}_{k+1} = \Delta\hat{\epsilon}_k + \hat{\epsilon}_k we arrive at

\Delta V(\hat{\epsilon}_k) = \Delta\hat{\epsilon}_k \Big( \frac{1}{2} \Delta\hat{\epsilon}_k + \hat{\epsilon}_k \Big).    (6)

Next we approximate \Delta\hat{\epsilon}_k by its first order Taylor series expansion:

\Delta\hat{\epsilon}_k = \frac{\partial \hat{\epsilon}_k}{\partial \lambda_k} \, \Delta\lambda_k.    (7)

Furthermore, note that

\frac{\partial \hat{\epsilon}_k}{\partial \lambda_k} = -b_k^T \psi_{k-1}    and    \Delta\lambda_k = \eta \, \hat{\epsilon}_k \, b_k^T \psi_{k-1}.    (8)

Substituting the expressions in (7) and (8) back into (6), we obtain the approximation

\Delta V(\hat{\epsilon}_k) = -\eta \, \hat{\epsilon}_k (b_k^T \psi_{k-1})^2 \Big( -\frac{1}{2} \eta \, \hat{\epsilon}_k (b_k^T \psi_{k-1})^2 + \hat{\epsilon}_k \Big).

After some basic algebraic manipulations we arrive at the approximation

\Delta V(\hat{\epsilon}_k) = \frac{1}{2} \eta \, \hat{\epsilon}_k^2 (b_k^T \psi_{k-1})^2 \Big( \eta (b_k^T \psi_{k-1})^2 - 2 \Big).    (9)

Now it is easy to see that an (approximate) equivalent condition for Lyapunov stability is given by

0 < \eta < \frac{2}{(b_k^T \psi_{k-1})^2}.

4
Case study: Forecasting of electricity load
In this section, we apply our adaptive learning algorithms to real electricity load data provided by
the French utility company Électricité de France (EDF). Modeling and forecasting electricity load
is a challenging task due to the non-linear effects, e.g., of the temperature and the time of the day.
Moreover, the electricity load exhibits many non-stationary patterns, e.g., due to changing macroeconomic conditions (leading to an increase/decrease in electricity demand), or varying customer
portfolios resulting from the liberalization of European electricity markets. The performance on
these highly complex, non-linear and non-stationary learning tasks is a challenging benchmark for
our adaptive algorithms.
4.1
Experimental data
The dependent variables yk in the data provided by EDF represent half-hourly electricity load measurements between February 2, 2006 and April 6, 2011. The covariates xk include the following
information:
x_k = \big( x_k^{DayType}, x_k^{TimeOfDay}, x_k^{TimeOfYear}, x_k^{Temperature}, x_k^{CloudCover}, x_k^{LoadDecrease} \big).
Let us explain these components in more detail:
- x_k^{DayType} is a categorical variable representing the day type: 1 for Sunday, 2 for Monday, 3 for Tuesday-Wednesday-Thursday, 4 for Friday, 5 for Saturday, and 6 for bank holidays.
- x_k^{TimeOfDay} is the index (in half-hourly time steps) of the current time within the day. Its values range from 0 for 0.00 am to 47 for 11.30 pm.
- x_k^{TimeOfYear} is the position of the current day within the year (taking values from 0 for January 1, to 1 for December 31).
- x_k^{Temperature} and x_k^{CloudCover} represent the temperature and the cloud cover (ranging from 0 for a blue sky to 8 for overcast). These meteorological covariates have been provided by Météo France; the raw data include temperature and cloud cover data recorded every 3 hours from 26 weather stations all over France. We interpolate these measurements to obtain half-hourly data. A weighted average (the weights reflecting the importance of a region in terms of the national electricity load) is computed to obtain the national temperature and cloud cover covariates.
- x_k^{LoadDecrease} contains information about the activation of contracts between EDF and some big customers to reduce the electricity load during peak days.
We partition the data into two sets: a training set from February 2, 2006 to August 31, 2010, and a
test set from September 1, 2010 to April 6, 2011.
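As an illustration, the calendar covariates above could be derived from timestamps as follows; the bank-holiday set is a stand-in, and the exact conventions used by EDF may differ.

```python
from datetime import datetime

FRENCH_BANK_HOLIDAYS = set()   # stand-in; populate with the French calendar

def day_type(ts: datetime) -> int:
    """1 Sunday, 2 Monday, 3 Tue-Thu, 4 Friday, 5 Saturday, 6 bank holiday."""
    if ts.date() in FRENCH_BANK_HOLIDAYS:
        return 6
    return {6: 1, 0: 2, 1: 3, 2: 3, 3: 3, 4: 4, 5: 5}[ts.weekday()]

def time_of_day(ts: datetime) -> int:
    """Half-hour index: 0 for 0.00 am, ..., 47 for 11.30 pm."""
    return ts.hour * 2 + ts.minute // 30

def time_of_year(ts: datetime) -> float:
    """Position within the year, roughly 0 on January 1 and 1 on December 31
    (leap years ignored for simplicity)."""
    return (ts - datetime(ts.year, 1, 1)).days / 364.0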
4.2
Modeling the electricity load
We use the following Additive Model for the electricity load:
y_k = \alpha^{Intercept} + f^{Trend}(k) + f^{LagLoad}(y_{k-48}) + \sum_{l=1}^{6} 1(x_k^{DayType} = l) \big( \alpha_l^{DayType} + f_l^{TimeOfDay}(x_k) \big)
    + f^{CloudCover}(x_k) + f^{Temperature/TimeOfDay}(x_k) + f^{LagTemperature}(x_{k-48})
    + f^{TimeOfYear}(x_k) + x_k^{LoadDecrease} f^{LoadDecrease}(x_k) + \epsilon_k.
Let us explain the model in more detail:
- The intercept $\beta^{\mathrm{Intercept}}$ models the base load, and $f^{\mathrm{Trend}}(k)$ captures non-linear trends, e.g., due to the economic crisis and changes in the customer portfolios of EDF.
- $f^{\mathrm{LagLoad}}(y_{k-48})$ takes into account the electricity load of the previous day.
- $\beta_l^{\mathrm{DayType}}$ and $f_l^{\mathrm{TimeOfDay}}(x_k)$ capture the day-type specific effects of the time of the day.
- $f^{\mathrm{CloudCover}}(x_k)$ and $f^{\mathrm{Temperature/TimeOfDay}}(x_k)$ represent respectively the effect of the cloud cover and the bivariate effect of the temperature and the time of the day.
- The term $f^{\mathrm{LagTemperature}}(x_{k-48})$ takes into account the temperature of the previous day, which is important to capture the thermal inertia of buildings.
- $f^{\mathrm{TimeOfYear}}(x_k)$ represents yearly cycles, and $x_k^{\mathrm{LoadDecrease}}\, f^{\mathrm{LoadDecrease}}(x_k)$ models the effect of contracts to reduce peak loads depending on the time of the day.
To fit the model we use the R package mgcv (see [26, 27]). For more information about the design of models for electricity data we refer to [19, 11]. Figure 1 shows the estimated joint effect of the temperature and the time of the day, and the estimated yearly cycle. As is to be expected, low (resp. high) temperatures lead to an increase of the electricity load due to electrical heating (resp. cooling), whereas temperatures between 10°C and 20°C have almost no effect on the electricity load. Due to the widespread usage of electrical heating and relatively low usage of air conditioning in France, the effect of heating is approximately four times higher than the effect of cooling. The yearly cycle reveals a strong decrease of the electricity load during the summer and Christmas holidays (around 0.6 and 1 of the time of the year). Note that the scales of the effects have been normalized for data confidentiality reasons.
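For readers unfamiliar with the penalized-spline machinery underlying such fits, here is a minimal numpy sketch of a single smooth term, assuming a stand-in basis matrix and a second-difference roughness penalty (this is an illustration of the general idea, not mgcv's actual internals):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 12                        # observations, basis functions
x = np.linspace(0.0, 1.0, n)
B = np.vander(x, m)                   # stand-in for a spline basis matrix
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(m), n=2, axis=0)   # second-difference operator
P = D.T @ D                           # roughness penalty matrix
lam = 10.0                            # penalizer strength

# Penalized least squares: (B'B + lam * P) beta = B'y
beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
fitted = B @ beta
```

Increasing lam in this sketch mirrors the effect discussed later: it reduces the variability of the smoothing splines at the cost of added bias.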
The fitted model achieves a good performance on the training data set with an adjusted R-square of
0.993, a Mean Absolute Percentage Error (MAPE) of 1.4%, and a Root Mean Square Error (RMSE)
of 835 MW. All the incorporated effects yield significant improvements in terms of the Generalized
Cross Validation (GCV) score, so the model size cannot be reduced. The fitted model consists of
268 spline basis coefficients, which indicates the complexity of modeling electricity load data.
Figure 1: Effect of the temperature and the time of the day (left) and yearly cycle (right). [Left panel: estimated surface over time of day (Instant) and Temperature; right panel: estimated yearly cycle. The plotted content is not recoverable from the extracted text.]
4.3 Adaptive learning and forecasting

We compare the performance of five different algorithms:
- The offline method (denoted by ofl) uses the model learned in R and applies it to the test data without updating the model parameters.
- The fixed forgetting factor method (denoted by fff) updates the Additive Model using a fixed forgetting factor (see Algorithm 1). The value of the fixed forgetting factor and the strength of the penalizer are determined in the following way: we divide the test set into two parts of equal length, a calibration set (September 1, 2010 - November 15, 2010) and a validation set (November 16, 2010 - April 6, 2011). We choose the combination of forgetting factor and penalizer strength which yields the best results on the calibration set in terms of MAPE and RMSE, and evaluate the performance on the validation set.
- The post-fixed forgetting factor method (denoted by post-fff) uses the fixed forgetting factor and strength of the penalizer which yield the best performance on the validation set. This "ideal" parameterization gives us an upper bound for the performance of the fff method and a benchmark for the adaptive forgetting factor approaches.
- The adaptive forgetting factor method (denoted by aff) uses Algorithm 2.
- Finally, we evaluate an adaptive approach that optimizes the values of the forgetting factor and the penalizer strength on a grid (denoted by affg). For each combination on the grids (0.995, 0.996, ..., 0.999) and (1000, 2000, ..., 10000) we run fixed forgetting factor algorithms in parallel. At each time point we choose the combination which so far has given the best performance in terms of MAPE; a sketch follows this list.
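A minimal Python sketch of this grid strategy follows. The learner objects with predict/update methods are a hypothetical interface standing in for the fixed-forgetting-factor algorithm, not the paper's implementation:

```python
import numpy as np

GRID = [(ff, lam) for ff in (0.995, 0.996, 0.997, 0.998, 0.999)
                  for lam in range(1000, 11000, 1000)]

def run_affg(stream, make_learner):
    # Run one fixed-forgetting-factor learner per grid point "in parallel";
    # at each step forecast with the combination best so far in terms of MAPE.
    learners = [make_learner(ff, lam) for ff, lam in GRID]
    abs_pct_err = np.zeros(len(GRID))
    forecasts = []
    for k, (x, y) in enumerate(stream):
        preds = [l.predict(x) for l in learners]
        best = int(np.argmin(abs_pct_err)) if k > 0 else 0
        forecasts.append(preds[best])
        for i, l in enumerate(learners):
            abs_pct_err[i] += abs((preds[i] - y) / y)
            l.update(x, y)
    return forecasts
```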
4.4 Results

The performance of all five algorithms is evaluated on the validation set from November 16, 2010 to April 6, 2011. Table 1 shows the results in terms of MAPE and RMSE. As can be seen, the adaptive forgetting factor method (aff) achieves the best performance. It even outperforms the post-fff method, which uses the (a priori unknown) optimal combination of penalizer strength and fixed forgetting factor. The improvements over the offline approach (which doesn't update the model parameters) are significant both in terms of the MAPE (about 0.2%) and the RMSE (about 100 MW). This corresponds to an improvement of approximately 10% in terms of the day-ahead forecasting error. Figure 2 (left) shows the cumulative sum of the errors of the five forecasting algorithms. As can be seen, the offline approach suffers from a strong positive bias and tends to overestimate the electricity load over time. In fact, there was a decrease in the electricity demand over the considered time horizon due to the economic crisis. The adaptive forgetting factor shows a much better tracking behaviour and is able to adapt to the change in the demand patterns.
The graph on the right hand side of Figure 2 illustrates the roles of the forgetting factor and of the strength of the penalizer. Values of the forgetting factor close to 1 result in reduced tracking behaviour and less improvement over the offline approach. Choosing too small values for the forgetting factor can lead to loss of information and instabilities of the algorithm. Increasing the penalizer reduces the variability of the smoothing splines; however, it also introduces a bias as the splines are shrunk towards zero.
Table 1: Performance of the five different forecasting methods

    Method      MAPE (%)   RMSE (MW)
    ofl         1.83       1185
    fff         2.28       1869
    affg        1.70       1124
    aff         1.63       1071
    post-fff    1.64       1073

Figure 2: Cumulative sum of the errors (left) and results for different choices of the forgetting factor and the strength of the penalizer (right). [Left panel: normalized cumulative errors of ofl, fff, affg, aff and post-fff over time; the plotted content is not recoverable from the extracted text.]
5 Conclusions and future work
We have presented an adaptive learning algorithm that updates the smoothing functions of Additive Models in an online fashion. We have introduced methods to improve the tracking behaviour
based on forgetting factors and analyzed theoretical properties using results from Lyapunov stability
theory. The significance of the proposed algorithms was demonstrated in the context of forecasting
electricity load data. Modeling and forecasting electricity load data is particularly challenging due
to the high complexity of the models (the Additive Models in our experiments included 268 spline
basis functions), the non-linear relation between the covariates and dependent variables, and the
non-stationary dynamics of the models. Experiments on 5 years of data from Électricité de France have shown the superior performance of algorithms using an adaptive forgetting factor. As it turned out, a crucial point is to find the right combination of forgetting factors and the strength of the penalizer. While forgetting factors tend to reduce the bias of models evolving over time, they typically increase the variance, an effect which can be compensated by choosing a stronger penalizer. Our future research will follow two directions: first, we plan to consider dynamic penalizers which can automatically adapt to changes in the model complexity. Second, we will develop methods for incorporating prior information on model components, e.g., by integrating beliefs for the initial values of the adaptive algorithms.
References
[1] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Second Edition, Springer, 2009.
[2] J. Nowicka-Zagrajek and R. Weron. Modeling electricity loads in California: ARMA models with hyperbolic noise. Signal Processing, pages 1903-1915, 2002.
[3] Shyh-Jier Huang and Kuang-Rong Shih. Short-Term Load Forecasting Via ARMA Model Identification Including Non-Gaussian Process Considerations. IEEE Transactions on Power Systems, 18(2):673-679, 2003.
[4] James W. Taylor. Short-Term Load Forecasting with Exponentially Weighted Methods. IEEE Transactions on Power Systems, 27(1):673-679, 2012.
[5] Derek W. Bunn and E. D. Farmer. Comparative Models for Electrical Load Forecasting. Wiley, New York, 1985.
[6] R. Campo and P. Ruiz. Adaptive Weather-Sensitive Short Term Load Forecast. IEEE Transactions on Power Systems, 2(3):592-598, 1987.
[7] Ramu Ramanathan, Robert Engle, Clive W. J. Granger, Farshid Vahid-Araghi, and Casey Brace. Short-run forecasts of electricity loads and peaks. International Journal of Forecasting, 13(3):161-174, 1997.
[8] Bo-Juen Chen, Ming-Wei Chang, and Chih-Jen Lin. Load Forecasting Using Support Vector Machines: A Study on EUNITE Competition 2001. IEEE Transactions on Power Systems, 19(3):1821-1830, 2004.
[9] Shu Fan and Luonan Chen. Short-term load forecasting based on an adaptive hybrid method. IEEE Transactions on Power Systems, 21(1):392-401, 2006.
[10] V. H. Hinojosa and A. Hoese. Short-Term Load Forecasting Using Fuzzy Inductive Reasoning and Evolutionary Algorithms. IEEE Transactions on Power Systems, 25(1):565-574, 2010.
[11] A. Pierrot and Yannig Goude. Short-term electricity load forecasting with generalized additive models. In Proceedings of ISAP Power, pages 593-600, 2011.
[12] Shu Fan and R. Hyndman. Short-Term Load Forecasting Based on a Semi-Parametric Additive Model. IEEE Transactions on Power Systems, 27(1):134-141, 2012.
[13] M. Brabec, O. Konár, M. Malý, M. Pelikán, and J. Vondráček. A statistical model for natural gas standardized load profiles. Journal of the Royal Statistical Society: Series C (Applied Statistics), 58(1):123-139, 2009.
[14] Donald R. Hoover, John A. Rice, Colin O. Wu, and Li-Ping Yang. Nonparametric smoothing estimates of time-varying coefficient models with longitudinal data. Biometrika, 85(4):809-822, 1998.
[15] R. L. Eubank, Chunfeng Huang, Y. Munoz Maldonado, and R. J. Buchanan. Smoothing spline estimation in varying coefficient models. Journal of the Royal Statistical Society, 66(3):653-667, 2004.
[16] Chin-Tsang Chiang, John A. Rice, and Colin O. Wu. Smoothing spline estimation for varying coefficient models with repeatedly measured dependent variables. Journal of the American Statistical Association, 96(454):605-619, 2001.
[17] Jianqing Fan and Jin-Ting Zhang. Two-Step Estimation of Functional Linear Models with Applications to Longitudinal Data. Journal of the Royal Statistical Society, 62:303-322, 2000.
[18] Jianqing Fan and Wenyang Zhang. Statistical methods with varying coefficient models. Statistics and Its Interface, 1:179-195, 2008.
[19] S. Wood, Y. Goude, and S. Shaw. Generalized Additive Models. Preprint, 2011.
[20] A. Harvey and S. J. Koopman. Forecasting Hourly Electricity Demand Using Time-Varying Splines. Journal of the American Statistical Association, 88(424):1228-1236, 1993.
[21] Herbert Jaeger. Adaptive non-linear system identification with echo state networks. In Proc. Advances in Neural Information Processing Systems, pages 593-600, 2002.
[22] Mauro Birattari, Gianluca Bontempi, and Hugues Bersini. Lazy learning meets the recursive least squares algorithm. In Proc. Advances in Neural Information Processing Systems, pages 375-381, 1999.
[23] S.-H. Leung and C. F. So. Gradient-Based Variable Forgetting Factor RLS Algorithm in Time-Varying Environments. IEEE Transactions on Signal Processing, 53(8):3141-3150, 2005.
[24] Z. Man, H. R. Wu, S. Liu, and X. Yu. A New Adaptive Backpropagation Algorithm Based on Lyapunov Stability Theory for Neural Network. IEEE Transactions on Neural Networks, 17(6):1580-1591, 2006.
[25] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[26] Simon Wood. Generalized Additive Models: an Introduction with R. Chapman and Hall, 2006.
[27] Simon Wood. mgcv: GAMs and Generalized Ridge Regression for R. R News, 1(2):20-25, 2001.
Local Supervised Learning through Space
Partitioning
Venkatesh Saligrama
Dept. of Electrical and Computer Engineering
Boston University
Boston, MA 02116
[email protected]
Joseph Wang
Dept. of Electrical and Computer Engineering
Boston University
Boston, MA 02116
[email protected]
Abstract
We develop a novel approach for supervised learning based on adaptively partitioning the feature space into different regions and learning local region-specific
classifiers. We formulate an empirical risk minimization problem that incorporates both partitioning and classification into a single global objective. We show
that space partitioning can be equivalently reformulated as a supervised learning
problem and consequently any discriminative learning method can be utilized in
conjunction with our approach. Nevertheless, we consider locally linear schemes
by learning linear partitions and linear region classifiers. Locally linear schemes
can not only approximate complex decision boundaries and ensure low training
error but also provide tight control on over-fitting and generalization error. We
train locally linear classifiers by using LDA, logistic regression and perceptrons,
and so our scheme is scalable to large data sizes and high-dimensions. We present
experimental results demonstrating improved performance over state of the art
classification techniques on benchmark datasets. We also show improved robustness to label noise.
1 Introduction
We develop a novel approach for supervised learning based on adaptively partitioning the feature
space into different regions and learning local region classifiers. Fig. 1 (left) presents one possible
architecture of our scheme (others are also possible). Here each example passes through a cascade
of reject classifiers (gj ?s). Each reject classifier, gj , makes a binary decision and the observation is
either classified by the associated region classifier, fj , or passed to the next reject classifier. Each
reject classifier, gj , thus partitions the feature space into regions. The region classifier fj operates
only on examples within the local region that is consistent with the reject classifier partitions.
We incorporate both feature space partitioning (reject classifiers) and region-specific classifiers into
a single global empirical risk/loss function. We then optimize this global objective by means of coordinate descent, namely, by optimizing over one classifier at a time. In this context we show that each
step of the coordinate descent can be reformulated as a supervised learning problem that seeks to optimize a 0/1 empirical loss function. This result is somewhat surprising in the context of partitioning
and has broader implications. First, we can now solve feature space partitioning through empirical
risk function minimization(ERM) and so powerful existing methods including boosting, decision
trees and kernel methods can be used in conjunction for training flexible partitioning classifiers.
Second, because data is usually locally ?well-behaved,? simpler region-classifiers, such as linear
classifiers, often suffice for controlling local empirical error. Furthermore, since complex boundaries
for partitions can be approximated by piecewise linear functions, feature spaces can be partitioned
to arbitrary degree of precision using linear boundaries (reject classifiers). Thus the combination
of piecewise linear partitions along with linear region classifiers has the ability to adapt to complex
data sets leading to low training error. Yet we can prevent overfitting/overtraining by optimizing the
Figure 1: Left: Architecture of our system. Reject classifiers, $g_j(x)$, partition space and region classifiers, $f_j(x)$, are applied locally within the partitioned region. Right: Comparison of our approach (upper panel) against AdaBoost and decision trees (lower panel) on the banana dataset [1]. We use linear perceptrons and logistic regression for training the partitioning classifier and region classifiers. Our scheme splits with 3 regions and does not overtrain, unlike AdaBoost. [Panel labels: Local Perceptron, Local Logistic Regression, g1(x), g2(x), AdaBoost, Decision Tree; the plotted content is not recoverable from the extracted text.]
number of linear partitions and linear region classifiers, since the VC dimension of such a structure
is reasonably small. In addition this also ensures significant robustness to label noise. Fig. 1 (right)
demonstrates the substantial benefits of our approach on the banana dataset[1] over competing methods such as boosting and decision trees, both of which evidently overtrain.
Limiting reject and region classifiers to linear methods has computational advantages as well. Since
the datasets are locally well-behaved we can locally train with linear discriminant analysis (LDA),
logistic regression and variants of perceptrons. These methods are computationally efficient in that
they scale linearly with data size and data dimension. So we can train on large high-dimensional
datasets with possible applications to online scenarios.
Our approach naturally applies to multi-class datasets. Indeed, we present some evidence that shows
that the partitioning step can adaptively cluster the dataset into groups and letting region classifiers
to operate on simpler problems. Additionally linear methods such as LDA, Logistic regression, and
perceptron naturally extend to multi-class problems leading to computationally efficient and statistically meaningful results as evidenced on challenging datasets with performance improvements over
state of the art techniques.
1.1 Related Work
Our approach fits within the general framework of combining simple classifiers for learning complex
structures. Boosting algorithms [2] learn complex decision boundaries characterized as a weighted
linear combination of weak classifiers. In contrast our method takes unions and intersections of
simpler decision regions to learn more complex decision boundaries. In this context our approach is
closely related to decision trees. Decision trees are built by greedily partitioning the feature space
[3]. One main difference is that decision trees typically attempt to greedily minimize some loss or a
heuristic, such as region purity or entropy, at each split/partition of the feature space. In contrast our
method attempts to minimize global classification loss. Also decision trees typically split/partition
a single feature/component resulting in unions of rectangularly shaped decision regions; in contrast
we allow arbitrary partitions leading to complex decision regions.
Our work is loosely related to so-called coding techniques that have been used in multi-class classification [4, 5]. In these methods a multiclass problem is decomposed into several binary problems
using a code matrix and the predicted outcomes of these binary problems are fused to obtain multiclass labels. Jointly optimizing for the code matrix and binary classification is known to be NP
hard [6] and iterative techniques have been proposed [7, 8]. There is some evidence (see Sec. 3)
that suggests that our space partitioning classifier groups/clusters multiple classes into different regions; nevertheless our formulation is different in that we do not explicitly code classes into different
regions and our method does not require fusion of intermediate outcomes.
Despite all these similarities, at a fundamental level, our work can also be thought of as a somewhat
complementary method to existing supervised learning algorithms. This is because we show that
space partitioning itself can be re-formulated as a supervised learning problem. Consequently, any
existing method, including boosting and decision trees, could be used as a method of choice for
learning space partitioning and region-specific decision functions.
We use simple linear classifiers for partitioning and region-classifiers in many of our experiments.
Using piecewise combinations of simple functions to model a complex global boundary is a well
studied problem. Mixture Discriminant Analysis (MDA), proposed by Hastie et al. [9], models
each class as a mixture of gaussians, with linear discriminant analysis used to build classifiers between estimated gaussian distributions. MDA relies upon the structure of the data, assuming that
the true distribution is well approximated by a mixture of Gaussians. Local Linear Discriminant
Analysis (LLDA) , proposed by Kim et al. [10], clusters the data and performs LDA within each
cluster. Both of these approaches partition the data then attempt to classify locally. Partitioning of
the data is independent of the performance of the local classifiers, and instead based upon the spatial
structure of the data. In contrast, our proposed approach partitions the data based on the performance
of classifiers in each region. A recently proposed alternative approach is to build a global classifier
ignoring clusters of errors, and building separate classifiers in each error cluster region [11]. This
proposed approach greedily approximates a piecewise linear classifier in this manner, however fails
to take into account the performance of the classifiers in the error cluster regions. While piecewise linear techniques have been proposed in the past [12, 13], we are unaware of techniques that
learn piecewise linear classifiers based on minimizing global ERM and allows any discriminative
approach to be used for partitioning and local classification, and also extends to multiclass learning
problems.
2 Learning Space Partitioning Classifiers
The goal of supervised classification is to learn a function, $f(x)$, that maps features, $x \in \mathcal{X}$, to a discrete label, $y \in \{1, 2, \ldots, c\}$, based on training data, $(x_i, y_i)$, $i = 1, 2, \ldots, n$. The empirical risk/loss of classifier $f$ is:
$$R(f) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{f(x_i)\neq y_i\}}$$
Our goal is empirical risk minimization (ERM), namely, to minimize $R(f)$ over all classifiers, $f(\cdot)$, belonging to some class $\mathcal{F}$. It is well known that the complexity of the family $\mathcal{F}$ dictates generalization errors. If $\mathcal{F}$ is too simple, it often leads to large bias errors; if the family $\mathcal{F}$ is too rich, it often leads to large variance errors. With this perspective we consider a family of classifiers (see Fig. 1) that adaptively partitions data into regions and fits simple classifiers within each region. We
predict the output for a test sample, x, based on the output of the trained simple classifier associated
with the region x belongs to. The complexity of our family of classifiers depends on the number
of local regions, the complexity of the simple classifiers in each region, and the complexity of the
partitioning. In the sequel we formulate space partitioning and region-classification into a single
objective and show that space partitioning is equivalent to solving a binary classification problem
with 0/1 empirical loss.
2.1 Binary Space Partitioning as Supervised Learning
In this section we consider learning binary space partitioning for ease of exposition. The function $g(\cdot)$ partitions the space by mapping features, $x \in \mathcal{X}$, to a binary label, $z \in \{0, 1\}$. Region classifiers $f_0(x), f_1(x)$ operate on the respective regions generated by $g(x)$ (see Fig. 1). The empirical risk/loss associated with the binary space partitioned classifiers is given by:
$$R(g, f_0, f_1) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{g(x_i)=0\}}\,\mathbf{1}_{\{f_0(x_i)\neq y_i\}} + \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{g(x_i)=1\}}\,\mathbf{1}_{\{f_1(x_i)\neq y_i\}} \qquad (1)$$
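Evaluating this partitioned risk is direct in code. A small numpy sketch (the classifiers are assumed to expose a scikit-learn-style predict method; this interface is our assumption):

```python
import numpy as np

def partitioned_risk(g, f0, f1, X, y):
    # Empirical risk of Eq. (1): each example is charged to the region
    # classifier that the partition g routes it to.
    z = g.predict(X)
    errors = np.where(z == 0, f0.predict(X) != y, f1.predict(X) != y)
    return float(np.mean(errors))
```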
Our goal is to minimize the empirical error jointly over the family of functions $g(\cdot) \in \mathcal{G}$ and $f_i(\cdot) \in \mathcal{F}$. From the above equation, when the partitioning function $g(\cdot)$ is fixed, it is clear how one can view the choice of classifiers $f_0(\cdot)$ and $f_1(\cdot)$ as ERM problems. In contrast, even when $f_0, f_1$ are fixed, it is unclear how to view minimization over $g \in \mathcal{G}$ as an ERM. To this end let $\ell_i^0, \ell_i^1$ indicate whether or not classifier $f_0, f_1$ makes an error on example $x_i$, and let $S$ denote the set of instances where the classifier $f_0$ makes errors, namely,
$$\ell_i^0 = \mathbf{1}_{\{f_0(x_i)\neq y_i\}}, \qquad \ell_i^1 = \mathbf{1}_{\{f_1(x_i)\neq y_i\}}, \qquad S = \{i \mid \ell_i^0 = 1\} \qquad (2)$$
We can then rewrite Eq. 1 as follows:
$$\begin{aligned}
R(g, f_0, f_1) &= \frac{1}{n}\sum_{i=1}^{n} \ell_i^0\, \mathbf{1}_{\{g(x_i)=0\}} + \frac{1}{n}\sum_{i=1}^{n} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}} \\
&= \frac{1}{n}\sum_{i\in S} \mathbf{1}_{\{g(x_i)=0\}} + \frac{1}{n}\sum_{i\in S} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}} + \frac{1}{n}\sum_{i\notin S} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}} \\
&= \frac{1}{n}\sum_{i\in S} \mathbf{1}_{\{g(x_i)=0\}} + \frac{1}{n}\sum_{i\in S} \ell_i^1\,\big(1 - \mathbf{1}_{\{g(x_i)=0\}}\big) + \frac{1}{n}\sum_{i\notin S} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}} \\
&= \frac{1}{n}\sum_{i\in S} \big(1 - \ell_i^1\big)\,\mathbf{1}_{\{g(x_i)=0\}} + \underbrace{\frac{1}{n}\sum_{i\in S} \ell_i^1}_{\text{indep. of } g} + \frac{1}{n}\sum_{i\notin S} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}}
\end{aligned}$$
Note that for optimizing $g \in \mathcal{G}$ for fixed $f_0, f_1$, the second term above is constant. Furthermore, as a consequence of Eq. 2 we see that the first and third terms can be further simplified as follows:
$$\frac{1}{n}\sum_{i\in S} \big(1-\ell_i^1\big)\,\mathbf{1}_{\{g(x_i)=0\}} = \frac{1}{n}\sum_{i\in S} \big(1-\ell_i^1\big)\,\mathbf{1}_{\{g(x_i)\neq \ell_i^0\}}; \qquad \frac{1}{n}\sum_{i\notin S} \ell_i^1\, \mathbf{1}_{\{g(x_i)=1\}} = \frac{1}{n}\sum_{i\notin S} \ell_i^1\, \mathbf{1}_{\{g(x_i)\neq \ell_i^0\}}$$
Putting all this together we have the following lemma:

Lemma 2.1. For a fixed $f_0, f_1$ the problem of choosing the best binary space partitions, $g(\cdot)$, in Eq. 1 is equivalent to choosing a binary classifier $g$ that optimizes the following 0/1 (since $w_i \in \{0, 1\}$) empirical loss function:
$$\tilde{R}(g) = \frac{1}{n}\sum_{i=1}^{n} w_i\, \mathbf{1}_{\{g(x_i)\neq \ell_i^0\}}, \qquad \text{where} \quad w_i = \begin{cases} 1, & \ell_i^0 \neq \ell_i^1 \\ 0, & \text{otherwise} \end{cases}$$

The composite classifier $F(x)$ based on the reject and region classifiers can be written compactly as $F(x) = f_{g(x)}(x)$. We observe several aspects of our proposed scheme:
(1) Binary partitioning is a binary classification problem on the training set, $(x_i, \ell_i^0)$, $i = 1, 2, \ldots, n$.
(2) The 0/1 weight, $w_i = 1$, is non-zero if and only if the classifiers disagree on $x_i$, i.e., $f_0(x_i) \neq f_1(x_i)$.
(3) The partitioning error is zero on a training example $x_i$ with weight $w_i = 1$ if we choose $g(x_i) = 0$ on examples where $f_0(x_i) = y_i$. In contrast, if $f_0(x_i) \neq y_i$ the partitioning error can be reduced by choosing $g(x_i) = 1$, and thus rejecting the example from consideration by $f_0$.
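A direct reading of Lemma 2.1 as code (a sketch; predict is an assumed scikit-learn-style interface, not prescribed by the paper):

```python
import numpy as np

def partition_problem(f0, f1, X, y):
    # Build the weighted binary problem of Lemma 2.1: train g on targets
    # l0 (the error indicators of f0) with weight 1 only where f0 and f1
    # disagree; everywhere else the choice of g does not affect the risk.
    l0 = (f0.predict(X) != y).astype(int)
    l1 = (f1.predict(X) != y).astype(int)
    w = (l0 != l1).astype(int)
    return X, l0, w
```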
2.2 Surrogate Loss Functions, Algorithms and Convergence
An important implication of Lemma 2.1 is that we can now use powerful learning techniques such
as decision trees, boosting and SVMs for learning space partitioning classifiers. Our method is a
coordinate descent scheme which optimizes over a single variable at a time. Each step is an ERM
and so any learning method can be used at each step.
Convergence Issues: It is well known that indicator losses are hard to minimize, even when the class of classifiers, $\mathcal{F}$, is nicely parameterized. Many schemes are based on minimizing surrogate losses. These surrogate losses are upper bounds for indicator losses and usually attempt to obtain large margins. Our coordinate descent scheme in this context is equivalent to describing surrogates for each step and minimizing these surrogates. This means that our scheme may not converge, let alone converge to a global minimum, even when the surrogates at each step are nice and convex. This is because even though each surrogate upper bounds the indicator loss functions at each step, when put together they do not upper bound the global objective of Eq. 1. Consequently, we need a global surrogate to ensure that the solution does converge. Loss functions are most conveniently thought of in terms of margins. For notational convenience, in this section we will consider the case where the partition classifier, $g$, maps to labels $\ell \in \{-1, 1\}$, where a label of $-1$ and $1$ indicates classification by $f_0$ and $f_1$, respectively. We seek functions $\phi(z)$ that satisfy $\mathbf{1}_{\{z \geq 0\}} \leq \phi(z)$. Many such surrogates can be constructed using sigmoids, exponentials, etc. Consider the classification function $g(x) = \mathrm{sign}(h(x))$. The empirical error can then be upper bounded: $\mathbf{1}_{\{\ell g(x) = -1\}} = \mathbf{1}_{\{-\ell h(x) \geq 0\}} \leq \phi(-\ell h(x))$. We then form a global surrogate for the empirical loss function. Approximating the indicator functions of the empirical risk/loss in Eq. 1 with surrogate functions, the global surrogate is given by:
$$\tilde{R}(g, f_0, f_1) = \frac{1}{n}\sum_{i=1}^{n} \phi(h(x_i))\,\phi(y_i f_0(x_i)) + \frac{1}{n}\sum_{i=1}^{n} \phi(-h(x_i))\,\phi(y_i f_1(x_i)), \qquad (3)$$
which is an upper bound on Eq. 1. Optimizing the partitioning function $g(\cdot)$ can be posed as a supervised learning problem, resulting in the following lemma (see Supplementary for a proof):

Lemma 2.2. For a fixed $f_0, f_1$ the problem of choosing the best binary space partitions, $g(\cdot)$, in Eq. 3 is equivalent to choosing a binary classifier $h$ that optimizes a surrogate function $\phi(\cdot)$:
$$\tilde{R}(g) = \frac{1}{2n}\sum_{i=1}^{2n} w_i\, \phi\big(h(x_i)\, r_i\big), \qquad r_i = \begin{cases} 1, & i < n+1 \\ -1, & \text{otherwise} \end{cases}, \qquad w_i = \begin{cases} \phi(f_0(x_i)\, y_i), & i < n+1 \\ \phi(f_1(x_i)\, y_i), & \text{otherwise} \end{cases}$$
Theorem 2.3. For any continuous surrogate $\phi(\cdot)$, performing alternating minimization on the classifiers $f_0$, $f_1$, and $g$ converges to a local minimum of Eq. 3, with a loss upper-bounding the empirical loss defined by Eq. 1.
Proof. This follows directly, as this is coordinate descent on a smooth cost function.
2.3 Multi-Region Partitioning
Lemma 2.1 can be used to also reduce multi-region space partitioning to supervised learning. We
can obtain this reduction in one of several ways. One approach is to use pairwise comparisons,
training classifiers to decide between pairs of regions. Unfortunately, the number of different reject
classifiers scales quadratically, so we instead employ a greedy partitioning scheme using a cascade
classifier.
Fig. 1 illustrates a recursively learnt three-region space partitioning classifier. In general the regions are defined by a cascade of binary reject classifiers, $g_k(x)$, $k \in \{1, 2, \ldots, r-1\}$, where $r$ is the number of classification regions. Region classifiers, $f_k(x)$, $k \in \{1, 2, \ldots, r\}$, map observations in the associated region to labels. At stage $k$, if $g_k(x) = 0$, an observation is classified by the region classifier, $f_k(x)$; otherwise the observation is passed to the next stage of the cascade. At the last reject classifier in the cascade, if $g_{r-1}(x) = 1$, the observation is passed to the final region classifier, $f_r(x)$. This ensures that only $r-1$ reject classifiers have to be trained for $r$ regions.
Now define, for an arbitrary instance $(x, y)$ and fixed $\{g_j\}, \{f_j\}$, the 0/1 loss function at each stage $k$:
$$L_k(x, y) = \begin{cases} \mathbf{1}_{\{g_k(x)=0\}}\,\mathbf{1}_{\{f_k(x)\neq y\}} + \mathbf{1}_{\{g_k(x)=1\}}\, L_{k+1}(x, y) & \text{if } k < r \\ \mathbf{1}_{\{f_r(x)\neq y\}} & \text{if } k = r \end{cases} \qquad (4)$$
We observe that $L_k(x, y) \in \{0, 1\}$ and is equal to zero if the example is classified correctly at the current or future stages and one otherwise. Consequently, the aggregate 0/1 empirical risk/loss is the average loss over all training points at stage 1, namely,
$$R(g_1, g_2, \ldots, g_{r-1}, f_1, f_2, \ldots, f_r) = \frac{1}{n}\sum_{i=1}^{n} L_1(x_i, y_i) \qquad (5)$$
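The recursion in Eq. (4) translates directly into code. A sketch with 0-indexed lists of fitted classifiers (the predict interface is assumed for illustration):

```python
def stage_loss(k, x, y, rejects, regions):
    # 0/1 loss L_k of Eq. (4); rejects has r-1 entries, regions has r.
    if k == len(regions) - 1:                      # last stage: k = r
        return int(regions[k].predict([x])[0] != y)
    if rejects[k].predict([x])[0] == 0:            # g_k accepts: f_k classifies
        return int(regions[k].predict([x])[0] != y)
    return stage_loss(k + 1, x, y, rejects, regions)
```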
In the expression above we have made the dependence on reject classifiers and region classifiers explicit. We minimize Eq. 5 over all $g_j, f_j$ by means of coordinate descent, namely, to optimize $g_k$ we hold $f_j$, $\forall j$ and $g_j$, $j \neq k$, fixed. Based on the expressions derived above, the coordinate descent steps for $g_k$ and $f_k$ reduce respectively to:
$$g_k(\cdot) = \operatorname*{argmin}_{g \in \mathcal{G}} \frac{1}{n}\sum_{i=1}^{n} C_k(x_i)\, L_k(x_i, y_i), \qquad f_k(\cdot) = \operatorname*{argmin}_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} C_k(x_i)\,\mathbf{1}_{\{f(x_i)\neq y_i\}}\,\mathbf{1}_{\{g_k(x_i)=0\}} \qquad (6)$$
where $C_j(x) = \mathbf{1}_{\{\bigwedge_{i=1}^{j-1} g_i(x)=1\}}$ denotes whether or not an example makes it to the $j$th stage. The optimization problem for $f_k(\cdot)$ is exactly the standard 0/1 empirical loss minimization over training data that survived up to stage $k$.
Algorithm 1 Space Partitioning Classifier
Input: Training data, $\{(x_i, y_i)\}_{i=1}^{n}$, number of classification regions, $r$
Output: Composite classifier, $F(\cdot)$
Initialize: Assign points randomly to r regions
while F not converged do
for j = 1, 2, . . . , r do
Train region classifier fj (x) to optimize 0/1 empirical loss of Eq. (6).
end for
for k = r ? 1, r ? 2, . . . , 2, 1 do
Train reject classifier gk (x) to optimize 0/1 empirical loss of Eq. (7).
end for
end while
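A hedged two-region ($r = 2$) instantiation of Algorithm 1 using logistic regression (one of the local schemes used in the experiments below) might look as follows. This is a sketch rather than the authors' code, and it assumes both regions stay non-empty and contain both classes throughout training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_two_region(X, y, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, size=len(y))       # random initial partition
    g = f0 = f1 = None
    for _ in range(iters):
        f0 = LogisticRegression().fit(X[z == 0], y[z == 0])
        f1 = LogisticRegression().fit(X[z == 1], y[z == 1])
        l0 = (f0.predict(X) != y).astype(int)  # errors of f0
        l1 = (f1.predict(X) != y).astype(int)  # errors of f1
        w = l0 != l1                           # Lemma 2.1 weights
        if not w.any():                        # classifiers agree everywhere
            break
        g = LogisticRegression().fit(X[w], l0[w])
        z = g.predict(X)                       # new partition
    return g, f0, f1
```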
On the other hand, the optimization problem for $g_k$ is exactly in the form where Lemma 2.1 applies. Consequently, we can also reduce this problem to a supervised learning problem:
$$g_k(\cdot) = \operatorname*{argmin}_{g \in \mathcal{G}} \frac{1}{n}\sum_{i=1}^{n} w_i\, \mathbf{1}_{\{g(x_i)\neq \ell_i\}}, \qquad (7)$$
where
$$\ell_i = \begin{cases} 0 & \text{if } f_k(x_i) = y_i \\ 1 & \text{if } f_k(x_i) \neq y_i \end{cases} \qquad \text{and} \qquad w_i = \begin{cases} 1, & \ell_i \neq L_{k+1}(x_i, y_i),\ C_k(x_i) \neq 0 \\ 0, & \text{otherwise} \end{cases}$$
The composite classifier $F(x)$ based on the reject and region classifiers can be written compactly as follows:
$$F(x) = f_s(x), \qquad s = \min\big(\{j \mid g_j(x) = 0\} \cup \{r\}\big) \qquad (8)$$
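Eq. (8) corresponds to a short prediction routine; a sketch over lists of fitted classifiers (interfaces assumed, as before):

```python
def predict_cascade(x, rejects, regions):
    # Composite classifier of Eq. (8): classify with the first region whose
    # reject classifier returns 0; fall through to the last region otherwise.
    for g, f in zip(rejects, regions):
        if g.predict([x])[0] == 0:
            return f.predict([x])[0]
    return regions[-1].predict([x])[0]
```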
Observe that if the $k$th region classifier correctly classifies the example $x_i$, i.e., $f_k(x_i) = y_i$, then this would encourage $g_k(x_i) = 0$. This is because $g_k(x_i) = 1$ would induce an increased cost in terms of increasing $L_{k+1}(x_i, y_i)$. Similarly, if the $k$th region classifier incorrectly classifies, namely, $f_k(x_i) \neq y_i$, the optimization would prefer $g_k(x_i) = 1$. Also note that if the $k$th region classifier as well as the subsequent stages are incorrect on an example, then the weight on that example is zero. This is not surprising since reject/no-reject does not impact the global cost. We can deal with minimizing indicator losses and the resulting convergence issues by deriving a global surrogate as we did in Sec. 2.2. A pseudo-code for the proposed scheme is described in Algorithm 1.
2.4 Local Linear Classification
Linear classification is a natural method for learning local decision boundaries, with the global decision regions approximated by piecewise linear functions. In local linear classification, local classifiers, $f_1, f_2, \ldots, f_r$, and reject classifiers, $g_1, g_2, \ldots, g_{r-1}$, are optimized over the set of linear functions. Local linear rules can effectively tradeoff bias and variance error. Bias error (empirical error) can be made arbitrarily small by approximating the decision boundary by many local linear classifiers. Variance error (classifier complexity) can be made small by restricting the number of local linear classifiers used to construct the global classifier. This idea is based on the relatively small VC-dimension of a binary local linear classifier, namely,

Figure 2: Local LDA classification regions for XOR data; the black line is the reject classifier boundary.
Theorem 2.4. The VC-dimension of the class composed (Eq. 8) with $r-1$ linear classifiers $g_j$ and $r$ linear classifiers $f_j$ in a $d$-dimensional space is bounded by $2(2r-1)\log(e(2r-1))(d+1)$.
The VC-dimension of local linear classifiers grows linearly with dimension and nearly linearly with
respect to the number of regions. This is seen from Fig. 1. In practice, few regions are necessary to
achieve low training error as highly non-linear decision boundaries can be approximated well locally
with linear boundaries. For example, consider 2-D XOR data. Learning the local linear classifier
with 2 regions using LDA produces a classifier with small empirical error. In fact our empirical
observation can be translated to a theorem (see Supplementary for details):
Theorem 2.5. Consider an idealized XOR, namely, samples are concentrated into four equal clusters at coordinates $(-1, 1), (1, 1), (1, -1), (-1, -1)$ in a 2D space. Then with high probability (where the probability is with respect to the initial sampling of the reject region) a two-region composite classifier trained locally using LDA converges to zero training error.
In general, training linear classifiers on the indicator loss is impractical. Optimization on the nonconvex problem is difficult and usually leads to non-unique optimal solutions. Although margin
based methods such as SVMs can be used, we primarily use relatively simple schemes such as
LDA, logistic regression, and average voted perceptron in our experiments. We use each of these
schemes for learning both reject and region-classifiers. These schemes enjoy significant computational advantages over other schemes.
Computational Costs of LDA, Logistic Regression and Perceptron: Each LDA classifier is trained in $O(nd^2)$ computations, where $n$ is the number of training observations and $d$ is the dimension of the training data. As a result, the total computation cost per iteration of the local linear classifier with LDA scales linearly with respect to the number of training samples, requiring $O(nd^2 r)$ computations per iteration, where $r$ is the number of classification regions. Similarly, the computational cost of training a single linear classifier by logistic regression scales as $O(ncd^2)$ for a fixed number of iterations, with the local linear classifier training time scaling as $O(rncd^2)$ computations per iteration, where $c$ is the number of classes. A linear variant of the voted perceptron was implemented by taking the average of the weights generated by the unnormalized voted perceptron [15]. Training each perceptron for a fixed number of epochs is extremely efficient, requiring only $O(ndc)$ computations to train. Therefore, training the local linear perceptron scales linearly with data size and dimensions, with $O(ndcr)$ computations per iteration.
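For concreteness, one common way to realize the averaged variant of the voted perceptron described above is sketched below (labels in {-1, +1}, fixed number of epochs; a sketch, not the authors' exact implementation):

```python
import numpy as np

def averaged_perceptron(X, y, epochs=5):
    # Train a linear perceptron for a fixed number of epochs and return the
    # average of the weight iterates, as in the averaged voted perceptron.
    w = np.zeros(X.shape[1])
    w_sum = np.zeros_like(w)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w = w + yi * xi
            w_sum += w
    return w_sum / (epochs * len(y))
```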
3 Experimental Results
Multiclass Classification: Experimental results on six datasets from the UCI repository [16] were
performed using the benchmark training and test splits associated with each data set, as shown in
Table 1. Confidence intervals are not possible with the results, as the predefined training and test
splits were used. Although confidence intervals cannot be computed by multiple training/test splits,
test set error bounds [17] show that with test data sets of these sizes, the difference between true error
and empirical error is small with high probability. The six datasets tested were: Isolet (d=617, c= 26,
n=6238, T=1559), Landsat (d=36, c=7, n=4435, T=2000), Letter (d=16, c=26, n=16000, T=4000),
Optdigit (d=64, c=10, n=3823, T=1797), Pendigit (d=16, n=10, n=7494, T=3498), and Shuttle (d=9,
c=7, n=43500, T=14500), where d is the dimensions, c the number of classes, n training data size
and T the number of test samples.
Local linear classifiers were trained with LDA, logistic regression, and perceptron (mean of weights)
used to learn local surrogates for the rejection and local classification problems. The classifiers were
initialized with 5 classification regions (r = 5), with the trained classifiers often reducing to fewer
classification regions due to empty rejection region. Termination of the algorithm occurred when the
rejection outputs, gk (x), and classification labels, F (x), remained consistent on the training data for
two iterations. Each classifier was randomly initialized 15 times, and the classifier with the minimum
training error was chosen. Results were compared with Mixture Discriminant Analysis (MDA)
Figure 3: Histogram of classes over test data for the Optdigit dataset in different partitions generated by our approach using the linear voted perceptron. [One panel per partition, labeled g1(x) through g5(x); the plotted content is not recoverable from the extracted text.]
[9] and classification trees trained using the Gini diversity index (GDI) [3]. These classification
algorithms were chosen for comparison as both train global classifiers modeled as simple local
classifiers, and both are computationally efficient.
For comparison to globally complex classification techniques, previous state-of-the-art boosting results of Saberian and Vasconcelos [18] and Zhu et al. [19] were listed. Although the multiclass
boosted classifiers were terminated early, we consider the comparison appropriate, as early termination limits the complexity of the classifiers. The improved performance of local linear learning
of comparable complexity justifies approximating these boundaries by piecewise linear functions.
Comparison with kernelized SVM was omitted, as SVM is rarely applied to multiclass learning
on large datasets. Training each binary kernelized classifier is computationally intensive, and on
weakly learnable data, boosting also allows for modeling of complex boundaries with arbitrarily
small empirical error.
Table 1: Multiclass learning algorithm test errors on six UCI datasets using benchmark training and test sets. Bold indicates best test error among listed algorithms. One vs All AdaBoost is trained using decision stumps as weak learners. AdaBoost-SAMME and GD-MCBoost are trained using depth-2 decision trees as weak learners.

    Algorithm                   Isolet   Landsat  Letter   Optdigit  Pendigit  Shuttle
    One vs All AdaBoost [2]     11.10%   16.10%   37.37%   12.24%    11.29%    0.11%
    GDI Tree [3]                20.59%   14.45%   14.37%   14.58%    8.78%     0.04%
    MDA [9]                     35.98%   36.45%   22.73%   9.79%     7.75%     9.59%
    AdaBoost-SAMME [19]         39.00%   20.20%   44.35%   22.47%    16.18%    0.30%
    GD-MCBoost [18]             15.72%   13.35%   40.35%   7.68%     7.06%     0.27%
    Local Classifiers:
      LDA                       5.58%    13.95%   24.45%   5.78%     6.60%     2.67%
      Logistic Regression       19.95%   14.00%   13.08%   7.74%     4.75%     1.19%
      Perceptron                5.71%    20.15%   20.40%   4.23%     4.32%     0.32%
In 4 of the 6 datasets, local linear classification produced the lowest classification error on test
datasets, with optimal test errors within 0.6% of the minimal test error methods for the remaining
two datasets. Also there is evidence that suggests that our scheme partitions multiclass problems
into simpler subproblems. We plotted histogram output of class labels for Optdigit dataset across
different regions using local perceptrons (Fig. 3). The histogram is not uniform across regions,
implying that the reject classifiers partition easily distinguishable classes. We may interpret our
approach as implicitly learning data-dependent codes for multiclass problems. This can be contrasted
with many state of the art boosting techniques, such as [18], which attempt to optimize both the
codewords for each class as well as the binary classification problems defining the codewords.
Figure 4: Test error for different values of label noise. Left: Wisconsin Breast Cancer data, Middle: Vertebrae
data, and Right: Wine data.
Robustness to Label Noise: Local linear classification trained using LDA, logistic regression, and
averaged voted perceptron was tested in the presence of random label noise. A randomly selected
fraction of all training observations were given incorrect labels, and trained as described for the
multiclass experiments. Three datasets were chosen from the UCI repository [16]: Wisconsin Breast
Cancer data, Vertebrae data, and Wine data. A training set of 100 randomly selected observations
was used, with the remainder of the data used as test. For each label noise fraction, 100 randomly
drawn training and test sets were used, and the average test error is shown in Fig. 4.
For comparison, results are shown for classification trees trained according to Gini's diversity index
(GDI) [3], AdaBoost trained with stumps [2], and support vector machines trained on Gaussian radial basis function kernels. Local linear classification, notably when trained using LDA, is extremely
robust to label noise. In comparison, boosting and classification trees show sensitivity to label noise,
with the test error increasing at a faster rate than LDA-trained local linear classification on both the
Wisconsin Breast Cancer data and Vertebrae data.
Acknowledgments
This research was partially supported by NSF Grant 0932114.
References
[1] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, August 1998. Submitted to Machine Learning.
[2] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[3] Leo Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 1984.
[4] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.
[5] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. J. Mach. Learn. Res., 1:113-141, September 2001.
[6] Koby Crammer and Yoram Singer. On the learnability and design of output codes for multiclass problems. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 35-46, 2000.
[7] Venkatesan Guruswami and Amit Sahai. Multiclass learning, boosting, and error-correcting codes. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, COLT '99, pages 145-155, New York, NY, USA, 1999. ACM.
[8] Yijun Sun, Sinisa Todorovic, Jian Li, and Dapeng Wu. Unifying the error-correcting and output-code AdaBoost within the margin framework. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 872-879, New York, NY, USA, 2005. ACM.
[9] Trevor Hastie and Robert Tibshirani. Discriminant analysis by Gaussian mixtures. Journal of the Royal Statistical Society, Series B, 58:155-176, 1996.
[10] Tae-Kyun Kim and Josef Kittler. Locally linear discriminant analysis for multimodally distributed classes for face recognition with a single model image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:318-327, 2005.
[11] Ofer Dekel and Ohad Shamir. There's a hole in my data space: Piecewise predictors for heterogeneous learning problems. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 15, 2012.
[12] Juan Dai, Shuicheng Yan, Xiaoou Tang, and James T. Kwok. Locally adaptive classification piloted by uncertainty. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 225-232, New York, NY, USA, 2006. ACM.
[13] Marc Toussaint and Sethu Vijayakumar. Learning discontinuities with products-of-sigmoids for switching between local models. In Proceedings of the 22nd International Conference on Machine Learning, pages 904-911. ACM Press, 2005.
[14] Eduardo D. Sontag. VC dimension of neural networks. In Neural Networks and Machine Learning, pages 69-95. Springer, 1998.
[15] Yoav Freund and Robert E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37:277-296, 1999.
[16] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[17] J. Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6(1):273, 2006.
[18] Mohammad J. Saberian and Nuno Vasconcelos. Multiclass boosting: Theory and algorithms. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2124-2132. 2011.
[19] Ji Zhu, Hui Zou, Saharon Rosset, and Trevor Hastie. Multi-class AdaBoost, 2009.
Convergence and Energy Landscape for Cheeger Cut
Clustering
Thomas Laurent
University of California, Riversize
Riverside, CA 92521
[email protected]
Xavier Bresson
City University of Hong Kong
Hong Kong
[email protected]
David Uminsky
University of San Francisco
San Francisco, CA 94117
[email protected]
James H. von Brecht
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
This paper provides both theoretical and algorithmic results for the ℓ1-relaxation
of the Cheeger cut problem. The ℓ2-relaxation, known as spectral clustering, only
loosely relates to the Cheeger cut; however, it is convex and leads to a simple optimization problem. The ℓ1-relaxation, in contrast, is non-convex but is provably
equivalent to the original problem. The ℓ1-relaxation therefore trades convexity
for exactness, yielding improved clustering results at the cost of a more challenging optimization. The first challenge is understanding convergence of algorithms.
This paper provides the first complete proof of convergence for algorithms that
minimize the ℓ1-relaxation. The second challenge entails comprehending the ℓ1 energy landscape, i.e. the set of possible points to which an algorithm might
converge. We show that ℓ1-algorithms can get trapped in local minima that are
not globally optimal and we provide a classification theorem to interpret these local minima. This classification gives meaning to these suboptimal solutions and
helps to explain, in terms of graph structure, when the ℓ1-relaxation provides the
solution of the original Cheeger cut problem.
1 Introduction
Partitioning data points into sensible groups is a fundamental problem in machine learning. Given a
set of data points V = {x_1, . . . , x_n} and similarity weights {w_{i,j}}, 1 ≤ i, j ≤ n, we consider the balanced
Cheeger cut problem [4]:

    Minimize  C(S) = ( Σ_{x_i∈S} Σ_{x_j∈S^c} w_{i,j} ) / min(|S|, |S^c|)    over all subsets S ⊊ V.    (1)
Here |S| denotes the number of data points in S and S^c is the complementary set of S in V. While
this problem is NP-hard, it has the following exact continuous ℓ1-relaxation:

    Minimize  E(f) = ( (1/2) Σ_{i,j} w_{i,j} |f_i − f_j| ) / ( Σ_i |f_i − med(f)| )    over all non-constant functions f : V → R.    (2)

Here med(f) denotes the median of f ∈ R^n and f_i := f(x_i). Recently, various algorithms have
been proposed [12, 6, 7, 1, 9, 5] to minimize ℓ1-relaxations of the Cheeger cut (1) and of other
related problems. Typically these ℓ1-algorithms provide excellent unsupervised clustering results
and improve upon the standard ℓ2 (spectral clustering) method [10, 13] in terms of both Cheeger
energy and classification error. However, complete theoretical guarantees of convergence for such
algorithms do not exist. This paper provides the first proofs of convergence for ℓ1-algorithms that
attempt to minimize (2).
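To fix ideas, the following short Python sketch (our illustration, not code from the paper; the toy graph W and all function names are assumptions) evaluates the combinatorial cut value C(S) of (1) and the relaxed energy E(f) of (2) on a small weighted graph, and checks that the relaxation reproduces the cut value on a binary indicator function.

    import numpy as np

    def cut_value(W, S):
        # Balanced Cheeger cut C(S) of (1); S is a boolean membership vector.
        S = np.asarray(S, dtype=bool)
        cut = W[np.ix_(S, ~S)].sum()                # weight crossing (S, S^c)
        return cut / min(S.sum(), (~S).sum())       # divide by min(|S|, |S^c|)

    def relaxed_energy(W, f):
        # l1 relaxation E(f) of (2) for a vertex function f.
        f = np.asarray(f, dtype=float)
        tv = 0.5 * np.sum(W * np.abs(f[:, None] - f[None, :]))   # total variation
        return tv / np.sum(np.abs(f - np.median(f)))             # balance term

    # Two loose clusters {0,1} and {2,3}, weakly connected.
    W = np.array([[0.0, 2.0, 0.1, 0.0],
                  [2.0, 0.0, 0.0, 0.1],
                  [0.1, 0.0, 0.0, 2.0],
                  [0.0, 0.1, 2.0, 0.0]])
    S = [True, True, False, False]
    print(cut_value(W, S))                      # 0.1
    print(relaxed_energy(W, [1, 1, -1, -1]))    # 0.1: agrees on this indicator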
In this work we consider two algorithms for minimizing (2). We present a new steepest descent (SD)
algorithm and also consider a slight modification of the inverse power method (IPM) from [6]. We
provide convergence results for both algorithms and also analyze the energy landscape. Specifically,
we give a complete classification of local minima. This understanding of the energy landscape
provides intuition for when and how the algorithms get trapped in local minima. Our numerical
experiments show that the two algorithms perform equally well with respect to the quality of the
achieved cut. Both algorithms produce state of the art unsupervised clustering results. Finally, we
remark that the SD algorithm has a better theoretical guarantee of convergence. This arises from
the fact that the distance between two successive iterates necessarily converges to zero. In contrast,
we cannot guarantee this holds for the IPM without further assumptions on the energy landscape.
The simpler mathematical structure of the SD algorithm also provides better control of the energy
descent.
Both algorithms take the form of a fixed point iteration f^{k+1} ∈ A(f^k), where f ∈ A(f) implies
that f is a critical point. To prove convergence towards a fixed point typically requires three key
ingredients: the first is monotonicity of A, that is E(z) ≤ E(f) for all z ∈ A(f); the second
is some estimate that guarantees the successive iterates remain in a compact domain on which E
is continuous; lastly, some type of continuity of the set-valued map A is required. For set-valued
maps, closedness provides the correct notion of continuity [8]. Monotonicity of the IPM algorithm
was proven in [6]. This property alone is not enough to obtain convergence, and the closedness
property proves the most challenging ingredient to establish for the algorithms we consider. Section
2 elucidates the form these properties take for the SD and IPM algorithms. In Section 3 we show
that if the iterates of either algorithm approach a neighborhood of a strict local minimum then
both algorithms will converge to this minimum. We refer to this property as local convergence.
When the energy is non-degenerate, section 4 extends this local convergence to global convergence
toward critical points for the SD algorithm by using the additional structure afforded by the gradient
flow. In Section 5 we develop an understanding of the energy landscape of the continuous relaxation
problem. For non-convex problems an understanding of local minima is crucial. We therefore
provide a complete classification of the local minima of (2) in terms of the combinatorial local
minima of (1) by means of an explicit formula. As a consequence of this formula, the problem
of finding local minima of the combinatorial problem is equivalent to finding local minima of the
continuous relaxation. The last section is devoted to numerical experiments.
We now present the SD algorithm. Rewrite the Cheeger functional (2) as E(f) = T(f)/B(f),
where the numerator T(f) is the total variation term and the denominator B(f) is the balance term.
If T and B were differentiable, a mixed explicit-implicit gradient flow of the energy would take the
form (f^{k+1} − f^k)/τ^k = −(∇T(f^{k+1}) − E(f^k)∇B(f^k))/B(f^k), where {τ^k} denotes a sequence
of time steps. As T and B are not differentiable, particularly at the binary solutions of paramount
interest, we must consider instead their subgradients

    ∂T(f) := {v ∈ R^n : T(g) − T(f) ≥ ⟨v, g − f⟩ ∀g ∈ R^n},    (3)
    ∂_0 B(f) := {v ∈ R^n : B(g) − B(f) ≥ ⟨v, g − f⟩ ∀g ∈ R^n and ⟨1, v⟩ = 0}.    (4)

Here 1 ∈ R^n denotes the constant vector of ones. Also note that if f has zero median then B(f) =
‖f‖_1 and ∂_0 B(f) = {v ∈ sign(f) s.t. mean(v) = 0}. After an appropriate choice of time steps
we arrive at the SD Algorithm summarized in Table 1 (left), i.e. a non-smooth variation of steepest
descent. A key property of the SD algorithm's iterates is that ‖f^{k+1} − f^k‖_2 → 0. This property
allows us to conclude global convergence of the SD algorithm in cases where we cannot conclude
convergence for the IPM algorithm. We also summarize the IPM algorithm from [6] in Table 1
(right). Compared to the original algorithm from [6], we have added the extra step of projecting onto
the sphere S^{n−1}, that is f^{k+1} = h^k/‖h^k‖_2. While we do not think that this extra step is essential,
it simplifies the proof of convergence.
The successive iterates of both algorithms belong to the space

    S_0^{n−1} := {f ∈ R^n : ‖f‖_2 = 1 and med(f) = 0}.    (5)
Table 1: The SD algorithm A_SD (left) and the modified IPM algorithm A_IPM of [6] (right).

A_SD: SD Algorithm.
    f^0: nonzero function with med(f^0) = 0; c: positive constant.
    while E(f^k) − E(f^{k+1}) ≥ TOL do
        v^k ∈ ∂_0 B(f^k)
        g^k = f^k + c v^k
        ĥ^k = arg min_{u∈R^n} T(u) + (E(f^k)/(2c)) ‖u − g^k‖_2^2
        h^k = ĥ^k − med(ĥ^k) 1
        f^{k+1} = h^k / ‖h^k‖_2
    end while

A_IPM: Modified IPM Algorithm [6].
    f^0: nonzero function with med(f^0) = 0.
    while E(f^k) − E(f^{k+1}) ≥ TOL do
        v^k ∈ ∂_0 B(f^k)
        D^k = min_{‖u‖_2 ≤ 1} T(u) − E(f^k) ⟨u, v^k⟩
        g^k = arg min_{‖u‖_2 ≤ 1} T(u) − E(f^k) ⟨u, v^k⟩  if D^k < 0;   g^k = f^k  if D^k = 0
        h^k = g^k − med(g^k) 1
        f^{k+1} = h^k / ‖h^k‖_2
    end while
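For intuition only, here is a small Python sketch of the SD loop of Table 1 (ours, not the authors' implementation): the inner proximal problem is handed to a generic derivative-free optimizer, which is only workable on toy graphs, whereas the paper solves it with the primal-dual method of [3]; the helper subgrad_B0 follows the explicit median-zero choice of v^k described in the paragraph that follows.

    import numpy as np
    from scipy.optimize import minimize

    def subgrad_B0(f, tol=1e-12):
        # A median-zero element v of the subdifferential (4) when med(f) = 0,
        # so B(f) = ||f||_1: v = sign(f) off {f = 0}, balanced so mean(v) = 0.
        v = np.sign(f)
        zero = np.abs(f) < tol
        if zero.any():
            v[zero] = ((f < -tol).sum() - (f > tol).sum()) / zero.sum()
        return v

    def T(W, u):   # total variation term
        return 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

    def E(W, f):   # Cheeger energy (2)
        return T(W, f) / np.sum(np.abs(f - np.median(f)))

    def sd_step(W, f, c=1.0):
        g = f + c * subgrad_B0(f)
        obj = lambda u: T(W, u) + E(W, f) / (2 * c) * np.sum((u - g) ** 2)
        h = minimize(obj, g, method="Nelder-Mead").x   # toy inner solve only
        h = h - np.median(h)                           # recenter the median
        return h / np.linalg.norm(h)                   # project onto the sphere

    W = np.array([[0.0, 2.0, 0.1, 0.0], [2.0, 0.0, 0.0, 0.1],
                  [0.1, 0.0, 0.0, 2.0], [0.0, 0.1, 2.0, 0.0]])
    rng = np.random.default_rng(0)
    f = rng.standard_normal(4)
    f -= np.median(f); f /= np.linalg.norm(f)
    for _ in range(20):
        f = sd_step(W, f)            # energy decreases toward a local minimum
    print(E(W, f), np.round(f, 3))   # approaches the binary cut {0,1} vs {2,3}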
As the successive iterates have zero median, ∂_0 B(f^k) is never empty. For example, we can take
v^k ∈ R^n so that v^k(x_i) = 1 if f(x_i) > 0, v^k(x_i) = −1 if f(x_i) < 0 and v^k(x_i) = (n_− − n_+)/n_0
if f(x_i) = 0, where n_+, n_− and n_0 denote the cardinalities of the sets {x_i : f(x_i) > 0}, {x_i :
f(x_i) < 0} and {x_i : f(x_i) = 0}, respectively. Other possible choices also exist, so that v^k is
not uniquely defined. This idea, i.e. choosing an element from the subdifferential with mean zero,
was introduced in [6] and proves indispensable when dealing with median zero functions. As v^k is
not uniquely defined in either algorithm, we must introduce the concepts of a set-valued map and a
closed map, which is the proper notion of continuity in this context:
Definition 1 (Set-valued Map, Closed Maps). Let X and Y be two subsets of R^n. If for each x ∈ X
there is a corresponding set F(x) ⊂ Y then F is called a set-valued map from X to Y. We denote
this by F : X ⇉ Y. The graph of F, denoted Graph(F), is defined by

    Graph(F) = {(x, y) ∈ R^n × R^n : x ∈ X, y ∈ F(x)}.

A set-valued map F is called closed if Graph(F) is a closed subset of R^n × R^n.
With these notations in hand we can write f^{k+1} ∈ A_SD(f^k) (SD algorithm) and f^{k+1} ∈ A_IPM(f^k)
(IPM algorithm), where A_SD, A_IPM : S_0^{n−1} ⇉ S_0^{n−1} are the appropriate set-valued maps. The
notion of a closed map proves useful when analyzing the step ĥ^k ∈ H(f^k) in the SD algorithm.
Particularly,
Lemma 1 (Closedness of H(f)). The following set-valued map H : S_0^{n−1} ⇉ R^n is closed:

    H(f) := arg min_u T(u) + (E(f)/(2c)) ‖u − (f + c ∂_0 B(f))‖_2^2.
Currently, we can only show that Lemma 1 holds at strict local minima for the analogous step, g^k,
of the IPM algorithm. That Lemma 1 holds without this further restriction on f ∈ S_0^{n−1} will allow
us to demonstrate stronger global convergence results for the SD algorithm. Due to page limitations
the supplementary material contains the proofs of all lemmas and theorems in this paper.
2 Properties of A_SD and A_IPM
This section establishes the required properties of the set-valued maps A_SD and A_IPM mentioned in the introduction. In Section 2.1 we first elucidate the monotonicity and compactness of
A_SD and A_IPM. Section 2.2 demonstrates that a local notion of closedness holds for each algorithm.
This form of closedness suffices to show local convergence toward isolated local minima (cf. Section 3). In particular, this more difficult and technical section is necessary as monotonicity alone
does not guarantee this type of convergence.
2.1 Monotonicity and Compactness
We provide the monotonicity and compactness results for each algorithm in turn. Lemmas 2 and 3
establish monotonicity and compactness for ASD while Lemmas 4 and 5 establish monotonicity and
compactness for AIPM .
Lemma 2 (Monotonicity of A_SD). Let f ∈ S_0^{n−1} and define v, g, ĥ and h according to the SD
algorithm. Then neither ĥ nor h is a constant vector. Moreover, the energy inequality

    E(f) ≥ E(h) + (E(f)/B(h)) · (‖ĥ − f‖_2^2 / c)    (6)

holds. As a consequence, if z ∈ A_SD(f) then E(z) = E(h) < E(f) unless z = f.
Lemma 3 (Compactness of A_SD). Let f^0 ∈ S_0^{n−1} and define a sequence of iterates
(g^k, ĥ^k, h^k, f^{k+1}) according to the SD algorithm. Then for any such sequence

    ‖ĥ^k‖_2 ≤ ‖g^k‖_2,   1 ≤ ‖g^k‖_2 ≤ 1 + c√n   and   0 < ‖h^k‖_2 ≤ (1 + √n)‖ĥ^k‖_2.    (7)

Moreover, we have

    ‖ĥ^k − f^k‖_2 → 0,   med(ĥ^k) → 0,   ‖f^k − f^{k+1}‖_2 → 0.    (8)

Therefore S_0^{n−1} attracts the sequences {ĥ^k} and {h^k}.
By the monotonicity result of Hein and Bühler [6] we have
Lemma 4 (Monotonicity of A_IPM). Let f ∈ S_0^{n−1}. If z ∈ A_IPM(f) then E(z) < E(f) unless
z = f.
To prove convergence for AIPM using our techniques, we must also maintain control over the iterates
after subtracting the median. This control is provided by the following lemma.
Lemma 5 (Compactness of A_IPM). Let f ∈ S_0^{n−1} and define v, D, g and h according to the IPM.
    1. The minimizer is unique when D < 0, i.e. g ∈ S^{n−1} is a single point.
    2. 1 ≤ ‖h‖_2 ≤ 1 + √n. In particular, A_IPM(f) is always well-defined for a given choice of
       v ∈ ∂_0 B(f).
2.2 Closedness Properties
The final ingredient to prove local convergence is some form of closedness. We require closedness
of the set-valued maps A at strict local minima of the energy. As the energy (2) is invariant under
constant shifts and scalings, the usual notion of a strict local minimum on R^n does not apply. We
must therefore remove the effects of these invariances when referring to a local minimum as strict.
To this end, define the spherical and annular neighborhoods on S_0^{n−1} by

    B_ε(f*) := {‖f − f*‖_2 ≤ ε} ∩ S_0^{n−1}    and    A_{δ,ε}(f*) := {δ ≤ ‖f − f*‖_2 ≤ ε} ∩ S_0^{n−1}.
With these in hand we introduce the proper definition of a strict local minimum.
Definition 2 (Strict Local Minima). Let f ? ? S0n?1 . We say f ? is a strict local minimum of the
energy if there exists > 0 so that f ? B (f ? ) and f 6= f ? imply E(f ) > E(f ? ).
This definition then allows us to formally define closedness at a strict local minimum in Definition
3. For the IPM algorithm this is the only form of closedness we are able to establish. Closedness at
an arbitrary f ∈ S_0^{n−1} (cf. Lemma 1) does in fact hold for the SD algorithm. Once again, this fact
manifests itself in the stronger global convergence results for the SD algorithm in Section 4.
Definition 3 (CLM/CSLM Mappings). Let A(f) : S_0^{n−1} ⇉ S_0^{n−1} denote a set-valued mapping.
We say A(f) is closed at local minima (CLM) if z^k ∈ A(f^k) and f^k → f* imply z^k → f*
whenever f* is a local minimum of the energy. If z^k → f* holds only when f* is a strict local
minimum then we say A(f) is closed at strict local minima (CSLM).
The CLM property for the SD algorithm, provided by Lemma 6, follows as a straightforward consequence of Lemma 1. The CSLM property for the IPM algorithm, provided by Lemma 7, requires the
additional hypothesis that the local minimum is strict.
Lemma 6 (CLM Property for A_SD). For f ∈ S_0^{n−1} define g, ĥ and h according to the SD algorithm.
Then A_SD(f) defines a CLM mapping.
Lemma 7 (CSLM Property for A_IPM). For f ∈ S_0^{n−1} define v, D, g, h according to the IPM. Then
A_IPM(f) defines a CSLM mapping.
3 Local Convergence of A_SD and A_IPM at Strict Local Minima
Due to the lack of convexity of the energy (2), at best we can only hope to obtain convergence
to a local minimum of the energy. An analogue of Lyapunov's method from differential equations
allows us to show that such convergence does occur provided the iterates reach a neighborhood of
an isolated local minimum. To apply the lemmas from Section 2 we must assume that f* ∈ S_0^{n−1}
is a local minimum of the energy. We will assume further that f* is an isolated critical point of the
energy according to the following definition.
Definition 4 (Isolated Critical Points). Let f ∈ S_0^{n−1}. We say that f is a critical point of the energy
E(f) if there exist w ∈ ∂T(f) and v ∈ ∂_0 B(f) so that 0 = w − E(f)v. This generalizes the usual
quotient rule 0 = ∇T(f) − E(f)∇B(f). If there exists ε > 0 so that f is the only critical point in
B_ε(f) we say f is an isolated critical point of the energy.
Note that as any local minimum is a critical point of the energy, if f* is an isolated critical point
and a local minimum then it is necessarily a strict local minimum. The CSLM property therefore
applies.
Finally, to show convergence, the set-valued map A must possess one further property, i.e. the
critical point property.
Definition 5 (Critical Point Property). Let A(f) : S_0^{n−1} ⇉ S_0^{n−1} denote a set-valued mapping. We
say that A(f) satisfies the critical point property (CP property) if, given any sequence satisfying
f^{k+1} ∈ A(f^k), all limit points of {f^k} are critical points of the energy.
Analogously to the CLM property, for the SD algorithm the CP property follows as a direct consequence of Lemma 1. For the IPM algorithm it follows from closedness of the minimization step.
The proof of local convergence utilizes a version of Lyapunov's direct method for set-valued maps,
and we adapt this technique from the strategy outlined in [8]. We first demonstrate that if any
iterate f^k lies in a sufficiently small neighborhood B_δ(f*) of the strict local minimum then all
subsequent iterates remain in the neighborhood B_ε(f*) in which f* is an isolated critical point.
By compactness and the CP property, any subsequence of {f^k} must have a further subsequence
that converges to the only critical point in B_ε(f*), i.e. f*. This implies that the whole sequence
must converge to f* as well. We formalize this argument in Lemma 8 and its corollary, Theorem 1.
Lemma 8 (Lyapunov Stability at Strict Local Minima). Suppose A(f) is a monotonic, CSLM mapping. Fix f^0 ∈ S_0^{n−1} and let {f^k} denote any sequence satisfying f^{k+1} ∈ A(f^k). If f* is a strict
local minimum of the energy, then for any ε > 0 there exists a δ > 0 so that if f^0 ∈ B_δ(f*) then
{f^k} ⊂ B_ε(f*).
Theorem 1 (Local Convergence at Isolated Critical Points). Let A(f) : S_0^{n−1} ⇉ S_0^{n−1} denote a
monotonic, CSLM, CPP mapping. Let f^0 ∈ S_0^{n−1} and suppose {f^k} is any sequence satisfying
f^{k+1} ∈ A(f^k). Let f* denote a local minimum that is an isolated critical point of the energy. If
f^0 ∈ B_δ(f*) for δ > 0 sufficiently small then f^k → f*.
Note that both algorithms satisfy the hypotheses of Theorem 1, and therefore possess identical local convergence properties. A slight modification of the proof of Theorem 1 yields the following
corollary that also applies to both algorithms.
Corollary 1. Let f^0 ∈ S_0^{n−1} be arbitrary, and define f^{k+1} ∈ A(f^k) according to either algorithm.
If any accumulation point f* of the sequence {f^k} is both an isolated critical point of the energy
and a local minimum, then the whole sequence f^k → f*.
4 Global Convergence for A_SD
To this point the convergence properties of both algorithms appear identical. However, we have
yet to take full advantage of the superior mathematical structure afforded by the SD algorithm.
In particular, from Lemma 3 we know that ‖f^{k+1} − f^k‖_2 → 0 without any further assumptions
regarding the initialization of the algorithm or the energy landscape. This fact combines with the
fact that Lemma 1 also holds globally for f ∈ S_0^{n−1} to yield Theorem 2. Once again, we arrive at this
conclusion by adapting the proof from [8].
Theorem 2 (Convergence of the SD Algorithm). Take f^0 ∈ S_0^{n−1} and fix a constant c > 0. Let
{f^k} denote any sequence satisfying f^{k+1} ∈ A_SD(f^k). Then
    1. Any accumulation point f* of the sequence is a critical point of the energy.
    2. Either the sequence converges, or the set of accumulation points forms a continuum in S_0^{n−1}.
We might hope to rule out the second possibility in statement 2 by showing that E can never have
an uncountable number of critical points. Unfortunately, we can exhibit (cf. the supplementary
material) simple examples to show that a continuum of local or global minima can in fact happen.
This degeneracy of a continuum of critical points arises from a lack of uniqueness in the underlying
combinatorial problem. We explore this aspect of convergence further in Section 5.
By assuming additional structure in the energy landscape we can generalize the local convergence
result, Theorem 1, to yield global convergence of both algorithms. This is the content of Corollary 2
for the SD algorithm and the content of Corollary 3 for the IPM algorithm. The hypotheses required
for each corollary clearly demonstrate the benefit of knowing a priori that ‖f^{k+1} − f^k‖_2 → 0 occurs
for the SD algorithm. For the IPM algorithm, we can only deduce this a posteriori from the fact that
the iterates converge.
Corollary 2. Let f^0 ∈ S_0^{n−1} be arbitrary and define f^{k+1} ∈ A_SD(f^k). If the energy has only
countably many critical points in S_0^{n−1} then {f^k} converges.
Corollary 3. Let f^0 ∈ S_0^{n−1} be arbitrary and define f^{k+1} ∈ A_IPM(f^k). Suppose all critical
points of the energy are isolated in S_0^{n−1} and are either local maxima or local minima. Then {f^k}
converges.
While at first glance Corollary 3 provides hope that global convergence holds for the IPM algorithm,
our simple examples in the supplementary material demonstrate that even benign graphs with well-defined cuts have critical points of the energy that are neither local maxima nor local minima.
5 Energy Landscape of the Cheeger Functional
This section demonstrates that the continuous problem (2) provides an exact relaxation of the combinatorial problem (1). Specifically, we provide an explicit formula that gives an exact correspondence
between the global minimizers of the continuous problem and the global minimizers of the combinatorial problem. This extends previous work [12, 11, 9] on the relationship between the global
minima of (1) and (2). We also completely classify the local minima of the continuous problem by
introducing a notion of local minimum for the combinatorial problem. Any local minimum of the
combinatorial problem then determines a local minimum of the continuous problem by means of
an explicit formula, and vice-versa. Theorem 4 provides this formula, which also gives a sharp condition for when a global minimum of the continuous problem is two-valued (binary), three-valued
(trinary), or k-valued in the general case. This provides an understanding of the energy landscape,
which is essential due to the lack of convexity present in the continuous problem. Most importantly,
we can classify the types of local minima encountered and when they form a continuum. This is
germane to the global convergence results of the previous sections. The proofs in this section follow
closely the ideas from [12, 11].
5.1 Local and Global Minima
We first introduce the two fundamental definitions of this section. The first definition introduces the
concept of when a set S ⊂ V of vertices is compatible with an increasing sequence S_1 ⊊ S_2 ⊊
· · · ⊊ S_k of vertex subsets. Loosely speaking, a set S is compatible with S_1 ⊊ S_2 ⊊ · · · ⊊ S_k
whenever the cut defined by the pair (S, S^c) neither intersects nor crosses any of the cuts (S_i, S_i^c).
Definition 6 formalizes this notion.
Definition 6 (Compatible Vertex Set). A vertex set S is compatible with an increasing sequence
S_1 ⊊ S_2 ⊊ · · · ⊊ S_k if S ⊂ S_1, S_k ⊂ S, or

    S_1 ⊊ S_2 ⊊ · · · ⊊ S_i ⊂ S ⊂ S_{i+1} ⊊ · · · ⊊ S_k    for some 1 ≤ i ≤ k − 1.
The concept of compatible cuts then allows us to introduce our notion of a local minimum of the
combinatorial problem, i.e. Definition 7.
Definition 7 (Combinatorial k-Local Minima). An increasing collection of nontrivial sets S_1 ⊊
S_2 ⊊ · · · ⊊ S_k is called a k-local minimum of the combinatorial problem if C(S_1) = C(S_2) =
· · · = C(S_k) ≤ C(S) for all S compatible with S_1 ⊊ S_2 ⊊ · · · ⊊ S_k.
Pursuing the previous analogy, a collection of cuts (S_1, S_1^c), · · · , (S_k, S_k^c) forms a k-local minimum
of the combinatorial problem precisely when they do not intersect, have the same energy and all other
non-intersecting cuts (S, S^c) have higher energy. The case of a 1-local minimum is paramount. A cut
(S_1, S_1^c) defines a 1-local minimum if and only if it has lower energy than all cuts that do not intersect
it. As a consequence, if a 1-local minimum is not a global minimum then the cut (S_1, S_1^c) necessarily
intersects all of the cuts defined by the global minimizers. This is a fundamental characteristic of
local minima: they are never "parallel" to global minima.
For the continuous problem, combinatorial k-local minima naturally correspond to vertex functions
f ∈ R^n that take (k + 1) distinct values. We therefore define the concept of a (k + 1)-valued local
minimum of the continuous problem.
Definition 8 (Continuous (k + 1)-valued Local Minima). We call a vertex function f ∈ R^n a
(k + 1)-valued local minimum of the continuous problem if f is a local minimum of E and if its
range contains exactly k + 1 distinct values.
Theorem 3 provides the intuitive picture connecting these two concepts of minima, and it follows as
a corollary of the more technical and explicit Theorem 4.
Theorem 3. The continuous problem has a (k + 1)-valued local minimum if and only if the combinatorial problem has a k-local minimum.
For example, if the continuous problem has a trinary local minimum in the usual sense then the combinatorial problem must have a 2-local minimum in the sense of Definition 7. As the cuts (S_1, S_1^c)
and (S_2, S_2^c) defining a 2-local minimum do not intersect, a 2-local minimum separates the vertices
of the graph into three disjoint domains. A trinary function therefore makes intuitive sense. We
make this intuition precise in Theorem 4. Before stating it we require two further definitions.
Definition 9 (Characteristic Functions). Given ∅ ≠ S ⊊ V, define its characteristic function f_S as

    f_S = Cut(S, S^c)^{−1} χ_S  if |S| ≤ n/2    and    f_S = −Cut(S, S^c)^{−1} χ_{S^c}  if |S| > n/2.    (9)

Note that f_S has median zero and TV-norm equal to 1.
Definition 10 (Strict Convex Hull). Given k functions f_1, · · · , f_k, their strict convex hull is the set

    sch{f_1, · · · , f_k} = {λ_1 f_1 + · · · + λ_k f_k : λ_i > 0 for 1 ≤ i ≤ k and λ_1 + · · · + λ_k = 1}.    (10)
Theorem 4 (Explicit Correspondence of Local Minima).
    1. Suppose S_1 ⊊ S_2 ⊊ · · · ⊊ S_k is a k-local minimum of the combinatorial problem and let
       f ∈ sch{f_{S_1}, · · · , f_{S_k}}. Then any function of the form g = αf + β1 defines a (k + 1)-valued
       local minimum of the continuous problem with E(g) = C(S_1).
    2. Suppose that f is a (k + 1)-valued local minimum and let c_1 > c_2 > · · · > c_{k+1} denote
       its range. For 1 ≤ i ≤ k set Ω_i = {f = c_i}. Then the increasing collection of sets
       S_1 ⊊ · · · ⊊ S_k given by

           S_1 = Ω_1,   S_2 = Ω_1 ∪ Ω_2,   · · ·   S_k = Ω_1 ∪ · · · ∪ Ω_k

       is a k-local minimum of the combinatorial problem with C(S_i) = E(f).
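Part 1 of the theorem is easy to check numerically in the binary case k = 1; the sketch below (ours, not from the paper) builds the characteristic function f_S of (9) and verifies E(f_S) = C(S), reusing W, cut_value and relaxed_energy from the sketch in Section 1.

    import numpy as np

    def characteristic_function(W, S):
        # f_S of (9): supported on S (or on S^c), scaled so the TV-norm is 1.
        S = np.asarray(S, dtype=bool)
        cut = W[np.ix_(S, ~S)].sum()
        f = np.zeros(len(S))
        if S.sum() <= len(S) / 2:
            f[S] = 1.0 / cut        # Cut(S, S^c)^{-1} on S
        else:
            f[~S] = -1.0 / cut      # -Cut(S, S^c)^{-1} on S^c
        return f

    S = [True, True, False, False]
    fS = characteristic_function(W, S)
    print(np.isclose(relaxed_energy(W, fS), cut_value(W, S)))   # True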
Remark 1 (Isolated vs Continuum of Local Minima). If a set S_1 is a 1-local minimum then the strict
convex hull (10) of its characteristic function reduces to the single binary function f_{S_1}. Thus every
1-local minimum generates exactly one local minimum of the continuous problem in S_0^{n−1}, and this
local minimum is binary. On the other hand, if k ≥ 2 then every k-local minimum of the combinatorial problem generates a continuum (in S_0^{n−1}) of non-binary local minima of the continuous
problem. Thus, the hypotheses of Theorem 1, Corollary 2 or Corollary 3 can hold only if no such
higher order k-local minima exist. When these theorems do apply the algorithms therefore converge to a binary function.
As a final consequence, we summarize the fact that Theorem 4 implies that the continuous relaxation
of the Cheeger cut problem is exact. In other words,
Theorem 5. Given {f ∈ arg min E}, an explicit formula exists to construct the set {S ∈ arg min C}, and vice-versa.
6 Experiments
In all experiments, we take the constant c = 1 in the SD algorithm. We use the method from
[3] to solve the minimization problem in the SD algorithm and the method from [7] to solve the
minimization problem in the IPM algorithm. We terminate each minimization when either a stopping
tolerance of 10^{−10} (i.e. ‖u^{j+1} − u^j‖_1 ≤ 10^{−10}) or 2,000 iterations is reached. This yields a
comparison of the idealized cases of the SD algorithm and the IPM algorithm. Our first experiment
uses the two-moon dataset [2] in the same setting as in [12]. The second experiment utilizes pairs of
image digits extracted from the MNIST dataset. The first table summarizes the results of these tests.
It shows the mean Cheeger energy value (2), the mean error of classification (% of misclassified data)
and the mean computational time for both algorithms over 10 experiments, with the same random
initialization for both algorithms in each of the individual experiments.
                    SD Algorithm                         Modified IPM Algorithm [7]
                 Energy   Error (%)   Time (sec.)     Energy   Error (%)   Time (sec.)
    2 moons      0.126    8.69        2.06            0.145    14.12       1.98
    4's and 9's  0.115    1.65        52.4            0.185    25.23       58.9
    3's and 8's  0.086    1.217       49.2            0.086    1.219       48.1
Our second set of experiments applies both algorithms to multi-class clustering problems using a
standard, recursive bi-partitioning method. We use the MNIST, USPS and COIL datasets. We
preprocessed the data by projecting onto the first 50 principal components, and take k = 10 nearest
neighbors for the MNIST and USPS datasets and k = 5 nearest neighbors for the COIL dataset.
We used the same tolerances for the minimization problems, i.e. 10^{−10} and 2,000 maximum
iterations. The table below presents the mean Cheeger energy, classification error and time over 10
experiments as before.
                            SD Algorithm                        Modified IPM Algorithm [7]
                         Energy   Err. (%)   Time (min.)     Energy   Err. (%)   Time (min.)
    MNIST (10 classes)   1.30     11.78      45.01           1.29     11.75      42.83
    USPS (10 classes)    2.37     4.11       5.15            2.37     4.13       4.81
    COIL (20 classes)    0.19     1.58       4.31            0.18     2.52       4.20
Overall, the results show that both algorithms perform equivalently for both two-class and multiclass clustering problems.
As our interest here lies in the theoretical properties of both algorithms, we will study practical
implementation details for the SD algorithm in future work. For instance, as Hein and Bühler remark
[6], solving the minimization problem for the IPM algorithm precisely is unnecessary. Analogously
for the SD algorithm, we only need to lower the energy sufficiently before proceeding to the next
iteration of the algorithm. It proves convenient to stop the minimization when a weaker form of the
energy inequality (6) holds, such as

    E(f) ≥ E(h) + θ (E(f)/B(h)) (‖ĥ − f‖_2^2 / c)

for some constant 0 < θ < 1. This condition provably holds in a finite number of iterations and
still guarantees that ‖f^{k+1} − f^k‖_2 → 0. The concrete decay estimate provided by the SD algorithm
therefore allows us to give precise meaning to "sufficiently lowers the energy." We investigate these
aspects of the algorithm and prove convergence for this practical implementation in future work.
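A sketch of this check (ours; E is the energy helper from the SD sketch above, and the constants theta, c match the display) simply tests the relaxed inequality on each inner iterate:

    def descent_ok(W, f, h, h_hat, c=1.0, theta=0.5):
        # Relaxed descent test: stop the inner minimization once it holds.
        # h_hat is the raw inner iterate, h its median-centered version.
        B_h = np.sum(np.abs(h - np.median(h)))
        return E(W, f) - E(W, h) >= theta * (E(W, f) / B_h) * np.sum((h_hat - f) ** 2) / c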
Reproducible research: The code is available at http://www.cs.cityu.edu.hk/~xbresson/codes.html
Acknowledgements: This work was supported by AFOSR MURI grant FA9550-10-1-0569 and Hong
Kong GRF grant #110311.
References
[1] X. Bresson, X.-C. Tai, T.F. Chan, and A. Szlam. Multi-Class Transductive Learning based on ℓ1 Relaxations of Cheeger Cut and Mumford-Shah-Potts Model. UCLA CAM Report, 2012.
[2] T. Bühler and M. Hein. Spectral Clustering Based on the Graph p-Laplacian. In International Conference on Machine Learning, pages 81–88, 2009.
[3] A. Chambolle and T. Pock. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[4] J. Cheeger. A Lower Bound for the Smallest Eigenvalue of the Laplacian. Problems in Analysis, pages 195–199, 1970.
[5] F. R. K. Chung. Spectral Graph Theory, volume 92 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC, 1997.
[6] M. Hein and T. Bühler. An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA. In Advances in Neural Information Processing Systems (NIPS), pages 847–855, 2010.
[7] M. Hein and S. Setzer. Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts. In Advances in Neural Information Processing Systems (NIPS), 2011.
[8] R.R. Meyer. Sufficient conditions for the convergence of monotonic mathematical programming algorithms. Journal of Computer and System Sciences, 12(1):108–121, 1976.
[9] S. Rangapuram and M. Hein. Constrained 1-Spectral Clustering. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1143–1151, 2012.
[10] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(8):888–905, 2000.
[11] G. Strang. Maximal Flow Through A Domain. Mathematical Programming, 26:123–143, 1983.
[12] A. Szlam and X. Bresson. Total variation and cheeger cuts. In Proceedings of the 27th International Conference on Machine Learning, pages 1039–1046, 2010.
[13] L. Zelnik-Manor and P. Perona. Self-tuning Spectral Clustering. In Advances in Neural Information Processing Systems (NIPS), 2004.
Optimal kernel choice for large-scale two-sample tests
Arthur Gretton,1,3 Bharath Sriperumbudur,1 Dino Sejdinovic,1 Heiko Strathmann2
1
Gatsby Unit and 2 CSD, CSML, UCL, UK; 3 MPI for Intelligent Systems, Germany
{arthur.gretton,bharat.sv,dino.sejdinovic,heiko.strathmann}@gmail
Sivaraman Balakrishnan
LTI, CMU, USA
[email protected]
Massimiliano Pontil
CSD, CSML, UCL, UK
[email protected]
Kenji Fukumizu
ISM, Japan
[email protected]
Abstract
Given samples from distributions p and q, a two-sample test determines whether
to reject the null hypothesis that p = q, based on the value of a test statistic
measuring the distance between the samples. One choice of test statistic is the
maximum mean discrepancy (MMD), which is a distance between embeddings
of the probability distributions in a reproducing kernel Hilbert space. The kernel
used in obtaining these embeddings is critical in ensuring the test has high power,
and correctly distinguishes unlike distributions with high probability. A means of
parameter selection for the two-sample test based on the MMD is proposed. For a
given test level (an upper bound on the probability of making a Type I error), the
kernel is chosen so as to maximize the test power, and minimize the probability
of making a Type II error. The test statistic, test threshold, and optimization over
the kernel parameters are obtained with cost linear in the sample size. These
properties make the kernel selection and test procedures suited to data streams,
where the observations cannot all be stored in memory. In experiments, the new
kernel selection approach yields a more powerful test than earlier kernel selection
heuristics.
1 Introduction
The two sample problem addresses the question of whether two independent samples are drawn from
the same distribution. In the setting of statistical hypothesis testing, this corresponds to choosing
whether to reject the null hypothesis H0 that the generating distributions p and q are the same, vs.
the alternative hypothesis HA that distributions p and q are different, given a set of independent
observations drawn from each.
A number of recent approaches to two-sample testing have made use of mappings of the distributions to a reproducing kernel Hilbert space (RKHS); or have sought out RKHS functions with large
amplitude where the probability mass of p and q differs most [8, 10, 15, 17, 7]. The most straightforward test statistic is the norm of the difference between distribution embeddings, and is called
the maximum mean discrepancy (MMD). One difficulty in using this statistic in a hypothesis test,
however, is that the MMD depends on the choice of the kernel. If we are given a family of kernels,
we obtain a different value of the MMD for each member of the family, and indeed for any positive
definite linear combination of the kernels. When a radial basis function kernel (such as the Gaussian kernel) is used, one simple choice is to set the kernel width to the median distance between
points in the aggregate sample [8, 7]. While this is certainly straightforward, it has no guarantees of
optimality. An alternative heuristic is to choose the kernel that maximizes the test statistic [15]: in
experiments, this was found to reliably outperform the median approach. Since the MMD returns
a smooth RKHS function that minimizes classification error under linear loss, then maximizing the
MMD corresponds to minimizing this classification error under a smoothness constraint. If the
statistic is to be applied in hypothesis testing, however, then this choice of kernel does not explicitly
address the question of test performance.
We propose a new approach to kernel choice for hypothesis testing, which explicitly optimizes the
performance of the hypothesis test. Our kernel choice minimizes Type II error (the probability of
wrongly accepting H0 when p ≠ q), given an upper bound on Type I error (the probability of
wrongly rejecting H0 when p = q). This corresponds to optimizing the asymptotic relative efficiency in the sense of Hodges and Lehmann [13, Ch. 10]. We address the case of the linear time
statistic in [7, Section 6], for which both the test statistic and the parameters of the null distribution can be computed in O(n), for sample size n. This has a higher variance at a given n than
the U-statistic estimate costing O(n^2) used in [8, 7], since the latter is the minimum variance unbiased estimator. Thus, we would use the quadratic time statistic in the "limited data, unlimited
time" scenario, as it extracts the most possible information from the data available. The linear time
statistic is used in the "unlimited data, limited time" scenario, since it is the cheapest statistic that
still incorporates each datapoint: it does not require the data to be stored, and is thus appropriate
for analyzing data streams. As a further consequence of the streaming data setting, we learn the
kernel parameter on a separate sample to the sample used in testing; i.e., unlike the classical testing
scenario, we use a training set to learn the kernel parameters. An advantage of this setting is that our
null distribution remains straightforward, and the test threshold can be computed without a costly
bootstrap procedure.
We begin our presentation in Section 2 with a review of the maximum mean discrepancy, its linear
time estimate, and the associated asymptotic distribution and test. In Section 3 we describe a criterion for kernel choice to maximize the Hodges and Lehmann asymptotic relative efficiency. We
demonstrate the convergence of the empirical estimate of this criterion when the family of kernels is
a linear combination of base kernels (with non-negative coefficients), and of the kernel coefficients
themselves. In Section 4, we provide an optimization procedure to learn the kernel weights. Finally,
in Section 5, we present experiments, in which we compare our kernel selection strategy with the
approach of simply maximizing the test statistic subject to various constraints on the coefficients of
the linear combination; and with a cross-validation approach, which follows from the interpretation
of the MMD as a classifier. We observe that a principled kernel choice for testing outperforms competing heuristics, including the previous best-performing heuristic in [15]. A Matlab implementation
is available at: www.gatsby.ucl.ac.uk/~gretton/adaptMMD/adaptMMD.htm
2 Maximum mean discrepancy, and a linear time estimate
We begin with a brief review of kernel methods, and of the maximum mean discrepancy [8, 7, 14].
We then describe the family of kernels over which we optimize, and the linear time estimate of the
MMD.
2.1 MMD for a family of kernels
Let F_k be a reproducing kernel Hilbert space (RKHS) defined on a topological space X with reproducing kernel k, and p a Borel probability measure on X. The mean embedding of p in F_k is a unique
element μ_k(p) ∈ F_k such that E_{x∼p} f(x) = ⟨f, μ_k(p)⟩_{F_k} for all f ∈ F_k [4]. By the Riesz representation theorem, a sufficient condition for the existence of μ_k(p) is that k be Borel-measurable
and E_{x∼p} k^{1/2}(x, x) < ∞. We assume k is a bounded continuous function, hence this condition
holds for all Borel probability measures. The maximum mean discrepancy (MMD) between Borel
probability measures p and q is defined as the RKHS-distance between the mean embeddings of p
and q. An expression for the squared MMD is thus

    η_k(p, q) = ‖μ_k(p) − μ_k(q)‖_{F_k}^2 = E_{xx'} k(x, x') + E_{yy'} k(y, y') − 2 E_{xy} k(x, y),    (1)

where x, x' i.i.d.∼ p and y, y' i.i.d.∼ q. By introducing

    h_k(x, x', y, y') = k(x, x') + k(y, y') − k(x, y') − k(x', y),

we can write

    η_k(p, q) = E_{xx'yy'} h_k(x, x', y, y') =: E_v h_k(v),    (2)
where we have defined the random vector v := [x, x', y, y']. If μ_k is an injective map, then k is said
to be a characteristic kernel, and the MMD is a metric on the space of Borel probability measures,
i.e., η_k(p, q) = 0 iff p = q [16]. The Gaussian kernels used in the present work are characteristic.
Our goal is to select a kernel for hypothesis testing from a particular family K of kernels, which we
now define. Let {k_u}_{u=1}^d be a set of positive definite functions k_u : X × X → R. Let

    K := { k : k = Σ_{u=1}^d β_u k_u,  Σ_{u=1}^d β_u = D,  β_u ≥ 0,  ∀u ∈ {1, . . . , d} }    (3)
for some D > 0, where the constraint on the sum of coefficients is needed for the consistency proof
(see Section 3). Each k ∈ K is associated uniquely with an RKHS F_k, and we assume the kernels
are bounded, |k_u| ≤ K, ∀u ∈ {1, . . . , d}. The squared MMD becomes

    η_k(p, q) = ‖μ_k(p) − μ_k(q)‖_{F_k}^2 = Σ_{u=1}^d β_u η_u(p, q),

where η_u(p, q) := E_v h_u(v). It is clear that if every kernel k_u, u ∈ {1, . . . , d}, is characteristic and at
least one β_u > 0, then k is characteristic. Where there is no ambiguity, we will write η_u := η_u(p, q)
and Eh_u := E_v h_u(v). We denote h = (h_1, h_2, . . . , h_d)^⊤ ∈ R^{d×1}, β = (β_1, β_2, . . . , β_d)^⊤ ∈ R^{d×1},
and η = (η_1, η_2, . . . , η_d)^⊤ ∈ R^{d×1}. With this notation, we may write

    η_k(p, q) = E(β^⊤ h) = β^⊤ η.
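For concreteness, the sketch below (ours; the Gaussian base kernels and the bandwidth grid are assumptions for illustration, the paper only requires bounded positive definite k_u) evaluates a member of K at a pair of points:

    import numpy as np

    def base_kernels(x, y, sigmas):
        # Stack of d Gaussian base-kernel values k_u(x, y), one per bandwidth.
        sq = np.sum((np.asarray(x) - np.asarray(y)) ** 2)
        return np.exp(-sq / (2.0 * np.asarray(sigmas) ** 2))

    sigmas = 2.0 ** np.arange(-2, 3)             # d = 5 bandwidths (arbitrary grid)
    beta = np.ones(len(sigmas)) / len(sigmas)    # any beta >= 0 summing to D = 1
    x, y = [0.3, -1.2], [0.5, -0.9]
    print(beta @ base_kernels(x, y, sigmas))     # k(x, y) = sum_u beta_u k_u(x, y)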
2.2 Empirical estimate of the MMD, asymptotic distribution, and test
We now describe an empirical estimate of the maximum mean discrepancy, given i.i.d. samples
X := {x1 , . . . , xn } and Y := {y1 , . . . , yn } from p and q, respectively. We use the linear time
estimate of [7, Section 6], for which both the test statistic and the parameters of the null distribution
can be computed in time O(n). This has a higher variance at a given n than a U-statistic estimate
costing O(n^2), since the latter is the minimum variance unbiased estimator [13, Ch. 5]. That
said, it was observed experimentally in [7, Section 8.3] that the linear time statistic yields better
performance at a given computational cost than the quadratic time statistic, when sufficient data
are available (bearing in mind that consistent estimates of the null distribution in the latter case are
computationally demanding [9]). Moreover, the linear time statistic does not require the sample
to be stored in memory, and is thus suited to data streaming contexts, where a large number of
observations arrive in sequence.
The linear time estimate of η_k(p, q) is defined in [7, Lemma 14]: assuming for ease of notation that
n is even,

    η̂_k = (2/n) Σ_{i=1}^{n/2} h_k(v_i),    (4)

where v_i := [x_{2i−1}, x_{2i}, y_{2i−1}, y_{2i}] and h_k(v_i) := k(x_{2i−1}, x_{2i}) + k(y_{2i−1}, y_{2i}) − k(x_{2i−1}, y_{2i}) −
k(x_{2i}, y_{2i−1}); this arrangement of the samples ensures we get an expectation over independent
variables as in (2) with cost O(n). We use η̂_k to denote the empirical statistic computed over the
samples being tested, to distinguish it from the training sample estimate η̌_k used in selecting the
kernel. Given the family of kernels K in (3), this can be written η̂_k = β^⊤ η̂, where we again use
the convention η̂ = (η̂_1, η̂_2, . . . , η̂_d)^⊤ ∈ R^{d×1}. The statistic η̂_k has expectation zero under the null
hypothesis H_0 that p = q, and has positive expectation under the alternative hypothesis H_A that
p ≠ q.
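A direct transcription of (4) (our sketch, continuing the Gaussian-family example above; one pass over the data, O(n) time) is:

    def h_values(X, Y, sigmas):
        # Rows h(v_i) of (4), i = 1, ..., n/2, one column per base kernel.
        n = len(X)
        assert n % 2 == 0 and len(Y) == n
        rows = []
        for i in range(n // 2):
            x1, x2, y1, y2 = X[2*i], X[2*i + 1], Y[2*i], Y[2*i + 1]
            rows.append(base_kernels(x1, x2, sigmas) + base_kernels(y1, y2, sigmas)
                        - base_kernels(x1, y2, sigmas) - base_kernels(x2, y1, sigmas))
        return np.array(rows)

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))
    Y = rng.normal(0.5, 1.0, size=(200, 2))     # p != q: shifted mean
    H = h_values(X, Y, sigmas)
    print(beta @ H.mean(axis=0))                # eta_hat_k = beta^T eta_hat > 0 here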
Since η̂_k is a straightforward average of independent random variables, its asymptotic distribution
is given by the central limit theorem (e.g. [13, Section 1.9]). From [7, Corollary 16], under the
assumption 0 < E(h_k^2) < ∞ (which is true for bounded continuous k),

    n^{1/2} (η̂_k − η_k(p, q)) →_D N(0, 2σ_k^2),    (5)

where the factor of two arises since the average is over n/2 terms, and

    σ_k^2 = E_v h_k^2(v) − [E_v(h_k(v))]^2.    (6)
Unlike the case of a quadratic time statistic, the null and alternative distributions differ only in
mean; by contrast, the quadratic time statistic has as its null distribution an infinite weighted sum of
χ² variables [7, Section 5], and a Gaussian alternative distribution.
To obtain an estimate of the variance based on the samples X, Y, we will use an expression derived
from the U-statistic of [13, p. 173] (although as earlier, we will express this as a simple average so
as to compute it in linear time). The population variance can be written

    σ_k^2 = E_v h_k^2(v) − E_{v,v'}(h_k(v) h_k(v')) = (1/2) E_{v,v'}(h_k(v) − h_k(v'))^2.
Expanding in terms of the kernel coefficients β, we get

    σ_k^2 := β^⊤ Q_k β,

where Q_k := cov(h) is the covariance matrix of h. A linear time estimate for the variance is

    σ̂_k^2 = β^⊤ Q̂_k β,

where

    (Q̂_k)_{uu'} = (4/n) Σ_{i=1}^{n/4} h_{Δ,u}(w_i) h_{Δ,u'}(w_i),    (7)

and w_i := [v_{2i−1}, v_{2i}] (the concatenation of two four-dimensional vectors, hence eight-dimensional), with h_{Δ,u}(w_i) := h_u(v_{2i−1}) − h_u(v_{2i}).
We now address the construction of a hypothesis test. We denote by Φ the CDF of a standard Normal
random variable N(0, 1), and by Φ^{−1} the inverse CDF. From (5), a test of asymptotic level α using
the statistic η̂_k will have the threshold

    t_{k,α} = n^{−1/2} σ_k √2 Φ^{−1}(1 − α),    (8)

bearing in mind the asymptotic distribution of the test statistic, and that η_k(p, p) = 0. This threshold
is computed empirically by replacing σ_k with its estimate σ̂_k (computed using the data being tested),
which yields a test of the desired asymptotic level.
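Continuing the sketch above, the variance estimate (7) and threshold (8) come from the same stream of h-values, so the whole test stays linear in n (again our illustration, not the authors' released code):

    from scipy.stats import norm

    def linear_time_test(H, beta, alpha=0.05):
        # H holds the n/2 rows h(v_i); statistic (4), variance (7), threshold (8).
        n = 2 * H.shape[0]
        stat = beta @ H.mean(axis=0)             # eta_hat_k
        Hd = H[0::2] - H[1::2]                   # h_Delta(w_i), n/4 rows
        Q_hat = (4.0 / n) * Hd.T @ Hd            # (7)
        sigma_hat = np.sqrt(beta @ Q_hat @ beta)
        t = sigma_hat * np.sqrt(2.0) * norm.ppf(1.0 - alpha) / np.sqrt(n)   # (8)
        return stat, t, stat > t                 # reject H0 iff stat exceeds t

    print(linear_time_test(H, beta))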
The asymptotic distribution (5) holds only when the kernel is fixed, and does not depend on the
sample X, Y. If the kernel were a function of the data, then a test would require large deviation
probabilities over the supremum of the Gaussian process indexed by the kernel parameters (e.g.
[1]). In practice, the threshold would be computed via a bootstrap procedure, which has a high
computational cost. Instead, we set aside a portion of the data to learn the kernel (the "training
data"), and use the remainder to construct a test using the learned kernel parameters.
3 Choice of kernel
The choice of kernel will affect both the test statistic itself, (4), and its asymptotic variance, (6).
Thus, we need to consider how these statistics determine the power of a test with a given level α (the
upper bound on the Type I error). We consider the case where p ≠ q. A Type II error occurs when
the random variable η̂_k falls below the threshold t_{k,α} defined in (8). The asymptotic probability of a
Type II error is therefore

    P(η̂_k < t_{k,α}) = Φ( Φ^{−1}(1 − α) − η_k(p, q) √n / (σ_k √2) ).
As Φ is monotonic, the Type II error probability will decrease as the ratio η_k(p, q) σ_k^{−1} increases.
Therefore, the kernel minimizing this error probability is

    k_* = arg sup_{k∈K} η_k(p, q) σ_k^{−1},    (9)
with the associated test threshold t_{k_*,α}. In practice, we do not have access to the population quantities
η_k(p, q) and σ_k, but only their empirical estimates η̌_k, σ̌_k from m pairs of training points (x_i, y_i)
(this training sample must be independent of the sample used to compute the test parameters η̂_k, σ̂_k).
We therefore estimate t_{k_*,α} by a regularized empirical estimate t_{k̂_*,α}, where

    k̂_* = arg sup_{k∈K} η̌_k (σ̌_{k,λ})^{−1},
and we define the regularized standard deviation

    σ̌_{k,λ} = √( β^⊤ (Q̌ + λ_m I) β ) = √( σ̌_k^2 + λ_m ‖β‖_2^2 ).

The next theorem shows the convergence of sup_{k∈K} η̌_k (σ̌_{k,λ})^{−1} to sup_{k∈K} η_k(p, q) σ_k^{−1}, and of k̂_*
to k_*, for an appropriate schedule of decrease for λ_m with increasing m.
Theorem 1. Let K be defined as in (3). Assume supk?K,x,y?X |k(x, y)| < K and ?k is bounded
?
?
away from zero. Then if ?m = ? m?1/3 ,
?
?
?
?
?
?
?1
?1 ?
?1/3
? sup ??k ?
?
?
sup
?
?
k k ? = OP m
k,?
?k?K
k?K
and
P
k?? ? k? .
?
Proof. Recall from the definition of K that ‖β‖₁ = D, and that ‖β‖₂ ≤ ‖β‖₁ and ‖β‖₁ ≤ √d ‖β‖₂ [11, Problem 3 p. 278], hence ‖β‖₂ ≥ D d^{−1/2}. We begin with the bound

| sup_{k∈K} η̂_k σ̂_{k,λ}^{−1} − sup_{k∈K} η_k σ_k^{−1} |
≤ sup_{k∈K} | η̂_k σ̂_{k,λ}^{−1} − η_k σ_{k,λ}^{−1} | + sup_{k∈K} | η_k σ_{k,λ}^{−1} − η_k σ_k^{−1} |
≤ sup_{k∈K} | η̂_k σ̂_{k,λ}^{−1} − η_k σ̂_{k,λ}^{−1} | + sup_{k∈K} | η_k σ̂_{k,λ}^{−1} − η_k σ_{k,λ}^{−1} | + sup_{k∈K} | η_k σ_{k,λ}^{−1} − η_k σ_k^{−1} |
≤ sup_{k∈K} |η̂_k − η_k| ( σ̂_k² + ‖β‖₂² λ_m )^{−1/2}
  + sup_{k∈K} η_k |σ̂_{k,λ} − σ_{k,λ}| ( (σ_k² + ‖β‖₂² λ_m)(σ̂_k² + ‖β‖₂² λ_m) )^{−1/2}
  + sup_{k∈K} (η_k/σ_k) ‖β‖₂² λ_m ( ‖β‖₂² λ_m + σ_k² )^{−1}
≤ (√d / (D √λ_m)) ( C₁ sup_{k∈K} |η̂_k − η_k| + C₂ sup_{k∈K} |σ̂_{k,λ} − σ_{k,λ}| ) + C₃ D² λ_m,

where constants C₁, C₂, and C₃ follow from the boundedness of η_k and σ_k. The first result in the theorem follows from sup_{k∈K} |η̂_k − η_k| = O_P(m^{−1/2}) and sup_{k∈K} |σ̂_{k,λ} − σ_{k,λ}| = O_P(m^{−1/2}), which are proved using McDiarmid's Theorem [12] and results from [3]; see Appendix A of the supplementary material.
Convergence of k̂* to k*: For k ∈ K defined in (3), we show in Section 4 that k̂* and k* are the unique optimizers of η̂_k σ̂_{k,λ}^{−1} and η_k σ_k^{−1}, respectively. Since sup_{k∈K} η̂_k/σ̂_{k,λ} →_P sup_{k∈K} η_k/σ_k, the result follows from [18, Corollary 3.2.3(i)].
We remark that other families of kernels may be worth considering, besides K. For instance, we could use a family of RBF kernels with continuous bandwidth parameter σ > 0. We return to this point in the conclusions (Section 6).
4 Optimization procedure
We wish to select the kernel k = Σ_{u=1}^d β_u k_u ∈ K that maximizes the ratio η̂_k/σ̂_{k,λ}. We perform this optimization over the training data, then use the resulting parameters β̂* to construct a hypothesis test on the data to be tested (which must be independent of the training data, and drawn from the same p, q). As discussed in Section 2.2, this gives us the test threshold without requiring a bootstrap procedure. Recall from Sections 2.2 and 3 that η̂_k = β⊤η̂, and σ̂_{k,λ} = ( β⊤(Q̂ + λ_m I)β )^{1/2}, where Q̂ is a linear-time empirical estimate of the covariance matrix cov(h). Since the objective

α(β; η̂, Q̂) := β⊤η̂ ( β⊤(Q̂ + λ_m I)β )^{−1/2}

is a homogeneous function of order zero in β, we can omit the constraint ‖β‖₁ = D, and set

β̂* = arg max_{β⪰0} α(β; η̂, Q̂).   (10)
[Figure 1 appears here; panels: "Feature selection" (Type II error vs. Dimension; methods max-ratio, opt, l2, max-mmd), "Grid of Gaussians" (samples from p and q), and Gaussian-grid results (Type II error vs. Ratio ε; methods max-ratio, opt, l2, max-mmd, xval, median).]

Figure 1: Left: Feature selection results, Type II error vs. number of dimensions, average over 5000 trials, m = n = 10⁴. Centre: 3 × 3 Gaussian grid, samples from p and q. Right: Gaussian grid results, Type II error vs. ε, the eigenvalue ratio for the covariance of the Gaussians in q; average over 1500 trials, m = n = 10⁴. The asymptotic test level was α = 0.05 in both experiments. Error bars give the 95% Wald confidence interval.
If η̂ has at least one positive entry, there exists β ⪰ 0 such that α(β; η̂, Q̂) > 0. Then clearly α(β̂*; η̂, Q̂) > 0, so we can write β̂* = arg max_{β⪰0} α²(β; η̂, Q̂). In this case, the problem (10) becomes equivalent to a (convex) quadratic program with a unique solution, given by

min{ β⊤(Q̂ + λ_m I)β : β⊤η̂ = 1, β ⪰ 0 }.   (11)

Under the alternative hypothesis, we have η_u > 0 for all u ∈ {1, . . . , d}, so the same reasoning can be applied to the population version of the optimization problem, i.e., to β* = arg max_{β⪰0} α(β; η, cov(h)), which implies the optimizer β* is unique. In the case where no entries in η̂ are positive, we obtain maximization of a quadratic form subject to a linear constraint,

max{ β⊤(Q̂ + λ_m I)β : β⊤η̂ = −1, β ⪰ 0 }.

While this problem is somewhat more difficult to solve, in practice its exact solution is irrelevant to the Type II error performance of the proposed two-sample test. Indeed, since all of the squared MMD estimates calculated on the training data using each of the base kernels are negative, it is unlikely that the statistic computed on the data used for the test will exceed the (always positive) threshold. Therefore, when no entries in η̂ are positive, we (arbitrarily) select the single base kernel k_u with the largest η̂_u/σ̂_{u,λ}.

The key component of the optimization procedure is the quadratic program in (11). This problem can be solved by interior point methods, or, if the number of kernels d is large, we could use proximal-gradient methods. In this case, an ε-minimizer can be found in O(d²/√ε) time. Therefore, the overall computational cost of the proposed test is linear in the number of samples, and quadratic in the number of kernels.
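As one concrete (but not unique) way to solve (11), the sketch below (ours) uses SciPy's SLSQP solver; an interior-point or proximal-gradient implementation, as discussed above, would be the better choice for large d:

```python
import numpy as np
from scipy.optimize import minimize

def solve_mixture_qp(eta_hat, Q_hat, lam):
    """Solve (11): min beta'(Q_hat + lam*I)beta  s.t.  beta'eta_hat = 1, beta >= 0.
    Assumes eta_hat has at least one positive entry."""
    d = len(eta_hat)
    A = Q_hat + lam * np.eye(d)
    b0 = np.maximum(eta_hat, 0.0)            # feasible start on positive entries
    b0 = b0 / (b0 @ eta_hat)
    res = minimize(lambda b: b @ A @ b, b0,
                   jac=lambda b: 2.0 * A @ b,
                   bounds=[(0.0, None)] * d,
                   constraints=[{"type": "eq",
                                 "fun": lambda b: b @ eta_hat - 1.0}],
                   method="SLSQP")
    return np.maximum(res.x, 0.0)
```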
5 Experiments
We compared our kernel selection strategy to alternative approaches, with a focus on challenging problems that benefit from careful kernel choice. In our first experiment, we investigated a synthetic data set for which the best kernel in the family K of linear combinations in (3) outperforms the best individual kernel from the set {k_u}_{u=1}^d. Here p was a zero-mean Gaussian with unit covariance, and q was a mixture of two Gaussians with equal weight, one with mean 0.5 in the first coordinate and zero elsewhere, and the other with mean 0.5 in the second coordinate and zero elsewhere. Our base kernel set {k_u}_{u=1}^d contained only d univariate kernels with fixed bandwidth (one for each dimension): in other words, this was a feature selection problem. We used two kernel selection strategies arising from our criterion in (9): opt - the kernel from the set K that maximizes the ratio η̂_k/σ̂_{k,λ}, as described in Section 4, and max-ratio - the single base kernel k_u with the largest η̂_u/σ̂_{u,λ}.
[Figure 2 appears here; panels: "AM signals, p", "AM signals, q", and "Amplitude modulated signals" (Type II error vs. added noise σ_ε; methods max-ratio, opt, median, l2, max-mmd).]

Figure 2: Left: amplitude modulated signals, four samples from each of p and q prior to noise being added. Right: AM results, Type II error vs. added noise, average over 5000 trials, m = n = 10⁴. The asymptotic test level was α = 0.05. Error bars give the 95% Wald confidence interval.
We used λ_m = 10⁻⁴ in both cases. An alternative kernel selection procedure is simply to maximize the MMD on the training data, which is equivalent to minimizing the error in classifying p vs. q under linear loss [15]. In this case, it is necessary to bound the norm of β, since the test statistic can otherwise be increased without limit by rescaling the entries of β. We employed two such kernel selection strategies: max-mmd - a single base kernel k_u that maximizes η̂_u (as proposed in [15]), and l2 - a kernel from the set K that maximizes η̂_k subject to the constraint ‖β‖₂ ≤ 1 on the vector of weights.

Our results are shown in Figure 1. We see that opt and l2 perform much better than max-ratio and max-mmd, with the former pair each placing large β̂* weights on both relevant dimensions, whereas the latter are permitted to choose only a single kernel. The performance advantage decreases as more irrelevant dimensions are added. Also note that on these data, there is no statistically significant difference between opt and l2, or between max-ratio and max-mmd.
Difficult problems in two-sample testing arise when the main data variation does not reflect the difference between p and q; rather, this difference is encoded as perturbations at much smaller lengthscales. In these cases, a good choice of kernel becomes crucial. Both remaining experiments are of this type.

In the second experiment, p and q were both grids of Gaussians in two dimensions, where p had unit covariance matrices in each mixture component, and q was a grid of correlated Gaussians with a ratio ε of largest to smallest covariance eigenvalues. A sample dataset is provided in Figure 1. The testing problem becomes more difficult when the number of Gaussian centers in the grid increases, and when ε → 1. In our experiments, we used a five-by-five grid.

We compared opt, max-ratio, max-mmd, and l2, as well as an additional approach, xval, for which we chose the best kernel from {k_u}_{u=1}^d by five-fold cross-validation, following [17]. In this case, we learned a witness function on four fifths of the training data, and used it to evaluate the linear loss on p vs. q for the remaining fifth of the training data (see [7, Section 2.3] for the witness function definition, and [15] for the classification interpretation of the MMD). We made repeated splits to obtain the average validation error, and chose the kernel with the highest average MMD on the validation sets (equivalently, the lowest average linear loss). This procedure has cost O(m²), and is much more computationally demanding than the remaining approaches.

Our base kernels {k_u}_{u=1}^d in (3) were multivariate isotropic Gaussians with bandwidth varying between 2⁻¹⁰ and 2¹⁵, with a multiplicative step-size of 2^{0.5}, and we set λ_m = 10⁻⁵. Results are plotted in Figure 1: opt and max-ratio are statistically indistinguishable, followed in order of decreasing performance by xval, max-mmd, and l2. The median heuristic fails entirely, yielding the 95% error expected under the null hypothesis. It is notable that the cross-validation approach performs less well than our criterion, which suggests that a direct approach addressing the Type II error is preferable to optimizing the classifier performance.
In our final experiment, the distributions p, q were short samples of amplitude modulated (AM)
signals, which were carrier sinusoids with amplitudes scaled by different audio signals for p and q.
These signals took the form

y(t) = cos(ω_c t) (A s(t) + o_c) + n(t),

where y(t) is the AM signal at time t, s(t) is an audio signal, ω_c is the frequency of the carrier signal, A is an amplitude scaling parameter, o_c is a constant offset, and n(t) is i.i.d. Gaussian noise with standard deviation σ_ε. The source audio signals were [5, Vol. 1, Track 2; Vol. 2, Track 17], and had the same singer but different accompanying instruments. Both songs were normalized to have unit standard deviation, to avoid a trivial distinction on the basis of sound volume. The audio was sampled at 8 kHz, the carrier was at 24 kHz, and the resulting AM signals were sampled at 120 kHz. Further settings were A = 0.5 and o_c = 2. We extracted signal fragments of length 1000, corresponding to a time duration of 8.3 × 10⁻³ seconds in the original audio. Our base kernels {k_u}_{u=1}^d in (3) were multivariate isotropic Gaussians with bandwidth varying between 2⁻¹⁵ and 2¹⁵, with a multiplicative step-size of 2, and we set λ_m = 10⁻⁵.
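A sketch of this construction (our code; it assumes the audio s has already been resampled to the 120 kHz AM rate, and the default σ_ε is an arbitrary placeholder, not a value from the experiments):

```python
import numpy as np

def am_signal(s, fs=120_000, fc=24_000, A=0.5, oc=2.0, sigma_eps=0.2, rng=None):
    """y(t) = cos(w_c t) (A s(t) + o_c) + n(t), with n(t) i.i.d. N(0, sigma_eps^2)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(len(s)) / fs
    carrier = np.cos(2.0 * np.pi * fc * t)
    return carrier * (A * s + oc) + sigma_eps * rng.standard_normal(len(s))
```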
Sample extracts from each source and the Type II error versus noise level σ_ε are shown in Figure 2. Here max-ratio does best, with successively decreasing performance by opt, max-mmd, l2, and median. We remark that in the second and third experiments, simply choosing the kernel k_u with the largest ratio η̂_u/σ̂_{u,λ} does as well as or better than solving for β̂* in (11). The max-ratio strategy is thus recommended when a single best kernel exists in the set {k_u}_{u=1}^d, although it clearly fails when a linear combination of several kernels is needed (as in the first experiment).
Further experiments are provided in the supplementary material. These include an empirical verification that the Type I error is close to the design parameter α, a check that kernels are not chosen at extreme values when the null hypothesis holds, additional AM experiments, and further synthetic benchmarks.
6 Conclusions
We have proposed a criterion to explicitly optimize the Hodges and Lehmann asymptotic relative
efficiency for the kernel two-sample test: the kernel parameters are chosen to minimize the asymptotic Type II error at a given Type I error. In experiments using linear combinations of kernels, this
approach often performs significantly better than the simple strategy of choosing the kernel with
largest MMD (the previous best approach), or maximizing the MMD subject to an ℓ₂ constraint on the kernel weights, and yields good performance even when the median heuristic fails completely.
A promising next step would be to optimize over the parameters of a single kernel (e.g., over the
bandwidth of an RBF kernel). This presents two challenges: first, in proving that a finite sample
estimate of the kernel selection criterion converges, which might be possible following [15]; and
second, in efficiently optimizing the criterion over the kernel parameter, where we could employ a
DC programming [2] or semi-infinite programming [6] approach.
Acknowledgements: Part of this work was accomplished when S. B. was visiting the MPI for Intelligent Systems. We thank Samory Kpotufe and Bernhard Schölkopf for helpful discussions.
References
[1] R. Adler and J. Taylor. Random Fields and Geometry. Springer, 2007.
[2] Andreas Argyriou, Raphael Hauser, Charles A. Micchelli, and Massimiliano Pontil. A DC-programming algorithm for kernel selection. In ICML, pages 41–48, 2006.
[3] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, 2004.
[5] Magnetic Fields. 69 Love Songs. Merge, MRG169, 1999.
[6] P. Gehler and S. Nowozin. Infinite kernel learning. Technical Report TR-178, Max Planck Institute for Biological Cybernetics, 2008.
[7] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, 13:723–773, 2012.
[8] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In Advances in Neural Information Processing Systems 15, pages 513–520, Cambridge, MA, 2007. MIT Press.
[9] A. Gretton, K. Fukumizu, Z. Harchaoui, and B. Sriperumbudur. A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems 22, Red Hook, NY, 2009. Curran Associates Inc.
[10] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In Advances in Neural Information Processing Systems 20, pages 609–616. MIT Press, Cambridge, MA, 2008.
[11] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[12] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
[13] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[14] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the International Conference on Algorithmic Learning Theory, volume 4754, pages 13–31. Springer, 2007.
[15] B. Sriperumbudur, K. Fukumizu, A. Gretton, G. Lanckriet, and B. Schölkopf. Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems 22, Red Hook, NY, 2009. Curran Associates Inc.
[16] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
[17] M. Sugiyama, T. Suzuki, Y. Itoh, T. Kanamori, and M. Kimura. Least-squares two-sample test. Neural Networks, 24(7):735–751, 2011.
[18] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
Communication-Efficient Algorithms for
Statistical Optimization
Yuchen Zhang¹
John C. Duchi¹
Martin Wainwright¹,²
¹Department of Electrical Engineering and Computer Science and ²Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
{yuczhang,jduchi,wainwrig}@eecs.berkeley.edu
Abstract
We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that
distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp
analysis of this average mixture algorithm, showing that under a reasonable set of
conditions, the combined parameter achieves mean-squared error that decays as O(N⁻¹ + (N/m)⁻²). Whenever m ≤ √N, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all N samples.
The second algorithm is a novel method, based on an appropriate form of the
bootstrap. Requiring only a single round of communication, it has mean-squared
error that decays as O(N⁻¹ + (N/m)⁻³), and so is more robust to the amount of
parallelization. We complement our theoretical results with experiments on largescale problems from the internet search domain. In particular, we show that our
methods efficiently solve an advertisement prediction problem from the Chinese
SoSo Search Engine, which consists of N ≈ 2.4 × 10⁸ samples and d ≈ 700,000 dimensions.
1 Introduction
Many problems in machine learning are based on a form of (regularized) empirical risk minimization. Given the current explosion in the size and amount of data, a central challenge in machine
learning is to design efficient algorithms for solving large-scale problem instances. In a centralized setting, there are many procedures for solving empirical risk minimization problems, including
standard convex programming approaches [3] as well as various types of stochastic approximation [19, 8, 14]. When the size of the dataset becomes extremely large, however, it may be infeasible
to store all of the data on a single computer, or at least to keep the data in memory. Accordingly,
the focus of this paper is the theoretical analysis and empirical evaluation of some distributed and
communication-efficient procedures for empirical risk minimization.
Recent years have witnessed a flurry of research on distributed approaches to solving very large-scale
statistical optimization problems (e.g., see the papers [13, 17, 9, 5, 4, 2, 18] and references therein).
It can be difficult within a purely optimization-theoretic setting to show explicit benefits arising
from distributed computation. In statistical settings, however, distributed computation can lead to
gains in statistical efficiency, as shown by Dekel et al. [4] and extended by other authors [2, 18].
Within the family of distributed algorithms, there can be significant differences in communication
complexity: different computers must be synchronized, and when the dimensionality of the data
is high, communication can be prohibitively expensive. It is thus interesting to study distributed
inference algorithms that require limited synchronization and communication while still enjoying
the statistical power guaranteed by having a large dataset.
With this context, perhaps the simplest algorithm for distributed statistical inference is what we
term the average mixture (AVGM) algorithm. This approach has been studied for conditional random fields [10], for perceptron-type algorithms [12], and for certain stochastic approximation methods [23]. It is an appealingly simple method: given m different machines and a dataset of size
N = nm, give each machine a (distinct) dataset of size n = N/m, have each machine i compute
the empirical minimizer θ_i on its fraction of the data, then average all the parameters θ_i across the
network. Given an empirical risk minimization algorithm that works on one machine, the procedure
is straightforward to implement and is extremely communication efficient (requiring only one round
of communication); it is also relatively robust to failure and slow machines, since there is no repeated
synchronization. To the best of our knowledge, however, no work has shown theoretically that the
AVGM procedure has greater statistical efficiency than the naive approach of using n samples on
a single machine. In particular, Mann et al. [10] prove that the AVGM approach enjoys a variance
reduction relative to the single processor solution, but they only prove that the final mean-squared
error of their estimator is O(1/n), since they do not show a reduction in the bias of the estimator.
Zinkevich et al. [23] propose a parallel stochastic gradient descent (SGD) procedure, which runs
SGD independently on k machines for T iterations, averaging the outputs. The algorithm enjoys
good practical performance, but their main result [23, Theorem 12] guarantees a convergence rate
of O(log k/T ), which is no better than sequential SGD on a single machine processing T samples.
This paper makes two main contributions. First, we provide a sharp analysis of the AVGM algorithm,
showing that under a reasonable set of conditions on the statistical risk function, it can indeed achieve
substantially better rates. More concretely, we provide bounds on the mean-squared error that decay
as O((nm)?1 +n?2 ). Whenever the number of machines m is less than the number of samples n per
machine, this guarantee matches the best possible rate achievable by a centralized algorithm having
access to all N = nm samples. This conclusion is non-trivial and requires a surprisingly careful
analysis. Our second contribution is to develop a novel extension of simple averaging; it is based
on an appropriate form of bootstrap [6, 7], which we refer to bootstrap average mixture (BAVGM)
approach. At a high level, the BAVGM algorithm distributes samples evenly among m processors or
computers as before, but instead of simply returning the empirical minimizer, each processor further
subsamples its own dataset in order to estimate the bias of its local estimate, returning a bootstrapcorrected estimate. We then prove that the BAVGM algorithm has mean-squared error decaying as
O(m?1 n?1 + n?3 ). Thus, as long as m < n2 , the bootstrap method matches the centralized gold
standard up to higher order terms. Finally, we complement our theoretical results with experiments
on simulated data and a large-scale logistic regression experiment that arises from the problem of
predicting whether a user of a search engine will click on an advertisement. Our experiments show
that the resampling and correction of the BAVGM method provide substantial performance benefits
over naive solutions as well as the averaging algorithm AVGM.
2 Problem set-up and methods
Let {f(θ; x), x ∈ X} be a collection of convex loss functions with domain containing the convex set Θ ⊆ R^d. Let P be a probability distribution over the sample space X, and define the population risk function F_0 : Θ → R via

F_0(θ) := E_P[f(θ; X)] = ∫_X f(θ; x) dP(x).

We wish to estimate the risk-minimizing parameter θ* = argmin_{θ∈Θ} F_0(θ) = argmin_{θ∈Θ} ∫_X f(θ; x) dP(x), which we assume to be unique. In practice, the population distribution P is unknown to us, but we have access to a collection S of samples from the distribution P. In empirical risk minimization, one estimates the vector θ* by solving the optimization problem

θ̂ ∈ argmin_{θ∈Θ} (1/|S|) Σ_{x∈S} f(θ; x).

Throughout the paper, we impose some standard regularity conditions on the parameter space and its relationship to the optimal parameter θ*.

Assumption A (Parameters). The parameter space Θ ⊆ R^d is closed and convex with θ* ∈ int Θ.

We use R = sup_{θ∈Θ} ‖θ − θ*‖₂ to denote the ℓ₂-diameter of the parameter space with respect to the optimum. In addition, the risk function is required to have some amount of curvature:

Assumption B (Local strong convexity). There exists a λ > 0 such that the population Hessian matrix ∇²F_0(θ*) ⪰ λ I_{d×d}.
Here ∇²F_0(θ*) denotes the Hessian of the population objective F_0 evaluated at θ*. Note that this local condition is milder than a global strong convexity condition and is required to hold only for the population risk F_0. It is of course well-known that some type of curvature is required to consistently estimate the parameters θ*.
We now describe our methods. In the distributed setting, we are given a dataset of N = mn samples i.i.d. according to the initial distribution P, which we divide evenly amongst m processors or inference procedures. Let S_j, j ∈ {1, 2, . . . , m}, denote a subsampled dataset of size n, and define the (local) empirical distribution P_{1,j} and empirical objective F_{1,j} via

P_{1,j} := (1/|S_j|) Σ_{x∈S_j} δ_x   and   F_{1,j}(θ) := (1/|S_j|) Σ_{x∈S_j} f(θ; x) = ∫_X f(θ; x) dP_{1,j}(x).

The AVGM procedure operates as follows: for j ∈ {1, . . . , m}, machine j uses its dataset S_j to compute a vector θ_{1,j} ∈ argmin_{θ∈Θ} F_{1,j}(θ). AVGM combines these m estimates by averaging:

θ̄_1 := (1/m) Σ_{j=1}^m θ_{1,j}.   (1)
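In code, AVGM is one parallel map followed by an average. A minimal sketch (ours), where local_solver is a hypothetical stand-in for any empirical risk minimizer:

```python
import numpy as np

def avgm(shards, local_solver):
    """Average mixture (1): solve the ERM problem on each of the m shards
    independently (embarrassingly parallel), then average the solutions."""
    thetas = [local_solver(shard) for shard in shards]
    return np.mean(thetas, axis=0)
```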
The bootstrap average mixture (BAVGM) procedure is based on an additional level of random sampling. In particular, for a parameter r ∈ (0, 1], each machine j draws a subset S_{2,j} of size ⌈rn⌉ by sampling uniformly at random without replacement from its local data set S_j. In addition to computing the empirical minimizer θ_{1,j} based on S_j, BAVGM also computes the empirical minimizer θ_{2,j} of the function F_{2,j}(θ) := (1/|S_{2,j}|) Σ_{x∈S_{2,j}} f(θ; x), constructing the bootstrap average θ̄_2 := (1/m) Σ_{j=1}^m θ_{2,j} and returning the estimate

θ_BAVGM := (θ̄_1 − rθ̄_2)/(1 − r).   (2)

The parameter r ∈ (0, 1) is a user-defined quantity. The purpose of the weighted estimate (2) is to perform a form of bootstrap bias correction [6, 7]. In rough terms, if b_0 = θ* − θ̄_1 is the bias of the first estimator, then we may approximate b_0 by the bootstrap estimate of bias b_1 = θ̄_1 − θ̄_2. Then, since θ* = θ̄_1 + b_0, we use the fact that b_0 ≈ b_1 to argue that θ* = θ̄_1 + b_0 ≈ θ̄_1 + b_1.¹
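The corresponding BAVGM sketch (ours, under the same assumptions; shards are NumPy arrays so that they can be indexed by the subsample):

```python
import numpy as np

def bavgm(shards, local_solver, r, rng=None):
    """Bootstrap average mixture (2): (theta_bar_1 - r*theta_bar_2)/(1 - r)."""
    rng = np.random.default_rng() if rng is None else rng
    t1, t2 = [], []
    for shard in shards:
        t1.append(local_solver(shard))                        # theta_{1,j}
        idx = rng.choice(len(shard), size=int(np.ceil(r * len(shard))),
                         replace=False)                       # ceil(rn), w/o replacement
        t2.append(local_solver(shard[idx]))                   # theta_{2,j}
    theta1, theta2 = np.mean(t1, axis=0), np.mean(t2, axis=0)
    return (theta1 - r * theta2) / (1.0 - r)
```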
3 Main results
3.1 Bounds for simple averaging
To guarantee good estimation properties of our algorithms, we require regularity conditions on the
empirical risks F1 and F2 . It is simplest to state these in terms of the sample functions f , and we
note that, as with Assumption B, we require these to hold only locally around the optimal point θ*.
Assumption C. For some ρ > 0, there exists a neighborhood U = {θ ∈ R^d : ‖θ* − θ‖₂ ≤ ρ} ⊆ Θ such that for arbitrary x ∈ X, the gradient and the Hessian of f exist and satisfy the bounds

‖∇f(θ; x)‖₂ ≤ G   and   |||∇²f(θ; x)|||₂ ≤ H

for finite constants G, H. For x ∈ X, the Hessian matrix ∇²f(θ; x) is Lipschitz continuous for θ ∈ U: there is a constant L such that |||∇²f(v; x) − ∇²f(w; x)|||₂ ≤ L ‖v − w‖₂ for v, w ∈ U.
While Assumption C may appear strong, some smoothness of ∇²f is necessary for averaging methods to work, as we now demonstrate by an example. (In fact, this example underscores the difficulty of proving that the AVGM algorithm achieves better mean-squared error than single-machine strategies.) Consider the distribution on {0, 1} with P(X = 0) = P(X = 1) = 1/2, and use the loss

f(θ; x) = θ² − θ if x = 0,   and   f(θ; x) = θ² 1(θ ≤ 0) + θ if x = 1.   (3)

The associated population risk is F_0(w) = (1/2)(w² + w² 1(w ≤ 0)), which is strongly convex and smooth, since |F_0′(w) − F_0′(v)| ≤ 2|w − v|, but has a discontinuous second derivative. Evidently θ* = 0, and by an asymptotic expansion we have that E[θ_1] = Ω(n^{−1/2}) (see the long version of our paper [22, Appendix D] for this asymptotic result). Consequently, the bias of θ̄_1 is Ω(n^{−1/2}), and the AVGM
Appendix D] for this asymptotic result). Consequently, the bias of ?1 is ?(n? 2 ), and the AVGM
1
When the index j is immaterial, we use the shorthand notation ?1 and ?2 to denote ?1,j and ?2,j , respectively, and similarly with other quantities.
3
algorithm using N = mn observations must suffer mean squared error E[(?1 ? ?? )2 ] = ?(n?1 ).
Some type of smoothness is necessary for fast rates.
That being said, Assumptions B and C are somewhat innocuous for practical problems. Both hold
for logistic and linear regression problems so long as the population data covariance matrix is not
rank deficient and the data is bounded; moreover, in the linear regression case, we have L = 0.
Our assumptions in place, we present our first theorem on the convergence of the AVGM procedure.
We provide the proof of Theorem 1 (under somewhat milder assumptions) and its corollaries in the full version of this paper [22].

Theorem 1. For each i ∈ {1, . . . , m}, let S_i be a dataset of n independent samples, and let

θ_{1,i} ∈ argmin_{θ∈Θ} (1/n) Σ_{x_j∈S_i} f(θ; x_j)

denote the minimizer of the empirical risk for the dataset S_i. Define θ̄_1 = (1/m) Σ_{i=1}^m θ_{1,i} and let θ* denote the population risk minimizer. Then under Assumptions A–C, we have

E[‖θ̄_1 − θ*‖₂²] ≤ (2/(nm)) E[‖∇²F_0(θ*)⁻¹ ∇f(θ*; X)‖₂²]
  + (5/(λ²n²)) ( H² log d + E[‖∇²F_0(θ*)⁻¹ ∇f(θ*; X)‖₂²] ) E[‖∇²F_0(θ*)⁻¹ ∇f(θ*; X)‖₂²]
  + O(m⁻¹n⁻²) + O(n⁻³).   (4)
A simple corollary of Theorem 1 makes it somewhat easier to parse, though we prefer the general form in the theorem as its dimension dependence is somewhat stronger. Specifically, note that by definition of the operator norm, ‖Ax‖₂ ≤ |||A|||₂ ‖x‖₂ for any matrix A and vector x. Consequently,

‖∇²F_0(θ*)⁻¹ ∇f(θ*; x)‖₂ ≤ |||∇²F_0(θ*)⁻¹|||₂ ‖∇f(θ*; x)‖₂ ≤ (1/λ) ‖∇f(θ*; x)‖₂,

where for the last inequality we used Assumption B. In general, this upper bound may be quite loose, and in many statistical applications (such as linear regression) multiplying ∇f(θ*; X) by the inverse Hessian standardizes the data. Assumption C implies E[‖∇f(θ*; X)‖₂²] ≤ G², so that we arrive at the following:
Corollary 1. Under the same conditions as Theorem 1, we have

E[‖θ̄_1 − θ*‖₂²] ≤ 2G²/(λ²nm) + (5G²/(λ⁴n²)) ( H² log d + 2G²/λ² ) + O(m⁻¹n⁻²) + O(n⁻³).
A comparison of Theorem 1's conclusions with classical statistical results is also informative. If the loss f(θ; x) : Θ → R is the negative log-likelihood ℓ(x | θ) for a parametric model P(· | θ*), then under suitable smoothness conditions on the log-likelihood [21], we can define the Fisher information matrix

I(θ*) := E_{θ*}[∇ℓ(X | θ*) ∇ℓ(X | θ*)⊤] = E_{θ*}[∇²ℓ(X | θ*)],

where E_{θ*} denotes expectation under the model P(· | θ*). Let N = mn denote the total number of samples available. Then under our assumptions, we have the minimax result [21, Theorem 8.11] that for any estimator θ̂_N based on N samples,

sup_{M<∞} liminf_{N→∞} sup_{‖δ‖₂ ≤ M/√N} N E_{θ*+δ}[‖θ̂_N − θ* − δ‖₂²] ≥ tr(I(θ*)⁻¹).   (5)
In connection with Theorem 1, we obtain the comparative result:

Corollary 2. Let the assumptions of Theorem 1 hold, and assume that the loss functions f(θ; x) are the negative log-likelihood ℓ(x | θ) for a parametric model P(· | θ*). Let N = mn. Then

E[‖θ̄_1 − θ*‖₂²] ≤ (2/N) tr(I(θ*)⁻¹) + (5m²/(λ²N²)) ( H² log d + tr(I(θ*)⁻¹) ) tr(I(θ*)⁻¹) + O(m⁻¹n⁻²).
Except for the factor of 2 in the bound, Corollary 2 shows that Theorem 1 essentially achieves the
best possible result. The important aspect of our bound, however, is that we obtain this convergence
rate without calculating an estimate on all N = mn data samples xi ; we calculate m independent
estimators and average them to attain the convergence guarantee.
3.2 Bounds for bootstrap mixture averaging
As shown in Theorem 1 and the immediately preceding corollary, for small m, the convergence rate of the AVGM algorithm is mainly determined by the first term in the bound (4), which is at worst G²/(λ²mn). When the number of processors m grows, however, the second term in the bound (4) may have non-negligible effect in spite of being O(n⁻²). In addition, when the population risk's local strong convexity parameter λ is close to zero or the Lipschitz continuity constant H of ∇f(θ; x) is large, the n⁻² term in the bound (4) and Corollary 1 may dominate the leading term. This concern motivates our development of the bootstrap average mixture (BAVGM) algorithm and analysis.
Due to the additional randomness introduced by the bootstrap algorithm BAVGM, its analysis requires an additional smoothness condition. In particular, we require that in a neighborhood of the optimal point θ*, the loss function f is smooth through its third derivatives.

Assumption D. For some ρ > 0, there exists a neighborhood U = {θ ∈ R^d : ‖θ* − θ‖₂ ≤ 2ρ} ⊆ Θ such that the smoothness conditions of Assumption C hold. For x ∈ X, the third derivatives of f are Lipschitz continuous: there is a constant M ≥ 0 such that for v, w ∈ U and u ∈ R^d,

‖(∇³f(v; x) − ∇³f(w; x))(u ⊗ u)‖₂ ≤ M ‖v − w‖₂ |||u ⊗ u|||₂ = M ‖v − w‖₂ ‖u‖₂².
Note that Assumption D holds for linear regression (in fact, with M = 0); it also holds for logistic
regression problems with finite M as long as the data is bounded.
We now state our second main theorem, which shows that the use of bootstrap samples to reduce the bias of the AVGM algorithm yields improved performance. (Again, see [22] for a proof.)

Theorem 2. Let Assumptions A–D hold. Then the output θ_BAVGM = (θ̄_1 − rθ̄_2)/(1 − r) of the bootstrap BAVGM algorithm satisfies

E[‖θ_BAVGM − θ*‖₂²] ≤ ((2 + 3r)/(1 − r)²) (1/(nm)) E[‖∇²F_0(θ*)⁻¹ ∇f(θ*; X)‖₂²]
  + O( (1/(1 − r)²) m⁻¹n⁻² + (1/(r(1 − r)²)) n⁻³ ).   (6)
Comparing the conclusions of Theorem 2 to those of Theorem 1, we see that the O(n⁻²) term in the bound (4) has been eliminated. The reason for this elimination is that resampling at a rate r reduces the bias of the BAVGM algorithm to O(n⁻³); the bias of the AVGM algorithm induces terms of order n⁻² in Theorem 1. Unsurprisingly, Theorem 2 suggests that the performance of the BAVGM algorithm is affected by the resampling rate r; typically, one uses r ∈ (0, 1). Roughly, when m becomes large we increase r, since the bias of the independent solutions may increase and we enjoy averaging effects from the BAVGM algorithm. When m is small, the BAVGM algorithm appears to provide limited benefits. The big-O notation hides some problem-dependent constants for simplicity in the bound. We leave as an intriguing open question whether computing multiple bootstrap samples at each machine can yield improved performance for the BAVGM procedure.
3.3 Time complexity
In practice, the exact empirical minimizers assumed in Theorems 1 and 2 may be unavailable. In this section, we sketch an argument showing that both the AVGM algorithm and the BAVGM algorithm can use approximate empirical minimizers to achieve the same (optimal) asymptotic bounds. Indeed, suppose that we employ approximate empirical minimizers in AVGM and BAVGM instead of the exact ones.² Let the vector θ′ denote the approximation to the vector θ (at each point of the algorithm). With this notation, we have by the triangle inequality and Jensen's inequality that

E[‖θ̄_1′ − θ*‖₂²] ≤ 2E[‖θ̄_1 − θ*‖₂²] + 2E[‖θ̄_1′ − θ̄_1‖₂²] ≤ 2E[‖θ̄_1 − θ*‖₂²] + 2E[‖θ_1′ − θ_1‖₂²].   (7)

The bound (7) shows that solving the empirical minimization problem to an accuracy sufficient to have E[‖θ_1′ − θ_1‖₂²] = O((mn)⁻²) guarantees the same convergence rates provided by Theorem 1.
Now we show that in time O(n log(mn)), assuming that processing one sample requires one unit of time, it is possible to achieve empirical accuracy O((nm)⁻²). When this holds, the speedup
²We provide the arguments only for the AVGM algorithm to save space; the arguments for the BAVGM algorithm are completely similar, though they also include θ̄_2.
[Figure 1 appears here; both panels plot ‖θ̂ − θ*‖₂² versus the number m of machines for the Average, Bootstrap, and All (centralized) methods.]

Figure 1: Experiments plotting the error in the estimate of θ* given by the AVGM algorithm and the BAVGM algorithm for a total number of samples N = 10⁵ versus the number of dataset splits (parallel machines) m. Each plot indicates a different dimension d of the problem. (a) d = 20, (b) d = 100.
of the AVGM and similar algorithms over the naive approach of processing all N = mn samples on one processor is at least of order m/log(N). Let us argue that for such time complexity the necessary empirical convergence is achievable. As we show in our proof of Theorem 1, the empirical risk F_1 is, with high probability, strongly convex in a ball B_ρ(θ_1) of constant radius ρ > 0 around θ_1. (A similar conclusion holds for F_2.) A combination of stochastic gradient descent [14] and standard convex programming approaches [3] completes the argument. Indeed, performing stochastic gradient descent for O(log²(mn)/ρ²) iterations on the empirical objective F_1 yields that with probability at least 1 − m⁻²n⁻², the resulting parameter falls within B_ρ(θ_1) [14, Proposition 2.1]. The local strong convexity guarantees that O(log(mn)) iterations of standard gradient descent [3, Chapter 9], each requiring O(n) units of time, beginning from this parameter suffice to achieve E[‖θ_1′ − θ_1‖₂²] = O((mn)⁻²), since gradient descent enjoys a locally linear convergence rate. The procedure outlined requires at most O(n log(mn)) units of time.

We also remark that under a slightly more global variant of Assumptions A–C, we can show that stochastic gradient descent achieves convergence rates of O((mn)⁻¹ + n^{−3/2}), which is order-optimal. See the full version of this paper [22, Section 3.4] for this result.
4 Experiments with synthetic data
In this section, we report the results of simulation studies comparing the AVGM and BAVGM methods, as well as a trivial method using only a fraction of the data available on a single machine. For
our simulated experiments, we solve linear regression problems of varying dimensionality. For each
experiment, we use a fixed total number N = 10⁵ of samples, but we vary the number of parallel splits m of the data (and consequently, the local dataset sizes n = N/m) and the dimensionality d of the problem solved. For each simulation, we choose a constant vector u ∈ R^d. The data samples consist of pairs (x, y), where x ∈ R^d and y ∈ R is the target value. To sample each x vector, we choose five entries of x distributed as N(0, 1); the remainder of x is zero. The vector y is sampled as y = ⟨u, x⟩ + Σ_{j=1}^d (x_j/2)³, so the noise in the linear estimate ⟨u, x⟩ is correlated with x. For our linear regression problem, we use the loss f(θ; (x, y)) := (1/2)(⟨θ, x⟩ − y)². We attempt to find the vector θ* minimizing F(θ) = E[f(θ; (X, Y))] using the standard batch solution, using AVGM, using BAVGM, and simply solving the linear regression problem resulting from a single split of the data (of size N/m). We use m ∈ {2, 4, 8, 16, 32, 64, 128} datasets, recalling that the distributed datasets are of size n = N/m. We perform experiments with each of the dimensionalities d = 20, 50, 100, 200, 400. (We plot d = 20 and d = 100; other results are qualitatively similar.)
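A sketch of this generator (our code; the text does not pin down whether the five non-zero coordinates are fixed or re-drawn per sample, so we re-draw them, which is an assumption):

```python
import numpy as np

def make_synthetic(n, d, u, rng):
    """x has five N(0,1) entries (rest zero); y = <u, x> + sum_j (x_j / 2)^3."""
    X = np.zeros((n, d))
    for i in range(n):
        support = rng.choice(d, size=5, replace=False)
        X[i, support] = rng.standard_normal(5)
    y = X @ u + np.sum((X / 2.0) ** 3, axis=1)
    return X, y
```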
Let θ̂ denote the vector output by any of our procedures after inference (so in the BAVGM case, for example, this is the vector θ̂ = θ_BAVGM = (θ̄_1 − rθ̄_2)/(1 − r)). We obtain the true optimal vector θ* by solving the linear regression problem with a sufficiently large number of samples. In Figure 1, we plot the error ‖θ̂ − θ*‖₂² of the inferred parameter vector θ̂ for the true parameters θ* versus the number of splits, or number of parallel machines, m we use. We also plot standard errors (across
[Figure 2 appears here; panel (a) plots ‖θ̂ − θ*‖₂² versus the number m of machines for the Average and Single-split methods; panel (b) plots held-out negative log-likelihood versus the sub-sampling rate r for BAVGM with m = 128.]

Figure 2: (a) Synthetic data: comparison of the AVGM estimator to a linear regression estimator based on N/m data points. (b) Advertising data: the log-loss on held-out data for the BAVGM method applied with m = 128 parallel splits of the data, plotted versus the sub-sampling rate r.
fifty experiments) for each curve. In each plot, the flat bottom line is the error of the batch method
using all the N samples.
From the plots in Figure 1, we can make a few claims. First, the AVGM and BAVGM algorithms indeed enjoy excellent performance, as our theory predicts. Even as the dimensionality d grows, we see that splitting the data into as many as m = 64 independent pieces and averaging the solution vectors θ_i estimated from each subsample i yields a vector θ̂ whose estimate of θ* is no worse than twice the solution using all N samples. We also see that the AVGM curve appears to increase roughly quadratically with m. This agrees with our theoretical predictions in Theorem 1. Indeed, setting n = N/m, we see that Theorem 1 implies E[‖θ̄_1 − θ*‖₂²] = O(1/(mn) + 1/n²) = O(1/N + m²/N²), which matches Figure 1. In addition, we see that the BAVGM algorithm enjoys somewhat more stable performance, with increasing benefit as the number of machines m increases. We chose r ∝ √(d/n) for the BAVGM algorithm, as that choice appeared to give reasonable performance. (The optimal choice of r remains an open question.)
As a check that our results are not simply consequences of the fact that the problems are easy to solve, even using a fraction 1/m of the data on a single machine, in Figure 2(a) we plot the estimation error ‖θ̂ − θ*‖₂² of an estimate of θ* based on just a fraction 1/m of the data versus the number of machines/data splits m. Clearly, the average mixture approach dominates. (Figure 2(a) uses d = 20; larger dimensions are similar but more pronounced.)
5 Experiments with advertising data
Predicting whether a user of a search engine will click on an advertisement presented to him or her
is of central importance to the business of several internet companies, and in this section, we present
experiments studying the performance of the AVGM and BAVGM methods for this task. We use
a large dataset from the Tencent search engine, soso.com [20], which contains 641,707 distinct
advertisement items with N = 235,582,879 data samples. Each sample consists of a so-called
impression, which is a list containing a user-issued search, the advertisement presented to the user
and a label y ∈ {+1, −1} indicating whether the user clicked on the advertisement. The ads in our
dataset were presented to 23,669,283 distinct users.
The Tencent dataset provides a standard encoding to transform an impression into a usable set of regressors x. We list the features present in the data in Table 1 of the full version of this paper [22]. Each text-based feature is given a "bag-of-words" encoding [11]. Real-valued features are binned into a fixed number of intervals. When a feature falls into a particular bin, the corresponding entry of x is assigned a 1, and is otherwise assigned 0. This combination of encodings yields a binary-valued covariate vector x ∈ {0, 1}^d with d = 741,725 dimensions.
Our goal is to predict the probability of a user clicking a given advertisement as a function of the covariates x. In order to do so, we use a logistic regression model to estimate the probability of a click response

P(y = 1 | x; θ) := 1 / (1 + exp(−⟨θ, x⟩)),

where θ ∈ R^d is the unknown regression vector.
[Figure 3 appears here; panel (a) plots held-out negative log-likelihood versus the number of machines m for SGD, AVGM, BAVGM (r = 0.1), and BAVGM (r = 0.25); panel (b) plots held-out negative log-likelihood versus the number of passes of SGD.]

Figure 3: The negative log-likelihood of the output of the AVGM, BAVGM, and a stochastic gradient descent method on the held-out dataset for the click-through prediction task. (a) Performance of the AVGM and BAVGM methods versus the number of splits m of the data. (b) Performance of the SGD baseline as a function of the number of passes through the entire dataset.
We use the negative logarithm of P as the loss, incorporating a ridge regularization penalty. This combination yields the optimization objective

f(θ; (x, y)) = log(1 + exp(−y⟨θ, x⟩)) + (λ/2)‖θ‖₂².

In all our experiments, we use the regularization parameter λ = 10⁻⁶, a choice obtained by cross-validation.
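A sketch of this objective and its gradient (our code; the expit/logaddexp calls are our choice for numerical stability, not the paper's implementation):

```python
import numpy as np
from scipy.special import expit

def logistic_objective(theta, X, y, lam=1e-6):
    """Mean of log(1 + exp(-y <theta, x>)) plus (lam/2)||theta||_2^2, with gradient."""
    margins = y * (X @ theta)                     # y_i <theta, x_i>
    loss = np.mean(np.logaddexp(0.0, -margins)) + 0.5 * lam * theta @ theta
    p = expit(-margins)                           # sigmoid of the negated margin
    grad = -(X.T @ (y * p)) / len(y) + lam * theta
    return loss, grad
```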
For this problem, we cannot evaluate the mean-squared error ‖θ̂ − θ*‖₂², as we do not know the true optimal parameter θ*. Consequently, we evaluate the performance of an estimate θ̂ using the log-loss on a held-out dataset. Specifically, we perform a five-fold validation experiment, where we shuffle the data and partition it into five equal-sized subsets. For each of our five experiments, we hold out one partition to use as the test set, using the remaining data as the training set used for inference. When studying the AVGM or BAVGM method, we compute the local estimate θ_i via a trust-region Newton-based method [15].
The dataset is too large to fit in main memory on most computers: in total, four splits of the data
require 55 gigabytes. Consequently, it is difficult to provide an oracle training comparison using the
full N samples. Instead, for each experiment, we perform 10 passes of stochastic gradient descent
through the dataset to get a rough baseline of the performance attained by the empirical minimizer
for the entire dataset. Figure 3(b) shows the hold-out set log-loss after each of the sequential passes
through the training data finishes.
In Figure 3(a), we show the average hold-out set log-loss (with standard errors) of the estimator θ̄_1
provided by the AVGM method and the BAVGM method versus number of splits of the data m. The
plot shows that for small m, both AVGM and BAVGM enjoy good performance, comparable to or
better than (our proxy for) the oracle solution using all N samples. As the number of machines m
grows, the de-biasing provided by the subsampled bootstrap method yield substantial improvements
over the standard AVGM method. In addition, even with m = 128 splits of the dataset, the BAVGM
method gives better hold-out set performance than performing two passes of stochastic gradient on
the entire dataset of N samples. This is striking, as doing even one pass through the data with
stochastic gradient descent is known to give minimax optimal convergence rates [16, 1].
It is instructive and important to understand the sensitivity of the BAVGM method to the resampling
parameter r. We explore this question in Figure 2(b) using m = 128 splits. We choose m = 128
because more data splits provide more variable performance in r. For the soso.com ad prediction
data set, the choice r = .25 achieves the best performance, but Figure 2(b) suggests that misspecifying the ratio is not terribly detrimental. Indeed, while the performance of BAVGM degrades
to that of the AVGM method, there is a wide range of r giving improved performance, and there does
not appear to be a phase transition to poor performance.
Acknowledgments This work is based on research supported in part by the Office of Naval Research under MURI grant N00014-11-1-0688. JCD was also supported by an NDSEG fellowship
and a Facebook PhD fellowship.
References
[1] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, May 2012.
[2] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In Advances in Neural Information Processing Systems 25, 2011.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.
[5] J. C. Duchi, A. Agarwal, and M. J. Wainwright. Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592–606, 2012.
[6] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall, 1993.
[7] P. Hall. The Bootstrap and Edgeworth Expansion. Springer, 1992.
[8] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[9] B. Johansson, M. Rabi, and M. Johansson. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization, 20(3):1157–1170, 2009.
[10] G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. Walker. Efficient large-scale distributed training of conditional maximum entropy models. In Advances in Neural Information Processing Systems 22, pages 1231–1239, 2009.
[11] C. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[12] R. McDonald, K. Hall, and G. Mann. Distributed training strategies for the structured perceptron. In North American Chapter of the Association for Computational Linguistics (NAACL), 2010.
[13] A. Nedić and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54:48–61, 2009.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[15] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.
[16] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
[17] S. S. Ram, A. Nedić, and V. V. Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516–545, 2010.
[18] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 25, 2011.
[19] H. Robbins. Asymptotically subminimax solutions of compound statistical decision problems. In Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, pages 131–148, 1951.
[20] G. Sun. KDD Cup track 2 soso.com ads prediction challenge, 2012. Accessed August 1, 2012.
[21] A. W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.
[22] Y. Zhang, J. C. Duchi, and M. J. Wainwright. Communication-efficient algorithms for statistical optimization. arXiv:1209.4129 [stat.ML], 2012.
[23] M. A. Zinkevich, A. Smola, M. Weimer, and L. Li. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems 24, 2010.
|
4,120 | 4,729 |
Multi-Stage Multi-Task Feature Learning∗

†Pinghua Gong, ‡Jieping Ye, †Changshui Zhang
†State Key Laboratory on Intelligent Technology and Systems
Tsinghua National Laboratory for Information Science and Technology (TNList)
Department of Automation, Tsinghua University, Beijing 100084, China
‡Computer Science and Engineering, Center for Evolutionary Medicine and Informatics
The Biodesign Institute, Arizona State University, Tempe, AZ 85287, USA
†{gph08@mails, zcs@mail}.tsinghua.edu.cn, ‡[email protected]
Abstract
Multi-task sparse feature learning aims to improve the generalization performance
by exploiting the shared features among tasks. It has been successfully applied to
many applications including computer vision and biomedical informatics. Most
of the existing multi-task sparse feature learning algorithms are formulated as
a convex sparse regularization problem, which is usually suboptimal, due to its
looseness in approximating an ℓ0-type regularizer. In this paper, we propose a
non-convex formulation for multi-task sparse feature learning based on a novel
regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm. Moreover, we present
a detailed theoretical analysis showing that MSMTFL achieves a better parameter
estimation error bound than the convex formulation. Empirical studies on both
synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in
comparison with the state of the art multi-task sparse feature learning algorithms.
1 Introduction
Multi-task learning (MTL) exploits the relationships among multiple related tasks to improve the
generalization performance. It has been applied successfully to many applications such as speech
classification [16], handwritten character recognition [14, 17] and medical diagnosis [2]. One common assumption in multi-task learning is that all tasks should share some common structures including the prior or parameters of Bayesian models [18, 21, 24], a similarity metric matrix [16], a
classification weight vector [6], a low rank subspace [4, 13] and a common set of shared features
[1, 8, 10, 11, 12, 14, 20].
In this paper, we focus on multi-task feature learning, in which we learn the features specific to
each task as well as the common features shared among tasks. Although many multi-task feature
learning algorithms have been proposed, most of them assume that the relevant features are shared
by all tasks. This is too restrictive in real-world applications [9]. To overcome this limitation, Jalali
et al. (2010) [9] proposed an ℓ1 + ℓ1,∞ regularized formulation, called dirty model, to leverage
the common features shared among tasks. The dirty model allows a certain feature to be shared
by some tasks but not all tasks. Jalali et al. (2010) also presented a theoretical analysis under the
incoherence condition [5, 15] which is more restrictive than RIP [3, 27]. The ℓ1 + ℓ1,∞ regularizer is a convex relaxation for the ℓ0-type one, which, however, is too loose to well approximate the ℓ0-type regularizer and usually achieves suboptimal performance (requiring restrictive conditions or
obtaining a suboptimal error bound) [23, 26, 27]. To remedy the shortcoming, we propose to use a
non-convex regularizer for multi-task feature learning in this paper.
∗ This work was completed when the first author visited Arizona State University.
Contributions: We propose to employ a capped-ℓ1,ℓ1 regularized formulation (non-convex) to
learn the features specific to each task as well as the common features shared among tasks. To
solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning
(MSMTFL) algorithm, using the concave duality [26]. Although the MSMTFL algorithm may not
obtain a globally optimal solution, we theoretically show that this solution achieves good performance. Specifically, we present a detailed theoretical analysis on the parameter estimation error
bound for the MSMTFL algorithm. Our analysis shows that, under the sparse eigenvalue condition
which is weaker than the incoherence condition in Jalali et al. (2010) [9], MSMTFL improves the
error bound during the multi-stage iteration, i.e., the error bound at the current iteration improves
the one at the last iteration. Empirical studies on both synthetic and real-world data sets demonstrate
the effectiveness of the MSMTFL algorithm in comparison with the state of the art algorithms.
Notations: Scalars and vectors are denoted by lower case letters and bold face lower case letters, respectively. Matrices and sets are denoted by capital letters and calligraphic capital letters, respectively. The ℓ1 norm, Euclidean norm, ℓ∞ norm and Frobenius norm are denoted by ‖·‖₁, ‖·‖, ‖·‖∞ and ‖·‖_F, respectively. |·| denotes the absolute value of a scalar or the number of elements in a set, depending on the context. We define the ℓp,q norm of a matrix X as

    ‖X‖_{p,q} = ( Σ_i ( Σ_j |x_{ij}|^q )^{p/q} )^{1/p}.

We define N_n as {1, · · · , n} and N(μ, σ²) as a normal distribution with mean μ and variance σ². For a d×m matrix W and sets I_i ⊆ N_d × {i}, I ⊆ N_d × N_m, we let w_{I_i} be a d×1 vector with the j-th entry being w_{ji}, if (j, i) ∈ I_i, and 0, otherwise. We also let W_I be a d×m matrix with the (j, i)-th entry being w_{ji}, if (j, i) ∈ I, and 0, otherwise.
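As an aside for readers implementing the experiments, the following small sketch (our illustration, not code from the paper) evaluates the ℓp,q norm exactly as defined above; note that the outer ℓp norm runs over the row-wise inner ℓq norms, so ‖·‖_{2,1} here is the square root of the sum of squared row-wise ℓ1 norms.

```python
import numpy as np

def lpq_norm(X, p, q):
    """||X||_{p,q} as defined in the Notations: the l_p norm of the
    vector whose i-th entry is the l_q norm of the i-th row of X."""
    row_norms = np.sum(np.abs(X) ** q, axis=1) ** (1.0 / q)
    return float(np.sum(row_norms ** p) ** (1.0 / p))

# Example: the parameter estimation error used throughout Section 3
# would be lpq_norm(W_hat - W_bar, p=2, q=1).
```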
2 The Proposed Formulation
Assume we are given m learning tasks associated with training data {(X₁, y₁), · · · , (X_m, y_m)}, where X_i ∈ R^{n_i×d} is the data matrix of the i-th task with each row as a sample; y_i ∈ R^{n_i} is the response of the i-th task; d is the data dimensionality; n_i is the number of samples for the i-th task. We consider learning a weight matrix W = [w₁, · · · , w_m] ∈ R^{d×m} consisting of the weight vectors for m linear predictive models: y_i ≈ f_i(X_i) = X_i w_i, i ∈ N_m. In this paper, we propose a non-convex multi-task feature learning formulation to learn these m models simultaneously, based on the capped-ℓ1,ℓ1 regularization. Specifically, we first impose the ℓ1 penalty on each row of W, obtaining a column vector. Then, we impose the capped-ℓ1 penalty [26, 27] on that vector. Formally, we formulate our proposed model as follows:
    min_W { l(W) + λ Σ_{j=1}^d min( ‖w^j‖₁, θ ) },    (1)

where l(W) is an empirical loss function of W; λ (> 0) is a parameter balancing the empirical loss and the regularization; θ (> 0) is a thresholding parameter; w^j is the j-th row of the matrix W. In this paper, we focus on the quadratic loss function: l(W) = Σ_{i=1}^m (1/(m n_i)) ‖X_i w_i − y_i‖².
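To make the objective concrete, here is a minimal sketch (ours, not from the paper; the function name is our choice) that evaluates Eq. (1) for a given weight matrix, with the quadratic loss defined above:

```python
import numpy as np

def msmtfl_objective(W, Xs, ys, lam, theta):
    """Evaluate Eq. (1): quadratic loss plus the capped-l1,l1 penalty.

    W  : (d, m) weight matrix, one column per task
    Xs : list of m data matrices; Xs[i] has shape (n_i, d)
    ys : list of m response vectors; ys[i] has shape (n_i,)
    """
    m = W.shape[1]
    loss = sum(np.sum((Xs[i] @ W[:, i] - ys[i]) ** 2) / (m * len(ys[i]))
               for i in range(m))
    row_l1 = np.sum(np.abs(W), axis=1)                 # ||w^j||_1 per row j
    penalty = lam * np.sum(np.minimum(row_l1, theta))  # capped at theta
    return loss + penalty
```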
Algorithm 1: MSMTFL: Multi-Stage Multi-Task Feature Learning
1: Initialize λ_j^{(0)} = λ;
2: for ℓ = 1, 2, · · · do
3:    Let Ŵ^{(ℓ)} be a solution of the following problem:

          min_{W∈R^{d×m}} { l(W) + Σ_{j=1}^d λ_j^{(ℓ−1)} ‖w^j‖₁ }.    (2)

4:    Let λ_j^{(ℓ)} = λ I(‖(ŵ^{(ℓ)})^j‖₁ < θ) (j = 1, · · · , d), where (ŵ^{(ℓ)})^j is the j-th row of Ŵ^{(ℓ)} and I(·) denotes the {0, 1}-valued indicator function.
5: end for
Intuitively, due to the capped-ℓ1,ℓ1 penalty, the optimal solution of Eq. (1), denoted as W⋆, has many zero rows. For a nonzero row (w⋆)^k, some entries may be zero, due to the ℓ1-norm imposed on each row of W. Thus, under the formulation in Eq. (1), a certain feature can be shared by some tasks
but not all the tasks. Therefore, the proposed formulation can leverage the common features shared
among tasks.
The formulation in Eq. (1) is non-convex and is difficult to solve. To this end, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm (see Algorithm 1). Note that if we terminate the algorithm with ℓ = 1, the MSMTFL algorithm is equivalent to the ℓ1-regularized multi-task
feature learning algorithm (Lasso). Thus, the solution obtained by MSMTFL can be considered
as a refinement of that of Lasso. Although Algorithm 1 may not find a globally optimal solution,
the solution has good performance. Specifically, we will theoretically show that the solution obtained by Algorithm 1 improves the performance of the parameter estimation error bound during
the multi-stage iteration. Moreover, empirical studies also demonstrate the effectiveness of our proposed MSMTFL algorithm. We provide more details about intuitive interpretations, convergence
analysis and reproducibility discussions of the proposed algorithm in the full version [7].
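For illustration, the sketch below implements the multi-stage loop of Algorithm 1. Since the row-wise ℓ1 penalty in Eq. (2) is separable over entries, the problem decouples across tasks into m weighted Lasso problems, each solved here by plain proximal gradient descent (ISTA); the solver, step size and iteration counts are our assumptions, not prescriptions from the paper.

```python
import numpy as np

def weighted_lasso(X, y, lam_vec, m, n_steps=500):
    """min_w ||Xw - y||^2 / (m*n) + sum_j lam_vec[j] * |w_j|, solved by ISTA."""
    n, d = X.shape
    lr = (m * n) / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(n_steps):
        grad = 2.0 * X.T @ (X @ w - y) / (m * n)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam_vec, 0.0)  # soft-threshold
    return w

def msmtfl(Xs, ys, lam, theta, n_stages=10):
    """Multi-Stage Multi-Task Feature Learning (a sketch of Algorithm 1)."""
    d, m = Xs[0].shape[1], len(Xs)
    lam_vec = np.full(d, lam)                 # lambda_j^{(0)} = lambda
    W = np.zeros((d, m))
    for _ in range(n_stages):
        # Step 3: Eq. (2) decouples across the m tasks.
        W = np.column_stack([weighted_lasso(Xs[i], ys[i], lam_vec, m)
                             for i in range(m)])
        # Step 4: rows with ||w^j||_1 >= theta are no longer penalized.
        lam_vec = lam * (np.sum(np.abs(W), axis=1) < theta)
    return W
```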
3 Theoretical Analysis
In this section, we theoretically analyze the parameter estimation performance of the solution obtained by the MSMTFL algorithm. To simplify the notations in the theoretical analysis, we assume
that the number of samples for all the tasks are the same. However, our theoretical analysis can be
easily extended to the case where the tasks have different sample sizes.
We first present a sub-Gaussian noise assumption which is very common in the analysis of sparse
regularization literature [23, 25, 26, 27].
Assumption 1 Let W̄ = [w̄₁, · · · , w̄_m] ∈ R^{d×m} be the underlying sparse weight matrix and y_i = X_i w̄_i + δ_i, Ey_i = X_i w̄_i, where δ_i ∈ R^n is a random vector with all entries δ_{ji} (j ∈ N_n, i ∈ N_m) being independent sub-Gaussians: there exists σ > 0 such that ∀j ∈ N_n, i ∈ N_m, t ∈ R:

    E_{δ_{ji}} exp(t δ_{ji}) ≤ exp(σ²t²/2).
Remark 1 We call a random variable satisfying the condition in Assumption 1 sub-Gaussian, since its moment generating function is upper bounded by that of a zero-mean Gaussian random variable. That is, if a normal random variable x ∼ N(0, σ²), then we have

    E exp(tx) = ∫_{−∞}^{∞} exp(tx) · (1/(√(2π)σ)) exp(−x²/(2σ²)) dx
              = exp(σ²t²/2) ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x − σ²t)²/(2σ²)) dx
              = exp(σ²t²/2) ≥ E_{δ_{ji}} exp(t δ_{ji}).
Remark 2 Based on Hoeffding's Lemma, for any random variable x ∈ [a, b] with Ex = 0, we have E(exp(tx)) ≤ exp(t²(b − a)²/8). Therefore, both zero-mean Gaussian and zero-mean bounded random variables are sub-Gaussians. Thus, the sub-Gaussian noise assumption is more general than the Gaussian noise assumption which is commonly used in the literature [9, 11].
We next introduce the following sparse eigenvalue concept which is also common in the analysis of
sparse regularization literature [22, 23, 25, 26, 27].
Definition 1 Given 1 ≤ k ≤ d, we define

    ρ⁺_i(k) = sup_w { ‖X_i w‖² / (n‖w‖²) : ‖w‖₀ ≤ k },   ρ⁺_max(k) = max_{i∈N_m} ρ⁺_i(k),
    ρ⁻_i(k) = inf_w { ‖X_i w‖² / (n‖w‖²) : ‖w‖₀ ≤ k },   ρ⁻_min(k) = min_{i∈N_m} ρ⁻_i(k).
Remark 3 ρ⁺_i(k) (ρ⁻_i(k)) is in fact the maximum (minimum) eigenvalue of (X_i)_S^T (X_i)_S / n, where S is a set satisfying |S| ≤ k and (X_i)_S is a submatrix composed of the columns of X_i indexed by S. In the MTL setting, we need to exploit the relations of ρ⁺_i(k) (ρ⁻_i(k)) among multiple tasks.
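Computing ρ±_i(k) exactly requires a search over all supports of size at most k, which is combinatorial; the brute-force sketch below (our illustration, feasible only for small d) makes Remark 3 operational. By eigenvalue interlacing, supports of size exactly k attain both extremes.

```python
import numpy as np
from itertools import combinations

def sparse_eigenvalues(X, k):
    """Brute-force rho^+(k) and rho^-(k) of Definition 1 for one task.

    For a fixed support S, the sup/inf of ||Xw||^2 / (n ||w||^2) over w
    supported on S are the extreme eigenvalues of (X_S)^T X_S / n.
    """
    n, d = X.shape
    rho_plus, rho_minus = 0.0, np.inf
    for S in combinations(range(d), k):
        G = X[:, S].T @ X[:, S] / n
        evals = np.linalg.eigvalsh(G)
        rho_plus = max(rho_plus, evals[-1])
        rho_minus = min(rho_minus, evals[0])
    return rho_plus, rho_minus
```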
We present our parameter estimation error bound on MSMTFL in the following theorem:
Theorem 1 Let Assumption 1 hold. Define F̄_i = {(j, i) : w̄_{ji} ≠ 0} and F̄ = ∪_{i∈N_m} F̄_i. Denote r̄ as the number of nonzero rows of W̄. We assume that

    ∀(j, i) ∈ F̄,  ‖w̄^j‖₁ ≥ 2θ,    (3)

and

    ρ⁺_i(s) / ρ⁻_i(2r̄ + 2s) ≤ 1 + s/(2r̄),    (4)

where s is some integer satisfying s ≥ r̄. If we choose λ and θ such that for some s ≥ r̄:

    λ ≥ 12σ √( 2ρ⁺_max(1) ln(2dm/η) / n ),    (5)

    θ ≥ 11mλ / ρ⁻_min(2r̄ + s),    (6)

then the following parameter estimation error bound holds with probability larger than 1 − η:

    ‖Ŵ^{(ℓ)} − W̄‖_{2,1} ≤ 0.8^{ℓ/2} · 9.1mλ√r̄ / ρ⁻_min(2r̄ + s)
                         + 39.5mσ √( ρ⁺_max(r̄)(7.4r̄ + 2.7 ln(2/η)) / n ) / ρ⁻_min(2r̄ + s),    (7)

where Ŵ^{(ℓ)} is a solution of Eq. (2).
Remark 4 Eq. (3) assumes that the ℓ1-norm of each nonzero row of W̄ is bounded away from zero. This requires the true nonzero coefficients to be large enough, in order to distinguish them from the noise. Eq. (4) is called the sparse eigenvalue condition [27], which requires the eigenvalue ratio ρ⁺_i(s)/ρ⁻_i(s) to grow sub-linearly with respect to s. Such a condition is very common in the analysis of sparse regularization [22, 25] and it is slightly weaker than the RIP condition [3, 27].
Remark 5 When ℓ = 1 (which corresponds to Lasso), the first term of the right-hand side of Eq. (7) dominates the error bound, in the order of

    ‖Ŵ_Lasso − W̄‖_{2,1} = O( m√( r̄ ln(dm/η)/n ) ),    (8)

since λ satisfies the condition in Eq. (5). Note that the first term of the right-hand side of Eq. (7) shrinks exponentially as ℓ increases. When ℓ is sufficiently large, in the order of O(ln(m√(r̄/n)) + ln ln(dm)), this term tends to zero and we obtain the following parameter estimation error bound:

    ‖Ŵ^{(ℓ)} − W̄‖_{2,1} = O( m√(r̄/n) + √(ln(1/η)/n) ).    (9)

Jalali et al. (2010) [9] gave an ℓ∞,∞-norm error bound ‖Ŵ_Dirty − W̄‖_{∞,∞} = O(√(ln(dm/η)/n)) as well as a sign consistency result between Ŵ and W̄. A direct comparison between these two bounds is difficult due to the use of different norms. On the other hand, the worst-case estimate of the ℓ2,1-norm error bound of the algorithm in Jalali et al. (2010) [9] is of the same order as Eq. (8), that is: ‖Ŵ_Dirty − W̄‖_{2,1} = O( m√( r̄ ln(dm/η)/n ) ). When dm is large and the ground truth has a large number of sparse rows (i.e., r̄ is a small constant), the bound in Eq. (9) is significantly better than the ones for the Lasso and Dirty model.
Remark 6 Jalali et al. (2010) [9] presented an ℓ∞,∞-norm parameter estimation error bound and
hence a sign consistency result can be obtained. The results are derived under the incoherence
condition which is more restrictive than the RIP condition and hence more restrictive than the sparse
eigenvalue condition in Eq. (4). From the viewpoint of the parameter estimation error, our proposed
algorithm can achieve a better bound under weaker conditions. Please refer to [19, 25, 27] for
more details about the incoherence condition, the RIP condition, the sparse eigenvalue condition
and their relationships.
Remark 7 The capped-ℓ1 regularized formulation in Zhang (2010) [26] is a special case of our formulation when m = 1. However, extending the analysis from the single task to the multi-task setting is nontrivial. Different from previous work on multi-stage sparse learning which focuses on a single task [26, 27], we study a more general multi-stage framework in the multi-task setting. We need to exploit the relationship among tasks, by using the relations of the sparse eigenvalues ρ⁺_i(k) (ρ⁻_i(k)) and treating the ℓ1-norm on each row of the weight matrix as a whole for consideration. Moreover, we simultaneously exploit the relations of each column and each row of the matrix.
4 Proof Sketch
We first provide several important lemmas (please refer to the full version [7] or supplementary
materials for detailed proofs) and then complete the proof of Theorem 1 based on these lemmas.
Lemma 1 Let ε̂ = [ε̂₁, · · · , ε̂_m] with ε̂_i = [ε̂_{1i}, · · · , ε̂_{di}]^T = (1/n) X_i^T (X_i w̄_i − y_i) (i ∈ N_m). Define H̄ ⊇ F̄ such that (j, i) ∈ H̄ (∀i ∈ N_m), provided there exists (j, g) ∈ F̄ (H̄ is a set consisting of the indices of all entries in the nonzero rows of W̄). Under the conditions of Assumption 1 and the notations of Theorem 1, the following inequalities hold with probability larger than 1 − η:

    ‖ε̂‖_{∞,∞} ≤ σ √( 2ρ⁺_max(1) ln(2dm/η) / n ),    (10)

    ‖ε̂_H̄‖²_F ≤ mσ² ρ⁺_max(r̄)(7.4r̄ + 2.7 ln(2/η)) / n.    (11)

Lemma 1 gives bounds on the residual correlation (ε̂) with respect to W̄. We note that Eq. (10) and Eq. (11) are closely related to the assumption on λ in Eq. (5) and the second term of the right-hand side of Eq. (7) (error bound), respectively. This lemma provides a fundamental basis for the proof of Theorem 1.
Lemma 2 Use the notations of Lemma 1 and consider G_i ⊆ N_d × {i} such that F̄_i ∩ G_i = ∅ (i ∈ N_m). Let Ŵ = Ŵ^{(ℓ)} be a solution of Eq. (2) and ΔŴ = Ŵ − W̄. Denote λ̂_i = λ̂_i^{(ℓ−1)} = [λ̂_{1i}^{(ℓ−1)}, · · · , λ̂_{di}^{(ℓ−1)}]^T. Let λ̂_{G_i} = min_{(j,i)∈G_i} λ̂_{ji}, λ̂_G = min_i λ̂_{G_i} and λ̂_{0i} = max_j λ̂_{ji}, λ̂₀ = max_i λ̂_{0i}. If 2‖ε̂_i‖_∞ < λ̂_{G_i}, then the following inequality holds at any stage ℓ ≥ 1:

    Σ_{i=1}^m Σ_{(j,i)∈G_i} |ŵ^{(ℓ)}_{ji}| ≤ (2‖ε̂‖_{∞,∞} + λ̂₀) / (λ̂_G − 2‖ε̂‖_{∞,∞}) · Σ_{i=1}^m Σ_{(j,i)∈G_i^c} |Δŵ^{(ℓ)}_{ji}|.
Denote G = ∪_{i∈N_m} G_i, F̄ = ∪_{i∈N_m} F̄_i and notice that F̄ ∩ G = ∅ ⇒ ΔŴ_G^{(ℓ)} = Ŵ_G^{(ℓ)}. Lemma 2 says that ‖ΔŴ_G^{(ℓ)}‖_{1,1} = ‖Ŵ_G^{(ℓ)}‖_{1,1} is upper bounded in terms of ‖ΔŴ_{G^c}^{(ℓ)}‖_{1,1}, which indicates that the error of the estimated coefficients located outside of F̄ should be small enough. This provides an intuitive explanation why the parameter estimation error of our algorithm can be small.
Lemma 3 Using the notations of Lemma 2, we denote G = G_{(ℓ)} = H̄^c ∩ {(j, i) : λ̂_{ji}^{(ℓ−1)} = λ} = ∪_{i∈N_m} G_i, with H̄ being defined as in Lemma 1 and G_i ⊆ N_d × {i}. Let J_i be the indices of the largest s coefficients (in absolute value) of ŵ_{G_i}, I_i = G_i^c ∪ J_i, I = ∪_{i∈N_m} I_i and F̄ = ∪_{i∈N_m} F̄_i. Then, the following inequalities hold at any stage ℓ ≥ 1:

    ‖ΔŴ^{(ℓ)}‖_{2,1} ≤ √( 8m (1 + 1.5√(2r̄/s))² ( 4‖ε̂_{G^c_{(ℓ)}}‖²_F + Σ_{(j,i)∈F̄} (λ̂_{ji}^{(ℓ−1)})² ) ) / ρ⁻_min(2r̄ + s),    (12)

    ‖ΔŴ^{(ℓ)}‖_{2,1} ≤ 9.1mλ√r̄ / ρ⁻_min(2r̄ + s).    (13)
Lemma 3 is established based on Lemma 2, by considering the relationship between Eq. (5) and Eq. (10), and the specific definition of G = G_{(ℓ)}. Eq. (12) provides a parameter estimation error bound in terms of the ℓ2,1-norm by ‖ε̂_{G^c_{(ℓ)}}‖_F and the regularization parameters λ̂_{ji}^{(ℓ−1)} (see the definition of λ̂_{ji} (λ̂_{ji}^{(ℓ−1)}) in Lemma 2). This is the result directly used in the proof of Theorem 1. Eq. (13) states that the error bound is upper bounded in terms of λ; its right-hand side constitutes the shrinkage part of the error bound in Eq. (7).
Lemma 4 Let λ̂_{ji} = λ I(‖ŵ^j‖₁ < θ) (j ∈ N_d, ∀i ∈ N_m) with some Ŵ ∈ R^{d×m}. H̄ ⊇ F̄ is defined in Lemma 1. Then under the condition of Eq. (3), we have:

    Σ_{(j,i)∈F̄} λ̂²_{ji} ≤ mλ² ‖Ŵ_H̄ − W̄_H̄‖²_{2,1} / θ².

Lemma 4 establishes an upper bound of Σ_{(j,i)∈F̄} λ̂²_{ji} by ‖Ŵ_H̄ − W̄_H̄‖²_{2,1}, which is critical for building the recursive relationship between ‖Ŵ^{(ℓ)} − W̄‖_{2,1} and ‖Ŵ^{(ℓ−1)} − W̄‖_{2,1} in the proof of Theorem 1. This recursive relation is crucial for the shrinkage part of the error bound in Eq. (7).

4.1 Proof of Theorem 1
Proof  For notational simplicity, we denote the right-hand side of Eq. (11) as:

    u = mσ² ρ⁺_max(r̄)(7.4r̄ + 2.7 ln(2/η)) / n.    (14)

Based on H̄ ⊆ G^c_{(ℓ)}, Lemma 1 and Eq. (5), the following inequalities hold with probability larger than 1 − η:

    ‖ε̂_{G^c_{(ℓ)}}‖²_F = ‖ε̂_H̄‖²_F + ‖ε̂_{G^c_{(ℓ)}\H̄}‖²_F
                      ≤ u + |G^c_{(ℓ)}\H̄| · ‖ε̂‖²_{∞,∞}
                      ≤ u + λ² |G^c_{(ℓ)}\H̄| / 144
                      ≤ u + (1/144) mλ²θ⁻² ‖Ŵ^{(ℓ−1)}_{G^c_{(ℓ)}\H̄} − W̄_{G^c_{(ℓ)}\H̄}‖²_{2,1},    (15)

where the last inequality follows from the fact that ∀(j, i) ∈ G^c_{(ℓ)}\H̄ we have λ̂^{(ℓ−1)}_{ji} ≠ λ and hence ‖(ŵ^{(ℓ−1)})^j‖₁ ≥ θ, so that 1 ≤ ‖(ŵ^{(ℓ−1)})^j‖²₁/θ² = ‖(ŵ^{(ℓ−1)})^j − w̄^j‖²₁/θ², which implies |G^c_{(ℓ)}\H̄| ≤ mθ⁻² ‖Ŵ^{(ℓ−1)}_{G^c_{(ℓ)}\H̄} − W̄_{G^c_{(ℓ)}\H̄}‖²_{2,1}. According to Eq. (12), we have:

    ‖Ŵ^{(ℓ)} − W̄‖²_{2,1} = ‖ΔŴ^{(ℓ)}‖²_{2,1}
        ≤ 8m (1 + 1.5√(2r̄/s))² ( 4‖ε̂_{G^c_{(ℓ)}}‖²_F + Σ_{(j,i)∈F̄}(λ̂^{(ℓ−1)}_{ji})² ) / (ρ⁻_min(2r̄+s))²
        ≤ 78m ( 4u + (37/36) mλ²θ⁻² ‖Ŵ^{(ℓ−1)} − W̄‖²_{2,1} ) / (ρ⁻_min(2r̄+s))²
        ≤ 312mu / (ρ⁻_min(2r̄+s))² + 0.8 ‖Ŵ^{(ℓ−1)} − W̄‖²_{2,1}
        ≤ 0.8^ℓ ‖Ŵ^{(0)} − W̄‖²_{2,1} + ((1 − 0.8^ℓ)/(1 − 0.8)) · 312mu / (ρ⁻_min(2r̄+s))²
        ≤ 0.8^ℓ · 9.1² m² λ² r̄ / (ρ⁻_min(2r̄+s))² + 1560mu / (ρ⁻_min(2r̄+s))².

In the above derivation, the first inequality is due to Eq. (12); the second inequality is due to the assumption s ≥ r̄ in Theorem 1, Eq. (15) and Lemma 4; the third inequality is due to Eq. (6); the last inequality follows from Eq. (13) and 1 − 0.8^ℓ ≤ 1 (ℓ ≥ 1). Thus, following the inequality √(a + b) ≤ √a + √b (∀a, b ≥ 0), we obtain:

    ‖Ŵ^{(ℓ)} − W̄‖_{2,1} ≤ 0.8^{ℓ/2} · 9.1mλ√r̄ / ρ⁻_min(2r̄+s) + 39.5√(mu) / ρ⁻_min(2r̄+s).
Substituting Eq. (14) into the above inequality, we verify Theorem 1.
5 Experiments
We compare our proposed MSMTFL algorithm with three competing multi-task feature learning
algorithms: ℓ1-norm multi-task feature learning algorithm (Lasso), ℓ1,2-norm multi-task feature
learning algorithm (L1,2) [14] and dirty model multi-task feature learning algorithm (DirtyMTL)
[9]. In our experiments, we employ the quadratic loss function for all the compared algorithms.
5.1 Synthetic Data Experiments
We generate synthetic data by setting the number of tasks as m; each task has n samples which are of dimensionality d. Each element of the data matrix X_i ∈ R^{n×d} (i ∈ N_m) for the i-th task is sampled i.i.d. from the Gaussian distribution N(0, 1) and we then normalize all columns to length 1; each entry of the underlying true weight W̄ ∈ R^{d×m} is sampled i.i.d. from the uniform distribution in the interval [−10, 10]; we randomly set 90% of the rows of W̄ as zero vectors and 80% of the elements of the remaining nonzero entries as zeros; each entry of the noise δ_i ∈ R^n is sampled i.i.d. from the Gaussian distribution N(0, σ²); the responses are computed as y_i = X_i w̄_i + δ_i (i ∈ N_m).
We first report the averaged parameter estimation error ‖Ŵ − W̄‖_{2,1} vs. stage (ℓ) plots for MSMTFL (Figure 1). We observe that the error decreases as ℓ increases, which shows the advantage of our proposed algorithm over Lasso. This is consistent with the theoretical result in Theorem 1. Moreover, the parameter estimation error decreases quickly and converges in a few stages.
Figure 1: Averaged parameter estimation error ‖Ŵ − W̄‖_{2,1} vs. stage (ℓ) plots for MSMTFL on the synthetic data set (averaged over 10 runs), for three settings: (m=15, n=40, d=250, σ=0.01), (m=20, n=30, d=200, σ=0.005) and (m=10, n=60, d=300, σ=0.001), with λ ∈ {5e−5, 1e−4, 2e−4, 5e−4} and θ = 50mλ. Note that ℓ = 1 corresponds to Lasso; the results show the stage-wise improvement over Lasso.
We then report the averaged parameter estimation error ‖Ŵ − W̄‖_{2,1} in comparison with the four algorithms in different parameter settings (Figure 2). For a fair comparison, we compare the smallest estimation errors of the four algorithms over all the parameter settings [25, 26]. As expected, the parameter estimation error of the MSMTFL algorithm is the smallest among the four algorithms. This empirical result demonstrates the effectiveness of the MSMTFL algorithm. We also have the following observations: (a) When λ is large enough, all four algorithms tend to have the same parameter estimation error. This is reasonable, because the solutions Ŵ obtained by the four algorithms are all zero matrices, when λ is very large. (b) The performance of the MSMTFL algorithm is similar for different θ's, when θ exceeds a certain value.
Figure 2: Averaged parameter estimation error ‖Ŵ − W̄‖_{2,1} vs. λ plots on the synthetic data set (averaged over 10 runs), for the same three settings as in Figure 1. MSMTFL has the smallest parameter estimation error among the four algorithms. Both DirtyMTL and MSMTFL have two parameters; we set λ_s/λ_b = 1, 0.5, 0.2, 0.1 for DirtyMTL (1/m ≤ λ_s/λ_b ≤ 1 was adopted in Jalali et al. (2010) [9]) and θ/λ = 50m, 10m, 2m, 0.4m for MSMTFL.
5.2 Real-World Data Experiments
We conduct experiments on two real-world data sets: MRI and Isolet data sets. (1) The MRI data
set is collected from the ADNI database, which contains 675 patients' MRI data preprocessed using
FreeSurfer¹. The MRI data include 306 features and the response (target) is the Mini Mental State
Examination (MMSE) score coming from 6 different time points: M06, M12, M18, M24, M36, and
M48. We remove the samples which fail the MRI quality controls and have missing entries. Thus,
we have 6 tasks with each task corresponding to a time point and the sample sizes corresponding to
6 tasks are 648, 642, 293, 569, 389 and 87, respectively. (2) The Isolet data set² is collected from
150 speakers who speak the name of each English letter of the alphabet twice. Thus, there are 52
samples from each speaker. The speakers are grouped into 5 subsets which respectively include 30
similar speakers, and the subsets are named Isolet1, Isolet2, Isolet3, Isolet4, and Isolet5. Thus, we
naturally have 5 tasks with each task corresponding to a subset. The 5 tasks respectively have 1560,
1560, 1560, 1558, and 1559 samples (Three samples are historically missing), where each sample
includes 617 features and the response is the English letter label (1-26).
In the experiments, we treat the MMSE and letter labels as the regression values for the MRI data
set and the Isolet data set, respectively. For both data sets, we randomly extract the training samples
from each task with different training ratios (15%, 20% and 25%) and use the rest of samples to form
the test set. We evaluate the four multi-task feature learning algorithms in terms of normalized mean
squared error (nMSE) and averaged means squared error (aMSE), which are commonly used in
¹ www.loni.ucla.edu/ADNI/
² www.zjucadcg.cn/dengcai/Data/data.html
Table 1: Comparison of four multi-task feature learning algorithms on the MRI data set in terms of averaged nMSE and aMSE (standard deviation), which are averaged over 10 random splittings.

measure  training ratio  Lasso           L1,2            DirtyMTL        MSMTFL
nMSE     0.15            0.6651(0.0280)  0.6633(0.0470)  0.6224(0.0265)  0.5539(0.0154)
nMSE     0.20            0.6254(0.0212)  0.6489(0.0275)  0.6140(0.0185)  0.5542(0.0139)
nMSE     0.25            0.6105(0.0186)  0.6577(0.0194)  0.6136(0.0180)  0.5507(0.0142)
aMSE     0.15            0.0189(0.0008)  0.0187(0.0010)  0.0172(0.0006)  0.0159(0.0004)
aMSE     0.20            0.0179(0.0006)  0.0184(0.0005)  0.0171(0.0005)  0.0161(0.0004)
aMSE     0.25            0.0172(0.0009)  0.0183(0.0006)  0.0167(0.0008)  0.0157(0.0006)
multi-task learning problems [28, 29]. For each training ratio, both nMSE and aMSE are averaged
over 10 random splittings of training and test sets and the standard deviation is also shown. All
parameters of the four algorithms are tuned via 3-fold cross validation.
Figure 3: Averaged test error (nMSE and aMSE) vs. training ratio plots on the Isolet data set for the four algorithms (Lasso, L1,2, DirtyMTL, MSMTFL). The results are averaged over 10 random splittings.
Table 1 and Figure 3 show the experimental results in terms of averaged nMSE (aMSE) and the
standard deviation. From these results, we observe that: (a) Our proposed MSMTFL algorithm outperforms all the competing feature learning algorithms on both data sets, with the smallest regression
errors (nMSE and aMSE) as well as the smallest standard deviations. (b) On the MRI data set, the
MSMTFL algorithm performs well even in the case of a small training ratio. The performance for
the 15% training ratio is comparable to that for the 25% training ratio. (c) On the Isolet data set,
when the training ratio increases from 15% to 25%, the performance of the MSMTFL algorithm
increases and the superiority of the MSMTFL algorithm over the other three algorithms is more
significant. Our results demonstrate the effectiveness of the proposed algorithm.
6 Conclusions

In this paper, we propose a non-convex multi-task feature learning formulation based on the capped-ℓ1,ℓ1 regularization. The proposed formulation learns the specific features of each task as well as the
common features shared among tasks. We propose to solve the non-convex optimization problem
by employing a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm, using concave
duality. We also present a detailed theoretical analysis in terms of the parameter estimation error
bound for the MSMTFL algorithm. The analysis shows that our MSMTFL algorithm achieves good
performance under the sparse eigenvalue condition, which is weaker than the incoherence condition.
Experimental results on both synthetic and real-world data sets demonstrate the effectiveness of our
proposed MSMTFL algorithm in comparison with the state of the art multi-task feature learning
algorithms. In our future work, we will focus on a general non-convex regularization framework for
multi-task feature learning settings (involving different loss functions and non-convex regularization
terms) and derive theoretical bounds.
Acknowledgements
This work is supported in part by 973 Program (2013CB329503), NSFC (Grant No. 91120301,
60835002 and 61075004), NIH (R01 LM010730) and NSF (IIS-0953662, CCF-1025177).
References

[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[2] J. Bi, T. Xiong, S. Yu, M. Dundar, and R. Rao. An improved multi-task learning approach with applications in medical diagnosis. Machine Learning and Knowledge Discovery in Databases, pages 117–132, 2008.
[3] E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
[4] J. Chen, J. Liu, and J. Ye. Learning incoherent sparse and low-rank patterns from multiple tasks. In SIGKDD, pages 1179–1188, 2010.
[5] D. Donoho, M. Elad, and V. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1):6–18, 2006.
[6] T. Evgeniou and M. Pontil. Regularized multi-task learning. In SIGKDD, pages 109–117, 2004.
[7] P. Gong, J. Ye, and C. Zhang. Multi-stage multi-task feature learning. arXiv:1210.5806, 2012.
[8] P. Gong, J. Ye, and C. Zhang. Robust multi-task feature learning. In SIGKDD, pages 895–903, 2012.
[9] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, pages 964–972, 2010.
[10] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, pages 543–550, 2009.
[11] K. Lounici, M. Pontil, A. Tsybakov, and S. Van De Geer. Taking advantage of sparsity in multi-task learning. In COLT, pages 73–82, 2009.
[12] S. Negahban and M. Wainwright. Joint support recovery under high-dimensional scaling: Benefits and perils of ℓ1,∞-regularization. In NIPS, pages 1161–1168, 2008.
[13] S. Negahban and M. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069–1097, 2011.
[14] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Statistics Department, UC Berkeley, Tech. Rep., 2006.
[15] G. Obozinski, M. Wainwright, and M. Jordan. Support union recovery in high-dimensional multivariate regression. Annals of Statistics, 39(1):1–47, 2011.
[16] S. Parameswaran and K. Weinberger. Large margin multi-task metric learning. In NIPS, pages 1867–1875, 2010.
[17] N. Quadrianto, A. Smola, T. Caetano, S. Vishwanathan, and J. Petterson. Multitask learning without label correspondences. In NIPS, pages 1957–1965, 2010.
[18] A. Schwaighofer, V. Tresp, and K. Yu. Learning gaussian process kernels via hierarchical bayes. In NIPS, pages 1209–1216, 2005.
[19] S. Van De Geer and P. Bühlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[20] X. Yang, S. Kim, and E. Xing. Heterogeneous multitask learning with joint sparsity constraints. In NIPS, pages 2151–2159, 2009.
[21] K. Yu, V. Tresp, and A. Schwaighofer. Learning gaussian processes from multiple tasks. In ICML, pages 1012–1019, 2005.
[22] C. Zhang and J. Huang. The sparsity and bias of the lasso selection in high-dimensional linear regression. The Annals of Statistics, 36(4):1567–1594, 2008.
[23] C. Zhang and T. Zhang. A general theory of concave regularization for high dimensional sparse estimation problems. Statistical Science, 2012.
[24] J. Zhang, Z. Ghahramani, and Y. Yang. Learning multiple related tasks using latent independent component analysis. In NIPS, pages 1585–1592, 2006.
[25] T. Zhang. Some sharp performance bounds for least squares regression with ℓ1 regularization. The Annals of Statistics, 37:2109–2144, 2009.
[26] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. JMLR, 11:1081–1107, 2010.
[27] T. Zhang. Multi-stage convex relaxation for feature selection. Bernoulli, 2012.
[28] Y. Zhang and D. Yeung. Multi-task learning using generalized t process. In AISTATS, 2010.
[29] J. Zhou, J. Chen, and J. Ye. Clustered multi-task learning via alternating structure optimization. In NIPS, pages 702–710, 2011.
Constructing Proofs in Symmetric Networks
Gadi Pinkas
Computer Science Department
Washington University
Campus Box 1045
St. Louis, MO 63130
Abstract
This paper considers the problem of expressing predicate calculus in connectionist networks that are based on energy minimization. Given a first-order-logic knowledge base and a bound k, a symmetric network is constructed (like a Boltzmann machine or a Hopfield network) that searches
for a proof for a given query. If a resolution-based proof of length no
longer than k exists, then the global minima of the energy function that
is associated with the network represent such proofs. The network that
is generated is of size cubic in the bound k and linear in the knowledge
size. There are no restrictions on the type of logic formulas that can be
represented. The network is inherently fault tolerant and can cope with
inconsistency and nonmonotonicity.
1 Introduction
The ability to reason from acquired knowledge is undoubtedly one of the basic and
most important components of human intelligence. Among the major tools for
reasoning in the area of AI are deductive proof techniques. However, traditional
methods are plagued by intractability, inability to learn and adjust, as well as by
inability to cope with noise and inconsistency. A connectionist approach may be
the missing link: fine grain, massively parallel architecture may give us real-time
approximation; networks are potentially trainable and adjustable; and they may be
made tolerant to noise as a result of their collective computation.
Most connectionist reasoning systems that implement parts of first-order logic
(see for examples: [Holldobler 90], [Shastri et al. 90]) use the spreading activation
paradigm and usually trade expressiveness with time efficiency. In contrast, this
paper uses the energy minimization paradigm (like [Derthick 88], [Ballard 86] and
[Pinkas 91c]), representing an intractable problem, but trading time with correctness; i.e., as more time is given, the probability of converging to a correct answer
increases.
Symmetric connectionist networks used for constraint satisfaction are the
target platform [Hopfield 84b], [Hinton, Sejnowski 86], [Peterson, Hartman 89],
[Smolensky 86]. They are characterized by a quadratic energy function that should
be minimized. Some of the models in the family may be seen as performing a search
for a global minimum of their energy function. The task is therefore to represent
logic deduction that is bound by a finite proof length as energy minimization (without a bound on the proof length, the problem is undecidable). When a query is
clamped, the network should search for a proof that supports the query. If a proof
to the query exists, then every global minimum of the energy function associated
with the network represents a proof. If no proof exists, the global minima represent
the lack of a proof.
The paper elaborates the propositional case; however, due to space limitations, the
first-order (FOL) case is only sketched. For more details and full treatment of FOL
see [Pinkas 91j].
2 Representing proofs of propositional logic
I'll start by assuming that the knowledge base is propositional.
The proof area:
A proof is a list of clauses ending with the query such that every clause used is
either an original clause, a copy (or weakening) of a clause that appears earlier in
the proof, or a result of a resolution step of the two clauses that appeared just
earlier. The proof emerges as an activation pattern on special unit structures called
the proof area, and is represented in reverse to the common practice (the query
appears first). For example: given a knowledge base of the following clauses:
1) A
2) ¬A ∨ B ∨ C
3) ¬B ∨ D
4) ¬C ∨ D
we would like to prove the query D, by generating the following list of clauses:
1) D           (obtained by resolution of clauses 2 and 3 by canceling A).
2) A           (original clause no. 1).
3) ¬A ∨ D      (obtained by resolution of clauses 4 and 5 by canceling C).
4) ¬C ∨ D      (original clause no. 4).
5) ¬A ∨ C ∨ D  (obtained by resolution of clauses 6 and 7 by canceling B).
6) ¬B ∨ D      (original clause no. 3).
7) ¬A ∨ B ∨ C  (original clause no. 2).
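The notion of a valid proof list just illustrated is easy to state procedurally. The sketch below is our illustration of the check that the network's constraints enforce, not part of the paper's construction: clause i must be an original clause, a (weakening of a) clause appearing later in the list, or the resolvent of clauses i+1 and i+2 on exactly one canceled atom.

```python
def is_valid_proof(proof, kb):
    """Check a reversed proof list (query first).

    Clauses are frozensets of signed literals; e.g. the clause "not A or D"
    is frozenset({('A', False), ('D', True)}).
    """
    for i, c in enumerate(proof):
        if c in kb:                                               # original clause
            continue
        if any(c >= proof[j] for j in range(i + 1, len(proof))):  # copy/weakening
            continue
        if i + 2 < len(proof):                                    # resolution step
            a, b = proof[i + 1], proof[i + 2]
            canceled = [(atom, s) for (atom, s) in a if (atom, not s) in b]
            if len(canceled) == 1:
                atom, s = canceled[0]
                if c == (a - {(atom, s)}) | (b - {(atom, not s)}):
                    continue
        return False
    return True
```

On the seven-clause example above, with the four knowledge-base clauses, this check succeeds.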
Each clause in the proof is either an original clause, a copy of a clause from earlier
in the proof, or a resolution step.
The matrix C in figure 1 functions as a clause list. This list represents an ordered
set of clauses that form the proof. The query clauses are clamped onto this area
and activate hard constraints that force the rest of the units of the matrix to form
a valid proof (if it exists).
Figure 1: The proof area for a propositional case.
Variable binding is performed by dynamic allocation of instances using a technique
similar to [Anandan et al. 89] and [Barnden 91]. In this technique, if two symbols
need to be bound together, an instance is allocated from a pool of general purpose
instances, and is connected to both symbols. An instance can be connected to a
literal in a clause, to a predicate type, to a constant, to a function or to a slot
of another instance (for example, a constant that is bound to the first slot of a
predicate).
The clauses that participate in the proof are represented using a 3-dimensional matrix (C_{±,i,j}) and a 2-dimensional matrix (P_{A,j}) as illustrated in figure 1. The
rows of C represent clauses of the proof, while the rows of P represent atomic
propositions. The columns of both matrices represent the pool of instances used for
binding propositions to clauses.
A clause is a list of negative and positive instances that represent literals. The
instance thus behaves as a two-way pointer that binds composite structures like
clauses with their constituents (the atomic propositions). A row i in the matrix
C represents a clause which is composed of pairs of instances. If the unit C+,i,i is
set, then the matrix represents a positive literal in clause i. If PA,i is also set, then
C+,',j represents a positive literal of clause i that is bound to the atomic proposition
A. Similarly C-"J represents a negative literal.
The first row of matrix C in the figure is the query clause D. It contains only one
positive literal that is bound to atomic proposition D via instance 4. For another
example consider the third row of C, which represents a clause of two literals: a positive one that is bound to D via instance 4, and a negative one bound to A via instance 1 (it is the clause ¬A ∨ D, generated as a result of a resolution step).
Participation in the proof: The vector IN represents whether clauses in C participate in the proof. In our example, all the clauses are in the proof; however, in the general case some of the rows of C may be meaningless. When IN_i is on, it means that clause i is in the proof and must be proved as well. Every clause that participates in the proof is either a result of a resolution step (RES_i is set), a copy of some clause (CPY_i is set), or it is an original clause from the knowledge base (KB_i is set). The second clause of C in figure 1 for example is an original clause of the knowledge base. If a clause j is copied, it must be in the proof itself and therefore IN_j is set. Similarly, if clause i is a result of a resolution step, then the two resolved clauses must also be in the proof (IN_{i+1} and IN_{i+2} are set) and therefore must themselves be resolvents, copies or originals. This chain of constraints continues until all constraints are satisfied and a valid proof is generated.
Posting a query: The user posts a query by clamping its clauses onto the first rows of C and setting the appropriate IN units. This indicates that the query clauses participate in the proof and should be proved by either a resolution step, a copy step or by an original clause. Figure 1 represents the complete proof for the query D. We start by allocating an instance (4) for D in the P matrix, and clamping a positive literal D in the first row of C (C_{+,1,4}); the rest of the first row's units are clamped to zero. The unit IN_1 is biased (to have the value of one), indicating that the query is in the proof; this causes a chain of constraints to be activated that are satisfied only by a valid proof. If no proof exists, the IN_1 unit will become zero; i.e., the global minimum is obtained by setting IN_1 to zero despite the bias.
Representing resolution steps: The vector RES is a structure of units that indicates which clauses in C are obtained by a resolution step. If RES_i is set, then the i-th row is obtained by resolving row i+1 of C with row i+2. Thus, the unit RES_1 in figure 1 indicates that the clause D of the first row of C is a resolvent of the second and the third rows of C, representing ¬A ∨ D and A respectively. Two literals cancel each other if they have opposite signs and are represented by the same instance. In figure 1, literal A of the third row of C and literal ¬A of the second row cancel each other, generating the clause of the first row.
The rows of matrix R represent literals canceled by resolution steps. If row i of C is the result of a resolution step, there must be one and only one instance j such that both clause i+1 and clause i+2 include it with opposite signs. For example (figure 1): clause D in the first row of C is the result of resolving clause A with clause ¬A ∨ D, which are in the second and third rows of C respectively. Instance 1, representing atomic proposition A, is the one that is canceled; R_{1,1} is therefore set, indicating that clause 1 is obtained by a resolution step that cancels the literals of instance 1.
Copied and original clauses: The matrix D indicates which clauses are copied to other clauses in the proof area. Setting D_{i,j} means that clause i is obtained by copying (or weakening) clause j into clause i (the example does not use copy steps). The matrix K indicates which original knowledge-base clauses participate in the proof. The unit K_{i,j} indicates that clause i in the proof area is an original clause, and the syntax of the j-th clause in the knowledge base must be imposed on the units of clause i. In figure 1 for example, clause 2 in the proof (the second row in C) assumes the identity of clause number 1 in the knowledge base and therefore K_{1,2} is set.
3 Constraints

We are now ready to specify the constraints that must be satisfied by the units so that a proof is found. The constraints are specified as well formed logic formulas. For example the formula (A ∨ B) ∧ C imposes a constraint over the units (A, B, C) such that the only possible valid assignments to those units are (011), (101), (111).
A general method to implement an arbitrary logical constraint on connectionist
networks is shown in [Pinkas 90b]. Most of the constraints specified in this section
are hard constraints; i.e., must be satisfied for a valid proof to emerge. Towards the
end of this section, some soft constraints are presented.
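As a concrete illustration of the energy view (ours; [Pinkas 90b] gives the general quadratic construction with hidden units, which we do not reproduce here), a hard constraint can be compiled into a penalty that is zero exactly on its satisfying assignments:

```python
from itertools import product

def constraint_energy(formula, names):
    """Tabulate a penalty E(x) that is 0 iff the formula is satisfied.

    formula: function mapping a dict {name: bool} to bool
    names:   the unit names the constraint ranges over, e.g. ('A', 'B', 'C')
    """
    table = {}
    for bits in product((0, 1), repeat=len(names)):
        x = dict(zip(names, map(bool, bits)))
        table[bits] = 0.0 if formula(x) else 1.0   # violations cost energy
    return table

# The example from the text: (A or B) and C has minima exactly at 011, 101, 111.
E = constraint_energy(lambda x: (x['A'] or x['B']) and x['C'], ('A', 'B', 'C'))
assert {b for b, e in E.items() if e == 0.0} == {(0, 1, 1), (1, 0, 1), (1, 1, 1)}
```

Minimizing the sum of such penalties over all constraints is then exactly the search for a valid proof.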
In-proof constraints: If a clause participates in the proof, it must be either a result of a resolution step, a copy step or an original clause. In logic, the constraints may be expressed as: ∀i : IN_i → RES_i ∨ CPY_i ∨ KB_i. The three units (per clause i) form a winner-take-all (WTA) subnetwork, meaning that only one of the three units is actually set. The WTA constraints may be expressed as:

∀i : RES_i → ¬CPY_i ∧ ¬KB_i
∀i : CPY_i → ¬RES_i ∧ ¬KB_i
∀i : KB_i → ¬RES_i ∧ ¬CPY_i

The WTA property may be enforced by inhibitory connections between every pair of the three units.
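In energy terms, such pairwise inhibition is simply a positive quadratic term for every pair of units; a sketch (the weight value is our choice):

```python
import numpy as np

def wta_penalty(units, weight=2.0):
    """Quadratic inhibition energy: adds `weight` for every co-active pair.

    For binary units this is zero iff at most one unit is on, so together with
    the IN_i constraint above it enforces the winner-take-all behavior.
    """
    u = np.asarray(units, dtype=float)
    return weight * (u.sum() ** 2 - np.sum(u ** 2)) / 2.0  # sum over pairs u_i*u_j
```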
Copy constraints: If CPY_i is set then clause i must be a copy of another clause j in the proof. This can be expressed as ∀i : CPY_i → ∨_j (D_{i,j} ∧ IN_j). The rows of D are WTAs, allowing i to be a copy of only one j. In addition, if clause j is copied or weakened into clause i then every unit set in clause j must also be set in clause i. This may be specified as: ∀i, j, l : D_{i,j} → ((C_{+,i,l} ← C_{+,j,l}) ∧ (C_{−,i,l} ← C_{−,j,l})).
Resolution constraints: If a clause i is a result of resolving the two clauses i+1 and i+2, then there must be one and only one instance (j) that is canceled (represented by R_{i,j}), and C_i is obtained by copying the instances of both C_{i+1} and C_{i+2}, without the instance j. These constraints may be expressed as:

∀i : RES_i → ∨_j R_{i,j}   (at least one instance is canceled)
∀i, j, j' ≠ j : R_{i,j} → ¬R_{i,j'}   (only one instance is canceled (WTA))
∀i, j : R_{i,j} → (C_{+,i+1,j} ∧ C_{−,i+2,j}) ∨ (C_{−,i+1,j} ∧ C_{+,i+2,j})   (cancel literals with opposite signs)
∀i : RES_i → IN_{i+1} ∧ IN_{i+2}   (the two resolvents are also in the proof)
∀i, j : RES_i → (C_{+,i,j} ↔ ((C_{+,i+1,j} ∨ C_{+,i+2,j}) ∧ ¬R_{i,j}))   (copy positive literals)
∀i, j : RES_i → (C_{−,i,j} ↔ ((C_{−,i+1,j} ∨ C_{−,i+2,j}) ∧ ¬R_{i,j}))   (copy negative literals)
Clause-instance constraints: The sign of an instance in a clause should be unique; therefore, any instance pair in the matrix C is WTA: ∀i, j : C_{+,i,j} → ¬C_{−,i,j}. The columns of matrix P are WTAs since an instance is allowed to represent only one atomic proposition: ∀A, i, B ≠ A : P_{A,i} → ¬P_{B,i}. The rows of P may also be WTAs: ∀A, i, j ≠ i : P_{A,i} → ¬P_{A,j} (this constraint is not imposed in the FOL case).
Knowledge base constraints: If a clause i is an original knowledge base clause, then there must be a clause j (out of the m original clauses) whose syntax is forced upon the units of the i-th row of matrix C. This constraint can be expressed as: ∀i : KB_i → ∨_j K_{i,j}. The rows of K are WTA networks so that only one original clause is forced on the units of clause i: ∀i, j, j' ≠ j : K_{i,j} → ¬K_{i,j'}.
The only hard constraints that are left are those that force the syntax of a particular clause from the knowledge base. Assume for example that K_{i,4} is set, meaning that clause i in C must have the syntax of the fourth clause in the knowledge base of our example (¬C ∨ D). Instances j and j' must be allocated to the atomic propositions C and D respectively, and must appear also in clause i as the literals C_{−,i,j} and C_{+,i,j'}. The following constraints capture the syntax of (¬C ∨ D):

∀i : K_{i,4} → ∨_j (C_{−,i,j} ∧ P_{C,j})   (there exists a negative literal that is bound to C)
∀i : K_{i,4} → ∨_j (C_{+,i,j} ∧ P_{D,j})   (there exists a positive literal that is bound to D)
FOL extension:
In first-order predicate logic (FOL) instead of atomic propositions we must deal
with predicates (see [Pinkas 91j] for details). As in the propositional case, a literal
in a clause is represented by a positive or negative instance; however, the instance
must be allocated now to a predicate name and may have slots to be filled by other
instances (representing functions and constants). To accommodate such complexity
a new matrix (NEST) is added, and the role of matrix P is revised.
The matrix P must now accommodate function names, predicate names and constant names instead of just atomic propositions. Each row of P represents a name,
and the columns represent instances that are allocated to those names. The rows
of P that are associated with predicates and functions may contain several different instances of the same predicate or function; thus, they are no longer WTA.
In order to represent compound terms and predicates, instances may be bound to
slots of other instances. The new matrix ($NEST_{j,i,p}$) is capable of representing
such bindings. If $NEST_{j,i,p}$ is set, then instance i is bound to the p-th slot of instance
j. The columns of NEST are WTA, allowing only one instance to be bound to
a certain slot of another instance. When a clause i is forced to have the syntax
of some original clause I, syntactic constraints are triggered so that the literals of
clause i become instantiated by the relevant predicates, functions, constants and
variables imposed by clause I.
Unification is implicitly obtained if two predicates are represented by the same
instance while still satisfying all the constraints (imposed by the syntax of the
two clauses). When a resolution step is needed, the network tries to allocate the
same instance to the two literals that need to cancel each other. If the syntactic
constraints on the literals permit such sharing of an instance, then the attempt
to share the instance is successful and a unification occurs (the occur check is done
implicitly, since the matrix NEST allows only finite trees to be represented).
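To picture this implicit occur check, one can treat NEST as a 3-D boolean array (a NumPy array in the sketch below; the index convention NEST[j, i, p] is our assumption) and test that the induced binding graph is acyclic, since finite tree-shaped terms correspond exactly to acyclic bindings:

# Sketch: NEST[j, i, p] == True means instance i is bound to slot p of
# instance j.  A cycle in the binding graph would represent an infinite
# term, so a depth-first cycle check doubles as an explicit occur check.

def occurs_check_ok(NEST):
    n = NEST.shape[0]
    children = [[i for i in range(n)
                 if any(NEST[j, i, p] for p in range(NEST.shape[2]))]
                for j in range(n)]
    color = [0] * n                       # 0 = new, 1 = on stack, 2 = done
    def acyclic_from(j):
        color[j] = 1
        for i in children[j]:
            if color[i] == 1:             # back edge: instance occurs in itself
                return False
            if color[i] == 0 and not acyclic_from(i):
                return False
        color[j] = 2
        return True
    return all(color[j] == 2 or acyclic_from(j) for j in range(n))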
Minimizing the violation of soft constraints: Among the valid proofs some
are preferable to others. By means of soft constraints and optimization it is possible
to encourage the network to search for preferred proofs. Theorem-proving thus is
viewed as a constraint optimization problem. A weight may be assigned to each
of the constraints [Pinkas 91c] and the network tries to minimize the weighted sum
of the violated constraints, so that the set of the optimized solutions is exactly the
set of the preferred proofs. For example, preference of proofs with most general
unification is obtained by assignment of small penalties (negative bias) to every
binding of a function to a position of another instance (in NEST). Using similar
techniques, the network can be made to prefer shorter, more parsimonious or more
reliable proofs, low-cost plans or even more specific arguments as in nonmonotonic
reasoning.
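One generic way to realize this weighted constraint optimization, sketched below under the assumption of binary visible units and a global energy recomputed per move (the paper's own symmetric-network relaxation may differ), is simulated annealing:

import math
import random

# Sketch: minimize a weighted sum of violated constraints over binary unit
# states.  `constraints` is a list of (weight, predicate) pairs, where each
# predicate returns True on a state dict when the constraint is satisfied.

def energy(state, constraints):
    return sum(w for w, satisfied in constraints if not satisfied(state))

def anneal(state, constraints, steps=20000, t0=2.0):
    units = list(state)
    cur = energy(state, constraints)
    best, best_e = dict(state), cur
    for s in range(steps):
        t = max(t0 * (1.0 - s / steps), 1e-3)      # linear cooling schedule
        u = random.choice(units)
        state[u] ^= 1                               # flip one binary unit
        new = energy(state, constraints)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best_e:
                best, best_e = dict(state), cur
        else:
            state[u] ^= 1                           # reject the move
    return best, best_e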
4
Summary
Given a finite set T of m clauses, where n is the number of different predicates,
functions and constants, and given also a bound k over the proof length, we can
generate a network that searches for a proof with length not longer than k, for
a clamped query Q. If a global minimum is found then an answer is given as to
whether there exists such a proof, and the proof (with MGU's) may be extracted
from the state of the visible units. Among the possible valid proofs the system
prefers some "better" proofs by minimizing the violation of soft constraints. The
concept of "better" proofs may apply to applications like planning (minimize the
cost), abduction (parsimony) and nonmonotonic reasoning (specificity).
In the propositional case the generated network has $O(k^2 + km + kn)$ units and
$O(k^3 + km + kn)$ connections. For predicate logic there are $O(k^3 + km + kn)$ units
and connections, and we need to add $O(l \cdot m)$ connections and hidden units, where
l is the complexity-level of the syntactic constraints [Pinkas 91j].
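To make the asymptotic counts concrete, a toy calculator is sketched below; all hidden constants are set to 1, and the symbol l for the complexity-level is our notation:

# Purely illustrative: instantiate the order-of-growth estimates above
# with every constant factor set to 1.

def propositional_size(k, m, n):
    units = k ** 2 + k * m + k * n
    connections = k ** 3 + k * m + k * n
    return units, connections

def fol_size(k, m, n, l):
    # l is the assumed complexity-level of the syntactic constraints
    base = k ** 3 + k * m + k * n
    return base + l * m, base + l * m   # units, connections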
The results improve an earlier approach [Ballard 86]: There are no restrictions on
the rules allowed; every proof no longer than the bound is allowed; the network
is compact and the representation of bindings (unifications) is efficient; nesting of
functions and multiple uses of rules are allowed; only one relaxation phase is needed;
inconsistency is allowed in the knowledge base, and the query does not need to be
negated and pre-wired (it can be clamped during query time).
The architecture discussed has a natural fault-tolerance capability: When a unit
becomes faulty, it simply cannot assume a role in the proof, and other units are
allocated instead.
Acknowledgment: I wish to thank Dana Ballard, Bill Ball, Rina Dechter,
Peter Haddawy, Dan Kimura, Stan Kwasny, Ron Loui and Dave Touretzky for
helpful comments.
References
[Anandan et al. 89] P. Anandan, S. Letovsky, E. Mjolsness, "Connectionist variable binding by optimization," Proceedings of the 11th Cognitive Science Society, 1989.
[Ballard 86] D. H. Ballard "Parallel Logical Inference and Energy Minimization,"
Proceedings of the 5th National Conference on Artificial Intelligence,
Philadelphia, pp. 203-208, 1986.
[Barnden 91] J.A. Barnden, "Encoding complex symbolic data structures with some unusual connectionist techniques," in J.A. Barnden and J.B. Pollack, Advances in Connectionist and Neural Computation Theory 1: High-level connectionist models, Ablex Publishing Corporation, 1991.
[Derthick 88] M. Derthick "Mundane reasoning by parallel constraint satisfaction,"
PhD thesis, CMU-CS-88-182 Carnegie Mellon University, Sept. 1988
[Hinton, Sejnowski 86] G.E Hinton and T.J. Sejnowski, "Learning and re-learning
in Boltzman Machines," in J. L. McClelland and D. E. Rumelhart,
Parallel Distributed Processing: Explorations in The Microstructure
of Cognition I, pp. 282 - 317, MIT Press, 1986.
[Holldobler 90] S. Holldobler, "CHCL, a connectionist inference system for Horn
logic based on connection method and using limited resources," International Computer Science Institute TR-90-042, 1990.
[Hopfield 84b] J. J. Hopfield "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of
the National Academy of Sciences 81, pp. 3088-3092, 1984.
[Peterson, Hartman 89] C. Peterson, E. Hartman, "Explorations of mean field theory learning algorithm," Neural Networks 2, no. 6, 1989.
[Pinkas 90b] G. Pinkas, "Energy minimization and the satisfiability of propositional
calculus," Neural Computation 3, no. 2, 1991.
[Pinkas 91c] G. Pinkas, "Propositional Non-Monotonic Reasoning and Inconsistency in Symmetric Neural Networks," Proceedings of IJCAI, Sydney, 1991.
[Pinkas 91j] G. Pinkas, "First-order logic proofs using connectionist constraint relaxation," technical report, Department of Computer Science, Washington University, WUCS-91-S4, 1991.
[Shastri et al. 90] L. Shastri, V. Ajjanagadde, "From simple associations to systematic reasoning: A connectionist representation of rules, variables
and dynamic bindings," technical report, University of Pennsylvania,
Philadelphia, MS-CIS-90-05, 1990.
[Smolensky 86] P. Smolensky, "Information processing in dynamic systems: Foundations of harmony theory," in J.L. McClelland and D.E. Rumelhart,
Parallel Distributed Processing: Explorations in The Microstructure
of Cognition I , MIT Press, 1986.
Deep Learning of Invariant Features via Simulated
Fixations in Video
Will Y. Zou^1, Shenghuo Zhu^3, Andrew Y. Ng^2, Kai Yu^3
^1 Department of Electrical Engineering, Stanford University, CA
^2 Department of Computer Science, Stanford University, CA
^3 NEC Laboratories America, Inc., Cupertino, CA
{wzou, ang}@cs.stanford.edu  {zsh, kyu}@sv.nec-labs.com
Abstract
We apply salient feature detection and tracking in videos to simulate fixations and
smooth pursuit in human vision. With tracked sequences as input, a hierarchical
network of modules learns invariant features using a temporal slowness constraint.
The network encodes invariances which are increasingly complex with hierarchy.
Although learned from videos, our features are spatial instead of spatial-temporal,
and well suited for extracting features from still images. We applied our features to
four datasets (COIL-100, Caltech 101, STL-10, PubFig), and observe a consistent
improvement of 4% to 5% in classification accuracy. With this approach, we
achieve state-of-the-art recognition accuracy 61% on STL-10 dataset.
1
Introduction
Our visual systems are amazingly competent at recognizing patterns in images. During their development, training stimuli are not incoherent sequences of images, but natural visual streams modulated by fixations [1]. Likewise, we expect a machine vision system to learn from coherent image
sequences extracted from the natural environment. Through this learning process, it is desired that
features become robust to temporal transformations and perform significantly better in recognition.
In this paper, we build an unsupervised deep learning system which exhibits these properties, thus
achieving competitive performance on concrete computer vision benchmarks.
As a learning principle, sparsity is essential to understanding the statistics of natural images [2].
However, it remains unclear to what extent sparsity and subspace pooling [3, 4] could produce
invariance exhibited in higher levels of visual systems. Another approach to learning invariance is
temporal slowness [1, 5, 6, 7]. Experimental evidence suggests that high-level visual representations
become slow-changing and tolerant towards non-trivial transformations, by associating low-level
features which appear in a coherent sequence [5].
To learn features using slowness, a key observation is that during our visual fixations, moving objects
remain in visual focus for a sustained amount of time through smooth pursuit eye movements. This
mechanism ensures that the same object remains in visual exposure, avoiding rapid switching or
translations. Simulation for such a mechanism forms an essential part of our proposal. In natural
videos, we use spatial-temporal feature detectors to simulate fixations on salient features. At these
feature locations, we apply local contrast normalization [8], template matching [9] to find local
correspondences between successive video frames. This approach produces training sequences for
our unsupervised algorithm. As shown in Figure 1, training input to the neural network is free from
abrupt changes but contains non-trivial motion transformations.
In prior work [10, 11, 12], a single layer of features learned using temporal slowness results in
translation-invariant edge detectors, reminiscent of complex-cells. However, it remains unclear
whether higher levels of invariances [1], such as ones exhibited in IT, can be learned using temporal
Figure 1: Simulating smooth pursuit eye movements. (Left) Sequences extracted from fixed spatial
locations in a video. (Right) Sequences produced by our tracking algorithm.
slowness. In this paper, we focus on developing algorithms that capture higher levels of invariance,
by learning multiple layers of representations. By stacking learning modules, we are able to learn
features that are increasingly invariant. Using temporal slowness, the first layer units become locally
translational invariant, similar to subspace or spatial pooling; the second layer units can then encode
more complex invariances such as out-of-plane transformations and non-linear warping.
Using this approach, we show a surprising result that despite being trained on videos, our features
encode complex invariances which translate to recognition performance on still images. We carry
out our experiments using the self-taught learning framework [13]. We first learn a set of features
using simulated fixations in unlabeled videos, and then apply the learned features to classification
tasks. The learned features improve accuracy by a significant 4% to 5% across four still image
recognition datasets. In particular, we show the best classification result to date, 61%, on the STL-10 [14] dataset. Finally, we quantify the invariance learned using temporal slowness and simulated
fixations by a set of control experiments.
2
Related work
Unsupervised learning image features from pixels is a relatively new approach in computer vision.
Nevertheless, there have been successful application of unsupervised learning algorithms such as
Sparse Coding [15, 16], Independent Component Analysis [17], even clustering algorithms [14] on
a convincing range of datasets. These algorithms often use such principles as sparsity and feature
orthogonality to learn good representations.
Recent work in deep learning such as Le et al. [18] showed promising results for the application
of deep learning to vision. At the same time, these advances suggest challenges for learning deeper
layers [19] using purely unsupervised learning. Mobahi et al. [20] showed that temporal slowness could improve recognition on a video-like COIL-100 dataset. Despite being one of the first to
apply temporal slowness in deep architectures, the authors trained a fully supervised convolutional
network and used temporal slowness as a regularizing step in the optimization procedure. The influential work of Slow Feature Analysis (SFA) [7] was an early example of unsupervised algorithm
using temporal slowness. SFA solves a constrained problem and optimizes for temporal slowness
by mapping data into a quadratic expansion and performing eigenvector decomposition. Despite its
elegance, SFA's non-linear (quadratic) expansion is computationally slow when applied to high dimensional data. Applications of SFA to computer vision have had limited success, applied primarily
to artificially generated graphics data [21]. Bergstra et al. [12] proposed to train deep architectures
with temporal slowness and decorrelation, and illustrated training a first layer on MNIST digits.
[22, 23] proposed bi-linear models to represent natural images using a factorial code. Cadieu et al. [24] trained a two-layer algorithm to learn visual transformations in videos, with limited emphasis
on temporal slowness.
The computer vision literature has a number of works which, similar to us, use the idea of video
tracking to learn invariant features. Stavens et al. [25] show improvement in performance when
SIFT/HOG parameters are optimized using tracked image patch sequences in specific application
domains. Leistner et al. [26] used natural videos as "weakly supervised" signals to improve random forest classifiers. Lee et al. [27] introduced video-based descriptors used in hand-held visual
recognition systems. In contrast to these recent examples, our algorithm learns features directly
from raw image pixels, and adapts to pixel-level image statistics; in particular, it does not rely
on hand-designed preprocessing such as SIFT/HOG. Further, since it is implemented by a neural
network, our method can also be used in conjunction with such techniques as fine-tuning with backpropagation [28, 29].
3
Learning Architecture
In this section, we describe the basic modules and the architecture of the learning algorithm. In
particular, our learning modules use a combination of temporal slowness and a non-degeneracy
principle similar to orthogonality [30, 31]. Each module implements a linear transformation followed by a pooling step. The modules can be activated in a feed-forward manner, making them
suitable for forming a deep architecture. To learn invariant features with temporal slowness, we use
a two layer network, where the first layer is convolutional and replicates neurons with local receptive
field across dense grid locations, and the second (non-convolutional) layer is fully connected.
3.1
Learning Module
The input data to our learning module is a coherent sequence of image frames, and all frames in the
sequence are indexed by t. To learn hidden features p(t) from data x(t) , the modules are trained by
solving the following unconstrained minimization problem:
$$\min_W \;\; \lambda \sum_{t=1}^{N-1} \| p^{(t)} - p^{(t+1)} \|_1 \; + \; \sum_{t=1}^{N} \| x^{(t)} - W^T W x^{(t)} \|_2^2 \qquad (1)$$
The hidden features $p^{(t)}$ are mapped from data $x^{(t)}$ by a feed-forward pass in the network shown in Figure 2:
$$p^{(t)} = \sqrt{ H (W x^{(t)})^2 } \qquad (2)$$
This equation describes L2 pooling on a linear network layer. The square and square-root operations
are element-wise. This pooling mechanism is implemented by a subspace pooling matrix H with
a group size of two [30]. More specifically, each row of H picks and sums two adjacent feature
dimensions in a non-overlapping fashion.
The second term in Equation 1 is from the Reconstruction ICA algorithm [31]. It helps avoid degeneracy in the features, and plays a role similar to orthogonalization in Independent Component
Analysis [30]. The network encodes the data $x^{(t)}$ by a matrix-vector multiplication $z^{(t)} = W x^{(t)}$, and reconstructs the data with another feed-forward pass $\hat{x}^{(t)} = W^T z^{(t)}$. This term can also be
interpreted as an auto-encoder reconstruction cost. (See [31] for details.)
Although the algorithm is driven by temporal slowness, sparsity also helps to obtain good features
from natural images. Thus, in practice, we further add to Equation 1 an $L_1$-norm sparsity regularization term $\gamma \sum_{t=1}^{N} \| p^{(t)} \|_1$, to make sure the obtained features have sparse activations.
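A minimal NumPy sketch of this training objective (Equations 1 and 2 together with the added sparsity term; the weight symbols λ and γ and the frame-per-column data layout are illustrative choices, not the authors' code) could read:

import numpy as np

def pooling_matrix(num_pooled):
    """H with group size two: row r sums linear features 2r and 2r+1."""
    H = np.zeros((num_pooled, 2 * num_pooled))
    for r in range(num_pooled):
        H[r, 2 * r] = H[r, 2 * r + 1] = 1.0
    return H

def features(W, H, X):
    """Feed-forward pass p(t) = sqrt(H (W x(t))^2); columns of X are frames."""
    return np.sqrt(H @ (W @ X) ** 2)

def objective(W, H, X, lam=1.0, gamma=0.1):
    P = features(W, H, X)
    slowness = np.abs(P[:, 1:] - P[:, :-1]).sum()    # L1 temporal slowness
    recon = ((X - W.T @ (W @ X)) ** 2).sum()         # RICA reconstruction cost
    sparsity = np.abs(P).sum()                       # L1 sparsity on pooled features
    return lam * slowness + recon + gamma * sparsity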
This basic algorithm, trained on Hans van Hateren's natural video repository [24], produced oriented edge filters. The learned features are highly invariant to local translations. The reason for
this is that temporal slowness requires hidden features to be slow-changing across time. Using the
visualization method of [24], in Figure 3, we vary the interpolation angle in-between pairs of pooled
features, and produce a motion of smooth translations. A video of this illustration is also available
online.1
3.2
Stacked Architecture
The first layer modules described in the last section are trained on a smaller patch size (16x16 pixels)
of locally tracked video sequences. To construct the set of inputs to the second stacked layer, first
layer features are replicated on a dense grid in a larger scale (32x32 pixels). The input to layer two
is extracted after L2 pooling. This architecture produces an over-complete number of local 16x16
features across the larger feature area.
The two layer architecture is shown in Figure 4. Due to the high dimensionality of the first layer
outputs, we apply PCA to reduce their dimensions for the second layer algorithm. Afterwards,
1 http://ai.stanford.edu/~wzou/slow/first_layer_invariance.avi
Figure 2: Neural network architecture of the basic learning module
Figure 3: Translational invariance in first layer
features; columns correspond to interpolation angle θ at multiples of 45 degrees
a fully connected module is trained with temporal slowness on the output of PCA. The stacked
architecture learns features in a significantly larger 2-D area than the first layer algorithm, and is able
to learn invariance to larger-scale transformations seen in videos.
Figure 4: Two-layer architecture of our algorithm used to learn invariance from videos.
3.3
Invariance Visualization
After unsupervised training with video sequences, we visualize the features learned by the two layer
network. On the left of Figure 5, we show the optimal stimuli which maximally activate each of
the first layer pooling units. This is obtained by analytically finding the input that maximizes the
output of a pooling unit (subject to the constraint that the input x has unit norm, $\|x\|_2 = 1$). The
optimal stimuli for units learned without slowness are shown at the top, and appears to give high
frequency grating-like patterns. At the bottom, we show the optimal stimuli for features learned
with slowness; here, the optimal stimuli appear much smoother because the pairs of Gabor-like
features being pooled over are usually a quadrature pair. This implies that the pooling unit is robust
to changes in phase positions, which correspond to translations of the Gabor-like feature.
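For reference, the optimal stimulus of a single L2-pooling unit has a closed form: it is the top right singular vector of the 2 × d matrix stacking the pooled pair of linear features. A small sketch (ours, not the authors' code) of this computation and of the subspace interpolation used in Figure 3:

import numpy as np

def optimal_stimulus(w1, w2):
    """Unit-norm input maximizing sqrt((w1.x)^2 + (w2.x)^2)."""
    A = np.vstack([w1, w2])                    # 2 x d matrix of the pooled pair
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[0]                               # top right singular vector, unit norm

def interpolated_stimuli(w1, w2, angles_deg=(0, 45, 90, 135, 180)):
    """Rotate within the pooled subspace to sweep through the learned
    invariance (e.g., phase shifts of a Gabor-like quadrature pair)."""
    out = []
    for a in np.deg2rad(angles_deg):
        x = np.cos(a) * w1 + np.sin(a) * w2
        out.append(x / np.linalg.norm(x))
    return out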
The second layer features are learned on top of the pooled first layer features. We visualize the
second layer features by plotting linear combinations of the first layer features? optimal stimuli (as
shown on the left of Figure 5), and varying the interpolation angle as in [24]. The result is shown
on right of Figure 5, where each row corresponds to the visualization of a single pooling unit. Each
row corresponds to a motion sequence to which we would expect the second layer features to be
roughly invariant. From this visualization, non-trivial invariances are observed such as non-linear
warping, rotation, local non-affine changes and large scale translations. A video animation of this
visualization is also available online.2
2 http://ai.stanford.edu/~wzou/slow/second_layer_invariance.avi
Figure 5: (Left) Comparison of optimal stimuli of first layer pooling units (patch size 16x16) learned
without (top) and with (bottom) temporal slowness. (Right) visualization of second layer features
(patch size 32x32), with each row corresponding to one pooling unit. We observe a few non-trivial
invariances, such as warping (rows 9 and 10), rotation (first row), local non-affine changes (rows 3,
4, 6, 7), large scale translations (rows 2 and 5).
4
Experiments
Our experiments are carried out in a self-taught learning setting [13]. We first train the algorithm on
the Hans van Hateren natural scene videos, to learn a set of features. The learned features are then
used to classify single images in each of four datasets. Throughout this section, we use gray-scale
features to perform recognition.
4.1
Training with Tracked Sequences
To extract data from the Hans van Hateren natural video repository, we apply spatial-temporal
Difference-of-Gaussian blob detector and select areas of high response to simulate visual fixations.
After the initial frame is selected, the image patch is tracked across 20 frames using a tracker we
built and customized for this task. The tracker finds local correspondences by calculating Normalized Cross Correlation (NCC) of patches across time which are processed with local contrast
normalization.
The first layer algorithm is learned on 16x16 patches with 128 features (pooled from 256 linear
bases). The bases are then convolved within the larger 32x32 image patches with a stride of 2. PCA
is used to first reduce the dimensions of the response maps to 300 before learning the second layer.
The second layer learns 150 features (pooled from 300 linear bases).
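A simplified sketch of the fixation-and-tracking step of Section 4.1 (our illustration, not the authors' exact tracker) in NumPy:

import numpy as np

# Sketch: follow a patch from one frame to the next by normalized
# cross-correlation (NCC) after local contrast normalization, searching a
# small window around the previous location.

def contrast_normalize(patch, eps=1e-5):
    p = patch - patch.mean()
    return p / (np.linalg.norm(p) + eps)

def track_step(prev_frame, next_frame, y, x, size=16, radius=8):
    template = contrast_normalize(prev_frame[y:y + size, x:x + size])
    best_score, best_yx = -np.inf, (y, x)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or \
               yy + size > next_frame.shape[0] or xx + size > next_frame.shape[1]:
                continue                      # candidate window falls off the frame
            cand = contrast_normalize(next_frame[yy:yy + size, xx:xx + size])
            score = float((template * cand).sum())   # NCC of normalized patches
            if score > best_score:
                best_score, best_yx = score, (yy, xx)
    return best_yx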
4.2
Vision Datasets
COIL-100 contains images of 100 objects, each with 72 views. We followed testing protocols in [32,
20]. The videos we trained on to obtain the temporal slowness features were based on the van
Hataren videos, and were thus unrelated to COIL-100. The classification experiment is performed
on all 100 objects.
In Caltech 101, we followed the common experiment setup given in [33]: we pick 30 images per
class as training set, and randomly pick 50 per class (if fewer than 50 left, take the rest) as test set.
This is performed 10 times and we report the average classification accuracy.
The STL-10 [34] dataset contains 10 object classes with 5000 training and 8000 test images. There
are 10 pre-defined folds of training images, with 500 images in each fold. In each fold, a classifier
Table 1: Acc. COIL-100 (unrelated video)
Method                                        Acc.
VTU [32]                                      79.1%
ConvNet regularized with video [20]           79.77%
Our results without video                     82.0%
Our results using video                       87.0%
Performance increase by training on video     +5.0%

Table 2: Ave. acc. Caltech 101
Method                                        Ave. acc.
Two-layer ConvNet [36]                        66.9%
ScSPM [37]                                    73.2%
Hierarchical sparse-coding [38]               74.0%
Macrofeatures [39]                            75.7%
Our results without video                     66.5%
Our results using video                       74.6%
Performance increase with video               +8.1%

Table 3: Ave. acc. STL-10
Method                                        Ave. acc.
Reconstruction ICA [31]                       52.9%
Sparse Filtering [40]                         53.5%
SC features, K-means encoding [16]            56.0%
SC features, SC encoding [16]                 59.0%
Local receptive field selection [19]          60.1%
Our result without video                      56.5%
Our result using video                        61.0%
Performance increase with video               +4.5%

Table 4: Acc. PubFig faces
Method                                        Acc.
Our result without video                      86%
Our result using video                        90.0%
Performance increase with video               +4.0%
is trained on a specific set of 500 training images, and tested on all 8000 testing images. Similar
to prior work, the evaluation metric we report is average accuracy across 10 folds. The dataset is
suitable for developing unsupervised feature learning and self-taught learning algorithms, since the
number of supervised training labels is relatively small.
PubFig [35] is a face recognition dataset with 58,797 images of 200 persons. Face images contain
large variation in pose, expression, background and image conditions. Since some of the URL links
provided by the authors were broken, we only compare our results using video against our own
baseline result without video. 10% of the downloaded data was used as the test set.
4.3
Test Pipeline
On still images, we apply our trained network to extract features at dense grid locations. A linear
SVM classifier is trained on features from both first and second layers. We did not apply fine-tuning.
For COIL-100, we cross-validate the average pooling size. A simple four-quadrant pooling is used
for STL-10 and PubFig datasets. For Caltech 101, we use a three layer spatial pyramid.
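A sketch of this test pipeline for the four-quadrant case (scikit-learn's LinearSVC stands in for the linear SVM; encode_patch is a hypothetical placeholder for the trained two-layer network, and the Caltech 101 spatial pyramid is omitted):

import numpy as np
from sklearn.svm import LinearSVC

def image_descriptor(img, encode_patch, patch=32, stride=8):
    """Encode patches on a dense grid, average-pool over four quadrants,
    and concatenate the pooled vectors into one image descriptor."""
    H, W = img.shape
    bins = [[], [], [], []]                          # TL, TR, BL, BR quadrants
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            f = encode_patch(img[y:y + patch, x:x + patch])
            q = 2 * int(y + patch // 2 >= H // 2) + int(x + patch // 2 >= W // 2)
            bins[q].append(f)
    dim = next(len(b[0]) for b in bins if b)
    quads = [np.mean(b, axis=0) if b else np.zeros(dim) for b in bins]
    return np.concatenate(quads)

# descriptors = np.array([image_descriptor(im, encode_patch) for im in images])
# clf = LinearSVC(C=1.0).fit(descriptors, labels)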
4.4
Recognition Results
We report results on COIL-100, Caltech 101, STL-10 and PubFig datasets in tables 1, 2, 3 and 4.
In these experiments, the hyper-parameters are cross-validated. However, performance is not particularly sensitive to the weighting between the temporal slowness objective and the reconstruction
objective in Equation 1, as we will illustrate in Section 4.5.2. For each dataset, we compare results
using features trained with and without the temporal slowness objective term in Equation 1. Despite
the feature being learned from natural videos and then being transferred to different recognition tasks
(i.e., self-taught learning), they give excellent performance in our experiments. The application of
temporal slowness increases recognition accuracy consistently by 4% to 5%, bringing our results to
be competitive with the state-of-the-art.
4.5
Control Experiments
4.5.1
Effect of Fixation Simulation and Tracking
We carry out a control experiment to elucidate the difference between features learned using our
fixation and smooth pursuit method for extracting video frames (as in Figure 1, right) compared
to features learned using non-tracked sequences (Figure 1, left). As shown on the left of Figure 6,
training on tracked sequences reduces the translation invariance learned in the second layer. In
comparison to other forms of invariances, translation is less useful because it is easy to encode with
spatial pooling [17]. Instead, the features encode other invariance such as different forms of nonlinear warping. The advantage of using tracked data is reflected in object recognition performance
on the STL-10 dataset. Shown on the right of Figure 6, recognition accuracy is increased by a
considerable margin by training on tracked sequences.
Figure 6: (Left) Comparison of second layer invariance visualization when training data was obtained with tracking and without; (Right) Ave. acc. on STL-10 with features trained on tracked
sequences compared to non-tracked; λ in this plot is the slowness weighting parameter from Equation 1.
4.5.2 Importance of Temporal Slowness to Recognition Performance
To understand how much the slowness principle helps to learn good features, we vary the slowness
parameter across a range of values to observe its effect on recognition accuracy. Figure 7 shows
recognition accuracy on STL-10, plotted as a function of the slowness weighting parameter λ in the
first and second layers. On both layers, accuracy increases considerably with λ, and then levels off
slowly as the weighting parameter becomes large. The performance also appears to be reasonably
robust to the choice of λ, so long as the parameter is in the high-value regime.
Figure 7: Performance on STL-10 versus the amount of temporal slowness, on the first layer (left)
and second layer (right); in these plots λ is the slowness weighting parameter from Equation 1; different colored curves are shown for different λ values in the other layer.
4.5.3
Invariance Tests
We quantify invariance encoded in the unsupservised learned features with invariance tests. In this
experiment, we take the approach described in [4] and measure the change in features as the input image undergoes transformations. A patch is extracted from a natural image, and transformed through translation, rotation and zoom. We measure the Mean Squared Error (MSE) between the L2 normalized
feature vector of the transformed patch and the feature vector of the original patch 3 . The normalized
MSE is plotted against the amount of translation, rotation, and zoom. Results of invariance tests are
3
MSE is normalized against feature dimensions, and averaged across 100 randomly sampled patches. Since
the largest distortion produces an almost completely uncorrelated patch, for all features MSE is normalized against
the value at the largest distortion.
shown in Figure 8 (see footnote 4). In these plots, lower curves indicate higher levels of invariance. Our features
trained with temporal slowness have better invariance properties compared to features learned only
using sparsity, and SIFT (see footnote 5). Further, simulation of fixation with feature detection and tracking has a
visible effect on feature invariance. Specifically, as shown on the left of Figure 8, feature tracking
reduces translation invariance in agreement with our analysis in Section 4.5.1. At the same time,
middle and right plots of Figure 8 show that feature tracking increases the non-trivial rotation and
zoom invariance in the second layer of our temporal slowness features.
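The test itself is simple to state in code; the sketch below (with encode standing in for any of the compared feature extractors) reproduces the normalization described in footnote 3:

import numpy as np

def normalized_feature(patch, encode, eps=1e-8):
    """L2-normalize the feature vector of a patch."""
    f = encode(patch)
    return f / (np.linalg.norm(f) + eps)

def invariance_curve(patch, transformed_patches, encode, eps=1e-8):
    """MSE between features of the original and transformed patches,
    normalized by the value at the largest distortion (last element)."""
    f0 = normalized_feature(patch, encode)
    mse = np.array([np.mean((normalized_feature(p, encode) - f0) ** 2)
                    for p in transformed_patches])
    return mse / (mse[-1] + eps)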
Figure 8: Invariance tests comparing our temporal slowness features using tracked and non-tracked
sequences, against SIFT and features trained only with sparsity, shown for different transformations:
Translation (left), Rotation (middle) and Zoom (right).
5
Conclusion
We have described an unsupervised learning algorithm for learning invariant features from video
using the temporal slowness principle. The system is improved by using simulated fixations and
smooth pursuit to generate the video sequences provided to the learning algorithm. We illustrate
by means of visualizations and invariance tests, that the learned features are invariant to a collection
of non-trivial transformations. With concrete recognition experiments, we show that the features
learned from natural videos not only apply to still images, but also give competitive results on a
number of object recognition benchmarks. Since our features can be extracted using a feed-forward
neural network, they are also easy to use and efficient to compute.
References
[1] N. Li and J. J. DiCarlo. Unsupervised natural experience rapidly alters invariant object representation in
visual cortex. Science, 2008.
[2] A. Hyvarinen and P. Hoyer. Topographic independent component analysis as a model of v1 organization
and receptive fields. Neural Computation, 2001.
[3] J.H. van Hateren and D.L. Ruderman. Independent component filters of natural images compared with
simple cells in primary visual cortex. Proc Royal Society, 1998.
[4] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic
filter maps. In CVPR, 2009.
[5] D. Cox, P. Meier, N. Oertelt, and J. DiCarlo. ?Breaking? position-invariant object recognition. Nature
Neuroscience, 2005.
[6] T. Masquelier and S.J. Thorpe. Unsupervised learning of visual features through spike timing dependent
plasticity. PLoS Computational Biology, 2007.
[7] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal
of Vision, 2005.
[8] E. P. Simoncelli S. Lyu. Nonlinear image representation using divisive normalization. In CVPR, 2008.
[9] J. P. Lewis. Fast normalized cross-correlation. In Vision Interface, 1995.
[10] A. Hyvarinen, J. Hurri, and J. Vayrynen. Bubbles: a unifying framework for low-level statistical properties
of natural image sequences. Optical Society of America, 2003.
4
Translation test is performed with 16x16 patches and first layer features, rotation and zoom tests are performed with 32x32 patches and second layer features.
5
We use SIFT in the VLFeat toolbox [41] http://www.vlfeat.org/
[11] J. Hurri and A. Hyvarinen. Temporal coherence, natural image sequences and the visual cortex. In NIPS,
2006.
[12] J. Bergstra and Y. Bengio. Slow, decorrelated features for pretraining complex cell-like networks. In
NIPS, 2009.
[13] R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors.
In ICML, 2009.
[14] A. Coates, H. Lee, and A. Y. Ng. An analysis of single layer networks in unsupervised feature learning.
In AISTATS, 2011.
[15] B.A. Olshausen and D.J. Field. How close are we to understanding v1? Neural Computation, 2005.
[16] A. Coates and A. Ng. The importance of encoding versus training with sparse coding and vector quantization. In ICML, 2011.
[17] Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng. Tiled convolutional neural networks. In
Advances in Neural Information Processing Systems, 2010.
[18] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building
high-level features using large scale unsupervised learning. In ICML, 2012.
[19] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In NIPS, 2011.
[20] H. Mobahi, R. Collobert, and Jason Weston. Deep learning from temporal coherence in video. In ICML,
2009.
[21] M. Franzius, N. Wilbert, and L. Wiskott. Invariant object recognition with Slow Feature Analysis. In
ICANN, 2008.
[22] B. Olshausen, C. Cadieu, J. Culpepper, and D.K. Warland. Bilinear models of natural images. In Proc.
SPIE 6492, 2007.
[23] D. B. Grimes and R. P. N. Rao. Bilinear sparse coding for invariant vision. Neural Computation, 2005.
[24] C. Cadieu and B. Olshausen. Learning tranformational invariants from natural movies. In NIPS, 2009.
[25] S. Thrun D. Stavens. Unsupervised learning of invariant features using video. In CVPR, 2010.
[26] C. Leistner, M. Godec, S. Schulter, M. Werlberger, A. Saffari, and H. Bischof. Improving classifiers with
unlabeled weakly-related videos. In CVPR, 2011.
[27] T. Lee and S. Soatto. Video-based descriptors for object recognition. Image and Vision Computing, 2011.
[28] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors.
Nature, 1986.
[29] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In Large-Scale Kernel Machines, 2007.
[30] A. Hyvarinen, J. Hurri, and P.O. Hoyer. Natural Image Statistics. Springer, 2009.
[31] Q. V. Le, A. Karpenko, J. Ngiam, and A. Y. Ng. ICA with reconstruction cost for efficient overcomplete
feature learning. In NIPS, 2011.
[32] H. Wersing and E. Körner. Learning optimized features for hierarchical models of invariant object recognition. Neural Computation, 2003.
[33] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an
incremental Bayesian approach tested on 101 object categories. In CVPR Workshop on Generative-Model Based Vision, 2004.
[34] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning. In
AISTATS 14, 2010.
[35] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In ICCV, 2009.
[36] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional
feature hierarchies for visual recognition. In NIPS, 2010.
[37] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image
classification. In CVPR, 2009.
[38] K. Yu, Y. Lin, and J. Lafferty. Learning image representations from the pixel level via hierarchical sparse
coding. In CVPR, 2011.
[39] Y-Lan Boureau, Francis Bach, Yann LeCun, and Jean Ponce. Learning mid-level features for recognition.
In CVPR, 2010.
[40] J. Ngiam, P. W. Koh, Z. Chen, S. Bhaskar, and A. Y. Ng. Sparse filtering. In NIPS, 2011.
[41] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008.
Clustering Aggregation as
Maximum-Weight Independent Set
Nan Li
Longin Jan Latecki
Department of Computer and Information Sciences
Temple University, Philadelphia, USA
{nan.li,latecki}@temple.edu
Abstract
We formulate clustering aggregation as a special instance of Maximum-Weight
Independent Set (MWIS) problem. For a given dataset, an attributed graph is constructed from the union of the input clusterings generated by different underlying
clustering algorithms with different parameters. The vertices, which represent the
distinct clusters, are weighted by an internal index measuring both cohesion and
separation. The edges connect the vertices whose corresponding clusters overlap. Intuitively, an optimal aggregated clustering can be obtained by selecting an
optimal subset of non-overlapping clusters partitioning the dataset together. We
formalize this intuition as the MWIS problem on the attributed graph, i.e., finding
the heaviest subset of mutually non-adjacent vertices.
This MWIS problem exhibits a special structure. Since the clusters of each input clustering form a partition of the dataset, the vertices corresponding to each
clustering form a maximal independent set (MIS) in the attributed graph. We propose a variant of simulated annealing method that takes advantage of this special
structure. Our algorithm starts from each MIS, which is close to a distinct local
optimum of the MWIS problem, and utilizes a local search heuristic to explore its
neighborhood in order to find the MWIS. Extensive experiments on many challenging datasets show that: 1. our approach to clustering aggregation automatically decides the optimal number of clusters; 2. it does not require any parameter
tuning for the underlying clustering algorithms; 3. it can combine the advantages
of different underlying clustering algorithms to achieve superior performance; 4.
it is robust against moderate or even bad input clusterings.
1
Introduction
Clustering is a fundamental problem in data analysis, and has extensive applications in statistics, data
mining, computer vision and even in social sciences. The goal is to partition the data objects into
a set of groups (clusters) such that objects in the same group are similar, while objects in different
groups are dissimilar.
In the past two decades, many different clustering algorithms have been developed. Some popular
ones include K-means, DBSCAN, Ward?s algorithm, EM-clustering and so on. However, there are
potential shortcomings for each of the known clustering algorithms. For instance, K-means [7]
and its variations have difficulty detecting the "natural" clusters, which have non-spherical shapes
or widely different sizes or densities. Furthermore, in order to achieve good performance, they
require an appropriate number of clusters as the input parameter, which is usually very hard to
specify. DBSCAN [8], a density-based clustering algorithm, can detect clusters of arbitrary shapes
and sizes. However, it has trouble with data which have widely varying densities. Also, DBSCAN
requires two input parameters specified by the user: the radius, Eps, to define the neighborhood of
each data object, and the minimum number, minPts, of data objects required to form a cluster.
Consensus clustering, also called clustering aggregation or clustering ensemble, refers to a class of
methods which try to find a single (consensus) superior clustering from a number of input clusterings obtained by different algorithms with different parameters. The basic motivation of these
methods is to combine the advantages of different clustering algorithms and overcome their respective shortcomings. Besides generating stable and robust clusterings, consensus clustering methods
can be applied in many other scenarios, such as categorical data clustering, "privacy-preserving"
clustering and so on. Some representative methods include [1, 2, 9, 11, 12, 13, 14]. [2] formulates
clustering ensemble as a combinatorial optimization problem in terms of shared mutual information.
That is, the relationship between each pair of data objects is measured based on their cluster labels
from the multiple input clusterings, rather than the original features. Then a graph representation is
constructed according to these relationships, and finding a single consolidated clustering is reduced
to a graph partitioning problem. Similarly, in [1], a number of deterministic approximation algorithms are proposed to find an ?aggregated? clustering which agrees as much as possible with the
input clusterings. [9] also applies a similar idea to combine multiple runs of K-means algorithm.
[11] proposes to capture the notion of agreement using a measure based on a 2D string encoding.
They derive a nonlinear optimization model to maximize the new agreement measure and transform
it into a strict 0-1 Semidefinite Program. [12] presents three iterative EM-like algorithms for the
consensus clustering problem.
A common feature of these consensus clustering methods is that they usually do not access to the
original features of the data objects. They utilize the cluster labels in different input clusterings as
the new features of each data object to find an optimal clustering. Consequently, the success of these
consensus clustering methods heavily relies on a premise that the majority of the input clusterings
are reasonably good and consistent, which is not often the case in practice. For example, given a new
challenging dataset, it is probable that only some few of the chosen underlying clustering algorithms
can generate good clusterings. Many moderate or even bad input clusterings can mislead the final
"consensus". Furthermore, even if we choose the appropriate underlying clustering algorithms, in
order to obtain good input clusterings, we still have to specify the appropriate input parameters.
Therefore, it is desired to devise new consensus clustering methods which are more robust and do
not need the optimal input parameters to be specified.
In this paper, our definition of "clustering aggregation" is different. Informally, for each of the
clusters in the input clusterings, we evaluate its quality with some internal indices measuring both
the cohesion and separation. Then we select an optimal subset of clusters, which partition the
dataset together and have the best overall quality, as the "aggregated clustering". (We give a formal
statement of our "clustering aggregation" problem in Sec. 2.) In this framework, ideally, we can
find the optimal "aggregated clustering" even if only a minority of the input clusterings are good
enough. Therefore, we only need to specify an appropriate range of the input parameters, rather
than the optimal values, for the underlying clustering algorithms.
We formulate this ?clustering aggregation? problem as a special instance of Maximum-Weight Independent Set (MWIS) problem. An attributed graph is constructed from the union of the input
clusterings. The vertices, which represent the distinct clusters, are weighted by an internal index
measuring both cohesion and separation. The edges connect the vertices whose corresponding clusters overlap (In practice, we may tolerate a relatively small amount of overlap for robustness). Then
selecting an optimal subset of non-overlapping clusters partitioning the dataset together can be formulated as seeking the MWIS of the attributed graph, which is the heaviest subset of mutually
non-adjacent vertices. Moreover, this MWIS problem exhibits a special structure. Since the clusters
of each input clustering form a partition of the dataset, the vertices corresponding to each clustering
form a maximal independent set (MIS) in the attributed graph.
The most important source of motivation for our work is [3]. In [3], image segmentation is formulated as a MWIS problem. Specifically, given an image, they first segment it with different bottom-up
segmentation schemes to get an ensemble of distinct superpixels. Then they select a subset of the
most "meaningful" non-overlapping superpixels to partition the image. This selection procedure is
formulated as solving a MWIS problem. In this respect, our work is very similar to [3]. The only
difference is that our work applies the MWIS formulation to a more general problem, clustering
aggregation.
MWIS problem is known to be NP-hard. Many heuristic approaches are proposed to find approximate solutions. As we mentioned before, in the context of clustering aggregation, the formulated
MWIS problem exhibits a special structure. That is, the vertices corresponding to each clustering
form a maximal independent set (MIS) in the attributed graph. This special structure is valuable
for finding good approximations to the MWIS because, although these MISs may not be the global
optimum of the MWIS, they are close to distinct local optimums. We propose a variant of simulated annealing method that takes advantage of this special structure. Our algorithm starts from each
MIS and utilizes a local search heuristic to explore its neighborhood in order to find better approximations to the MWIS. The best solution found in this process is returned as the final approximate
MWIS. Since the exploration for each MIS is independent, our algorithm is suitable for parallel
computation.
Finally, since the selected clusters may not be able to cover the entire dataset, our approach performs
a post-processing to assign the missing data objects to their nearest clusters.
Extensive experiments on many challenging datasets show that: 1. our approach to clustering aggregation automatically decides the optimal number of clusters; 2. it does not require any parameter
tuning for the underlying clustering algorithms; 3. it can combine the advantages of different underlying clustering algorithms to achieve superior performance; 4. it is robust against moderate or even
bad input clusterings.
Paper Organization. In Sec. 2, we present the formal statement of the clustering aggregation problem and its formulation as a special instance of the MWIS problem. In Sec. 3, we present our algorithm.
The experimental evaluations and conclusion are given in Sec. 4 and Sec. 5, respectively.
2 MWIS Formulation of Clustering Aggregation
Consider a set of n data objects $D = \{d_1, d_2, ..., d_n\}$. A clustering $C_i$ of D is obtained by applying
an exclusive clustering algorithm with a specific set of input parameters on D. The disjoint clusters
$c_{i1}, c_{i2}, ..., c_{ik}$ of $C_i$ are a partition of D, i.e., $\bigcup_{j=1}^{k} c_{ij} = D$ and $c_{ip} \cap c_{iq} = \emptyset$ for all $p \neq q$.
With different clustering algorithms and different parameters, we can obtain a set of m different
clusterings of D: C1 , C2 , ..., Cm . For each cluster cij in the union of these m clusterings, we
evaluate its quality with an internal index measuring both cohesion and separation.
We use the average silhouette coefficient of a cluster as such an internal index in this paper. The
silhouette coefficient is defined for an individual data object. It is a measure of how similar that data
object is to data objects in its own cluster compared to data objects in other clusters. Formally, the
silhouette coefficient for the tth data object, $S_t$, is defined as
$$S_t = \frac{b_t - a_t}{\max(a_t, b_t)} \qquad (1)$$
where at is the average distance from the tth data object to the other data objects in the same cluster
as t, and bt is the minimum average distance from the tth data object to data objects in a different
cluster, minimized over clusters.
The silhouette coefficient ranges from -1 to +1, and a positive value is desirable. The quality of a particular cluster $c_{ij}$ can be evaluated with the average of the silhouette coefficients of the data objects
belonging to it:
$$ASC_{c_{ij}} = \frac{\sum_{t \in c_{ij}} S_t}{|c_{ij}|} \qquad (2)$$
where $S_t$ is the silhouette coefficient of the tth data object in cluster $c_{ij}$, and $|c_{ij}|$ is the cardinality of
cluster $c_{ij}$.
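To make this quality measure concrete, here is a minimal Python sketch (ours, not from the paper; the function name is our own). It uses scikit-learn's silhouette_samples, whose default metric is Euclidean distance, whereas the paper's experiments use squared Euclidean distance:

import numpy as np
from sklearn.metrics import silhouette_samples

def cluster_quality(X, labels):
    # Per-object silhouette coefficients S_t, as in Eq. (1).
    s = silhouette_samples(X, labels)
    # Average silhouette coefficient per cluster, as in Eq. (2).
    return {c: float(s[labels == c].mean()) for c in np.unique(labels)}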
We select an optimal subset of non-overlapping clusters from the union of all the clusterings, which
partition the dataset together and have the best overall quality, as the "aggregated clustering". The
selection of clusters is formulated as a special instance of the Maximum-Weight Independent Set
(MWIS) problem.
Formally, consider an undirected and weighted graph G = (V, E), where V = {1, 2, ..., n} is
the vertex set and E ? V ? V is the edge set. For each vertex i ? V , a positive weight wi is
associated with i. $A = (a_{ij})_{n \times n}$ is the adjacency matrix of G, where $a_{ij} = 1$ if $(i, j) \in E$ is an
edge of G, and $a_{ij} = 0$ if $(i, j) \notin E$. A subset of V can be represented by an indicator vector
$x = (x_i) \in \{0, 1\}^n$, where $x_i = 1$ means that i is in the subset, and $x_i = 0$ means that i is not in the
subset. An independent set is a subset of V whose elements are pairwise nonadjacent. Then finding
a maximum-weight independent set, denoted as $x^*$, can be posed as the following:
$$x^* = \operatorname{argmax}_x\; w^T x, \quad \text{s.t.}\ \forall i \in V: x_i \in \{0, 1\}, \quad x^T A x = 0 \qquad (3)$$
The weight $w_i$ on vertex i is defined as:
$$w_i = ASC_{c_i} \cdot |c_i| \qquad (4)$$
where $c_i$ is the cluster represented by vertex i, and $ASC_{c_i}$ and $|c_i|$ are its quality measure and cardinality,
respectively.
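As a small illustration of formulation (3) (our own sketch, not part of the paper), the following snippet evaluates a candidate indicator vector x: it checks the independence constraint $x^T A x = 0$ and returns the total weight $w^T x$:

import numpy as np

def mwis_value(x, w, A):
    # x: 0/1 indicator vector; w: vertex weights; A: adjacency matrix.
    x = np.asarray(x, dtype=float)
    if x @ A @ x != 0:       # violates independence: two selected vertices are adjacent
        return -np.inf
    return float(w @ x)      # total weight of the selected clusters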
Our problem (3) is a special instance of the MWIS problem, since graph G exhibits an additional structure, which we will utilize in the proposed algorithm. The vertex set V can be partitioned into
disjoint subsets $P = \{P_1, P_2, ..., P_m\}$, where $P_i$ corresponds to the clustering $C_i$, such that each $P_i$
is also a maximal independent set (MIS), which means it is not a subset of any other independent
set. This follows from the fact that each clustering $C_i$ is a partition of the dataset D. Formally,
$$\bigcup_{i=1}^{m} P_i = V, \quad P_i \cap P_j = \emptyset,\ i \neq j, \quad \text{and } P_i \text{ is an MIS}, \quad \forall i, j \in \{1, 2, ..., m\} \qquad (5)$$
3 Our Algorithm
The basic idea of our algorithm is to explore the neighborhood of each known MIS Pi independently
with a local search heuristic in order to find better solutions. The proposed algorithm is an instance
of simulated annealing methods [10] with multiple initializations.
Our algorithm starts with a particular MIS Pi , denoted by x0 . xt+1 , which is a neighbor of xt , is
obtained by replacing some lower-weight vertices in xt with higher-weight vertices under the constraint of always being an independent set. Specifically, we first reduce xt by removing a proportion
q of lower-weight vertices. Here we remove a proportion, rather than a fixed number, of vertices in
order to make the reduction adaptive with respect to the number s of vertices in xt . In practice, we
use ceil(s ? q) to make sure at least one vertex will be removed. Note that this step is probabilistic,
rather than deterministic. The probability that a vertex i will be retained is proportional to its $WD$
value, which is defined as follows:
$$WD_i = \frac{w_i}{\sum_{j \in N_i} w_j} \qquad (6)$$
where Ni is the set of vertices which are connected with vertex i in G.
Intuitively, a larger $WD$ value indicates larger weight, less conflict with other vertices, or both. Therefore, the obtained $x'_t$ is likely to contain vertices with large weights and to have large potential room
for improvement. The proportion parameter q is used to control the "radius" of the neighborhood
to be explored.
Then our algorithm iteratively improves $x'_t$ by adding compatible vertices one by one. In each
iteration, it first identifies all the vertices compatible with the existing ones in the current $x'_t$, called
candidates. Then a "local" measure $WD'$ is calculated to evaluate each of these candidates:
$$WD'_i = \frac{w_i}{\sum_{j \in N'_i} w_j} \qquad (7)$$
where $N'_i$ is the set of candidate vertices which are connected with vertex i.
A large value of $WD'_i$ indicates that candidate i either can bring a large improvement this time
(numerator), or has small conflict with further improvements (denominator), or both.
The candidate with the largest $WD'$ value is added to $x'_t$. In the next iteration, this new $x'_t$ will be
further improved. This iterative procedure continues until $x'_t$ cannot be further improved. We obtain
$x'_t$ as a randomized neighbor of $x_t$.
Algorithm 1:
Input: Graph G, weights w, adjacency matrix A, the known MISs P = {P1, P2, ..., Pm}
Output: An approximate solution to MWIS
 1: Calculate WD for each vertex;
 2: for each MIS Pi do
 3:     Initialize x0 with Pi;
 4:     for t = 1, 2, ..., n do
 5:         Reduce xt to x't probabilistically by removing a proportion q of vertices with relatively lower WD values;
 6:         repeat
 7:             Identify candidate vertices compatible with the current x't;
 8:             Calculate WD' for each candidate;
 9:             Update x't by adding the candidate with the largest WD';
10:         until x't cannot be further improved;
11:         Calculate α = min[1, e^{(W(x't) − W(xt))/τ^t}];
12:         Update xt+1 as x't with probability α, otherwise xt+1 = xt;
13:     end
14: end
15: return the best solution found in the process;
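For concreteness, here is a compact Python sketch of Algorithm 1 (our own translation, not from the paper; the exact randomization details, variable names, and numerical guards are assumptions consistent with the text):

import math
import numpy as np

def mwis_anneal(A, w, mis_list, q=0.3, tau=0.999, n_iters=100, rng=None):
    # Approximate MWIS by local search around each known MIS (cf. Algorithm 1).
    rng = rng or np.random.default_rng()
    w = np.asarray(w, dtype=float)
    n = len(w)
    WD = np.array([w[i] / max(w[A[i] == 1].sum(), 1e-12) for i in range(n)])  # Eq. (6)

    def weight(S):                      # W(x) = w^T x for a selected vertex set S
        return sum(w[i] for i in S)

    def reduce_set(S):                  # line 5: drop ceil(|S|*q) vertices, favoring high-WD ones
        S_list = sorted(S)
        k = math.ceil(len(S_list) * q)
        p = WD[S_list] / WD[S_list].sum()
        keep = rng.choice(S_list, size=len(S_list) - k, replace=False, p=p)
        return {int(v) for v in keep}

    def grow(S):                        # lines 6-10: repeatedly add the best compatible candidate
        while True:
            cand = [v for v in range(n) if v not in S and all(A[v, u] == 0 for u in S)]
            if not cand:
                return S
            wd_local = [w[v] / max(sum(w[u] for u in cand if A[v, u] == 1), 1e-12)
                        for v in cand]  # Eq. (7)
            S = S | {cand[int(np.argmax(wd_local))]}

    best = set(max(mis_list, key=weight))
    for Pi in mis_list:                 # independent exploration from each known MIS
        x = set(Pi)
        for t in range(1, n_iters + 1):
            x_new = grow(reduce_set(x))
            diff = weight(x_new) - weight(x)
            alpha = 1.0 if diff >= 0 else math.exp(diff / tau ** t)  # lines 11-12
            if rng.random() < alpha:
                x = x_new
            if weight(x) > weight(best):
                best = set(x)
    return best

Because each starting MIS is explored independently, the outer loop over mis_list could be parallelized directly, as the paper notes.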
Now our algorithm calculates the acceptance ratio $\alpha = e^{(W(x'_t) - W(x_t))/\tau^t}$, where $W(x) = w^T x$;
$0 < \tau < 1$ is a constant which is usually picked to be close to 1. If $\alpha \geq 1$, then $x'_t$ is accepted as
$x_{t+1}$. Otherwise, it is accepted with probability $\alpha$.
This exploration starting from $P_i$ continues for a number of iterations, or until $x_t$ converges. The
best solution encountered in this process is recorded. After exploring the neighborhoods of all the
known MISs, the best solution is returned. A formal description can be found in Algorithm 1.
Our algorithm is essentially a variant of the simulated annealing method [10], since the maximization of
$W(x) = w^T x$ is equivalent to the minimization of the energy function $E(x) = -W(x) = -w^T x$.
Lines 5 to 10 in Alg. 1 define a randomized "moving" procedure of making a transition from $x_t$ to its
neighbor $x'_t$. When calculating the acceptance ratio $\alpha = e^{(W(x'_t) - W(x_t))/\tau^t}$, suppose $T_0 = 1$ (initial
temperature); then it is equivalent to $\alpha = e^{-(W(x_t) - W(x'_t))/\tau^t} = e^{-(E(x'_t) - E(x_t))/\tau^t}$. Hence
Algorithm 1 is a variant of simulated annealing. Therefore, our algorithm converges in theory.
In practice, the convergence of our algorithm is fast. In all the experiments presented in the next section,
our algorithm converges in less than 100 iterations. The reason is that our algorithm takes advantage
of the fact that the known MISs are close to distinct local maxima. Also, the local search heuristic of our
algorithm is effective at finding better candidates in the neighborhood.
The parameter q controls the "radius" of the neighborhood to be explored in each iteration. A small
q means a small "radius" and results in more iterations to converge. On the other hand, a large q
takes less advantage of the known MISs; the resulting unstable exploration also leads to more iterations to
converge.
Since our algorithm explores the neighborhood of each known MIS independently, its efficiency can
be further improved by using parallel computation.
4 Results
We evaluate the performance of our approach with three experiments. In these experiments, for the
underlying clustering algorithms, including K-means, single linkage, complete linkage, and Ward's
clustering, we use the implementations in MATLAB. Unless specified explicitly, the parameters
are MATLAB's defaults. For example, when using K-means, we only specify the number K of
desired clusters. The default "Squared Euclidean distance" is used as the distance measure. When
calculating silhouette coefficients, we use MATLAB's function "silhouette(X,clust)" and the default
metric "Squared Euclidean distance". For robustness in our experiments, we tolerate slight overlap
between clusters. That is, for the adjacency matrix $A = (a_{ij})_{n \times n}$, $a_{ij} = 1$ if $\frac{|c_i \cap c_j|}{\min(|c_i|, |c_j|)} > 0.1$, and
$a_{ij} = 0$ otherwise. In these experiments, the parameters of our local search algorithm are: q = 0.3;
? = 0.999; iteration number n = 100. We test different combinations of q = 0.1 : 0.1 : 0.5 and
n = 100 : 100 : 1000. The results are almost the same.
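The overlap-tolerant graph construction above is easy to reproduce; a small sketch (ours, assuming each cluster is stored as a Python set of object indices):

import numpy as np

def build_adjacency(clusters, tol=0.1):
    # clusters: list of sets; vertices conflict if |ci & cj| / min(|ci|, |cj|) > tol
    n = len(clusters)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            ov = len(clusters[i] & clusters[j]) / min(len(clusters[i]), len(clusters[j]))
            A[i, j] = A[j, i] = 1 if ov > tol else 0
    return A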
In the first experiment, we evaluate our approach's ability to achieve good performance without
specifying the optimal input parameters for the underlying clustering algorithms. We use the dataset
from [6]. This dataset consists of 4 subsets (S1, S2, S3, S4) of synthetic 2-d data points. Each subset
contains 5000 vectors in 15 Gaussian clusters, but with different degrees of cluster overlap. We
choose K-means as the underlying clustering algorithm and vary the parameter K = 5 : 1 : 25,
which is the desired number of clusters. Since different runs of K-means starting from random
initialization of centroids typically produce different clustering results, we run K-means 5 times for
each value of K. That is, there are a total of 21 ? 5 = 105 different input clusterings. Note that,
in order to show the performance of our approach clearly, we do not perform the post-processing of
assigning the missing data points to their nearest clusters.
[Figure 1 appears here: eight 2-d scatter panels. Top row: the original datasets S1, S2, S3, and S4; bottom row: our clustering results, Our S1 through Our S4. Axis values are on the order of 10^5.]
Figure 1: Clustering aggregation without parameter tuning. (top row) Original data. (bottom row)
Clustering results of our approach. Best viewed in color.
As shown in Fig. 1, on each of the four subsets, the aggregated clustering obtained by our approach
has the correct number (15) of clusters and near-perfect structure. Only a very small portion of
data points is not assigned to any cluster. These results confirm that our approach can automatically
decide the optimal number of clusters without any parameter tuning for the underlying clustering
algorithms.
In the second experiment, we evaluate our approach's ability to combine the advantages of different underlying clustering algorithms and to cancel out the errors introduced by them. The dataset is
from [1]. As shown in the fifth panel of Fig. 2, this synthetic dataset consists of 7 distinct groups of
2-d data points, which have significantly different shapes and sizes. There are also some "bridges"
between different groups of data points. Consequently, this dataset is very challenging for any single
clustering algorithm. In this experiment, we use four different underlying clustering algorithms implemented in MATLAB: single linkage, complete linkage, Ward's clustering, and K-means. The first
two are both agglomerative bottom-up algorithms. The only difference between them is that when
merging pairs of clusters, single linkage is based on the minimum distance, while complete linkage
is based on the maximum distance. The third one, Ward's clustering algorithm, is also an agglomerative
bottom-up algorithm. In each merging step, it chooses the pair of clusters which minimizes the sum
of squared distances from each point to the mean of the two clusters. The fourth algorithm is
K-means.
For each of the underlying clustering algorithms, we vary the input parameter of desired number of
clusters as 4 : 1 : 10. That is, we have a total of 7 ? 4 = 28 input clusterings.
Note that, unlike [1], we do not use the average linkage clustering algorithm, because by specifying
the correct number of clusters, it can generate a near-perfect clustering by itself. We abandon this
best algorithm here in order to show the performance of our approach clearly. In practice, however,
utilizing good underlying clustering algorithms can significantly increase the chance for our
approach to obtain superior aggregated clusterings. As in experiment 1, we do not perform the post-processing in this experiment.
[Figure 2 appears here: six 2-d scatter panels titled Single Linkage, Complete Linkage, Ward's clustering, K-means, Original data, and Our result.]
Figure 2: Clustering aggregation on four different input clusterings. Best viewed in color.
In the first four panels of Fig. 2, we show the clustering results obtained by the four underlying
clustering algorithms with the number of clusters set to be 7. Obviously, even with the optimal input
parameters, the results of these algorithms are far from being correct. The ground truth and the result
of our approach are shown in the fifth and sixth panels, respectively. As we can see, our aggregated
clustering is almost perfect, except for the three green data points in the "bridge" between the cyan
and green "balls". These results confirm that our approach can effectively combine the advantages
of different clustering algorithms and cancel out the errors introduced by them. Also, in contrast to
the other consensus clustering algorithms, such as [1], our aggregated clustering is obtained without
specifying the optimal input parameters for any of the underlying clustering algorithms. This is a
very desirable feature in practice.
In the third experiment, we compare our approach with some other popular consensus clustering
algorithms, including Cluster-based Similarity Partitioning Algorithm (CSPA) [2], HyperGraph Partitioning Algorithm (HGPA) [2], Meta-Clustering Algorithm (MCLA) [2], the Furthest (Furth) algorithm [1], the Agglomerative (Agglo) [1] algorithm and the Balls (Balls) algorithm [1].
The performance is evaluated on three datasets: 8D5K [2], Iris [4], and Pen-Based Recognition of
Handwritten Digits (PENDIG) [5]. 8D5K is an artificial dataset. It contains 1000 points from five
multivariate Gaussian distributions (200 points each) in 8D space. Iris is a real dataset. It consists
of 150 instances of three classes (50 each). There are four numeric attributes for each instance.
PENDIG is also a real dataset. It contains a total of 7494 + 3498 = 10992 instances in 10 classes.
Each instance has 16 integer attributes.
For our approach and all those consensus clustering algorithms, we choose K-means and Ward's
algorithm as the underlying clustering algorithms. The multiple clusterings for each dataset are
obtained by varying the desired number of clusters for both K-means and Ward's algorithm. Specifically, for the test on 8D5K, we set the desired numbers of clusters as 3:1:7. Consequently, there
are 5 ? 2 = 10 different input clusterings. For Iris and PENDIG, the numbers are 3:1:7 and 8:1:12
respectively. So there are also 10 different input clusterings for each of them.
In this paper, we use the Jaccard coefficient to measure the quality of clusterings:
$$\text{Jaccard Coefficient} = \frac{f_{11}}{f_{01} + f_{10} + f_{11}} \qquad (8)$$
where $f_{11}$ is the number of object pairs which are in the same class and in the same cluster; $f_{01}$ is
the number of object pairs which are in different classes but in the same cluster; and $f_{10}$ is the number
of object pairs which are in the same class but in different clusters.
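The pair counts can be obtained from a class-by-cluster contingency table rather than by enumerating all pairs; the following sketch (ours, not from the paper) does exactly that:

import numpy as np

def jaccard_coefficient(true_labels, cluster_labels):
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    classes, clusters = np.unique(true_labels), np.unique(cluster_labels)
    # Contingency table: object counts per (class, cluster) cell.
    cont = np.array([[np.sum((true_labels == a) & (cluster_labels == b))
                      for b in clusters] for a in classes])
    f11 = int((cont * (cont - 1) // 2).sum())                     # same class, same cluster
    same_cluster = sum(c * (c - 1) // 2 for c in cont.sum(0))     # f11 + f01
    same_class = sum(c * (c - 1) // 2 for c in cont.sum(1))       # f11 + f10
    f01, f10 = same_cluster - f11, same_class - f11
    return f11 / (f01 + f10 + f11)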
Figure 3: Results of comparative experiments on different datasets. Best viewed in color.
As shown in Fig. 3, the performance of our approach is better than those of the other consensus
clustering algorithms. The main reason is that, with a range of different input parameters, most
clusterings generated by the underlying clustering algorithms are not good enough. A "consensus"
based on these moderate or even bad input clusterings, with far fewer good ones, cannot be good.
In contrast, by selecting an optimal subset of the clusters, our approach can still achieve superior
performance as long as there are good clusters in the input clusterings. Therefore, our approach is
much more robust, as confirmed by the results of this experiment.
5 Conclusion
The contribution of this paper is twofold: 1. We formulate clustering aggregation as an MWIS problem with a special structure. 2. We propose a novel variant of the simulated annealing method, which
takes advantage of the special structure, for solving this special MWIS problem. Experimental results confirm that: 1. our approach to clustering aggregation automatically decides the optimal
number of clusters; 2. it does not require any parameter tuning for the underlying clustering algorithms; 3. it can combine the advantages of different underlying clustering algorithms to achieve
superior performance; 4. it is robust against moderate or even bad input clusterings.
Acknowledgments
This work was supported by US Department of Energy Award 71498-001-09 and by US National
Science Foundation Grants IIS-0812118, BCS-0924164, OIA-1027897.
References
[1] Gionis, A., Mannila, H. & Tsaparas, P. (2005) "Clustering aggregation". Proceedings of the 21st ICDE.
[2] Strehl, A. & Ghosh, J. (2003) "Cluster ensembles - a knowledge reuse framework for combining multiple partitions". The Journal of Machine Learning Research (3):583-617.
[3] Brendel, W. & Todorovic, S. (2010) "Segmentation as maximum-weight independent set". Neural Information Processing Systems.
[4] Fisher, R.A. (1936) "The use of multiple measurements in taxonomic problems". Annals of Eugenics (7) Part II: 179-188.
[5] Alimoglu, F. & Alpaydin, E. (1996) "Methods of Combining Multiple Classifiers Based on Different Representations for Pen-based Handwriting Recognition". Proceedings of the Fifth Turkish Artificial Intelligence and Artificial Neural Networks Symposium (TAINN 96).
[6] Franti, P. & Virmajoki, O. (2006) "Iterative shrinking method for clustering problems". Pattern Recognition 39 (5), 761-765.
[7] Lloyd, S. P. (1982) "Least squares quantization in PCM". IEEE Transactions on Information Theory 28 (2): 129-137.
[8] Ester, M., Kriegel, H.-P., Sander, J. & Xu, X. (1996) "A density-based algorithm for discovering clusters in large spatial databases with noise". Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96).
[9] Fred, A.L.N. & Jain, A.K. (2002) "Data clustering using evidence accumulation". Proceedings of the International Conference on Pattern Recognition (ICPR), 276-280.
[10] Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. (1983) "Optimization by Simulated Annealing". Science 220 (4598): 671-680.
[11] Singh, V., Mukherjee, L., Peng, J. & Xu, J. (2008) "Ensemble Clustering using Semidefinite Programming". Advances in Neural Information Processing Systems 20: 1353-1360.
[12] Nguyen, N. & Caruana, R. (2007) "Consensus clusterings". IEEE International Conference on Data Mining, ICDM 2007, 607-612.
[13] Fern, X. Z. & Brodley, C. E. (2004) "Solving cluster ensemble problems by bipartite graph partitioning". Proc. of International Conference on Machine Learning, page 36.
[14] Topchy, A., Jain, A.K. & Punch, W. (2003) "Combining multiple weak clusterings". IEEE International Conference on Data Mining, ICDM 2003, 331-338.
Pointwise Tracking the Optimal Regression Function
Ran El-Yaniv and Yair Wiener
Computer Science Department
Technion ? Israel Institute of Technology
{rani,wyair}@{cs,tx}.technion.ac.il
Abstract
This paper examines the possibility of a "reject option" in the context of least
squares regression. It is shown that using rejection it is theoretically possible to
learn "selective" regressors that can ε-pointwise track the best regressor in hindsight from the same hypothesis class, while rejecting only a bounded portion of
the domain. Moreover, the rejected volume vanishes with the training set size,
under certain conditions. We then develop an efficient and exact implementation of
these selective regressors for the case of linear regression. Empirical evaluation
over a suite of real-world datasets corroborates the theoretical analysis and indicates that our selective regressors can provide a substantial advantage by reducing
estimation error.
1 Introduction
Consider a standard least squares regression problem. Given m input-output training pairs,
(x1 , y1 ), . . . , (xm , ym ), we are required to learn a predictor, f? ? F, capable of generating accurate
output predictions, f?(x) ? R, for any input x. Assuming that input-output pairs are i.i.d. realizations of some unknown stochastic source, P (x, y), we would like to choose f? so as to minimize the
standard least squares risk functional,
$$R(\hat{f}) = \int (y - \hat{f}(x))^2 \, dP(x, y).$$
Let f ? = argminf ?F R(f ) be the optimal predictor in hindsight (based on full knowledge of P ).
A classical result in statistical learning is that under certain structural conditions on F and possibly
on P , one can learn a regressor that approaches the average optimal performance, R(f ? ), when the
sample size, m, approaches infinity [1].
In this paper we contemplate the challenge of pointwise tracking the optimal predictions of f ? after
observing only a finite (and possibly small) set of training samples. It turns out that meeting this
difficult task can be made possible by harnessing the "reject option" compromise from classification.
Instead of predicting the output for the entire input domain, the regressor is allowed to abstain from
prediction for part of the domain. We present here new techniques for regression with a reject
option, capable of achieving pointwise optimality on substantial parts of the input domain, under
certain conditions.
Section 3 introduces a general strategy for learning selective regressors. This strategy is guaranteed
to achieve ε-pointwise optimality (defined in Section 2) throughout its region of action. This result
is proved in Theorem 3.8, which also shows that the guaranteed coverage increases monotonically
with the training sample size and converges to 1. This type of guarantee is quite strong, as it ensures
tight tracking of individual optimal predictions made by f ? , while covering a substantial portion of
the input domain.
At the outset, the general strategy we propose appears to be out of reach because accept/reject
decisions require the computation of a supremum over a very large, and possibly infinite, hypothesis
subset. In Section 4, however, we show how to compute the strategy for each point of interest
using only two constrained ERM calculations. This useful reduction, shown in Lemma 4.2, opens
possibilities for efficient implementations of optimal selective regressors whenever the hypothesis
class of interest allows for efficient (constrained) ERM (see Definition 4.1).
For the case of linear least squares regression we utilize known techniques for both ERM and constrained ERM and derive in Section 5 exact implementation achieving pointwise optimal selective
regression. The resulting algorithm is efficient and can be easily implemented using standard matrix
operations including (pseudo) inversion. Theorem 5.3 in this section states a novel pointwise bound
on the difference between the prediction of an ERM linear regressor and the prediction of f ? for
each individual point. Finally, in Section 6 we present numerical examples over a suite of real-world
regression datasets demonstrating the effectiveness of our methods, and indicating that substantial
performance improvements can be gained by using selective regression.
Related work. Utilizations of a reject option are quite common in classification where this technique
was initiated more than 50 years ago with Chow's pioneering work [2, 3]. However, the reject
option is only scarcely and anecdotally mentioned in the context of regression. In [4] a boosting
algorithm for regression is proposed and a few reject mechanisms are considered, applied both
on the aggregate decision and/or on the underlying weak regressors. A straightforward threshold-based reject mechanism (rejecting low response values) is applied in [5] on top of support vector
regression. This mechanism was found to improve false positive rates.
The present paper is inspired and draws upon recent results on selective classification [6, 7, 8],
and can be viewed as a natural continuation of the results of [8]. In particular, we adapt the basic
definitions of selectivity and the general outline of the derivation and strategy presented in [8].
2 Selective regression and other preliminary definitions
We begin with a definition of the following general and standard regression setting. A finite training
sample of m labeled examples, $S_m \triangleq \{(x_i, y_i)\}_{i=1}^{m} \subseteq (\mathcal{X} \times \mathcal{Y})^m$, is observed, where $\mathcal{X}$ is some
feature space and $\mathcal{Y} \subseteq \mathbb{R}$. Using $S_m$ we are required to select a regressor $\hat{f} \in \mathcal{F}$, where $\mathcal{F}$ is a fixed
hypothesis class containing potential regressors of the form f : X ? Y. It is desired that predictions
f?(x), for unseen instances x, will be as accurate as possible. We assume that pairs (x, y), including
training instances, are sampled i.i.d. from some unknown stochastic source, P (x, y), defined over
X ? Y. Given a loss function, ? : Y ? Y ? [0, ?), we quantify the prediction quality of any f
through its true error or risk, R(f ), defined as its expected loss with respect to P ,
$$R(f) \triangleq E_{(x,y)} \{\ell(f(x), y)\} = \int \ell(f(x), y)\, dP(x, y).$$
While $R(f)$ is an unknown quantity, we do observe the empirical error of f, defined as
$$\hat{R}(f) \triangleq \frac{1}{m} \sum_{i=1}^{m} \ell(f(x_i), y_i).$$
Let $\hat{f} \triangleq \arg\inf_{f \in \mathcal{F}} \hat{R}(f)$ be the empirical risk minimizer (ERM), and $f^* \triangleq \arg\inf_{f \in \mathcal{F}} R(f)$ the
true risk minimizer.
Next we define selective regression using the following definitions, which are taken, as is, from the
selective classification setting of [6]. Here again, we are given a training sample Sm as above, but
are now required to output a selective regressor defined to be a pair (f, g), with f ? F being a
standard regressor, and $g : \mathcal{X} \to \{0, 1\}$ a selection function, which serves as a qualifier for f as
follows. For any $x \in \mathcal{X}$,
$$(f, g)(x) \triangleq \begin{cases} \text{reject}, & \text{if } g(x) = 0; \\ f(x), & \text{if } g(x) = 1. \end{cases} \qquad (1)$$
Thus, the selective regressor abstains from prediction at a point x iff g(x) = 0. The general performance of a selective regressor is characterized in terms of two quantities: coverage and risk. The
coverage of (f, g) is
$$\Phi(f, g) \triangleq E_P[g(x)].$$
The true risk of (f, g) is the risk of f restricted to its region of activity as qualified by g, and
normalized by its coverage,
$$R(f, g) \triangleq \frac{E_P[\ell(f(x), y) \cdot g(x)]}{\Phi(f, g)}.$$
We say that the selective regressor (f, g) is ε-pointwise optimal if
$$\forall x \in \{x \in \mathcal{X} : g(x) = 1\}, \quad |f(x) - f^*(x)| \leq \varepsilon.$$
Note that pointwise optimality is a considerably stronger property than risk, which only refers to
average performance.
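On a finite test set, the empirical analogues of coverage and selective risk are straightforward to compute; a minimal sketch (ours, assuming f and g are vectorized callables and the squared loss):

import numpy as np

def empirical_coverage_and_risk(f, g, X, y):
    accept = g(X).astype(bool)            # g(x) = 1 on accepted points
    coverage = accept.mean()
    if coverage == 0:
        return 0.0, float('nan')          # risk undefined on an empty region
    risk = float(np.mean((f(X[accept]) - y[accept]) ** 2))
    return float(coverage), risk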
We define a (standard) distance metric over the hypothesis class F . For any probability measure ?
on X , let L2 (?) be the Hilbert space of functions from X to R, with the inner product defined as
$$\langle f, g \rangle \triangleq E_{\mu(x)} f(x) g(x).$$
The distance function induced by the inner product is
$$\rho(f, g) \triangleq \|f - g\| = \sqrt{\langle f - g, f - g \rangle} = \sqrt{E_{\mu(x)} (f(x) - g(x))^2}.$$
Finally, for any $f \in \mathcal{F}$ we define a ball in $\mathcal{F}$ of radius r around f,
$$B(f, r) \triangleq \{f' \in \mathcal{F} : \rho(f, f') \leq r\}.$$
3 Pointwise optimality with bounded coverage
In this section we analyze the following strategy for learning a selective regressor, which turns out
to ensure ε-pointwise optimality with monotonically increasing coverage (with m). We call it a
strategy rather than an algorithm because it is not at all clear at the outset how to implement it. In
subsequent sections we develop efficient and precise implementation for linear regression.
We require the following definition. For any hypothesis class F , target hypothesis f ? F, distribution P , sample Sm , and real r > 0, define,
$$\mathcal{V}(f, r) \triangleq \{f' \in \mathcal{F} : R(f') \leq R(f) + r\} \quad \text{and} \quad \hat{\mathcal{V}}(f, r) \triangleq \left\{f' \in \mathcal{F} : \hat{R}(f') \leq \hat{R}(f) + r\right\}. \qquad (2)$$
Strategy 1: A learning strategy for ε-pointwise optimal selective regressors
Input: $S_m$, m, δ, $\mathcal{F}$, ε
Output: A selective regressor $(\hat{f}, g)$ achieving ε-pointwise optimality
1: Set $\hat{f}$ = ERM$(\mathcal{F}, S_m)$, i.e., $\hat{f}$ is any empirical risk minimizer from $\mathcal{F}$
2: Set $G = \hat{\mathcal{V}}\!\left(\hat{f},\ \left(\sigma(m, \delta/4, \mathcal{F})^2 - 1\right) \cdot \hat{R}(\hat{f})\right)$   /* see Definition 3.3 and (2) */
3: Construct g such that $g(x) = 1 \iff \forall f' \in G,\ |f'(x) - \hat{f}(x)| < \varepsilon$
For the sake of brevity, throughout this section we often write f instead of f (x), where f is any
regressor. The following Lemma 3.1 is based on the proof of Lemma A.12 in [9].
Lemma 3.1 ([9]). For any f ? F. Let ? : Y ? Y ? [0, ?) be the squared loss function and F be
a convex hypothesis class. Then, $E_{(x,y)} (f^*(x) - y)(f(x) - f^*(x)) \geq 0$.
Lemma 3.2. Under the same conditions of Lemma 3.1, for any $r > 0$, $\mathcal{V}(f^*, r) \subseteq B(f^*, \sqrt{r})$.
Proof. If $f \in \mathcal{V}(f^*, r)$, then by definition,
$$R(f) \leq R(f^*) + r. \qquad (3)$$
$$R(f) - R(f^*) = E\{\ell(f, y) - \ell(f^*, y)\} = E\left\{(f - y)^2 - (f^* - y)^2\right\} = E\left\{(f - f^*)^2 - 2(y - f^*)(f - f^*)\right\} = \rho^2(f, f^*) + 2E(f^* - y)(f - f^*).$$
Applying Lemma 3.1 and (3) we get $\rho(f, f^*) \leq \sqrt{R(f) - R(f^*)} \leq \sqrt{r}$.
Definition 3.3 (Multiplicative Risk Bounds). Let $\sigma_\delta \triangleq \sigma(m, \delta, \mathcal{F})$ be defined such that for any
$0 < \delta < 1$, with probability of at least $1 - \delta$ over the choice of $S_m$ from $P^m$, any hypothesis $f \in \mathcal{F}$
satisfies
$$R(f) \leq \hat{R}(f) \cdot \sigma(m, \delta, \mathcal{F}).$$
Similarly, the reverse bound, $\hat{R}(f) \leq R(f) \cdot \sigma(m, \delta, \mathcal{F})$, holds under the same conditions.
Remark 3.1. The purpose of Definition 3.3 is to facilitate the use of any (known) risk bound as a
plug-in component in subsequent derivations. We define σ as a multiplicative bound, which is common in the treatment of unbounded loss functions such as the squared loss (see the discussion by Vapnik
in [10], page 993). Instances of such bounds can be extracted, e.g., from [11] (Theorem 1), and from
bounds discussed in [10]. We also developed the entire set of results that follow while relying on
additive bounds, which are common when using bounded loss functions. These developments will
be presented in the full version of the paper.
The proof of the following lemma follows closely the proof of Lemma 5.3 in [8]. However, it
considers a multiplicative risk bound rather than additive.
Lemma 3.4. For any r > 0, and 0 < ? < 1, with probability of at least 1 ? ?,
$$\hat{\mathcal{V}}(\hat{f}, r) \subseteq \mathcal{V}\!\left(f^*,\ (\sigma_{\delta/2}^2 - 1) \cdot R(f^*) + r \cdot \sigma_{\delta/2}\right).$$
Lemma 3.5. Let F be a convex hypothesis space, ? : Y ? Y ? [0, ?), a convex loss function, and
f? be an ERM. Then, with probability of at least 1 ? ?/2, for any x ? X ,
$$|f^*(x) - \hat{f}(x)| \leq \sup_{f \in \hat{\mathcal{V}}\left(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f})\right)} |f(x) - \hat{f}(x)|.$$
Proof. Applying the multiplicative risk bound, we get that with probability of at least $1 - \delta/4$,
$$\hat{R}(f^*) \leq R(f^*) \cdot \sigma_{\delta/4}.$$
Since $f^*$ minimizes the true error, $R(f^*) \leq R(\hat{f})$. Applying the multiplicative risk bound on $\hat{f}$,
we know also that with probability of at least $1 - \delta/4$, $R(\hat{f}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}$. Combining the three
inequalities by using the union bound, we get that with probability of at least $1 - \delta/2$,
$$\hat{R}(f^*) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2 = \hat{R}(\hat{f}) + \left(\sigma_{\delta/4}^2 - 1\right) \cdot \hat{R}(\hat{f}).$$
Hence, with probability of at least $1 - \delta/2$ we get $f^* \in \hat{\mathcal{V}}\!\left(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f})\right)$.
Let G ? F. We generalize the concept of disagreement set [12, 6] to real-valued functions. The
ε-disagreement set w.r.t. G is defined as
$$DIS_\varepsilon(G) \triangleq \{x \in \mathcal{X} : \exists f_1, f_2 \in G \ \text{s.t.}\ |f_1(x) - f_2(x)| \geq \varepsilon\}.$$
For any $G \subseteq \mathcal{F}$, distribution P, and $\varepsilon > 0$, we define $\Delta_\varepsilon G \triangleq Pr_P\{DIS_\varepsilon(G)\}$. In the following
definition we extend Hanneke's disagreement coefficient [13] to the case of real-valued functions.¹
Definition 3.6 (ε-disagreement coefficient). The ε-disagreement coefficient of $\mathcal{F}$ under P is
$$\theta_\varepsilon \triangleq \sup_{r > r_0} \frac{\Delta_\varepsilon B(f^*, r)}{r}. \qquad (4)$$
Throughout this paper we set r0 = 0. Our analyses for arbitrary r0 > 0 will be presented in the full
version of this paper.
The proof of the following technical statement relies on the same technique used for the proof of
Theorem 5.4 in [8].
¹ Our attempts to utilize a different known extension of the disagreement coefficient [14] were not successful.
Specifically, the coefficient proposed there is unbounded for the squared loss function when $\mathcal{Y}$ is unbounded.
Lemma 3.7. Let F be a convex hypothesis class, and assume ? : Y ? Y ? [0, ?) is the squared
loss function. Let $\varepsilon > 0$ be given. Assume that $\mathcal{F}$ has ε-disagreement coefficient $\theta_\varepsilon$. Then, for any
$r > 0$ and $0 < \delta < 1$, with probability of at least $1 - \delta$,
$$\Delta_\varepsilon \hat{\mathcal{V}}(\hat{f}, r) \leq \theta_\varepsilon \sqrt{(\sigma_{\delta/2}^2 - 1) \cdot R(f^*) + r \cdot \sigma_{\delta/2}}.$$
The following theorem is the main result of this section, showing that Strategy 1 achieves ε-pointwise
optimality with a meaningful coverage that converges to 1. Although $R(f^*)$ in the bound (5) is an
unknown quantity, it is still a constant, and as σ approaches 1, the coverage lower bound approaches
1 as well. When using a typical additive risk bound, $R(f^*)$ disappears from the RHS.
Theorem 3.8. Assume the conditions of Lemma 3.7 hold. Let (f, g) be the selective regressor chosen
by Strategy 1. Then, with probability of at least 1 ? ?,
$$\Phi(f, g) \geq 1 - \theta_\varepsilon \sqrt{\left(\sigma_{\delta/4}^2 - 1\right) \cdot \left(R(f^*) + \sigma_{\delta/4} \cdot \hat{R}(\hat{f})\right)} \qquad (5)$$
and
$$\forall x \in \{x \in \mathcal{X} : g(x) = 1\}, \quad |f(x) - f^*(x)| < \varepsilon.$$
Proof. According to Strategy 1, if $g(x) = 1$ then $\sup_{f \in \hat{\mathcal{V}}(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f}))} |f(x) - \hat{f}(x)| < \varepsilon$.
Applying Lemma 3.5 we get that, with probability of at least $1 - \delta/2$,
$$\forall x \in \{x \in \mathcal{X} : g(x) = 1\}, \quad |f(x) - f^*(x)| < \varepsilon.$$
Since $\hat{f} \in \hat{\mathcal{V}}\!\left(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f})\right) = G$ we get
$$\Phi(f, g) = E\{g(X)\} = E\left\{I\!\left(\sup_{f \in G} |f(x) - \hat{f}(x)| < \varepsilon\right)\right\} = 1 - E\left\{I\!\left(\sup_{f \in G} |f(x) - \hat{f}(x)| \geq \varepsilon\right)\right\} \geq 1 - E\left\{I\!\left(\sup_{f_1, f_2 \in G} |f_1(x) - f_2(x)| \geq \varepsilon\right)\right\} = 1 - \Delta_\varepsilon G.$$
Applying Lemma 3.7 and the union bound, we conclude that with probability of at least $1 - \delta$,
$$\Phi(f, g) = E\{g(X)\} \geq 1 - \theta_\varepsilon \sqrt{\left(\sigma_{\delta/4}^2 - 1\right) \cdot \left(R(f^*) + \sigma_{\delta/4} \cdot \hat{R}(\hat{f})\right)}.$$
4 Rejection via constrained ERM
In Strategy 1 we are required to track the supremum of a possibly infinite hypothesis subset, which
in general might be intractable. The following Lemma 4.2 reduces the problem of calculating the
supremum to a problem of calculating a constrained ERM for two hypotheses.
Definition 4.1 (constrained ERM). Let $x \in \mathcal{X}$ and $\zeta \in \mathbb{R}$ be given. Define
$$\hat{f}_{\zeta,x} \triangleq \operatorname{argmin}_{f \in \mathcal{F}} \left\{\hat{R}(f) \mid f(x) = \hat{f}(x) + \zeta\right\},$$
where f?(x) is, as usual, the value of the unconstrained ERM regressor at point x.
Lemma 4.2. Let F be a convex hypothesis space, and ? : Y ? Y ? [0, ?), a convex loss function.
Let $\varepsilon > 0$ be given, and let (f, g) be a selective regressor chosen by Strategy 1 after observing the
training sample $S_m$. Let $\hat{f}$ be an ERM. Then,
$$g(x) = 0 \iff \hat{R}(\hat{f}_{\varepsilon,x}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2 \ \lor\ \hat{R}(\hat{f}_{-\varepsilon,x}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2.$$
Proof. Let $G \triangleq \hat{\mathcal{V}}\!\left(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f})\right)$, and assume there exists $f \in G$ such that $|f(x) - \hat{f}(x)| \geq \varepsilon$.
Assume w.l.o.g. (the other case is symmetric) that $f(x) - \hat{f}(x) = a \geq \varepsilon$. Since $\mathcal{F}$ is convex,
$$f' = \left(1 - \frac{\varepsilon}{a}\right) \cdot \hat{f} + \frac{\varepsilon}{a} \cdot f \in \mathcal{F}.$$
We thus have,
$$f'(x) = \left(1 - \frac{\varepsilon}{a}\right) \hat{f}(x) + \frac{\varepsilon}{a} f(x) = \left(1 - \frac{\varepsilon}{a}\right) \hat{f}(x) + \frac{\varepsilon}{a} \left(\hat{f}(x) + a\right) = \hat{f}(x) + \varepsilon.$$
Therefore, by the definition of $\hat{f}_{\varepsilon,x}$, and using the convexity of $\ell$ together with Jensen's inequality,
$$\hat{R}(\hat{f}_{\varepsilon,x}) \leq \hat{R}(f') = \frac{1}{m} \sum_{i=1}^{m} \ell(f'(x_i), y_i) = \frac{1}{m} \sum_{i=1}^{m} \ell\!\left(\left(1 - \frac{\varepsilon}{a}\right) \hat{f}(x_i) + \frac{\varepsilon}{a} f(x_i),\ y_i\right)$$
$$\leq \left(1 - \frac{\varepsilon}{a}\right) \frac{1}{m} \sum_{i=1}^{m} \ell(\hat{f}(x_i), y_i) + \frac{\varepsilon}{a} \cdot \frac{1}{m} \sum_{i=1}^{m} \ell(f(x_i), y_i) = \left(1 - \frac{\varepsilon}{a}\right) \hat{R}(\hat{f}) + \frac{\varepsilon}{a} \hat{R}(f)$$
$$\leq \left(1 - \frac{\varepsilon}{a}\right) \hat{R}(\hat{f}) + \frac{\varepsilon}{a} \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2 = \hat{R}(\hat{f}) + \frac{\varepsilon}{a} \left(\sigma_{\delta/4}^2 - 1\right) \hat{R}(\hat{f}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2.$$
As for the other direction, if $\hat{R}(\hat{f}_{\varepsilon,x}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2$, then $\hat{f}_{\varepsilon,x} \in G$ and $\hat{f}_{\varepsilon,x}(x) - \hat{f}(x) = \varepsilon$.
So far we have discussed the case where ε is given, and our objective is to find an ε-pointwise
optimal regressor. Lemma 4.2 provides the means to compute such an optimal regressor assuming
that a method to compute a constrained ERM is available (as is the case for squared-loss linear
regressors; see the next section). However, as was discussed in [6], in many cases our objective is to
explore the entire risk-coverage trade-off, in other words, to get a pointwise bound on |f ? (x)?f (x)|,
i.e., individually for any test point x. The following theorem states such a pointwise bound.
Theorem 4.3. Let F be a convex hypothesis class, ? : Y ? Y ? [0, ?), a convex loss function, and
let f? be an ERM. Then, with probability of at least 1 ? ?/2 over the choice of Sm from P m , for any
x ? X,
$$|f^*(x) - \hat{f}(x)| \leq \sup_{\zeta \in \mathbb{R}} \left\{|\zeta| : \hat{R}(\hat{f}_{\zeta,x}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2\right\}.$$
Proof. Define $\tilde{f} \triangleq \operatorname{argmax}_{f \in \hat{\mathcal{V}}(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f}))} |f(x) - \hat{f}(x)|$. Assume w.l.o.g. (the other case is symmetric)
that $\tilde{f}(x) = \hat{f}(x) + a$. Following Definition 4.1 we get $\hat{R}(\hat{f}_{a,x}) \leq \hat{R}(\tilde{f}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2$. Define
$\zeta^* = \sup_{\zeta \in \mathbb{R}} \left\{|\zeta| : \hat{R}(\hat{f}_{\zeta,x}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2\right\}$. We thus have,
$$\sup_{f \in \hat{\mathcal{V}}(\hat{f},\ (\sigma_{\delta/4}^2 - 1) \cdot \hat{R}(\hat{f}))} |f(x) - \hat{f}(x)| = a \leq \zeta^*.$$
An application of Lemma 3.5 completes the proof.
We conclude this section with a general result on the monotonicity of the empirical risk attained by
constrained ERM regressors. This property, which will be utilized in the next section, can be easily
proved using a simple application of Jensen's inequality.
Lemma 4.4 (Monotonicity). Let $\mathcal{F}$ be a convex hypothesis space, $\ell : \mathcal{Y} \times \mathcal{Y} \to [0, \infty)$ a convex
loss function, and $0 \leq \zeta_1 < \zeta_2$ be given. Then,
$$\hat{R}(\hat{f}_{\zeta_1,x_0}) - \hat{R}(\hat{f}) \leq \frac{\zeta_1}{\zeta_2} \left(\hat{R}(\hat{f}_{\zeta_2,x_0}) - \hat{R}(\hat{f})\right).$$
The result also holds for the case $0 \geq \zeta_1 > \zeta_2$.
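Lemma 4.4 implies that $\zeta \mapsto \hat{R}(\hat{f}_{\zeta,x})$ is monotone on each side of zero, so the supremum in Theorem 4.3 can be located by bisection given any constrained-ERM oracle. A schematic sketch (ours; cerm_risk is an assumed oracle returning $\hat{R}(\hat{f}_{\zeta,x})$, and the search cap zeta_max is an arbitrary assumption):

def pointwise_bound(cerm_risk, x, risk_hat, sigma2, zeta_max=1e6, iters=60):
    # Largest |zeta| with cerm_risk(x, zeta) <= risk_hat * sigma2, searched on each side of 0.
    def side(sign):
        lo, hi = 0.0, zeta_max
        for _ in range(iters):          # bisection; valid because the risk is monotone in |zeta|
            mid = 0.5 * (lo + hi)
            if cerm_risk(x, sign * mid) <= risk_hat * sigma2:
                lo = mid
            else:
                hi = mid
        return lo
    return max(side(+1.0), side(-1.0))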
5 Selective linear regression
We now restrict attention to linear least squares regression (LLSR), and, relying on Theorem 4.3 and
Lemma 4.4, as well as on known closed-form expressions for LLSR, we derive efficient implementation of Strategy 1 and a new pointwise bound. Let X be an m ? d training sample matrix whose
ith row, xi ? Rd , is a feature vector. Let y ? Rm be a column vector of training labels.
Lemma 5.1 (ordinary least-squares estimate [15]). The ordinary least squares (OLS) solution of
the following optimization problem, $\min_\beta \|X\beta - y\|^2$, is given by $\hat{\beta} \triangleq (X^T X)^+ X^T y$, where the
sign $+$ represents the pseudoinverse.
Lemma 5.2 (constrained least-squares estimate [15], page 166). Let $x_0$ be a row vector and c a
label. The constrained least-squares (CLS) solution of the following optimization problem
$$\text{minimize } \|X\beta - y\|^2 \quad \text{s.t. } x_0 \beta = c,$$
is given by $\hat{\beta}_C(c) \triangleq \hat{\beta} + (X^T X)^+ x_0^T \left(x_0 (X^T X)^+ x_0^T\right)^+ \left(c - x_0 \hat{\beta}\right)$, where $\hat{\beta}$ is the OLS solution.
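Both estimates are one-liners with a pseudoinverse; a minimal numpy sketch of Lemmas 5.1 and 5.2 (ours; x0 is assumed to be a 2-d row array of shape (1, d)):

import numpy as np

def ols(X, y):
    return np.linalg.pinv(X.T @ X) @ X.T @ y            # beta_hat, Lemma 5.1

def cls(X, y, x0, c):
    beta = ols(X, y)
    P = np.linalg.pinv(X.T @ X)
    K = P @ x0.T @ np.linalg.pinv(x0 @ P @ x0.T)        # same K as in Theorem 5.3 below
    return beta + K @ (c - x0 @ beta)                   # beta_C(c), Lemma 5.2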
Theorem 5.3. Let F be the class of linear regressors, and let f? be an ERM. Then, with probability
of at least $1 - \delta$ over choices of $S_m$, for any test point $x_0$ we have,
$$|f^*(x_0) - \hat{f}(x_0)| \leq \frac{\|X\hat{\beta} - y\|}{\|XK\|} \sqrt{\sigma_{\delta/4}^2 - 1}, \quad \text{where } K = (X^T X)^+ x_0^T \left(x_0 (X^T X)^+ x_0^T\right)^+.$$
Proof. According to Lemma 4.4, for the squared loss, $\hat{R}(\hat{f}_{\zeta,x_0})$ is strictly monotonically increasing for
$\zeta > 0$, and decreasing for $\zeta < 0$. Therefore, the equation $\hat{R}(\hat{f}_{\zeta,x_0}) = \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2$, where $\zeta$ is the
unknown, has precisely two solutions for any $\sigma > 1$. Denoting these solutions by $\zeta_1, \zeta_2$ we get,
$$\sup_{\zeta \in \mathbb{R}} \left\{|\zeta| : \hat{R}(\hat{f}_{\zeta,x_0}) \leq \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2\right\} = \max(|\zeta_1|, |\zeta_2|).$$
Applying Lemmas 5.1 and 5.2 and setting $c = x_0 \hat{\beta} + \zeta$, we obtain,
$$\frac{1}{m} \left\|X \hat{\beta}_C(x_0 \hat{\beta} + \zeta) - y\right\|^2 = \hat{R}(\hat{f}_{\zeta,x_0}) = \hat{R}(\hat{f}) \cdot \sigma_{\delta/4}^2 = \frac{1}{m} \|X\hat{\beta} - y\|^2 \cdot \sigma_{\delta/4}^2.$$
Hence, $\|X\hat{\beta} + XK\zeta - y\|^2 = \|X\hat{\beta} - y\|^2 \cdot \sigma_{\delta/4}^2$, so $2(X\hat{\beta} - y)^T XK\zeta + \|XK\|^2 \zeta^2 = \|X\hat{\beta} - y\|^2 \cdot (\sigma_{\delta/4}^2 - 1)$. We note that by applying Lemma 5.1 on $(X\hat{\beta} - y)^T X$, we get,
$$(X\hat{\beta} - y)^T X = \left(X^T X (X^T X)^+ X^T y - X^T y\right)^T = (X^T y - X^T y)^T = 0.$$
Therefore, $\zeta^2 = \frac{\|X\hat{\beta} - y\|^2}{\|XK\|^2} \cdot (\sigma_{\delta/4}^2 - 1)$. Application of Theorem 4.3 completes the proof.
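Evaluating the bound of Theorem 5.3 requires only the residual norm and $\|XK\|$; a sketch (ours, with sigma2 standing for $\sigma_{\delta/4}^2$, whose value depends on the chosen risk bound):

import numpy as np

def theorem_5_3_bound(X, y, x0, sigma2):
    # x0: row vector of shape (1, d); returns the pointwise bound at x0.
    P = np.linalg.pinv(X.T @ X)
    beta = P @ X.T @ y
    K = P @ x0.T @ np.linalg.pinv(x0 @ P @ x0.T)
    resid = np.linalg.norm(X @ beta - y)
    return resid / np.linalg.norm(X @ K) * np.sqrt(sigma2 - 1.0)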
6 Numerical examples
Focusing on linear least squares regression, we empirically evaluated the proposed method. Given a
labeled dataset we randomly extracted two disjoint subsets: a training set Sm , and a test set Sn . The
selective regressor (f, g) is computed as follows. The regressor f is an ERM over Sm , and for any
coverage value $\Phi$, the function g selects a subset of $S_n$ of size $n \cdot \Phi$, including all test points with the
lowest values of the bound in Theorem 5.3.²
We compare our method to the following simple and natural 1-nearest neighbor (NN) technique for selection. Given the training set $S_m$ and the
test set $S_n$, let $NN(x)$ denote the nearest
neighbor of x in $S_m$, with corresponding distance $\delta(x) \triangleq \sqrt{\|NN(x) - x\|^2}$ to x. These $\delta(x)$
distances, corresponding to all $x \in S_n$, were used as an alternative method to reject test points in
decreasing order of their $\delta(x)$ values.
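The experimental protocol then amounts to sorting test points by their bounds; a sketch (ours), reusing theorem_5_3_bound from the previous section:

import numpy as np

def rc_curve(X_train, y_train, X_test, y_test, sigma2):
    beta = np.linalg.pinv(X_train.T @ X_train) @ X_train.T @ y_train
    bounds = np.array([theorem_5_3_bound(X_train, y_train, x[None, :], sigma2)
                       for x in X_test])
    order = np.argsort(bounds)                   # accept lowest-bound points first
    errs = (X_test[order] @ beta - y_test[order]) ** 2
    # Test risk at each coverage level c/n (the risk-coverage trade-off curve).
    return np.cumsum(errs) / (np.arange(len(errs)) + 1)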
We tested the algorithm on 10 of the 14 LIBSVM [16] regression datasets. From this repository we
took all sets that are not too small and have reasonable feature dimensionality.³ Figure 1 depicts
results obtained for five different datasets, each with training sample size m = 30, and test set size
n = 200. The figure includes a matrix of 2 ? 5 graphs. Each column corresponds to a single dataset.
Each of the graphs on the first row shows the average absolute difference between the selective
regressor (f, g) and the optimal regressor f ? (taken as an ERM over the entire dataset) as a function
of coverage, where the average is taken over the accepted instances. Our method appears as a solid
red line, and the baseline NN method as a dashed black line. Each curve point is an average over 200
independent trials (error bars represent standard error of the mean). It is evident that for all datasets
the average distance monotonically increases with coverage. Furthermore, in all cases the proposed
method significantly outperforms the NN baseline.
² We use the theorem here only for ranking test points, so any constant > 1 can be used instead of $\sigma_{\delta/4}^2$.
³ Two datasets having fewer than 200 samples, and two that have over 150,000 features, were excluded.
[Figure 1 appears here: a 2 × 5 matrix of log-scale curves for the datasets bodyfat, cadata, cpusmall, housing, and space.]
Figure 1: (top row) absolute difference between the selective regressor (f, g) and the optimal regressor f ? . (bottom row) test error of selective regressor (f, g). Our proposed method in solid red
line and the baseline method in dashed black line. In all curves the y-axis has logarithmic scale.
Each of the graphs in the second row shows the test error of the selective regressor (f, g) as a function
of coverage. This curve is known as the RC (risk-coverage) trade-off curve [6]. In this case we see
again that the test error is monotonically increasing with coverage. In four datasets out of the five
we observe a clear domination of the entire RC curve, and in one dataset the performance of our
method is statistically indistinguishable from that of the NN baseline method.
7 Concluding remarks
Rooted in the centuries-old linear least squares method of Gauss and Legendre, regression estimation remains an indispensable routine in statistical analysis, modeling and prediction. This paper
proposes a novel rejection technique allowing for a least squares regressor, learned from a finite and
possibly small training sample, to pointwise track, within its selected region of activity, the predictions of the globally optimal regressor in hindsight (from the same class). The resulting algorithm,
which is motivated and derived entirely from the theory, is efficient and practical.
Immediate plausible extensions are the handling of other types of regressions including regularized,
and kernel regression, as well as extensions to other convex loss functions such as the epsiloninsensitive loss. The presence of the ?-disagreement coefficient in our coverage bound suggests a
possible relation to active learning, since the standard version of this coefficient has a key role in
characterizing the efficiency of active learning in classification [17]. Indeed, a formal reduction of
active learning to selective classification was recently found, whereby rejected points are precisely
those points to be queried in a stream based active learning setting. Moreover, ?fast? coverage
bounds in selective classification give rise to fast rates in active learning [7]. Borrowing their intuition to our setting, one could consider devising a querying function for active regression that is
based on the pointwise bound of Theorem 5.3.
Acknowledgments
The research leading to these results has received funding from both Intel and the European Union's
Seventh Framework Programme under grant agreement n? 216886.
8
References
[1] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[2] C.K. Chow. An optimum character recognition system using decision function. IEEE Trans. Computer, 6(4):247-254, 1957.
[3] C.K. Chow. On optimum recognition error and reject trade-off. IEEE Trans. on Information Theory, 16:41-46, 1970.
[4] B. Kégl. Robust regression by boosting the median. Learning Theory and Kernel Machines, pages 258-272, 2003.
[5] Ö. Ayşegül, G. Mehmet, A. Ethem, and H. Türkan. Machine learning integration for predicting the effect of single amino acid substitutions on protein stability. BMC Structural Biology, 9.
[6] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. The Journal of Machine Learning Research, 11:1605-1641, 2010.
[7] R. El-Yaniv and Y. Wiener. Active learning via perfect selective classification. Journal of Machine Learning Research, 13:255-279, 2012.
[8] R. El-Yaniv and Y. Wiener. Agnostic selective classification. In Neural Information Processing Systems (NIPS), 2011.
[9] W.S. Lee. Agnostic Learning and Single Hidden Layer Neural Networks. PhD thesis, Australian National University, 1996.
[10] V.N. Vapnik. An overview of statistical learning theory. Neural Networks, IEEE Transactions on, 10(5):988-999, 1999.
[11] R.M. Kil and I. Koo. Generalization bounds for the regression of real-valued functions. In Proceedings of the 9th International Conference on Neural Information Processing, volume 4, pages 1766-1770, 2002.
[12] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages 353-360, 2007.
[13] S. Hanneke. Theoretical Foundations of Active Learning. PhD thesis, Carnegie Mellon University, 2009.
[14] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 49-56. ACM, 2009.
[15] J.E. Gentle. Numerical Linear Algebra for Applications in Statistics. Springer Verlag, 1998.
[16] C.C. Chang and C.J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1-27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[17] S. Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333-361, 2011.
Fast Resampling Weighted v-Statistics
Chunxiao Zhou
Mark O. Hatfield Clinical Research Center
National Institutes of Health
Bethesda, MD 20892
[email protected]
Jiseong Park
Dept of Math
George Mason Univ
Fairfax, VA 22030
[email protected]
Yun Fu
Dept of ECE
Northeastern Univ
Boston, MA 02115
[email protected]
Abstract
In this paper, a novel and computationally fast algorithm for computing weighted
v-statistics in resampling both univariate and multivariate data is proposed. To
avoid any real resampling, we have linked this problem with finite group action
and converted it into a problem of orbit enumeration. For further computational
cost reduction, an efficient method is developed to list all orbits by their symmetry orders and calculate all index function orbit sums and data function orbit
sums recursively. The computational complexity analysis shows reduction in the
computational cost from the n! or n^n level to a low-order polynomial level.
1
Introduction
Resampling methods (e.g., bootstrap, cross-validation, and permutation) [3,5] are becoming increasingly popular in statistical analysis due to their high flexibility and accuracy. They have been successfully integrated into most research topics in machine learning, such as feature selection, dimension reduction, supervised learning, unsupervised learning, reinforcement learning, and active
learning [2, 3, 4, 7, 9, 11, 12, 13, 20].
The key idea of resampling is to generate the empirical distribution of a test statistic by resampling
with or without replacement from the original observations. Then further statistical inference can
be conducted based on the empirical distribution, i.e., resampling distribution. One of the most
important problems in resampling is calculating resampling statistics, i.e., the expected values of
test statistics under the resampling distribution, because resampling statistics are compact representatives of the resampling distribution. In addition, a resampling distribution may be approximated
by a parametric model with some resampling statistics, for example, the first several moments of
a resampling distribution [5, 16]. In this paper, we focus on computing resampling weighted v-statistics [18] (see Section 2 for the formal definition). Suppose our data includes n observations,
a weighted v-statistic is a summation of products of data function terms and index function terms,
i.e., weights, over all possible k observations chosen from n observations, where k is the order of
the weighted v-statistic. If we treat our data as points in a multi-dimensional space, a weighted
v-statistic can be considered as an average of all possible weighted k-points distances. The higher k,
the more complicated interactions among observations can be modeled in the weighted v-statistic.
Machine learning researchers have already used weighted v-statistics in hypothesis testing, density
estimation, dependence measurement, data pre-processing, and classification [6, 14, 19, 21] .
Traditionally, estimation of resampling statistics is solved by random sampling since exhaustive examination of the resampling space is usually ill advised [5,16]. There is a tradeoff between accuracy
and computational cost with random sampling. To date, there is no systematic and efficient solution
to the issue of exact calculation of resampling statistics. Recently, Zhou et al. [21] proposed a recursive method to derive moments of permutation distributions (i.e., empirical distribution generated by
resampling without replacement). The key strategy is to divide the whole index set (i.e., indices of
all possible k observations) into several permutation equivalent index subsets such that the summation of the data/index function term over all permutations is invariant within each subset and can be
calculated without conducting any permutation. Therefore, moments are obtained by summing up
several subtotals. However, methods for listing all permutation equivalent index subsets and calculating of the respective cardinalities were not emphasized in the previous publication [21]. There is
also no systematic way to obtain coefficients in the recursive relationship. Even only for calculating
the first four moments of a second order resampling weighted v statistic, hundreds of index subsets
and thousands of coefficients have to be derived manually. The manual derivation is very tedious and
error-prone. In addition, Zhou?s work is limited to permutation (resampling without replacement)
and is not applicable to bootstrapping (resampling with replacement) statistics.
In this paper, we propose a novel and computationally fast algorithm for computing weighted v-statistics in resampling both univariate and multivariate data. In the proposed algorithm, the calculation of weighted v-statistics is considered as a summation of products of data function terms and
index function terms over a high-dimensional index set and all possible resamplings with or without
replacement. To avoid any resampling, we link this problem with finite group actions and convert
it into a problem of orbit enumeration [10]. For further computational cost reduction, an efficient
method has been developed to list all orbits by their symmetry order and to calculate all index function orbit sums and data function orbit sums recursively. With computational complexity analysis,
we have reduced the computational cost from the n! or n^n level to a low-order polynomial level. Detailed
proofs have been included in the supplementary material.
In comparison with previous work [21], this study gives a theoretical justification of the permutation
equivalence partition idea and extends it to other types of resamplings. We have built up a solid
theoretical framework that explains the symmetry of resampling statistics using a product of several symmetric groups. In addition, by associating this problem with finite group action, we have
developed an algorithm to enumerate all orbits by their symmetry order and generated a recursive
relationship for orbits sum calculation systematically. This is a critical improvement which makes
the whole method fully programmable and frees us from the onerous derivations in [21].
2 Basic idea
In general, people prefer choosing statistics which have some symmetric properties. All resampling
strategies, such as permutation and bootstrap, are also more or less symmetric. These facts motivated
us to reduce the computational cost by using abstract algebra.
This study is focused on computing resampling weighted v-statistics, i.e., $T(x) = \sum_{i_1=1}^{n} \cdots \sum_{i_d=1}^{n} w(i_1,\ldots,i_d)\, h(x_{i_1},\ldots,x_{i_d})$, where $x = (x_1, x_2, \ldots, x_n)$ is a collection of n observations (univariate/multivariate), w is an index function of d indices, and h is a data function of d observations. Both w and h are symmetric, i.e., invariant under permutations of the order of variables. Weighted v-statistics cover a large number of popular statistics. For example, in the case of multiple comparisons, observations are collected from g groups: the first group $(x_1,\ldots,x_{n_1})$, the second group $(x_{n_1+1},\ldots,x_{n_1+n_2})$, and the last group $(x_{n-n_g+1},\ldots,x_n)$, where $n_1, n_2, \ldots, n_g$ are the numbers of observations in each group. In order to test the difference among groups, it is common to use the modified F test statistic $T(x) = (\sum_{i=1}^{n_1} x_i)^2/n_1 + (\sum_{i=n_1+1}^{n_1+n_2} x_i)^2/n_2 + \cdots + (\sum_{i=n-n_g+1}^{n} x_i)^2/n_g$, where $n = n_1 + n_2 + \cdots + n_g$. We can rewrite the modified F statistic [3] as a second order weighted v-statistic, i.e., $T(x) = \sum_{i_1=1}^{n}\sum_{i_2=1}^{n} w(i_1,i_2)\, h(x_{i_1},x_{i_2})$, where $h(x_{i_1},x_{i_2}) = x_{i_1} x_{i_2}$ and $w(i_1,i_2) = 1/n_k$ if both $x_{i_1}$ and $x_{i_2}$ belong to the k-th group, and $w(i_1,i_2) = 0$ otherwise.
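To make the rewriting concrete, the following sketch (our own illustration, not code from the paper; the group sizes and random seed are arbitrary) checks numerically that the modified F statistic and its second-order weighted v-statistic form agree.

```python
import numpy as np

rng = np.random.default_rng(0)
group_sizes = [3, 4, 5]                      # hypothetical n_1, n_2, n_3
n = sum(group_sizes)
x = rng.normal(size=n)
labels = np.repeat(np.arange(len(group_sizes)), group_sizes)

# Direct form: sum over groups of (group sum)^2 / n_k.
f_direct = sum(x[labels == g].sum() ** 2 / s for g, s in enumerate(group_sizes))

# v-statistic form: T(x) = sum_{i1,i2} w(i1,i2) h(x_i1, x_i2),
# with h(x_i1, x_i2) = x_i1 * x_i2 and w(i1,i2) = 1/n_k if both are in group k.
w = np.zeros((n, n))
for g, s in enumerate(group_sizes):
    idx = np.where(labels == g)[0]
    w[np.ix_(idx, idx)] = 1.0 / s
f_vstat = np.einsum("ij,i,j->", w, x, x)     # sum_{i1,i2} w(i1,i2) x_i1 x_i2

assert np.isclose(f_direct, f_vstat)
```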
The r-th moment of a resampling weighted v-statistic is:
$$E\{T^r(x)\} = E\Big\{\Big(\sum_{i_1,\ldots,i_d} w(i_1,\ldots,i_d)\, h(x_{\sigma i_1},\ldots,x_{\sigma i_d})\Big)^{r}\Big\} = E\Big\{\sum_{i_1^1,\ldots,i_d^1,\ldots,i_1^r,\ldots,i_d^r}\; \prod_{k=1}^{r} w(i_1^k,\ldots,i_d^k)\, \prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\Big\}$$
$$= \frac{1}{|R|}\sum_{\sigma \in R}\; \sum_{i_1^1,\ldots,i_d^1,\ldots,i_1^r,\ldots,i_d^r}\; \prod_{k=1}^{r} w(i_1^k,\ldots,i_d^k)\, \prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k}), \qquad (1)$$
where $\sigma$ is a resampling which is uniformly distributed in the whole resampling space R. $|R|$, the number of all possible resamplings, is equal to $n!$ or $n^n$ for resampling without or with replacement.
Thus the r-th moment of a resampling weighted v-statistic can be considered as a summation of
products of data function terms and index function terms over a high-dimensional index set $U_d^r = \{1,\ldots,n\}^{dr}$ and all possible resamplings in R. Since both the index space and the resampling space are
huge, it is computationally expensive for calculating resampling statistics directly.
For terminology convenience, $\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}$ is called an index paragraph, which includes r index sentences $(i_1^k,\ldots,i_d^k)$, $k = 1,\ldots,r$, and each index sentence has d index words $i_j^k$, $j = 1,\ldots,d$. Note that there are three different types of symmetry in computing resampling
weighted v-statistics. The first symmetry is that permutation of the order of index words will not
affect the result since the data function is assumed to be symmetric. The second symmetry is the
permutation of the order of index sentences since multiplication is commutative. The third symmetry
is that each possible resampling is equally likely to be chosen.
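Before exploiting these symmetries, it helps to keep the brute-force baseline in mind. The sketch below (ours, not the paper's code; feasible only for tiny n, and specialized to $h(x_i, x_j) = x_i x_j$) computes the r-th permutation moment by literal enumeration of all n! permutations, which is exactly the cost the orbit method avoids.

```python
import itertools
import math
import numpy as np

def perm_moment_bruteforce(w, x, r):
    """r-th moment of a second-order weighted v-statistic under permutation."""
    n = len(x)
    total = 0.0
    for sigma in itertools.permutations(range(n)):
        xs = x[list(sigma)]                      # resampled observations
        t = np.einsum("ij,i,j->", w, xs, xs)     # T(x_sigma) with h = product
        total += t ** r
    return total / math.factorial(n)

rng = np.random.default_rng(1)
n = 6
w = rng.normal(size=(n, n)); w = (w + w.T) / 2   # symmetric index function
x = rng.normal(size=n)
print(perm_moment_bruteforce(w, x, r=2))          # exact, but O(n!) work
```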
In order to reduce the computational cost, first, the summation order is exchanged,
$$E\{T^r(x)\} = \sum_{i_1^1,\ldots,i_d^1,\ldots,i_1^r,\ldots,i_d^r}\; \prod_{k=1}^{r} w(i_1^k,\ldots,i_d^k)\; E\Big\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\Big\}, \qquad (2)$$
where $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\} = \frac{1}{|R|}\sum_{\sigma \in R} \prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})$.
The whole index set $U_d^r = \{1,\ldots,n\}^{dr} = \big\{\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\} \mid i_m^k \in \{1,\ldots,n\};\ m = 1,\ldots,d;\ k = 1,\ldots,r\big\}$ is then divided into disjoint index subsets, in which $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\}$ is invariant. The above index set partition simplifies the computing of resampling statistics in the following sense: (a) we only need to calculate $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\}$ once per each index subset, (b) due to the symmetry of resampling, the calculation of $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\}$ is equivalent to calculating the average of all data function terms within the corresponding index subset, so we can completely replace all resamplings with simple summations, and (c) for further computational cost reduction, we can sort all index subsets in their symmetry order and calculate all index subset summations recursively. We will discuss the details in the following sections for both resampling without or with replacement.
The abstract algebra terms used in this paper are listed as follows.
=
Terminology. A group is a non-empty set G with a binary operation satisfying the following axioms:
closure, associativity, identity, and invertibility. The symmetric group on a set, denoted as Sn , is the
group consisting of all bijections or permutations of the set. A semigroup has an associative binary
operation defined and is closed with respect to this operation, but not all its elements need to be
invertible. A monoid is a semigroup with an identity element. A set of generators is a subset of
group elements such that all the elements in the group can be generated by repeated composition of
the generators. Let X be a set and G be a group. A group action is a mapping $G \times X \to X$ which satisfies the following two axioms: (a) $e \cdot x \mapsto x$ for all $x \in X$, and (b) $a \cdot (b \cdot x) = (ab) \cdot x$ for all $a, b \in G$ and $x \in X$. Here the '$\cdot$' denotes the action. It is well known that a group action defines an equivalence relationship on the set X, and thus provides a disjoint set partition on it. Each part of the set partition is called an orbit, which denotes the trajectory moved by all elements within the group. We use the symbol [ ] to represent an orbit. Two elements $x, y \in X$ fall into the same orbit if there exists a $g \in G$ such that $x = g \cdot y$. The set of orbits is denoted by $G \backslash X$. A transversal of orbits is a set of representatives containing exactly one element from each orbit. In this paper, we limit our discussion to only finite groups [10,17].
3 Permutation
For permutation statistics, observations are permuted in all possible ways, i.e., R = Sn . Based on
the three types of symmetry, we link the permutation statistics calculation with a group action.
Definition 1. The action of $G := S_n \times S_r \times S_d^r$ on the index set $U_d^r$ is defined as
$$(\sigma, \rho, \pi_1, \ldots, \pi_r) \cdot i_m^k := \sigma\, i^{\rho^{-1}k}_{\pi^{-1}_{\rho^{-1}k} m},$$
where $m \in \{1,\ldots,d\}$ and $k \in \{1,\ldots,r\}$.
Here, $\pi_k$ denotes the permutation of the order of index words within the k-th index sentence, $\rho$ denotes the permutation of the order of the r index sentences, and $\sigma$ denotes the permutation of the value of an index word from 1 to n. For example, let $n = 4$, $d = 2$, $r = 2$, $\pi_1 = \pi_1^{-1} = 1 \to 2, 2 \to 1$, $\pi_2 = \pi_2^{-1} = 1 \to 1, 2 \to 2$, $\rho = \rho^{-1} = 1 \to 2, 2 \to 1$, and $\sigma = 1 \to 2, 2 \to 4, 3 \to 3, 4 \to 1$; then $(\sigma, \rho, \pi_1, \pi_2) \cdot \{(1,4)(3,4)\} = \{(3,1)(1,2)\}$ via $\{(1,4)(3,4)\} \to \{(4,1)(3,4)\} \to \{(3,4)(4,1)\} \to \{(3,1)(1,2)\}$. Note that the action is defined in this way to guarantee that $G \times U_d^r \to U_d^r$ is a group action.
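The worked example can be checked mechanically. In the sketch below (a hypothetical helper of ours, with permutations written as explicit 0-based lookups), the action is applied in the three steps shown above: word permutations within sentences, sentence permutation, then value relabeling.

```python
def act(sigma, rho_inv, pi_invs, paragraph):
    """Apply (sigma, rho, pi_1..pi_r) to an index paragraph (tuple of sentences)."""
    # step 1: word m of sentence k comes from word pi_k^{-1}(m)
    step1 = [tuple(sent[pi_inv[m]] for m in range(len(sent)))
             for sent, pi_inv in zip(paragraph, pi_invs)]
    # step 2: sentence k comes from sentence rho^{-1}(k)
    step2 = [step1[j] for j in rho_inv]
    # step 3: relabel every index value by sigma
    return tuple(tuple(sigma[v] for v in sent) for sent in step2)

sigma = {1: 2, 2: 4, 3: 3, 4: 1}
rho_inv = [1, 0]                # swap the two sentences (0-based inverse images)
pi_invs = [[1, 0], [0, 1]]      # pi_1 swaps the two words, pi_2 is the identity
assert act(sigma, rho_inv, pi_invs, ((1, 4), (3, 4))) == ((3, 1), (1, 2))
```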
In most applications, both r and d are much less than the sample size n; we assume throughout this paper that $n \gg dr$.
Proposition 1. The data function sum $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\}$ is invariant within each index orbit of the group action $G := S_n \times S_r \times S_d^r$ acting on the index set $U_d^r$ as defined in Definition 1, and
$$E\Big\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\Big\} = \sum_{\{(j_1^1,\ldots,j_d^1),\ldots,(j_1^r,\ldots,j_d^r)\}\,\in\,[\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}]} \frac{\prod_{k=1}^{r} h(x_{j_1^k},\ldots,x_{j_d^k})}{\mathrm{card}\big([\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}]\big)}, \qquad (3)$$
where $\mathrm{card}([\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}])$ is the cardinality of the index orbit, i.e., the number of indices within the index orbit $[\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}]$.
Due to the invariance property of $E\{\prod_{k=1}^{r} h(x_{\sigma i_1^k},\ldots,x_{\sigma i_d^k})\}$, the calculation of permutation statistics can be simplified by summing up all index function product terms in each index orbit.
Proposition 2. The r-th moment of permutation statistics can be obtained by summing up the product of the data function orbit sum $h_\tau$ and the index function orbit sum $w_\tau$ over all index orbits,
$$E\{T^r(x)\} = \sum_{\tau \in L} \frac{w_\tau\, h_\tau}{\mathrm{card}([\tau])}, \qquad (4)$$
where $\tau = \{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}$ is a representative index paragraph, $[\tau]$ is the index orbit including $\tau$, and L is a transversal of all index orbits. The data function orbit sum is
$$h_\tau = \sum_{\{(j_1^1,\ldots,j_d^1),\ldots,(j_1^r,\ldots,j_d^r)\}\,\in\,[\tau]}\; \prod_{k=1}^{r} h(x_{j_1^k},\ldots,x_{j_d^k}), \qquad (5)$$
and the index function orbit sum is
$$w_\tau = \sum_{\{(j_1^1,\ldots,j_d^1),\ldots,(j_1^r,\ldots,j_d^r)\}\,\in\,[\tau]}\; \prod_{k=1}^{r} w(j_1^k,\ldots,j_d^k). \qquad (6)$$
Proposition 2 shows that the calculation of resampling weighted v-statistics can be solved by computing data function orbit sums, index function orbit sums, and cardinalities of all orbits defined in
Definition 1. We don't need to conduct any real permutation at all.
Now we demonstrate how to calculate the orbit cardinalities, $h_\tau$, and $w_\tau$.
The following shows a naive algorithm to enumerate all index paragraphs and the cardinality of each orbit of $G \backslash U_d^r$, which are needed to calculate $h_\tau$ and $w_\tau$. We construct a Cayley Action Graph with a vertex set of all possible index paragraphs in $U_d^r$. We connect a directed edge from $\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}$ to $\{(j_1^1,\ldots,j_d^1),\ldots,(j_1^r,\ldots,j_d^r)\}$ if $\{(j_1^1,\ldots,j_d^1),\ldots,(j_1^r,\ldots,j_d^r)\} = g_k \cdot \{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}$, where $g_k$ is a generator in $\{g_1,\ldots,g_p\}$. $\{g_1,\ldots,g_p\}$ is the set of generators of the group G, i.e., $G = \langle g_1,\ldots,g_p \rangle$. It is sufficient and efficient to use the set of generators of the group to construct the Cayley Action Graph, instead of using the set of all group elements. For example, we can choose $\{g_1,\ldots,g_p\} = \{\sigma_1, \sigma_2\} \times \{\rho_1, \rho_2\} \times \{\pi_1, \pi_2\}^r$, where $\sigma_1 = (12\cdots n)$, $\sigma_2 = (12)$, $\rho_1 = (12\cdots r)$, $\rho_2 = (12)$, $\pi_1 = (12\cdots d)$, and $\pi_2 = (12)$. Here $\sigma_1 = (12\cdots n)$ denotes the permutation $1 \to 2, 2 \to 3, \ldots, n \to 1$, and $\sigma_2 = (12)$ denotes $1 \to 2, 2 \to 1, 3 \to 3, \ldots, n \to n$. Note that listing the index paragraphs of each orbit is equivalent to finding all connected components in the Cayley Action Graph, which can be performed by using existing depth-first or breadth-first search methods [15]. Figure 1 demonstrates the Cayley Action Graph of $G \backslash U_2^1$, where $d = 2$, $r = 1$, and $n = 3$. Since the main effort here is to construct the Cayley Action Graph, the computational cost of the naive algorithm is $O(n^{dr} p) = O(n^{dr}\, 2^{2+r})$. Moreover, the memory cost is $O(n^{dr})$. Unfortunately, this algorithm is not an offline one, since we usually do not know the data size n before we have the data at hand, even though d and r can be preset. In other words, we cannot list all index orbits before we know the data size n. Moreover, since $n^{dr}\, 2^{2+r}$ is still computationally expensive, the naive algorithm is ill advised even if n is preset.
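A direct implementation of this naive enumeration, for illustration only (ours, not the paper's code; we brute-force full group elements rather than generators, which is equivalent for listing orbits at this tiny scale), reproduces the two orbits of Figure 1:

```python
import itertools

def orbits(n, d, r):
    """Enumerate orbits of G = S_n x S_r x S_d^r on U_d^r by exhaustion."""
    def act(sigma, rho, pis, p):
        step1 = [tuple(sent[pi[m]] for m in range(d)) for sent, pi in zip(p, pis)]
        step2 = [step1[j] for j in rho]
        return tuple(tuple(sigma[v] for v in sent) for sent in step2)

    points = list(itertools.product(itertools.product(range(n), repeat=d), repeat=r))
    sigmas = list(itertools.permutations(range(n)))
    rhos = list(itertools.permutations(range(r)))
    pis_all = list(itertools.product(itertools.permutations(range(d)), repeat=r))
    seen, orbs = set(), []
    for p in points:
        if p in seen:
            continue
        orbit = {act(s, rh, pi, p) for s in sigmas for rh in rhos for pi in pis_all}
        seen |= orbit
        orbs.append(orbit)
    return orbs

# d = 2, r = 1, n = 3 as in Figure 1: two orbits, of sizes 3 and 6.
print([len(o) for o in orbits(3, 2, 1)])   # -> [3, 6]
```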
Figure 1: Cayley action graph for $G \backslash U_2^1$; the set of orbits of $U_2^1$ is $\{[\{(1,1)\}], [\{(1,2)\}]\}$.
Figure 2: Finding the transversal.
In Table 1, we propose an improved offline algorithm in which we assume that d and r are preset. For computing $h_\tau$ and $w_\tau$, we find that we do not need to know all the index paragraphs within each index orbit. Since each orbit is well structured, it is enough to only list a transversal of the orbits $G \backslash U_d^r$ and the corresponding cardinalities. For example, there are two orbits, $[\{(1,1)\}]$ and $[\{(1,2)\}]$, when $d = 2$ and $r = 1$. $[\{(1,1)\}]$, with cardinality n, includes all index paragraphs with $i_1^1 = i_2^1$. $[\{(1,2)\}]$, with cardinality $n(n-1)$, includes all index paragraphs with $i_1^1 \neq i_2^1$. Actually, the transversal $L = \{\{(1,1)\}, \{(1,2)\}\}$ carries all the above information. This finding reduces the computation cost dramatically.
Definition 2. We define an index set $U_d^{r*} = \{1,\ldots,dr\}^{dr} = \big\{\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\} \mid i_m^k \in \{1,\ldots,dr\};\ m = 1,\ldots,d;\ k = 1,\ldots,r\big\}$ and a group $G^* := S_{dr} \times S_r \times S_d^r$.
Since we assumed $n \gg dr$, $U_d^{r*}$ is a subset of the index set $U_d^r$. The group $G^*$ can be considered a subgroup of G since the group $S_{dr}$ can be naturally embedded into the group $S_n$. Both $U_d^{r*}$ and $G^*$ are unrelated to the sample size n.
Proposition 3. A transversal of $G^* \backslash U_d^{r*}$ is also a transversal of $G \backslash U_d^r$.
By Proposition 3, we notice that listing the transversal of $G \backslash U_d^r$ is equivalent to listing the transversal of $G^* \backslash U_d^{r*}$ (see Figure 2). The latter is computationally much easier than the former since the cardinalities of $G^*$ and $U_d^{r*}$ are much smaller than those of G and $U_d^r$ when $n \gg dr$. Furthermore, finding the transversal of $G^* \backslash U_d^{r*}$ can be done without knowing the sample size n. Due to the structure of each orbit of $G \backslash U_d^r$, we can calculate the cardinality of each orbit of $G \backslash U_d^r$ with the transversal of $G^* \backslash U_d^{r*}$, although $G \backslash U_d^r$ and $G^* \backslash U_d^{r*}$ have different cardinalities for corresponding orbits.
Table 1: Offline double sided searching algorithm for listing the transversal
Input: d and r.
1. Start from the orbit representative $\{(1,\ldots,d),\ldots,((r-1)d+1,\ldots,rd)\}$.
2. Construct the transversal of $S_{dr} \backslash U_d^{r*}$ by merging.
3. Construct the transversal of $G^* \backslash U_d^{r*}$ by graph isomorphism testing.
4. End at the orbit representative $\{(1,\ldots,1),\ldots,(1,\ldots,1)\}$.
Output: a transversal L of $G \backslash U_d^r$, $\#(\tau)$, $\#(\tau \to \kappa)$, and the merging order (symmetry order) of the orbits.
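A compact sketch of steps 1-3 (our own code; the `canonical` helper plays the role of the graph-isomorphism test, implemented here by brute force over the small group $G^*$, viable only because dr is tiny) reproduces the seven representatives of the $d = r = 2$ case, grouped by merging order:

```python
import itertools

def canonical(p, dr):
    """Lexicographically smallest image of paragraph p under G* = S_dr x S_r x S_d^r."""
    d, r = len(p[0]), len(p)
    best = None
    for sigma in itertools.permutations(range(dr)):            # relabel values
        relabeled = [tuple(sigma[v] for v in sent) for sent in p]
        for pis in itertools.product(itertools.permutations(range(d)), repeat=r):
            cand = tuple(sorted(tuple(sent[i] for i in pi)     # permute words;
                                for sent, pi in zip(relabeled, pis)))  # sorting = S_r
            if best is None or cand < best:
                best = cand
    return best

def transversal_by_merging(d, r):
    dr = d * r
    start = tuple(tuple(range(k * d, (k + 1) * d)) for k in range(r))
    level, seen, levels = {canonical(start, dr)}, set(), []
    while level:
        levels.append(sorted(level))
        seen |= level
        nxt = set()
        for p in level:
            values = sorted({v for sent in p for v in sent})
            for a, b in itertools.combinations(values, 2):     # merge two values
                merged = tuple(tuple(a if v == b else v for v in sent) for sent in p)
                nxt.add(canonical(merged, dr))
        level = nxt - seen
    return levels

for lvl in transversal_by_merging(2, 2):
    print(lvl)     # 1 + 2 + 3 + 1 = 7 representatives, matching Figure 4
```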
Comparing with the Cayley Action Graph naive algorithm, our improved algorithm lists the transversal of $G \backslash U_d^r$ and calculates the cardinalities of all orbits more efficiently. In addition, the improved algorithm also assigns a symmetry order to all orbits, which helps further reduce the computational cost of the data function orbit sum $h_\tau$ and the index function orbit sum $w_\tau$. The basis of our improved algorithm is the fact that a subgroup acting on the same set causes a finer partition. On one hand, it is challenging to directly list the transversal of $G^* \backslash U_d^{r*}$. On the other hand, it is much easier to find two related group actions, causing finer and coarser partitions of $U_d^{r*}$. These two group actions help us find the transversal of $G^* \backslash U_d^{r*}$ efficiently with a double sided searching method.
Definition 3. The action of $S_{dr}$ on the index set $U_d^{r*}$ is defined as $\sigma \cdot i_m^k$, where $\sigma \in S_{dr}$, $m \in \{1,\ldots,d\}$, and $k \in \{1,\ldots,r\}$. Each orbit of $S_{dr} \backslash U_d^{r*}$ is denoted by $[\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}]^s$.
Note that the group action defined in Definition 3 only allows permutation of the index values; it does not allow shuffling of index words within each index sentence or of the index sentences. Since $S_{dr}$ is embedded in $G^*$, the set of orbits $S_{dr} \backslash U_d^{r*}$ is a finer partition of $G^* \backslash U_d^{r*}$. For example, both $[\{(1,2)(1,2)\}]^s$ and $[\{(1,2)(2,1)\}]^s$ are finer partitions of $[\{(1,2)(1,2)\}]$. In addition, it is easy to construct a transversal of $S_{dr} \backslash U_d^{r*}$ by merging distinct index elements.
Definition 4. Given a representative I, which includes at least two distinct index values, for example
$i \neq j$, an operation called merging replaces all index values of i or j with $\min(i, j)$.
For example, [{(1, 2)(2, 3)}] becomes [{(1, 1)(1, 3)}] after merging the index values of 1 and 2.
Definition 5. The action of $S_{dr} \times S_{dr}$ on the index set $U_d^{r*}$ is defined as $(\sigma, \theta) \cdot i_m^k := \sigma\, i^{\theta^{-1}\cdot(k,m)_s}_{\theta^{-1}\cdot(k,m)_w}$, where $\theta \in S_{dr}$ denotes a permutation of all dr index words without any restriction, i.e., $\theta^{-1}\cdot(k,m)_s$ denotes the index sentence location after permutation $\theta$, and $\theta^{-1}\cdot(k,m)_w$ denotes the index word location after permutation $\theta$. The orbit of $S_{dr} \times S_{dr} \backslash U_d^{r*}$ is denoted by $[\{(i_1^1,\ldots,i_d^1),\ldots,(i_1^r,\ldots,i_d^r)\}]^l$.
Since the group action defined in Definition 5 allows free shuffling of the order of all dr index words, the order does not matter for $S_{dr} \times S_{dr} \backslash U_d^{r*}$, and shuffling can occur across different sentences. For example, $[\{(1,2)(1,2)\}]^l = [\{(1,1)(2,2)\}]^l$. $S_{dr} \times S_{dr} \backslash U_d^{r*}$ is a coarser partition of $G^* \backslash U_d^{r*}$.
Proposition 4. A transversal of $S_{dr} \backslash U_d^{r*}$ can be generated by all possible mergings of $[\{(1,\ldots,d),\ldots,(d(r-1)+1,\ldots,dr)\}]^s$.
Proposition 5. Enumerating a transversal of $S_{dr} \times S_{dr} \backslash U_d^{r*}$ is equivalent to the integer partition of dr.
We start the transversal graph construction from the initial orbit $[\{(1,\ldots,d),\ldots,(d(r-1)+1,\ldots,dr)\}]^s$, i.e., all index elements have distinct values. Then we generate new orbits of $S_{dr} \backslash U_d^{r*}$ by merging distinct index values in existing orbits until we meet $[\{(1,\ldots,1),\ldots,(1,\ldots,1)\}]^s$, i.e., all index elements have equal values. We also add an edge from an existing orbit to a new orbit generated by merging the existing one. The procedure for the $d = 2$, $r = 2$ case is shown in Figure 3.
Now we generate the transversal of $G^* \backslash U_d^{r*}$ from that of $S_{dr} \backslash U_d^{r*}$. This can be done by checking whether two orbits in $S_{dr} \backslash U_d^{r*}$ are equivalent in $G^* \backslash U_d^{r*}$. Actually, orbit equivalence checking is equivalent to the classical graph isomorphism problem, since we can consider each index word as a vertex and connect two index words if they belong to the same index sentence.
The graph isomorphism testing can be done by Luks's famous algorithm [1,15] with computational cost $\exp\big(O(\sqrt{v \log v})\big)$, where v is the number of vertices. Figure 4 shows a transversal of $G^* \backslash U_2^{2*}$ generated from that of $S_4 \backslash U_2^{2*}$ (Figure 3). By Proposition 3, it is also a transversal of $G \backslash U_2^2$. Since $G^* \backslash U_d^{r*}$ is a finer partition of $S_{dr} \times S_{dr} \backslash U_d^{r*}$, orbit equivalence testing is only necessary when two orbits of $S_{dr} \backslash U_d^{r*}$ correspond to the same integer partition. This is why we named this algorithm double sided searching.
Figure 3: Transversal graph for $S_4 \backslash U_2^{2*}$, from the initial orbit $[\{(1,2)(3,4)\}]^s$ down to $[\{(1,1)(1,1)\}]^s$.
Figure 4: Transversal graph for $G \backslash U_2^2$, with the seven orbit representatives $[(1,2)(3,4)]$, $[(1,1)(2,3)]$, $[(1,2)(1,3)]$, $[(1,1)(1,2)]$, $[(1,1)(2,2)]$, $[(1,2)(1,2)]$, and $[(1,1)(1,1)]$.
Definition 6. For any two index orbit representatives $\tau \in L$ and $\kappa \in L$, we say that $\kappa$ has a lower merging or symmetry order than that of $\tau$, i.e., $\kappa \prec \tau$, if $[\kappa]$ can be obtained from $[\tau]$ by several mergings, or equivalently, if there is a path from $[\tau]$ to $[\kappa]$ in the transversal graph. Here L denotes a transversal set of all orbits.
Definition 7. We define $\#(\tau)$ as the number of $S_{dr} \backslash U_d^{r*}$ orbits in $[\tau]$. We also define $\#(\tau \to \kappa)$ as the number of different $[\kappa]^s$'s which can be reached from a $[\tau]^s$.
It is easy to get $\#(\tau)$ when we generate the transversal graph of $G \backslash U_d^r$ from that of $S_{dr} \backslash U_d^{r*}$. The $\#(\tau \to \kappa)$ can also be obtained from the transversal graph of $G \backslash U_d^r$ by counting the number of different $[\kappa]^s$'s which can be reached from a $[\tau]^s$. For example, there are edges connecting $[\{(1,1)(3,4)\}]^s$ to $[\{(1,1)(1,4)\}]^s$ and $[\{(1,1)(3,1)\}]^s$. Since $[\{(1,1)(1,4)\}] = [\{(1,1)(3,1)\}] = [\{(1,1)(1,2)\}]$, $\#(\tau = \{(1,1)(2,3)\} \to \kappa = \{(1,1)(1,2)\}) = 2$. Note that this number can also be obtained from $[\{(1,2)(3,3)\}]^s$ to $[\{(1,2)(1,1)\}]^s$ and $[\{(1,2)(2,2)\}]^s$.
The difficulty for computing data function orbit sum and index function orbit sum comes from two
constraints: equal constraint and unequal constraint. For example, in the orbit [{(1, 1), (2, 2)}], the
equal constraint is that the first and the second index values are equal and the third and fourth index
values are also equal. On the other hand, the unequal constraint requires that the first two index
values are different from the last two. Due to the difficulties mentioned, we solve this problem
by first relaxing the unequal constraint and then applying the principle of inclusion and exclusion.
Thus, the calculation of an orbit sum can be separated into two parts: the relaxed orbit sum without
unequal constraint and lower order orbit sums. For example, the relaxed index function orbit sum is
?P
?2
P
w? =[{(1,1),(2,2)}] = i,j w(i, i)w(j, j) =
w(i,
i)
.
i
Proposition 6. The index function orbit sum w can be calculated by subtracting all lower order orbit sums from the corresponding relaxed index function orbit sum w? , i.e., w = w?
P
)
w? #(
1) ? ? ? (n q + 1), where q is the
?
#(?) #( ! ?). The cardinality of [ ] is #( )n(n
number of distinct values in . The calculation of the data index function orbit sum h is similar.
So the computational cost mainly depends on the calculation of relaxed orbit sum and the lowest
order orbit sum. The computational cost of the lowest order term is O(n). The calculation of
relaxed orbit can be done by Zhou?s greedy graph search algorithm [21].
Proposition 7. For $d \geq 2$, let $m(m-1)/2 \leq rd(d-1)/2 < (m+1)m/2$, where r is the order of the moment and m is an integer. For a d-th order weighted v-statistic, the computational cost of the orbit sum for the r-th moment is bounded by $O(n^m)$. When $d = 1$, the computational complexity of the orbit sum is $O(n)$.
4 Bootstrap
Since bootstrap is resampling with replacement, we need to change $S_n$ to the set of all possible endofunctions $\mathrm{End}_n$ in our computing scheme. In mathematics, an endofunction is a mapping from a set to itself whose image may be a proper subset. With this change, $H := \mathrm{End}_n \times S_r \times S_d^r$ acting on $U_d^r$ becomes a monoid action instead of a group action, since an endofunction is not invertible. The monoid action also divides $U_d^r$ into several subsets. However, these subsets are not necessarily disjoint after mapping. For example, when $d = 2$ and $r = 1$, we can still divide the index set $U_2^1$ into two subsets, i.e., $[(1,1)]$ and $[(1,2)]$. However, $[(1,2)]$ is mapped to $U_2^1 = [(1,2)] \cup [(1,1)]$ by the monoid action $H \times U_d^r \to U_d^r$, although $[(1,1)]$ is still mapped to itself. Fortunately, the computation of bootstrap weighted v-statistics only needs the index function orbit sums and the relaxed data function orbit sums of the corresponding permutation computation. Therefore, the bootstrap weighted v-statistics calculation is just a subproblem of the permutation weighted v-statistics calculation.
Proposition 8. We can obtain the r-th moment of bootstrapping weighted v-statistics by summing up the product of the index function orbit sum $w_\tau$ and the relaxed data function orbit sum $\tilde{h}_\tau$ over all index orbits, i.e.,
$$E(T^r(x)) = \sum_{\tau \in L} \frac{w_\tau\, \tilde{h}_\tau}{\mathrm{card}([\tau]^*)}, \qquad (7)$$
where $\sigma \in \mathrm{End}_n$, $\mathrm{card}([\tau]^*) = \#(\tau)\, n^q$, and q is the number of distinct values in $\tau$.
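As a sanity check on the endofunction view (a sketch of ours; n must be tiny since the loop is $O(n^n)$), the first bootstrap moment of a linear statistic, averaged exhaustively over all $n^n$ endofunctions, matches the closed form that the single-element orbits imply:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4
w = rng.normal(size=n)
h = rng.normal(size=n)

# T = sum_i w(i) h(x_{sigma(i)}); average over every endofunction sigma.
total = 0.0
for sigma in itertools.product(range(n), repeat=n):   # all n^n endofunctions
    total += sum(w[i] * h[sigma[i]] for i in range(n))
exhaustive = total / n ** n

# Each sigma(i) is marginally uniform, so E[T] = mean(h) * sum_i w(i).
assert np.isclose(exhaustive, w.sum() * h.mean())
```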
Table 2: Comparison of accuracy and complexity for calculation of resampling statistics (running time in seconds).

| Resampling | Statistic | Method | 2nd moment | 3rd moment | 4th moment | Time |
|---|---|---|---|---|---|---|
| Permutation | Linear | Exact | 0.7172 | -0.8273 | 1.0495 | 1.1153e3 |
| Permutation | Linear | Our | 0.7172 | -0.8273 | 1.0495 | 0.0057 |
| Permutation | Linear | Random | 0.7014 | -0.8326 | 1.0555 | 0.5605 |
| Permutation | Quadratic | Exact | 1.0611e3 | -4.6020e4 | 2.1560e6 | 1.718e3 |
| Permutation | Quadratic | Our | 1.0611e3 | -4.6020e4 | 2.1560e6 | 0.006 |
| Permutation | Quadratic | Random | 1.0569e3 | -4.5783e4 | 2.1825e6 | 2.405 |
| Bootstrap | Linear | Exact | 3.5166 | 8.9737 | 35.4241 | 204.4381 |
| Bootstrap | Linear | Our | 3.5166 | 8.9737 | 35.4241 | 0.0053 |
| Bootstrap | Linear | Random | 3.4769 | 8.8390 | 34.6393 | 0.3294 |
| Bootstrap | Quadratic | Exact | 2.4739e5 | -6.0322e6 | 2.6998e8 | 445.536 |
| Bootstrap | Quadratic | Our | 2.4739e5 | -6.0322e6 | 2.6998e8 | 0.005 |
| Bootstrap | Quadratic | Random | 2.4576e5 | -5.9825e6 | 2.6589e8 | 1.987 |
The computational cost of bootstrapping weighted v-statistics is at the same level as that of permutation
statistics.
5 Numerical results
To evaluate the accuracy and efficiency of our methods, we generate simulated data and conduct permutation and bootstrapping for both the linear test statistic $\sum_{i=1}^{n} w(i)\, h(x_i)$ and the quadratic test statistic $\sum_{i_1=1}^{n}\sum_{i_2=1}^{n} w(i_1,i_2)\, h(x_{i_1},x_{i_2})$. To demonstrate the universal applicability of our method and prevent a chance result, we generate $w(i)$, $h(x_i)$, $w(i_1,i_2)$, and $h(x_{i_1},x_{i_2})$ randomly. We compare the accuracy and complexity among exact permutation/bootstrap, random permutation/bootstrap (10,000 times), and our methods. Table 2 shows comparisons for computing the second, third, and fourth moments of permutation statistics with 11 observations (the running time is in seconds) and of bootstrap statistics with 8 observations.
In all cases, our method achieves the same moments as those of exact permutation/bootstrap, and reduces the computational cost dramatically compared with both random sampling and exact sampling.
For demonstration purpose, we choose a small sample size here, i.e., sample size is 11 for permutation and 8 for bootstrap. Our method is expected to gain more computational efficiency as n
increases.
6 Conclusion
In this paper, we propose a novel and computationally fast algorithm for computing weighted v-statistics in resampling both univariate and multivariate data. Our theoretical framework reveals that
the three types of symmetry in resampling weighted v-statistics can be represented by a product of
symmetric groups. As an exciting result, we demonstrate the calculation of resampling weighted
v-statistics can be converted into the problem of orbit enumeration. A novel efficient orbit enumeration algorithm has been developed by using a small group acting on a small index set. For further
computational cost reduction, we sort all orbits by their symmetry order and calculate all index function orbit sums and data function orbit sums recursively. With computational complexity analysis,
we have reduced the computational cost from the n! or n^n level to a low-order polynomial level.
7 Acknowledgement
This research was supported by the Intramural Research Program of the NIH, Clinical Research
Center and through an Inter-Agency Agreement with the Social Security Administration, the NSF
CNS 1135660, Office of Naval Research award N00014-12-1-0125, Air Force Office of Scientific Research award FA9550-12-1-0201, and IC Postdoctoral Research Fellowship award 201111071400006.
References
[01] Babai, L., Kantor, W.M. , and Luks, E.M. (1983), Computational complexity and the classification of finite
simple groups, Proc. 24th FOCS, pp. 162-171.
[02] Minaei-Bidgoli, B., Topchy, A., and Punch, W. (2004), A comparison of resampling methods for clustering
ensembles, In Proc. International Conference on Artificial Intelligence, Vol. 2, pp. 939-945.
[03] Estabrooks, A., Jo, T., and Japkowicz, N. (2004), A Multiple Resampling Method for Learning from
Imbalanced Data Sets, Comp. Intel. 20 (1) pp. 18-36.
[04] Francois, D., Rossib, F., Wertza, V., and Verleysen, M. (2007), Resampling methods for parameter-free
and robust feature selection with mutual information, Neurocomputing 70(7-9):1276-1288.
[05] Good, P. (2005), Permutation, Parametric and Bootstrap Tests of Hypotheses, Springer, New York.
[06] Gretton, A., Borgwardt, K., Rasch, M., Scholkopf, B., and Smola, A. (2007), A kernel method for the
two-sample problem, In Advances in Neural Information Processing Systems (NIPS).
[07] Guo, S. (2011), Bayesian Recommender Systems: Models and Algorithms, Ph.D. thesis.
[08] Hopcroft, J., and Tarjan, R. (1973), Efficient algorithms for graph manipulation, Communications of the
ACM 16: 372-378.
[09] Huang, J., Guestrin, C., and Guibas, L. (2007), Efficient Inference for Distributions on Permutations, In
Advances in Neural Information Processing Systems (NIPS).
[10] Kerber, A. (1999), Applied Finite Group Actions, Springer-Verlag, Berlin.
[11] Kondor, R., Howard, A., and Jebara, T. (2007), Multi-Object Tracking with Representations of the Symmetric Group, Artificial Intelligence and Statistics (AISTATS).
[12] Kuwadekar, A. and Neville, J. (2011), Relational Active Learning for Joint Collective Classification Models, In International Conference on Machine Learning (ICML), P. 385-392.
[13] Liu, H., Palatucci, M., and Zhang, J.(2009), Blockwise coordinate descent procedures for the multi-task
lasso, with applications to neural semantic basis discovery, In International Conference on Machine Learning
(ICML).
[14] Matthew Higgs and John Shawe-Taylor. (2010), A PAC-Bayes bound for tailored density estimation, In
Proceedings of the International Conference on Algorithmic Learning Theory (ALT).
[15] McKay, B. D. (1981), Practical graph isomorphism, Congressus Numerantium 30: 45-87, 10th. Manitoba
Conf. on Numerical Math. and Computing.
[16] Mielke, P. W., and K. J. Berry (2007), Permutation Methods: A Distance Function Approach, Springer,
New York.
[17] Nicholson, W. K. (2006), Introduction to Abstract Algebra, 3rd ed., Wiley, New York.
[18] Serfling, R. J. (1980), Approximation Theorems of Mathematical Statistics, Wiley, New York.
[19] Song, L. (2008), Learning via Hilbert Space Embedding of Distributions, Ph.D. thesis.
[20] Sutton, R. and Barto, A. (1998), Reinforcement Learning, MIT Press.
[21] Zhou, C., Wang, H., and Wang, Y. M. (2009), Efficient moments-based permutation tests, In Advances in
Neural Information Processing Systems (NIPS), p. 2277-2285.
|
|
4,126 | 4,734 |
Diffusion Decision Making for Adaptive
k-Nearest Neighbor Classification
Yung-Kyun Noh, Frank Chongwoo Park
Schl. of Mechanical and Aerospace Engineering
Seoul National University
Seoul 151-744, Korea
{nohyung,fcp}@snu.ac.kr
Daniel D. Lee
Dept. of Electrical and Systems Engineering
University of Pennsylvania
Philadelphia, PA 19104, USA
[email protected]
Abstract
This paper sheds light on some fundamental connections of the diffusion decision
making model of neuroscience and cognitive psychology with k-nearest neighbor
classification. We show that conventional k-nearest neighbor classification can
be viewed as a special problem of the diffusion decision model in the asymptotic
situation. By applying the optimal strategy associated with the diffusion decision
model, an adaptive rule is developed for determining appropriate values of k in knearest neighbor classification. Making use of the sequential probability ratio test
(SPRT) and Bayesian analysis, we propose five different criteria for adaptively
acquiring nearest neighbors. Experiments with both synthetic and real datasets
demonstrate the effectiveness of our classification criteria.
1 Introduction
The recent interest in understanding human perception and behavior from the perspective of neuroscience and cognitive psychology has spurred a revival of interest in mathematical decision theory.
One of the standard interpretations of this theory is that when there is a continuous input of noisy
information, a decision becomes certain only after accumulating sufficient information. It is also
typically understood that early decisions save resources. Among the many theoretical explanations
for this phenomenon, the diffusion decision model offers a particularly appealing explanation of
how information is accumulated and how the time involved in making a decision affects overall accuracy. The diffusion decision model considers the diffusion of accumulated evidence toward one
of the competing choices, and reaches a decision when the evidence meets a pre-defined confidence
level.
The diffusion decision model successfully explains the distribution of decision times for humans
[13, 14, 15]. More recently, this model offers a compelling explanation of the neuronal decision
making process in the lateral intraparietal (LIP) area of the brain for perceptual decision making
based on visual evidence [2, 11, 16]. The fundamental premise behind this model is that there is a
tradeoff between decision times and accuracy, and that both are controlled by the confidence level.
As described in Bogacz et al. [3], the sequential probability ratio test (SPRT) is one mathematical
model that explains this tradeoff. More recent studies also demonstrate how SPRT can be used to
explain the evidence as emanated from Poisson processes [6, 21].
Now shifting our attention to machine learning, the well-known k-nearest neighbor classification
uses a simple majority voting strategy that, at least in the asymptotic case, implicitly involves a similar tradeoff between time and accuracy. According to Cover and Hart [4], the expected accuracy of
k-nearest neighbor classification always increases with respect to k when there is sufficient data. At
the same time, there is a natural preference to use less resources, or equivalently, a fewer number of
nearest neighbors. If one seeks to maximize the accuracy for a given number of total nearest neighbors, this naturally leads to the idea of using different ks for different data. At a certain level, this adaptive idea can be anticipated, but methods described in the existing literature are almost exclusively heuristic-based, without offering a thorough understanding of under what situations heuristics are effective [1, 12, 19].
Figure 1: Diffusion decision model. The evidence of decision making is accumulated, and it diffuses over time (to the right). Once the accumulated evidence reaches one of the confidence levels of either choice, z or -z, the model stops collecting any more evidence and makes a decision.
In this work, we present a set of simple, theoretically sound criteria for adaptive k-nearest neighbor
classification. We first show that the conventional majority voting rule is identical to the diffusion
decision model when applied to data from two different Poisson processes. Depending on how the
accumulating evidence is defined, it is possible to construct five different criteria based on different
statistical tests. First, we derive three different criteria using the SPRT statistical test. Second, using
standard Bayesian analysis, we derive two probabilities for the case where one density function
is greater than the other. Our five criteria are then used as diffusing evidence; once the evidence
exceeds a certain confidence level, collection of information can cease and a decision can be made
immediately. Despite the complexity of the derivations involved, the resulting five criteria have a
particularly simple and appealing form. This feature can be traced to the memoryless property of
Poisson processes. In particular, all criteria can be cast as a function of the information of only one
nearest neighbor in each class. Using our derivation, we consider this property to be the result of
the assumption that we have sufficient data; the criteria are not guaranteed to work in the event that
there is insufficient data. We present experimental results involving real and synthetic data to verify
this conjecture.
The remainder of the paper is organized as follows. In Section 2, a particular form of the diffusion decision model is reviewed for Poisson processes, and two simple tests based on SPRT are
derived. The relationship between k-nearest neighbor classification and diffusion decision making
is explained in Section 3. In Section 4, we describe the adaptive k-nearest neighbor classification
procedure in terms of the diffusion decision model, and we introduce five different criteria within
this context. Experiments for synthetic and real datasets are presented in Section 5, and the main
conclusions are summarized in Section 6.
2 Diffusion Decision Model for Two Poisson Processes
The diffusion decision model is a stochastic model for decision making. The model considers the
diffusion of evidence in favor of either of two possible choices by continuously accumulating the
information. After initial wavering between the two choices, the evidence finally reaches a level of
confidence where a decision is made as in Fig. 1.
In mathematical modeling of this diffusion process, Gaussian noise has been predominantly used as a
model for zigzagging upon a constant drift toward a choice [3, 13]. However, when we consider two
competing Poisson signals, a simpler statistical test can be used instead of estimating the direction of
the drift. In the studies of decision making in the lateral intraparietal (LIP) area of the brain [2, 11],
two Poisson processes are assumed to have rate parameters of either $\lambda_+$ or $\lambda_-$, where we know that $\lambda_+ > \lambda_-$, but exact values are unknown. When it should be determined which Poisson process
has the larger rate ?+ , a sequential probability ratio test (SPRT) can be used to explain a diffusion
decision model [6, 21].
The Poisson distribution we use has the form $p(N\,|\,\lambda, T) = \frac{(\lambda T)^N}{N!} \exp(-\lambda T)$, and we consider two Poisson distributions for $N_1$ and $N_2$ at times $T_1$ and $T_2$, respectively: $p(N_1|\lambda_1, T_1)$ and $p(N_2|\lambda_2, T_2)$. Here, $\lambda_1$ and $\lambda_2$ are the rate parameters, and either of these parameters equals $\lambda_+$ while the other equals $\lambda_-$. Now, we apply the statistical test of Wald [18] for a confidence $\alpha$ (> 1):
$$\frac{p(N_1|\lambda_1 = \lambda_+)\; p(N_2|\lambda_2 = \lambda_-)}{p(N_1|\lambda_1 = \lambda_-)\; p(N_2|\lambda_2 = \lambda_+)} > \alpha \quad \text{or} \quad < \frac{1}{\alpha} \qquad (1)$$
for the situation where there are $N_1$ signals at time $T_1$ for the first Poisson process and $N_2$ signals at time $T_2$ for the second process. We can determine that $\lambda_1$ has the $\lambda_+$ once the left term is greater than $\alpha$, and that $\lambda_2$ has the $\lambda_+$ once it is less than $\alpha^{-1}$; otherwise, we must collect more information. According to Wald and Wolfowitz [18], this test is optimal in that it requires the fewest average observations for the same probability of error.
By taking the log on both sides, we can rewrite the test as
$$\log\frac{\lambda_+}{\lambda_-}\,(N_1 - N_2) - (\lambda_+ - \lambda_-)(T_1 - T_2) > \log\alpha \quad \text{or} \quad < -\log\alpha. \qquad (2)$$
Considering two special situations, this equation can be reduced into two different, simple tests.
First, we can consider observation of the numbers $N_1$ and $N_2$ at a certain time $T = T_1 = T_2$. Then the test in Eq. (2) reduces to one test previously proposed in [21]:
$$|N_1 - N_2| > z_N \qquad (3)$$
where $z_N$ is a constant satisfying $z_N = \frac{\log \alpha}{\log(\lambda_+/\lambda_-)}$. Another simple test can be made by using the observation times $T_1$ and $T_2$ when we find the same number of signals $N = N_1 = N_2$:
$$|T_1 - T_2| > z_T \qquad (4)$$
where $z_T$ satisfies $z_T = \frac{\log \alpha}{\lambda_+ - \lambda_-}$.
Here, we can consider $\Delta N = N_1 - N_2$ and $\Delta T = T_1 - T_2$ as two different evidences in the diffusion decision model. The evidence diffuses as we collect more information, and we come to make a decision once the evidence reaches the confidence levels, $\pm z_N$ for $\Delta N$ and $\pm z_T$ for $\Delta T$. In this work, we refer to the first model, using the criterion $\Delta N$, as the $\Delta N$ rule and the second model, using $\Delta T$, as the $\Delta T$ rule.
Although the $\Delta N$ rule has been previously derived and used [21], we propose four more test criteria in this paper, including Eq. (4). Later, we show that diffusion decision making with these five criteria is related to different methods for k-nearest neighbor classification.
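A small simulation of the $\Delta N$ rule (our sketch; the rates, confidence $\alpha$, and the Bernoulli-thinning step size dt are illustrative choices, not values from the paper):

```python
import math
import random

def delta_n_decision(lam_plus=3.0, lam_minus=1.0, alpha=100.0, dt=0.01, seed=0):
    rng = random.Random(seed)
    z_n = math.log(alpha) / math.log(lam_plus / lam_minus)   # z_N from Eq. (3)
    n1 = n2 = 0
    t = 0.0
    while abs(n1 - n2) <= z_n:
        t += dt
        n1 += rng.random() < lam_plus * dt    # Bernoulli thinning approximates
        n2 += rng.random() < lam_minus * dt   # the two Poisson streams for small dt
    return (1 if n1 > n2 else 2), t           # chosen process, decision time

print(delta_n_decision())   # usually (1, ...): process 1 carries lambda_+
```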
3 Equivalence of Diffusion Decision Model and k-Nearest Neighbor Classification
A conventional k-nearest neighbor (k-NN) classification takes a majority voting strategy using k
number of nearest neighbors. According to Cover and Hart [4], in the limit of infinite sampling,
this simple majority voting rule can produce a fairly low expected error and furthermore, this error
decreases even more as a bigger k is used. This theoretical result is obtained from the relationship
between the k-NN classification error and the optimal Bayes error: the expected error with one
nearest neighbor is always less than twice the Bayes error, and the error decreases with the number
of k asymptotically to the Bayes error [4].
In this situation, we can claim that the k-NN classification actually performs the aforementioned
diffusion decision making for Poisson processes. The identity comes from two equivalence relationships: first, the logical equivalence between two decision rules; second, the equivalence of
distribution of nearest neighbors to the Poisson distribution in an asymptotic situation.
3.1 Equivalent Strategy of Majority Voting
Here, we first show an equivalence between the conventional k-NN classification and a novel comparison algorithm:
Theorem: For two-class data, we consider the N -th nearest datum of each class from the testing
point. With an odd number k, majority voting rule in k-NN classification is equivalent to the rule
of picking up the class to which a datum with smaller distance to the testing point belongs, for
$k = 2N - 1$.
Proof: Among the k-NNs of a test point, if there are at least N data having label C, for $C \in \{1, 2\}$, the test point is classified as class C according to the majority voting because $N = (k + 1)/2 > k/2$. If we consider three distances, $d_k$ to the k-th nearest neighbor among all data, $d_{N,C}$ to the N-th nearest neighbor in class C, and $d_{N,\neg C}$ to the N-th nearest neighbor in class non-C, then both $d_{N,C} \leq d_k$ and $d_{N,\neg C} > d_k$ are satisfied in this case. This completes one direction of the proof: the selection of class C by majority voting implies $d_{N,C} < d_{N,\neg C}$. The opposite direction can be proved similarly.
Therefore, instead of counting the number of nearest neighbors, we can classify a test point using two separate N -th nearest neighbors of two classes and comparing the distances. This logical
equivalence applies regardless of the underlying density functions.
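This equivalence is easy to confirm numerically. The sketch below (ours; the Gaussian data are arbitrary) asserts that majority voting with $k = 2N - 1$ and the N-th nearest-neighbor distance comparison give identical labels:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(1.5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def majority_vote(x, k):
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return int(y[idx].sum() * 2 > k)          # class 1 wins iff it holds > k/2 votes

def nth_distance_rule(x, N):
    d0 = np.sort(np.linalg.norm(X[y == 0] - x, axis=1))[N - 1]
    d1 = np.sort(np.linalg.norm(X[y == 1] - x, axis=1))[N - 1]
    return int(d1 < d0)                       # pick the class whose N-th NN is closer

for N in (1, 3, 5):
    for x in rng.normal(0.75, 1.5, (200, 2)):
        assert majority_vote(x, 2 * N - 1) == nth_distance_rule(x, N)
```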
3.2 Nearest neighbors as Poisson processes
The random generation of data from a particular underlying density function induces a density function of the distance to the nearest neighbors. When the density function is $\lambda(x)$ for $x \in \mathbb{R}^D$ and we consider a D-dimensional hypersphere of volume V with the N-th nearest neighbor on its surface, the random variable $u = MV$, which is the volume of the sphere V multiplied by the number of data M, asymptotically converges in distribution to the Erlang density function [10]:
$$p(u|\lambda) = \frac{\lambda^N}{\Gamma(N)} \exp(-\lambda u)\, u^{N-1} \qquad (5)$$
with a large amount of data. Here, the volume element is a function of the distance d and can be represented as $V = \eta d^D$ with $\eta = \frac{\pi^{D/2}}{\Gamma(D/2+1)}$, a proportionality constant for a hypersphere volume.
This Erlang function is a special case of the Gamma density function when the parameter N is an
integer.
We can also note that this Erlang density function implies the Poisson distribution with respect to N
[20], and we can write the distribution of N as follows:
$$p(N|\lambda) = \frac{\lambda^N}{\Gamma(N+1)} \exp(-\lambda). \qquad (6)$$
This equation shows that the appearance of nearest neighbors can be approximated with Poisson
processes. In other words, with a growing hypersphere at a constant rate in volume, the occurrence
of new points within a hypersphere will follow a Poisson distribution.
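An empirical check of this convergence (our sketch; uniform data on the unit square give $\lambda = 1$ at the center, so $E[u] \approx N$ under the Erlang law):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, trials = 2000, 3, 500
us = []
for _ in range(trials):
    pts = rng.uniform(size=(M, 2))
    d = np.sort(np.linalg.norm(pts - 0.5, axis=1))[N - 1]   # N-th NN of the center
    us.append(M * np.pi * d ** 2)                           # u = M * (eta * d^D), D = 2
print(np.mean(us))   # close to N / lambda = 3 for large M
```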
The Erlang function in Eq. (5) arises as the asymptotic limit in distribution of the true
distribution, a binomial distribution with a finite number of samples N [10]. We note that,
with a finite number of samples, the memoryless property of the Poisson process disappears. This breaks
the independence assumption between the posterior probabilities of the classes, which
Cover and Hart used implicitly when they derived the expected error of k-NN classification [4].
On the other hand, once we have enough data, so that the density functions in Eq. (5) and Eq. (6)
describe the data correctly, we can expect the equivalence between diffusion decision making and
k-NN classification. In this case, the nearest neighbors are samples of a Poisson process whose
rate parameter λ is the probability density at the test point.
Now we can turn back to conventional k-NN classification. By Theorem 1 and the arguments in
this section, the k-NN classification strategy is the same as the strategy of comparing two Poisson
processes using the N-th samples of each class. This connection naturally extends conventional
k-NN classification to an adaptive method that chooses k according to the confidence level in the
diffusion decision model.
4 Criteria for Adaptive k-NN Classification
Using the equivalence between the diffusion decision model and k-NN classification, we can
extend the conventional majority voting strategy to more sophisticated adaptive strategies. First, the
SPRT criteria of the previous section, the ΔN rule and the ΔT rule, can be used. For the ΔN rule in Eq. (3),
we can count the nearest neighbors N1 and N2 within a fixed distance d, then compare
|ΔN| = |N1 − N2| with a pre-defined confidence level z_N. Instead of making an immediate
decision, we can collect more nearest neighbors by increasing d until Eq. (3) is satisfied. This is the
"ΔN rule" for adaptive k-NN classification.
In terms of the ΔT rule in Eq. (4), using the correspondence between time in the original SPRT and the volume within the hypersphere in k-NN classification, we can make two different criteria for adaptive
k-NN classification. First, we consider the two volume elements, V1 and V2, of the N-th nearest neighbors,
and the criterion can be rewritten as |V1 − V2| > z_V. We refer to this rule as the "ΔV rule".
An additional criterion for the ΔT rule considers a more conservative rule using the volume of the (N+1)-th nearest neighbor hypersphere. Since a hypersphere slightly smaller than this one still
contains N nearest neighbors, we can make the same test harder to stop diffusing
by replacing the smaller volume in the ΔV rule with the volume of the (N+1)-th nearest neighbor
hypersphere of that class. We refer to this rule as the "Conservative ΔV rule" because it makes
decisions more cautiously.
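The following sketch illustrates the ΔN and ΔV stopping rules; the function names, the dimension argument D, and the constant η are our own notation for this illustration, and the thresholds z_N and z_V are assumed to be given:

    import numpy as np

    def delta_N_rule(dists, labels, z_N):
        # Collect neighbors one at a time (equivalent to growing d) and stop
        # once the count difference |N1 - N2| reaches the confidence level z_N.
        N1 = N2 = 0
        for lbl in labels[np.argsort(dists)]:
            N1, N2 = N1 + (lbl == 1), N2 + (lbl == 2)
            if abs(N1 - N2) >= z_N:
                break
        return 1 if N1 > N2 else 2

    def delta_V_rule(dists, labels, z_V, eta, D):
        # Compare hypersphere volumes V = eta * d^D of the N-th nearest
        # neighbors of the two classes; smaller volume means higher density.
        d1, d2 = np.sort(dists[labels == 1]), np.sort(dists[labels == 2])
        for N in range(min(len(d1), len(d2))):
            V1, V2 = eta * d1[N] ** D, eta * d2[N] ** D
            if abs(V1 - V2) >= z_V:
                return 1 if V1 < V2 else 2
        return 1 if d1[-1] < d2[-1] else 2      # forced decision at the last N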
In addition to the SPRT method, from which we derive three different criteria, we can also derive
several stopping criteria using a Bayesian approach. If we consider λ as a random variable and
apply an appropriate prior, we can obtain a posterior distribution of λ as well as the probabilities
P(λ1 > λ2) and P(λ1 < λ2). In the following section, we show how to derive these
probabilities and how they can be used as evidence in the diffusion decision making
model.
4.1 Bayesian Criteria
For both Eq. (5) and Eq. (6), we consider λ as a random variable and apply a conjugate prior
for λ:

p(λ) = (b^a / Γ(a)) λ^{a−1} exp(−λb)    (7)

with constants a and b, where a is an integer satisfying a ≥ 1 and b is a real number. With
this prior, the posteriors for the two likelihoods Eq. (5) and Eq. (6) are obtained easily:
p(λ|u) = ((u + b)^{N+a} / Γ(N + a)) λ^{N+a−1} exp(−λ(u + b))    (8)

p(λ|N) = ((b + 1)^{N+a} / Γ(N + a)) λ^{N+a−1} exp(−λ(b + 1))    (9)
First, we derive P(λ1 > λ2 | u1, u2) for u1 and u2 obtained using the N-th nearest neighbors in class
1 and class 2. Because the posterior functions of different classes are independent of each other,
the probability of λ1 > λ2 is obtained by the double integration:
P(λ1 > λ2 | u1, u2) = ∫_0^∞ p(λ2|u2) ( ∫_{λ2}^∞ p(λ1|u1) dλ1 ) dλ2.    (10)
After some calculation, the integration result gives an extremely simple analytic solution:
P(λ1 > λ2 | u1, u2) = Σ_{m=0}^{N+a−1} C(2N+2a−1, m) (u1 + b)^m (u2 + b)^{2N+2a−1−m} / (u1 + u2 + 2b)^{2N+2a−1}    (11)
Here, we consider only the case a = 1, and it is interesting to note that this probability equals
the probability of flipping a biased coin 2N + 1 times and observing at most N heads. This
probability from the Bayesian approach can be computed efficiently in an incremental fashion, and
the nearest neighbor computation can be stopped adaptively once the evidence probability is
confident enough.

Figure 2: Decision making process for the nearest neighbor classification with (a) 80% and (b) 90%
confidence levels. Sample data are generated from the probability densities λ1 = 0.8 and λ2 = 0.2.
For incrementing N-th nearest neighbors of the two classes, the criterion probabilities P(λ1 > λ2 | u1, u2)
and P(λ1 < λ2 | u1, u2) are calculated and compared with the confidence level. Unless
the probability exceeds the confidence level, the (N + 1)-th nearest neighbors are collected
and the criterion probabilities are calculated again. The figure displays the diffusion of the criterion
probability P(λ1 > λ2 | u1, u2) for different realizations, where the evidence stops
diffusing once the criterion passes the threshold at which enough evidence has accumulated. The
bars show the number of points that are correctly (red, upward bars) and incorrectly (blue,
downward bars) classified at each stage of the computation. Using a larger confidence level results in fewer
errors, but with a concomitant increase in the number of nearest neighbors used.
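As a sketch, the binomial-sum form of Eq. (11) can be evaluated directly; the function below is our own illustration, with the hyperparameters defaulting to a = 1 and an assumed b = 1:

    from math import comb

    def prob_lambda1_greater(u1, u2, N, a=1, b=1.0):
        # Eq. (11): P(lambda1 > lambda2 | u1, u2) equals the CDF of a
        # Binomial(2N + 2a - 1, p) at N + a - 1, with p = (u1+b)/(u1+u2+2b).
        K = 2 * N + 2 * a - 1
        p = (u1 + b) / (u1 + u2 + 2 * b)
        return sum(comb(K, m) * p**m * (1 - p) ** (K - m) for m in range(N + a))

Note that a small u1 (class-1 neighbors packed tightly around the test point) pushes the probability toward 1, as it should.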
The second probability, P(λ1 > λ2 | N1, N2), for the numbers of nearest neighbors N1 and N2 within
a particular distance, can be derived similarly. Using the double integration of Eq. (9), we again obtain
an analytic result:
P(λ1 > λ2 | N1, N2) = (1 / 2^{N1+N2+2a−1}) Σ_{m=0}^{N1+a−1} C(N1+N2+2a−1, m).    (12)
Both the probabilities Eq. (11) and Eq. (12) can be used as evidence that diffuse along with incoming
information. Stopping criteria for diffusion can be derived using these probabilities.
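Eq. (12) admits the same treatment; in the sketch below it reduces to a fair-coin binomial CDF:

    from math import comb

    def prob_lambda1_greater_counts(N1, N2, a=1):
        # Eq. (12): a fair-coin binomial CDF over N1 + N2 + 2a - 1 flips,
        # where N1 and N2 count each class's neighbors inside a common sphere.
        K = N1 + N2 + 2 * a - 1
        return sum(comb(K, m) for m in range(N1 + a)) / 2 ** K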
4.2 Adaptive k-NN Classification
Of interest in the diffusion decision model is the relationship between the accuracy and the amount
of resources needed to obtain it. In a diffusion decision setting for k-NN classification,
we can control the amount of resources using the confidence level. For example, in Fig. 2, we
generated data from two uniform density functions, λ1 = 0.8 and λ2 = 0.2, one for each class,
and we applied the confidence levels 0.8 and 0.9 in Fig. 2(a) and (b), respectively. Using
the P(λ1 > λ2 | u1, u2) criterion in Eq. (11), we applied adaptive k-NN classification with an
increasing N for the two classes.
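A minimal sketch of this adaptive loop, assuming the helper prob_lambda1_greater from Section 4.1, a known volume constant η for dimension D, and at least one neighbor per class, looks as follows:

    import numpy as np

    def adaptive_knn_pv(dists, labels, conf, eta, D, a=1, b=1.0):
        # Grow N until the PV criterion of Eq. (11) exceeds the confidence level.
        d1 = np.sort(dists[labels == 1])
        d2 = np.sort(dists[labels == 2])
        M = len(dists)                           # total number of data points
        for N in range(1, min(len(d1), len(d2)) + 1):
            u1 = M * eta * d1[N - 1] ** D        # u = M * V for class 1
            u2 = M * eta * d2[N - 1] ** D
            p = prob_lambda1_greater(u1, u2, N, a, b)
            if p >= conf:
                return 1, N                      # decide class 1 after N steps
            if 1 - p >= conf:
                return 2, N
        return (1, N) if p >= 0.5 else (2, N)    # forced decision at the end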
Fig. 2 shows the decision results of the classification with incrementing N for 1000 realizations,
and a few diffusion examples of the evidence probability in Eq. (11) are presented. The average
number of nearest neighbors used varies with the confidence level. In Fig. 2(a), where
the confidence level is lower than in Fig. 2(b), the evidence reaches the confidence level at an earlier
stage, while the decision in Fig. 2(b) tends to select the first class more often than
in Fig. 2(a). Considering that the optimal Bayes classifier chooses class 1 when λ1 > λ2, the
decisions for class 2 can be considered errors. In this sense, with the higher confidence
level, decisions are made more correctly while using more resources. Therefore, the efficiencies
[Figure 3 plots: four panels (a)-(d), each showing Accuracy (y-axis) against the average number of nearest neighbors used (x-axis), with one curve per method: PN, DN, PV, CDV, DV, kNN, CNN, Race, minRace, MinMaxRatio, and Jigang.]
Figure 3: Classification accuracy (vertical axis) versus the average number of nearest neighbors used
(horizontal axis) for adaptive k-NN classification. (a) Uniform probability densities with λ1 = 0.8
and λ2 = 0.2 in 100-dimensional space, (b) CIFAR-10, (c) 2 × 10^5 data per class for 5-dimensional
Gaussians, and (d) 2 × 10^6 data per class for the same Gaussians as in (c).
between strategies can be compared using the accuracies as well as the average number of nearest
neighbors used.
5 Experiments
In the experiments, we compare the accuracy of the algorithms against the number of nearest neighbors
used, for various confidence levels. We used conventional k-NN classification as
well as the proposed adaptive methods. Adaptive classification includes the comparison rule of N-th nearest neighbors using three criteria, namely the ΔV rule (DV), the Conservative ΔV rule (CDV), and
the Bayesian probability in Eq. (11) (PV), as well as the comparison rule of the N1-th and N2-th nearest neighbors at a
given volume using two rules, namely the ΔN rule (DN) and the Bayesian probability in Eq. (12) (PN). We
present the average accuracies resulting from k-NN classification and these five adaptive
rules with respect to the average number of nearest neighbors used.
We first show results on synthetic datasets. In Fig. 3(a), we used two uniform probability densities, λ1 = 0.8 and λ2 = 0.2, in 100-dimensional space, and we classified a test point based on its
nearest neighbors. In this figure, all algorithms are expected to approach the Bayes performance,
following Cover and Hart's argument, as the average number of nearest neighbors increases. In
this experiment, we observe that all five proposed adaptive algorithms approach the Bayes error
more quickly than the other methods, at similar rates to one another.
Here, we also present the results of other adaptive algorithms: CNN [12], Race, minRace, MinMaxRatio, and Jigang [19]. They perform majority voting with increasing k; CNN stops collecting
nearest neighbors once more than a certain number of consecutive neighbors with
the same label are found; Race stops when the total number of neighbors of one class exceeds a certain
level; minRace stops when all classes have at least a predefined number of neighbors; MinMaxRatio considers the ratio between the numbers of nearest neighbors in the different classes; lastly, Jigang uses a
probability criterion slightly different from Eq. (12). Except for Jigang's method, all of these algorithms perform poorly, while our five algorithms perform equally well even though they use different information,
probably because the performance produced by diffusion decision making algorithms is optimal.
Fig. 3(b) shows the experiments for a CIFAR-10 subset of the tiny images dataset [17]. CIFAR-10 consists of 10 classes of 32 × 32 color images. Each class has 6000 images, separated into
one testing set and five training sets. With this 10-class data, we first performed Fisher Discriminant
Analysis to obtain a 9-dimensional subspace, and then applied all of the adaptive algorithms in
this subspace. The result is the average accuracy over the five training sets and all possible
pairs of the 10 classes. Because the underlying density is non-uniform here, the results show a performance decrease when algorithms use nearest neighbors that are not close. Except under the DV and PV criteria,
all of our adaptive algorithms outperform all other methods. k-NN classification in the original
data space achieves a maximal average performance of 0.721 at k = 3, far below the
overall accuracies in the figure, because distance information is poor in the high-dimensional
space.
Fig. 3(c) and (d) clearly show that our algorithms are not guaranteed to work with insufficient data.
We generated data from two different Gaussian functions and tried to classify a datum located at
one of the modes. The number of generated data points is 2 × 10^5 per class in (c) and 2 × 10^6 per class in (d), in 5-dimensional space. We present the average result
over 5000 realizations; comparing the two figures shows that our adaptive algorithms work as
expected when Cover and Hart's asymptotic data condition holds. The Poisson process assumption
also holds when this condition is satisfied.
6 Conclusions
In this work, we showed that k-NN classification in the asymptotic limit is equivalent to the diffusion
decision model for decision making. Nearest neighbor classification and the diffusion decision
model are well-known models in machine learning and cognitive science, respectively,
but the intimate connection between them had not been studied before. Using an analysis of Poisson
processes, we showed how classification using incrementally increasing numbers of nearest neighbors can be
mapped to a simple threshold-based decision model.
In the diffusion decision model, the confidence level plays a key role in determining the tradeoff
between speed and accuracy. The notion of confidence can also be applied to nearest neighbor
classification to adapt the number of nearest neighbors used in making the classification decision.
We presented several different criteria for choosing the appropriate number of nearest neighbors
based on the sequential probability ratio test in addition to Bayesian inference. We demonstrated
the utility of these methods in modulating speed versus accuracy on both simulated and benchmark
datasets.
It is straightforward to extend these methods to other datasets and algorithms that utilize neighborhood information. Future work will investigate how our results would scale with dataset size and
feature representations. Potential benefits of this work include a well-grounded approach to speeding
up classification using parallel computation on very large datasets.
Acknowledgments
This research is supported in part by the US Office of Naval Research, Intel Science and Technology
Center, AIM Center, KIST-CIR, ROSAEC-ERC, SNU-IAMD, and the BK21.
References
[1] A. F. Atiya. Estimating the posterior probabilities using the k-nearest neighbor rule. Neural Computation, 17(3):731-740, 2005.
[2] J. M. Beck, W. J. Ma, R. Kiani, T. Hanks, A. K. Churchland, J. Roitman, M. N. Shadlen, P. E. Latham, and A. Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142-1152, 2008.
[3] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J. D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700-765, 2006.
[4] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21-27, 1967.
[5] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Applications of Mathematics. Springer, 1996.
[6] M. A. Girshick. Contributions to the theory of sequential analysis I. The Annals of Mathematical Statistics, 17:123-143, 1946.
[7] M. Goldstein. k_n-nearest neighbor classification. IEEE Transactions on Information Theory, IT-18(5):627-630, 1972.
[8] C. C. Holmes and N. M. Adams. A probabilistic nearest neighbour method for statistical pattern recognition. Journal of the Royal Statistical Society Series B, 64(2):295-306, 2002.
[9] M. D. Lee, I. G. Fuss, and D. J. Navarro. A Bayesian approach to diffusion models of decision-making and response time. In Advances in Neural Information Processing Systems 19, pages 809-816, 2007.
[10] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36:2153-2182, 2008.
[11] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432-1438, 2006.
[12] S. Ougiaroglou, A. Nanopoulos, A. N. Papadopoulos, Y. Manolopoulos, and T. Welzer-Druzovec. Adaptive k-nearest-neighbor classification using a dynamic number of nearest neighbors. In Proceedings of the 11th East European Conference on Advances in Databases and Information Systems, pages 66-82, 2007.
[13] R. Ratcliff and G. McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873-922, 2008.
[14] R. Ratcliff and J. N. Rouder. A diffusion model account of masking in two-choice letter identification. Journal of Experimental Psychology: Human Perception and Performance, 26(1):127-140, 2000.
[15] M. N. Shadlen, A. K. Hanks, A. K. Churchland, R. Kiani, and T. Yang. The speed and accuracy of a simple perceptual decision: a mathematical primer. Bayesian Brain: Probabilistic Approaches to Neural Coding, 2006.
[16] M. N. Shadlen and W. T. Newsome. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18:3870-3896, 1998.
[17] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008.
[18] A. Wald and J. Wolfowitz. Optimum character of the sequential probability ratio test. Annals of Mathematical Statistics, 19:326-339, 1948.
[19] J. Wang, P. Neskovic, and L. N. Cooper. Neighborhood size selection in the k-nearest-neighbor rule using statistical confidence. Pattern Recognition, 39(3):417-423, 2006.
[20] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference (Springer Texts in Statistics). Springer, December 2003.
[21] J. Zhang and R. Bogacz. Optimal decision making on the basis of evidence represented in spike trains. Neural Computation, 22(5):1113-1148, 2010.
Random Utility Theory for Social Choice
Hossein Azari Soufiani
SEAS, Harvard University
[email protected]
David C. Parkes
SEAS, Harvard University
[email protected]
Lirong Xia
SEAS, Harvard University
[email protected]
Abstract
Random utility theory models an agent?s preferences on alternatives by drawing
a real-valued score on each alternative (typically independently) from a parameterized distribution, and then ranking the alternatives according to scores. A
special case that has received significant attention is the Plackett-Luce model, for
which fast inference methods for maximum likelihood estimators are available.
This paper develops conditions on general random utility models that enable fast
inference within a Bayesian framework through MC-EM, providing concave loglikelihood functions and bounded sets of global maxima solutions. Results on
both real-world and simulated data provide support for the scalability of the approach and capability for model selection among general random utility models
including Plackett-Luce.
1 Introduction
Problems of learning with rank-based error metrics [16] and the adoption of learning for the purpose
of rank aggregation in social choice [7, 8, 23, 25, 29, 30] are gaining in prominence in recent years.
In part, this is due to the explosion of socio-economic platforms, where opinions of users need to be
aggregated; e.g., judges in crowd-sourcing contests, ranking of movies or user-generated content.
In the problem of social choice, users submit ordinal preferences consisting of partial or total ranks
on the alternatives and a single rank order must be selected to be representative of the reports.
Since Condorcet [6], one approach to this problem is to formulate social choice as the problem
of estimating a true underlying world state (e.g., a true quality ranking of alternatives), where the
individual reports are viewed as noisy data in regard to the true state. In this way, social choice
can be framed as a problem of inference. In particular, Condorcet assumed the existence of a true
ranking over alternatives, with a voter's preference between any pair of alternatives a, b generated to
agree with the true ranking with probability p > 1/2 and disagree otherwise. Condorcet proposed
to choose as the outcome of social choice the ranking that maximizes the likelihood of observing the
voters? preferences. Later, Kemeny?s rule was shown to provide the maximum likelihood estimator
(MLE) for this model [32].
But Condorcet's probabilistic model assumes identical and independent distributions on pairwise
comparisons. This ignores the strength of agents' preferences (the same probability p is adopted
for all pairwise comparisons), and allows for cyclic preferences. In addition, computing the winner through the Kemeny rule is Θ^p_2-complete [13]. To overcome the first criticism, a more recent
literature adopts the random utility model (RUM) from economics [26]. Consider C = {c_1, ..., c_m}
alternatives.
In a RUM, there is a ground truth utility (or score) associated with each alternative.
These are real-valued parameters, denoted by θ = (θ_1, ..., θ_m). Given this, an agent independently
samples a random utility X_j for each alternative c_j with conditional distribution μ_j(·|θ_j). Usually
θ_j is the mean of μ_j(·|θ_j).[1] Let π denote a permutation of {1, ..., m}, which naturally corresponds
to a linear order: [c_{π(1)} ≻ c_{π(2)} ≻ ··· ≻ c_{π(m)}]. Slightly abusing notation, we also use π to denote

[1] μ_j(·|θ_j) might be parameterized by other parameters, for example the variance.
this linear order. The random utilities (X_1, ..., X_m) generate a distribution over preference orders:

Pr(π | θ) = Pr(X_{π(1)} > X_{π(2)} > ··· > X_{π(m)})    (1)

The generative process is illustrated in Figure 1.
Figure 1: The generative process for RUMs.
Adopting RUMs rules out cyclic preferences, because each agent's outcome corresponds to an order
on real numbers; it also captures the strength of preference, and thus overcomes the second
criticism, by assigning a different parameter θ_j to each alternative.
A popular RUM is Plackett-Luce (P-L) [18, 21], where the random utility terms are generated according to Gumbel distributions with fixed shape parameter [2, 31]. For P-L, the likelihood function
has a simple analytical solution, making MLE inference tractable. P-L has been extensively applied
in econometrics [1, 19], and more recently in machine learning and information retrieval (see [16]
for an overview). Efficient methods of EM inference [5, 14], and more recently expectation propagation [12], have been developed for P-L and its variants. In application to social choice, the P-L
model has been used to analyze political elections [10]. The EM algorithm has also been used to learn
the Mallows model, which is closely related to Condorcet's probabilistic model [17].
Although P-L overcomes the two difficulties of the Condorcet-Kemeny approach, it is still quite
restricted: the random utility terms are assumed to be Gumbel distributed, and each alternative
is characterized by a single parameter, the mean of its corresponding distribution. In fact, little
is known about inference in RUMs beyond P-L. Specifically, we are not aware of either an analytical
solution or an efficient algorithm for MLE inference in one of the most natural models proposed by
Thurstone [26], where each X_j is normally distributed.
1.1 Our Contributions
In this paper we focus on RUMs in which the random utilities are independently generated with
respect to distributions in the exponential family (EF) [20]. This extends the P-L model, since
the Gumbel distribution with fixed shape parameter belongs to the EF. Our main theoretical
contributions are Theorem 1 and Theorem 2, which propose conditions under which the log-likelihood
function is concave and the set of global maxima solutions is bounded for the location family, i.e.,
RUMs where the shape of each distribution μ_j is fixed and the only latent variables are the
locations, that is, the means of the μ_j's. These results hold for existing special cases, such as the P-L
model, and for many other RUMs, for example those where each μ_j is Normal, Gumbel,
Laplace, or Cauchy.
We also propose a novel application of MC-EM. We treat the random utilities X as latent variables,
and adopt the Expectation Maximization (EM) method to estimate the parameters θ. The E-step for
this problem is not analytically tractable, and for this we adopt a Monte Carlo approximation. We
establish through experiments that the Monte Carlo error in the E-step is controllable and does not
affect inference, as long as numerical parameterizations are chosen carefully. In addition, for the E-step we suggest a parallelization over the agents and alternatives and a Rao-Blackwellized method,
which further increases the scalability of our method. We generally assume that the data provides
total orders on alternatives from voters, but comment on how to extend the method and theory to the
case where the input preferences are partial orders.
We evaluate our approach on synthetic data as well as two real-world datasets, a public election
dataset and one involving rank preferences on sushi. The experimental results suggest that the
approach is scalable despite providing significantly improved modeling flexibility over existing approaches. For the two real-world datasets we have studied, we compare RUMs with normal distributions and P-L in terms of four criteria: log-likelihood, predictive log-likelihood, Akaike information
criterion (AIC), and Bayesian information criterion (BIC). We observe that when the amount of
data is not too small, RUMs with normal distributions fit better than P-L. Specifically, for the log-likelihood, predictive log-likelihood, and AIC criteria, RUMs with normal distributions outperform
P-L with 95% confidence on both datasets.
2 RUMs and Exponential Families
In social choice, each agent i ∈ {1, ..., n} has a strict preference order over the alternatives. This
provides the data for an inferential approach to social choice. In particular, let L(C) denote the set
of all linear orders over C. A preference profile D is a set of n preference orders, one from each
agent, so that D ∈ L(C)^n. A voting rule r is a mapping that assigns to each preference profile a set
of winning rankings, r : L(C)^n → (2^{L(C)} \ ∅). In particular, in the case of ties the set of winning
rankings may include more than a single ranking.
In the maximum likelihood (MLE) approach to social choice, the preference profile is viewed as
data, D = {π^1, ..., π^n}. Given this, the probability (likelihood) of the data given ground truth θ
is Pr(D | θ) = ∏_{i=1}^n Pr(π^i | θ), where, for a particular π,

P(π|θ) = ∫_{x_{π(m)}=−∞}^{∞} ∫_{x_{π(m−1)}=x_{π(m)}}^{∞} ··· ∫_{x_{π(1)}=x_{π(2)}}^{∞} μ_{π(m)}(x_{π(m)}) ··· μ_{π(1)}(x_{π(1)}) dx_{π(1)} dx_{π(2)} ··· dx_{π(m)}    (2)
The MLE approach to social choice selects as the winning ranking the one that corresponds to the θ
that maximizes Pr(D | θ). If multiple parameters maximize the likelihood, then the
MLE approach returns a set of rankings, one corresponding to each maximizing parameterization.
In this paper, we focus on probabilistic models where each μ_j belongs to the exponential family
(EF). The density function for each μ in the EF has the following form:

Pr(X = x) = μ(x) = e^{η(θ)T(x) − A(θ) + B(x)},    (3)

where η(θ) and A(θ) are functions of θ, B(x) is a function of x, and T(x) denotes the sufficient
statistic for x, which may be multidimensional.
Example 1 (Plackett-Luce as an RUM [2]) In the RUM, let the μ_j's be Gumbel distributions. That
is, for alternative j ∈ {1, ..., m} we have μ_j(x_j|θ_j) = e^{−(x_j−θ_j)} e^{−e^{−(x_j−θ_j)}}. Then we have:

Pr(π | θ) = Pr(x_{π(1)} > x_{π(2)} > ··· > x_{π(m)}) = ∏_{j=1}^m γ_{π(j)} / ( Σ_{j'=j}^m γ_{π(j')} ),

where γ(θ_j) = γ_j = e^{θ_j}, T(x_j) = −e^{−x_j}, B(x_j) = −x_j, and A(θ_j) = −θ_j. This gives us the Plackett-Luce model.
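As a quick numerical illustration of Example 1 (a sketch with an arbitrary choice of θ, not a value from the paper), one can verify that sampling from the Gumbel-noise RUM reproduces the closed-form P-L probabilities:

    import numpy as np
    from itertools import permutations

    theta = np.array([0.0, 0.5, 1.2])               # assumed ground-truth utilities
    gamma = np.exp(theta)

    def pl_prob(pi):
        # Closed-form Plackett-Luce probability of the ranking pi (best first).
        p, rest = 1.0, list(pi)
        for j in pi:
            p *= gamma[j] / gamma[rest].sum()
            rest.remove(j)
        return p

    rng = np.random.default_rng(0)
    X = theta + rng.gumbel(size=(200_000, 3))       # X_j = theta_j + Gumbel noise
    counts = {pi: 0 for pi in permutations(range(3))}
    for x in X:
        counts[tuple(np.argsort(-x))] += 1          # ranking induced by utilities
    for pi in counts:
        print(pi, counts[pi] / len(X), pl_prob(pi)) # empirical vs. closed form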
3 Global Optimality and Log-Concavity
In this section, we provide a condition on the distributions that guarantees that the likelihood function (2)
is log-concave in the parameters θ. We also provide a condition under which the set of MLE solutions
is bounded when any one latent parameter is fixed. Together, these guarantee the convergence of our
MC-EM approach to a global mode given an accurate enough E-step.
We focus on the location family, a subset of RUMs where the shapes of all the μ_j's are fixed
and the only parameters are the means of the distributions. For the location family, we can write
X_j = θ_j + ε_j, where X_j ∼ μ_j(·|θ_j) and ε_j = X_j − θ_j is a random variable with mean 0
that models an agent's subjective noise. The random variables ε_j need not be identically
distributed across alternatives j; e.g., they can be normal with different fixed variances. We focus on
computing solutions θ that maximize the log-likelihood function,
l(θ; D) = Σ_{i=1}^n log Pr(π^i | θ)    (4)
Theorem 1 For the location family, if for every j ≤ m the probability density function of ε_j is
log-concave, then l(θ; D) is concave.
Proof sketch: The theorem is proved by applying the following lemma, which is Theorem 9 in [22].

Lemma 1 Suppose g_1(θ, ε), ..., g_R(θ, ε) are concave functions in R^{2m}, where θ is the vector of m
parameters and ε is a vector of m real numbers generated according to a distribution whose
pdf is logarithmically concave in R^m. Then the following function is log-concave in R^m:

L_i(θ, G) = Pr(g_1(θ, ε) ≥ 0, ..., g_R(θ, ε) ≥ 0), ε ∈ R^m    (5)
To apply Lemma 1, we define a set G^i of functions g^i that is equivalent to an order π^i, in the sense that the
inequalities implied by the RUM for π^i match G^i (so that the joint probability in (5) for G^i is the same as the
probability of π^i in the RUM with parameters θ). Suppose g^i_r(θ, ε) = θ_{π^i(r)} + ε^i_{π^i(r)} − θ_{π^i(r+1)} − ε^i_{π^i(r+1)}
for r = 1, ..., m − 1. Then, considering that the length of the order π^i is R + 1, we have:

L_i(θ, π^i) = L_i(θ, G^i) = Pr(g^i_1(θ, ε) ≥ 0, ..., g^i_R(θ, ε) ≥ 0), ε ∈ R^m    (6)

This is because g^i_r(θ, ε) ≥ 0 is equivalent to alternative π^i(r) being preferred to alternative
π^i(r + 1) in π^i, in the RUM sense.
To see how this extends to the case where preferences are specified as partial orders, consider
in particular the interpretation where an agent's report ranking m_i alternatives implies that
all other alternatives are worse for the agent, in some undefined order. Given this, define g^i_r(θ, ε) =
θ_{π^i(r)} + ε^i_{π^i(r)} − θ_{π^i(r+1)} − ε^i_{π^i(r+1)} for r = 1, ..., m_i − 1 and g^i_r(θ, ε) = θ_{π^i(m_i)} + ε^i_{π^i(m_i)} −
θ_{π^i(r+1)} − ε^i_{π^i(r+1)} for r = m_i, ..., m − 1. Considering that the g^i_r(·) are linear (hence concave) and
using the log-concavity of the distribution of ε^i = (ε^i_1, ε^i_2, ..., ε^i_m), we can apply Lemma 1 and prove
the log-concavity of the likelihood function.
It is not hard to verify that the pdfs of the normal and Gumbel distributions are log-concave under reasonable conditions
on their parameters, made explicit in the following corollary.

Corollary 1 For the location family where each ε_j is a normal distribution with mean zero and
fixed variance, or a Gumbel distribution with mean zero and fixed shape parameter, l(θ; D) is
concave. Specifically, the log-likelihood function for P-L is concave.

The concavity of the log-likelihood of P-L has been proved [9] using a different technique.
Using Fact 3.5 in [24], the set of global maxima solutions to the likelihood function, denoted by S_D,
is convex, since the likelihood function is log-concave. However, we also need S_D to be bounded,
and would further like it to yield a unique order as the estimate of the ground truth.
For P-L, Ford, Jr. [9] proposed the following necessary and sufficient condition for the set of global
maxima solutions to be bounded (more precisely, unique) when Σ_{j=1}^m e^{θ_j} = 1.

Condition 1 Given the data D, in every partition of the alternatives C into two nonempty subsets
C_1 ∪ C_2, there exist c_1 ∈ C_1 and c_2 ∈ C_2 such that there is at least one ranking in D where c_1 ≻ c_2.
We next show that Condition 1 is also a necessary and sufficient condition for the set of global
maxima solutions S_D to be bounded in location families, when we fix one of the values θ_j to be 0
(w.l.o.g., let θ_1 = 0). If we do not fix any parameter, then S_D is unbounded, because for any θ,
any D, and any number s ∈ R, l(θ; D) = l(θ + s; D).

Theorem 2 Suppose we fix θ_1 = 0. Then, the set S_D of global maxima solutions to l(θ; D) is
bounded if and only if the data D satisfies Condition 1.
Proof sketch: If Condition 1 does not hold, then S_D is unbounded because the parameters for all
alternatives in C_1 can be increased simultaneously to improve the log-likelihood. For sufficiency,
we first present the following lemma, whose proof is omitted due to the space constraint.
Lemma 2 If alternative j is preferred to alternative j' in at least one ranking, then the difference
of their mean parameters θ_{j'} − θ_j is bounded from above (there exists Q such that θ_{j'} − θ_j < Q)
for every θ that maximizes the likelihood function.
Now consider a directed graph G_D whose nodes are the alternatives, with an edge from c_j to c_{j'}
if c_j ≻ c_{j'} in at least one ranking. By Condition 1, for any pair j ≠ j', there is a path
from c_j to c_{j'} (and conversely, a path from c_{j'} to c_j). To see this, build a path between
j and j' by starting from the partition with C_1 = {j} and following an edge from j to some j_1 in C_2,
which must exist by Condition 1. Then consider the partition with C_1 = {j, j_1}, and repeat until an edge can be followed
to vertex j' ∈ C_2. It follows from Lemma 2 that for any θ ∈ S_D we have |θ_j − θ_{j'}| < Qm, by a telescoping sum
of the bounded differences of mean parameters along the edges of the path (tracing the path
from j to j' and from j' to j), since the length of the path is at most m. Hence S_D is bounded.
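Condition 1 can be checked mechanically: it holds exactly when the directed graph G_D in the proof is strongly connected. A sketch (our own helper, with rankings given as lists of alternative indices, best first):

    def condition_1_holds(rankings, m):
        # Edge c -> c' whenever c is ranked above c' in some ranking.
        adj = [set() for _ in range(m)]
        for pi in rankings:
            for i in range(len(pi)):
                for j in range(i + 1, len(pi)):
                    adj[pi[i]].add(pi[j])

        def reachable(s, graph):
            seen, stack = {s}, [s]
            while stack:
                for w in graph[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return seen

        # Strong connectivity: everything reachable from node 0 in the graph
        # and in its reverse (so node 0 is also reachable from everything).
        radj = [set() for _ in range(m)]
        for v in range(m):
            for w in adj[v]:
                radj[w].add(v)
        full = set(range(m))
        return reachable(0, adj) == full and reachable(0, radj) == full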
Now that we have log-concavity and boundedness, we need conditions under which the bounded convex
set of estimated parameters corresponds to a unique order. The next theorem provides a necessary
and sufficient condition for all global maxima to correspond to the same order on alternatives.
Suppose that we order the alternatives by their estimated θ's (so that c_j is ranked higher than c_{j'}
iff θ_j > θ_{j'}).

Theorem 3 The order over parameters is strict and is the same across all θ ∈ S_D if, for all θ ∈ S_D
and all alternatives j ≠ j', θ_j ≠ θ_{j'}.
Proof: Suppose for the sake of contradiction that there exist two maxima θ, θ* ∈ S_D and a pair of
alternatives j ≠ j' such that θ_j > θ_{j'} and θ*_{j'} > θ*_j. Then there exists an α < 1 such that the j-th
and j'-th components of αθ + (1 − α)θ* are equal, which contradicts the assumption.

Hence, if there is never a tie in the scores for any θ ∈ S_D, then any vector in S_D reveals the
unique order.
4 Monte Carlo EM for Parameter Estimation

In this section, we propose an MC-EM algorithm for MLE inference in RUMs where every μ_j
belongs to the EF.[2]
The EM algorithm determines the MLE parameters θ iteratively, and proceeds as follows. In each
iteration t + 1, given the parameters θ^t from the previous iteration, the algorithm is composed of an
E-step and an M-step. In the E-step, for any given θ = (θ_1, ..., θ_m), we compute the conditional
expectation of the complete-data log-likelihood (latent variables x and data D), where the latent
variables x are distributed according to the data D and the parameters θ^t from the last iteration. In the
M-step, we optimize θ to maximize the expected log-likelihood computed in the E-step, and use it
as the input θ^{t+1} for the next iteration:

E-step: Q(θ, θ^t) = E_X { log ∏_{i=1}^n Pr(x^i, π^i | θ) | D, θ^t }
M-step: θ^{t+1} ← argmax_θ Q(θ, θ^t)
4.1 Monte Carlo E-step by Gibbs sampler
The E-step can be simplified using (3) as follows:

E_X { log ∏_{i=1}^n Pr(x^i, π^i | θ) | D, θ^t } = E_X { log ∏_{i=1}^n Pr(x^i | θ) Pr(π^i | x^i) | D, θ^t }
= Σ_{i=1}^n Σ_{j=1}^m E_{X^i_j} { log μ_j(x^i_j | θ_j) | π^i, θ^t } = Σ_{i=1}^n Σ_{j=1}^m ( η(θ_j) E_{X^i_j}{ T(x^i_j) | π^i, θ^t } − A(θ_j) ) + W,
[2] Our algorithm can be naturally extended to compute a maximum a posteriori probability (MAP) estimate
when we have a prior over the parameters θ. Still, it seems hard to motivate the imposition of a prior on
the parameters in many social choice domains.
where W = E_{X^i_j}{ B(x^i_j) | π^i, θ^t } depends only on θ^t and D (not on θ), which means that it can be
treated as a constant in the M-step.

Hence, in the E-step we only need to compute S^{i,t+1}_j = E_{X^i_j}{ T(x^i_j) | π^i, θ^t }, where T(x^i_j) is the
sufficient statistic for the parameter θ_j in the model. We are not aware of an analytical solution for
E_{X^i_j}{ T(x^i_j) | π^i, θ^t }. However, we can use a Monte Carlo approximation, which involves sampling
x^i from the distribution Pr(x^i | π^i, θ^t) using a Gibbs sampler, and then approximating S^{i,t+1}_j by
(1/N) Σ_{k=1}^N T(x^{i,k}_j), where N is the number of samples in the Gibbs sampler.

In each step of our Gibbs sampler for voter i, we randomly choose a position j in π^i and
sample x^i_{π^i(j)} according to a TruncatedEF distribution Pr(· | x_{π^i(−j)}, θ^t, π^i), where x_{π^i(−j)} =
(x_{π^i(1)}, ..., x_{π^i(j−1)}, x_{π^i(j+1)}, ..., x_{π^i(m)}). The TruncatedEF is obtained by truncating the tails
of μ_{π^i(j)}(· | θ^t_{π^i(j)}) at x_{π^i(j−1)} and x_{π^i(j+1)}, respectively. For example, a truncated normal distribution is illustrated in Figure 2.
Figure 2: A truncated normal distribution.
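For the normal location family, one Gibbs update can be written with a standard truncated-normal sampler; the sketch below assumes SciPy's truncnorm and a ranking pi given as alternative indices, best first:

    import numpy as np
    from scipy.stats import truncnorm

    def gibbs_step(x, pi, theta, sigma, rng):
        # Resample the latent utility at a random rank position j, truncated so
        # that the utilities stay consistent with the reported ranking pi.
        m = len(pi)
        j = rng.integers(m)
        upper = x[pi[j - 1]] if j > 0 else np.inf       # neighbor ranked above
        lower = x[pi[j + 1]] if j < m - 1 else -np.inf  # neighbor ranked below
        mu = theta[pi[j]]
        a, b = (lower - mu) / sigma, (upper - mu) / sigma
        x[pi[j]] = truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)
        return x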
Rao-Blackwellization: To further improve the Gibbs sampler, we use Rao-Blackwellized [4]
estimation, using E{ T(x^{i,k}_j) | x^{i,k}_{−j}, π^i, θ^t } instead of the sample x^{i,k}_j, where x^{i,k}_{−j}
is all of x^{i,k} except x^{i,k}_j. Finally, we estimate E{ T(x^{i,k}_j) | x^{i,k}_{−j}, π^i, θ^t } in each step
of the Gibbs sampler using M samples, as

S^{i,t+1}_j ≈ (1/N) Σ_{k=1}^N E{ T(x^{i,k}_j) | x^{i,k}_{−j}, π^i, θ^t } ≈ (1/(NM)) Σ_{k=1}^N Σ_{l=1}^M T(x^{i_l,k}_j),

where x^{i_l,k}_j ∼ Pr(x^{i_l,k}_j | x^{i,k}_{−j}, π^i, θ). Rao-Blackwellization
reduces the variance of the estimator because of the conditioning and expectation in
E{ T(x^{i,k}_j) | x^{i,k}_{−j}, π^i, θ^t }.

4.2 M-step

In the E-step we have (approximately) computed S^{i,t+1}_j. In the M-step we compute θ^{t+1} to
maximize Σ_{i=1}^n Σ_{j=1}^m ( η(θ_j) E_{X^i_j}{ T(x^i_j) | π^i, θ^t } − A(θ_j) + E_{X^i_j}{ B(x^i_j) | π^i, θ^t } ). Equivalently, we
compute θ^{t+1}_j for each j ≤ m separately to maximize Σ_{i=1}^n { η(θ_j) E_{X^i_j}{ T(x^i_j) | π^i, θ^t } − A(θ_j) } =
η(θ_j) Σ_{i=1}^n S^{i,t+1}_j − n A(θ_j). For the case of the normal distribution with fixed unit variance, where
η(θ_j) = θ_j and A(θ_j) = θ_j^2 / 2, we have θ^{t+1}_j = (1/n) Σ_{i=1}^n S^{i,t+1}_j. The algorithm is illustrated in
Figure 3.
Figure 3: The MC-EM algorithm for normal distribution.
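Putting the pieces together, a compact MC-EM sketch for the unit-variance normal model (using the gibbs_step helper above; the burn-in and sample counts are illustrative choices, not the paper's settings) is:

    import numpy as np

    def mc_em_normal(rankings, m, iters=20, burn=200, samples=800, seed=0):
        # E-step: Gibbs-sample latent utilities consistent with each ranking
        # and average T(x) = x.  M-step: with eta(theta) = theta and
        # A(theta) = theta^2 / 2, the update is theta_j <- mean_i S_j^{i,t+1}.
        rng = np.random.default_rng(seed)
        theta = np.zeros(m)
        for _ in range(iters):
            S = np.zeros((len(rankings), m))
            for i, pi in enumerate(rankings):
                x = np.empty(m)
                x[list(pi)] = np.sort(rng.normal(size=m))[::-1]  # feasible start
                for k in range(burn + samples):
                    x = gibbs_step(x, pi, theta, 1.0, rng)
                    if k >= burn:
                        S[i] += x
                S[i] /= samples
            theta = S.mean(axis=0)
            theta -= theta[0]              # pin theta_1 = 0 (identifiability)
        return theta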
Theorem 1 and Theorem 2 guarantee the convergence of MC-EM given an exact E-step. To control
the approximation error in the MC E-step, we can increase the number of samples with the
iterations, thereby decreasing the error in the Monte Carlo step [28]. Details are omitted due to
space constraints and can be found in an extended version online.
5 Experimental Results
We evaluate the proposed MC-EM algorithm on synthetic data as well as two real-world datasets,
namely a public election dataset and a dataset of preference orders over sushi. For simulated data
we use the Kendall correlation [11] between two rank orders (typically between the true order and
the method's result) as a measure of performance.
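A minimal sketch of the Kendall correlation between two orders (each given as a list of alternatives, best first):

    import numpy as np
    from itertools import combinations

    def kendall_corr(r1, r2):
        # (concordant - discordant pairs) / total pairs, in [-1, 1].
        pos1 = {c: i for i, c in enumerate(r1)}
        pos2 = {c: i for i, c in enumerate(r2)}
        s = sum(np.sign((pos1[a] - pos1[b]) * (pos2[a] - pos2[b]))
                for a, b in combinations(r1, 2))
        n = len(r1)
        return s / (n * (n - 1) / 2)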
5.1 Experiments for Synthetic Data
We first generate data from normal models for the random utility terms, with means θ_j = j and
equal variance for all terms, for different choices of variance (Var = 2, 4). We evaluate the performance of the method as the number of agents n varies. The results show that a limited number of
iterations in the EM algorithm (at most 3) and MN = 4000 samples (M = 5, N = 800) are sufficient
for inferring the order in most cases. The performance, in terms of the Kendall correlation with the
ground truth, improves with larger numbers of agents, which corresponds to more data. See Figure 4,
which shows the asymptotic behavior of the maximum likelihood estimator in recovering the true
parameters. The left and middle panels of Figure 4 show that the larger the dataset, the better the performance of the method. Moreover, for large variances in the data generation, the increased noise means
the performance improves more slowly than in the case of smaller variances.
Notice that the scales on the y-axis differ between the left and middle panels.
Figure 4: Left and middle panels: performance for different numbers of agents n on synthetic data for m = 5, 10
and Var = 2, 4, with MN = 4000 and 3 EM iterations. Right panel: performance given
access to sub-samples of the data in the public election dataset; x-axis: size of sub-samples, y-axis: Kendall
correlation with the order obtained from the full dataset. Dashed lines are the 95% confidence intervals.
5.2 Experiments for Model Robustness
We apply our method to a public election dataset collected by Nicolaus Tideman [27], where the
voters provided partial orders over the candidates. A partial order includes comparisons among a subset
of the alternatives, and the alternatives not mentioned in a partial order are considered to be ranked
below the lowest-ranked mentioned alternative.
The total number of votes is n = 280 and the number of alternatives is m = 15. For the purpose of
our experiments, we adopt the order on the alternatives obtained by applying our method to the entire
dataset as an assumed ground truth, since no ground truth is given as part of the data. After finding
the ground truth using all 280 votes (and adopting a normal model), we compare the performance
of our approach as we vary the amount of data available. We evaluate the performance for subsamples consisting of 10, 20, . . . , 280 of samples randomly chosen from the full dataset. For each
sub-sample size, the experiment is repeated 200 times and we report the average performance and
the variance. See the right panel in Figure 4. This experiment shows the robustness of the method,
in the sense that inference on a subset of the dataset behaves consistently with inference on the
full dataset. For example, the ranking obtained using half of the data still approximates the
full-data result well, with an average Kendall correlation greater than 0.4.
5.3 Experiments for Model Fitness
In addition to the public election dataset, we tested our algorithm on a sushi dataset, where 5000
users give rankings over 10 kinds of sushi [15]. For each experiment we randomly choose
n ∈ {10, 20, 30, 40, 50} rankings and apply our MC-EM to RUMs with normal distributions in which the
variances are also parameters.
In the former experiments, both in the synthetic data generation and in the model for the election data, the
variances were fixed to 1, and hence the theoretical guarantees for convergence to global
optimal solutions in Theorem 1 and Theorem 2 applied. When we let the variances be part of the parametrization we lose these theoretical guarantees. However, the EM algorithm can still be applied, and since
the variances are now parameters (rather than being fixed to 1), the model fits better in terms of
log-likelihood.
For this reason, we adopt RUMs with normal distributions in which the variance is a parameter that
is fit by EM along with the mean. We call this the normal model. We compute the difference
between the normal model and P-L in terms of four criteria: log-likelihood (LL), predictive log-likelihood (predictive LL), AIC, and BIC. For (predictive) log-likelihood, a positive value means
that the normal model fits better than P-L, whereas for AIC and BIC, a negative number means that
the normal model fits better than P-L. Predictive likelihood differs from likelihood in that
we compute the likelihood of the estimated parameters on a part of the data that was not used for
parameter estimation.[3] In particular, we compute the predictive likelihood on a randomly chosen subset
of 100 votes. The results and standard deviations for n = 10, 50 are summarized in Table 1.
           |                     n = 10                        |                     n = 50
Dataset    | LL         Pred. LL      AIC         BIC          | LL          Pred. LL    AIC           BIC
Sushi      | 8.8(4.2)   -56.1(89.5)   -7.6(8.4)   5.4(8.4)     | 22.6(6.3)   40.1(5.1)   -35.2(12.6)   -6.1(12.6)
Election   | 9.4(10.6)  91.3(103.8)   -8.8(21.2)  4.2(21.2)    | 44.8(15.8)  87.4(30.5)  -79.6(31.6)   -50.5(31.6)
Table 1: Model selection for the sushi dataset and election dataset. Cases where the normal model fits better
than P-L statistically with 95% confidence are in bold.
When n is small (n = 10), the variance is high and we are unable to obtain statistically significant
results in comparing fitness. When n is not too small (n = 50), RUMs with normal distributions
fit better than P-L. Specifically, for log-likelihood, predictive log-likelihood, and AIC, RUMs with
normal distributions outperform P-L with 95% confidence in both datasets.
5.4 Implementation and Run Time
The running time of our MC-EM algorithm scales linearly with the number of agents on real-world
data (the election dataset), with a slope of 13.3 seconds per agent on an Intel i5 2.70 GHz PC. This is for 100
iterations of the EM algorithm, with the number of Gibbs samples increasing with the iterations as 2000 + 300 ×
iteration.
Acknowledgments
This work is supported in part by NSF Grant No. CCF-0915016. Lirong Xia is supported by NSF
under Grant #1136996 to the Computing Research Association for the CIFellows Project. We thank
Craig Boutilier, Jonathan Huang, Tyler Lu, Nicolaus Tideman, Paolo Viappiani, and anonymous
NIPS-12 reviewers for helpful comments and suggestions, or help on the datasets.
References
[1] Steven Berry, James Levinsohn, and Ariel Pakes. Automobile prices in market equilibrium. Econometrica, 63(4):841-890, 1995.
[2] Henry David Block and Jacob Marschak. Random orderings and stochastic theories of responses. In Contributions to Probability and Statistics, pages 97-132, 1960.
[3] James G. Booth and James P. Hobert. Maximizing generalized linear mixed model likelihoods with an automated Monte Carlo EM algorithm. JRSS Series B, 61(1):265-285, 1999.
[4] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng, editors. Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, 2011.
[5] Francois Caron and Arnaud Doucet. Efficient Bayesian inference for generalized Bradley-Terry models. Journal of Computational and Graphical Statistics, 21(1):174-196, 2012.
[6] Marquis de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris: L'Imprimerie Royale, 1785.
[7] Vincent Conitzer, Matthew Rognlie, and Lirong Xia. Preference functions that score rankings and maximum likelihood estimation. In Proc. IJCAI, pages 109-115, 2009.
[8] Vincent Conitzer and Tuomas Sandholm. Common voting rules as maximum likelihood estimators. In Proc. UAI, pages 145-152, 2005.
[9] Lester R. Ford, Jr. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8):28-33, 1957.
[10] Isobel Claire Gormley and Thomas Brendan Murphy. A grade of membership model for rank data. Bayesian Analysis, 4(2):265-296, 2009.
[11] Przemyslaw Grzegorzewski. Kendall's correlation coefficient for vague preferences. Soft Computing, 13(11):1055-1061, 2009.
[12] John Guiver and Edward Snelson. Bayesian inference for Plackett-Luce ranking models. In Proc. ICML, pages 377-384, 2009.
[13] Edith Hemaspaandra, Holger Spakowski, and Jörg Vogel. The complexity of Kemeny elections. Theoretical Computer Science, 349(3):382-391, December 2005.
[14] David R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32:384-406, 2004.
[15] Toshihiro Kamishima. Nantonac collaborative filtering: Recommendation based on order responses. In Proc. KDD, pages 583-588, 2003.
[16] Tie-Yan Liu. Learning to Rank for Information Retrieval. Springer, 2011.
[17] Tyler Lu and Craig Boutilier. Learning Mallows models with pairwise preferences. In Proc. ICML, pages 145-152, 2011.
[18] R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[19] Daniel McFadden. Conditional logit analysis of qualitative choice behavior. In Frontiers of Econometrics, pages 105-142, New York, NY, 1974. Academic Press.
[20] Carl N. Morris. Natural exponential families with quadratic variance functions. Annals of Statistics, 10(1):65-80, 1982.
[21] R. L. Plackett. The analysis of permutations. JRSS Series C, 24(2):193-202, 1975.
[22] András Prékopa. Logarithmic concave measures and related topics. In Stochastic Programming, pages 63-82. Academic Press, 1980.
[23] Ariel D. Procaccia, Sashank J. Reddi, and Nisarg Shah. A maximum likelihood approach for selecting sets of alternatives. In Proc. UAI, 2012.
[24] Frank Proschan and Yung L. Tong. Chapter 29: Log-concavity property of probability measures. FSU Technical Report Number M-805, pages 57-68, 1989.
[25] Magnus Roos, Jörg Rothe, and Björn Scheuermann. How to calibrate the scores of biased reviewers by quadratic programming. In Proc. AAAI, pages 255-260, 2011.
[26] Louis Leon Thurstone. A law of comparative judgement. Psychological Review, 34(4):273-286, 1927.
[27] Nicolaus Tideman. Collective Decisions and Voting: The Potential for Public Choice. Ashgate Publishing, 2006.
[28] Greg C. G. Wei and Martin A. Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. JASA, 85(411):699-704, 1990.
[29] Lirong Xia and Vincent Conitzer. A maximum likelihood approach towards aggregating partial orders. In Proc. IJCAI, pages 446-451, 2011.
[30] Lirong Xia, Vincent Conitzer, and Jérôme Lang. Aggregating preferences in multi-issue domains by using maximum likelihood estimators. In Proc. AAMAS, pages 399-406, 2010.
[31] John I. Yellott, Jr. The relationship between Luce's Choice Axiom, Thurstone's Theory of Comparative Judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109-144, 1977.
[32] H. Peyton Young. Optimal voting rules. Journal of Economic Perspectives, 9(1):51-64, 1995.
|
4,128 | 4,736 |
Minimizing Sparse High-Order Energies by
Submodular Vertex-Cover
Andrew Delong
University of Toronto
Olga Veksler
Western University
Anton Osokin
Moscow State University
Yuri Boykov
Western University
[email protected]
[email protected]
[email protected]
[email protected]
Abstract
Inference in high-order graphical models has become important in recent years.
Several approaches are based, for example, on generalized message-passing, or
on transformation to a pairwise model with extra "auxiliary" variables. We focus
on a special case where a much more efficient transformation is possible. Instead
of adding variables, we transform the original problem into a comparatively small
instance of submodular vertex-cover. These vertex-cover instances can then be
attacked by existing algorithms (e.g. belief propagation, QPBO), where they often
run 4-15 times faster and find better solutions than when applied to the original
problem. We evaluate our approach on synthetic data, then we show applications
within a fast hierarchical clustering and model-fitting framework.
1 Introduction
MAP inference on graphical models is a central problem in machine learning, pattern recognition,
and computer vision. Several algorithms have emerged as practical tools for inference, especially
for graphs containing only unary and pairwise factors. Prominent examples include belief propagation [30], more advanced message passing methods like TRW-S [21] or MPLP [33], combinatorial
methods like \alpha-expansion [6] (for "metric" factors) and QPBO [32] (mainly for binary problems).
In terms of optimization, these algorithms are designed to minimize objective functions (energies)
containing unary and pairwise terms.
Many inference problems must be modeled using high-order terms, not just pairwise, and such
problems are increasingly important for many applications. Recent developments in high-order inference include, for example, high-arity CRF potentials [19, 38, 25, 31], cardinality-based potentials
[13, 34], global potentials controlling the appearance of labels [24, 26, 7], learning with high-order
loss functions [35], among many others.
One standard approach to high-order inference is to transform the problem to the pairwise case and
then simply apply one of the aforementioned ?pairwise? algorithms. These transformations add many
?auxiliary? variables to the problem but, if the high-order terms are sparse in the sense suggested
by Rother et al. [31], this can still be a very efficient approach. There can be several equivalent
high-order-to-pairwise transformations, and this choice affects the difficulty of the resulting pairwise inference problem. Choosing the "easiest" transformation is not trivial and has been explicitly
studied, for example, by Gallagher et al. [11].
Our work is about fast energy minimization (MAP inference) for particularly sparse, high-order "pattern potentials" used in [25, 31, 29]: each energy term prefers a specific (but arbitrary) assignment
to its subset of variables. Instead of directly transforming the high-order problem to pairwise, we
transform the entire problem to a comparatively small instance of submodular vertex-cover ( SVC).
The vertex-cover implicitly provides a solution to the original high-order problem. The SVC instance can itself be converted to pairwise, and standard inference techniques run much faster and are
often more effective on this compact representation.
We also show that our "sparse" high-order energies naturally appear when trying to solve hierarchical clustering problems using the algorithmic approach called fusion moves [27], also conceptually known as optimized crossover [1]. Fusion is a powerful very-large-scale neighborhood search technique [3] that in some sense generalizes \alpha-expansion. The fusion approach is not standard for the
kind of clustering objective we will consider, but we believe it is an interesting optimization strategy.
The remainder of the paper is organized as follows. Section 2 introduces the class of high-order energies we consider, then derives the transformation to SVC and the subsequent decoding. Section 3
contains experiments that suggest significant speedups, and discusses possible applications.
2 Sparse High-Order Energies Reducible to SVC
In what follows we use x to denote a vector of binary variables, x_P to denote the product \prod_{i \in P} x_i, and \bar{x}_Q to denote \prod_{i \in Q} \bar{x}_i. It will be convenient to adopt the convention that x_{\{\}} = 1 and \bar{x}_{\{\}} = 1. We always use i to denote a variable index from I, and j to denote a clique index from V.

It is well-known that any pseudo-boolean function (binary energy) can be written in the form

    F(x) = \sum_{i \in I} a_i x_i - \sum_{j \in V} b_j x_{P_j} \bar{x}_{Q_j}    (1)

where each clique j has coefficient -b_j with b_j \ge 0, and is defined over variables in sets P_j, Q_j \subseteq I. Our approach will be of practical interest only when, roughly speaking, |V| \ll |I|.
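To make this energy form concrete, here is a minimal Python sketch (ours, not from the paper; names and numbers are illustrative) that stores an energy in form (1) and minimizes it by brute force on a toy instance:

from itertools import product

def make_energy(a, cliques):
    # a: dict i -> a_i. cliques: list of (P, Q, b) with b >= 0, each encoding
    # the term -b * [x_i = 1 for all i in P] * [x_i = 0 for all i in Q].
    def F(x):  # x: dict i -> {0, 1}
        val = sum(a[i] * x[i] for i in a)
        for P, Q, b in cliques:
            if all(x[i] == 1 for i in P) and all(x[i] == 0 for i in Q):
                val -= b
        return val
    return F

# Seven variables and the clique from the running example: P = {2,3}, Q = {4,5,6}.
a = {i: (-1) ** i for i in range(1, 8)}
F = make_energy(a, [({2, 3}, {4, 5, 6}, 5.0)])
best = min(product([0, 1], repeat=7),
           key=lambda bits: F(dict(zip(range(1, 8), bits))))
print(best, F(dict(zip(range(1, 8), best))))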
For example, if x = (x_1, ..., x_7) then a clique j with P_j = {2, 3} and Q_j = {4, 5, 6} will explicitly reward the binary configuration (\cdot, 1, 1, 0, 0, 0, \cdot) by the amount b_j (depicted as b_1 in Figure 1). If there are several overlapping (and conflicting) cliques, then the minimization problem can be difficult.

A standard way to minimize F(x) would be to substitute each -b_j x_{P_j} \bar{x}_{Q_j} term with a collection of equivalent pairwise terms. In our experiments, we used the substitution

    -x_{P_j} \bar{x}_{Q_j} = -1 + \min_{y \in \{0,1\}} \Big( y + \sum_{i \in P_j} \bar{x}_i \bar{y} + \sum_{i \in Q_j} x_i \bar{y} \Big)

where y is an auxiliary variable. This is like the Type-II transformation in [31], and we found that it worked better than Type-I for our experiments. However, we aim to minimize F(x) in a novel way, so first we review the submodular vertex-cover problem.
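As a sanity check on this substitution (whose complement bars we read off from the surrounding definitions), the following brute-force sketch (ours) verifies the identity for random cliques:

from itertools import product
import random

def lhs(x, P, Q):
    return -1 if all(x[i] for i in P) and not any(x[i] for i in Q) else 0

def rhs(x, P, Q):
    def h(y):
        return (y + sum((1 - x[i]) * (1 - y) for i in P)
                  + sum(x[i] * (1 - y) for i in Q))
    return -1 + min(h(0), h(1))

random.seed(0)
n = 6
for _ in range(100):
    P = set(random.sample(range(n), 2))
    Q = set(random.sample(sorted(set(range(n)) - P), 2))
    for bits in product([0, 1], repeat=n):
        x = dict(enumerate(bits))
        assert lhs(x, P, Q) == rhs(x, P, Q)
print("substitution identity holds on all tested cases")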
2.1 Review of Submodular Vertex-Cover
The classic minimum-weighted vertex-cover (VC) problem can be stated as a 0-1 integer program where variable u_j = 1 if and only if vertex j is included in the cover.

    (VC)  minimize    \sum_{j \in V} w_j u_j                          (2)
          subject to  u_j + u_{j'} \ge 1   \forall \{j, j'\} \in E    (3)
                      u_j \in \{0, 1\}.

Without loss of generality one can assume w_j > 0 and j \ne j' for all \{j, j'\} \in E. If the graph (V, E) is bipartite, then we call the specialized problem VC-B and it can be solved very efficiently by specialized bipartite maximum flow algorithms such as [2].

A function f(x) is called submodular if f(x \vee y) + f(x \wedge y) \le f(x) + f(y) for all x, y \in \{0,1\}^V, where (x \wedge y)_j = x_j y_j and (x \vee y)_j = 1 - \bar{x}_j \bar{y}_j. A submodular function can be minimized in strongly polynomial time by combinatorial methods [17], but becomes NP-hard when subject to arbitrary covering constraints like (3).
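The lattice definition is easy to test exhaustively on a handful of variables; this small checker (ours) does exactly that, and confirms that a sum of monomials u_S with non-positive coefficients, the form that arises in Section 2.3, is submodular:

from itertools import product

def is_submodular(f, n):
    # Checks f(join) + f(meet) <= f(x) + f(y) for all x, y in {0,1}^n,
    # with meet_j = x_j * y_j and join_j = max(x_j, y_j).
    pts = list(product([0, 1], repeat=n))
    for x in pts:
        for y in pts:
            meet = tuple(a * b for a, b in zip(x, y))
            join = tuple(max(a, b) for a, b in zip(x, y))
            if f(join) + f(meet) > f(x) + f(y) + 1e-12:
                return False
    return True

# Example: negative-coefficient monomials give a submodular function.
w = {(0, 1): -2.0, (1, 2): -1.5, (0, 1, 2): -0.5}
f = lambda u: sum(ws * all(u[j] for j in S) for S, ws in w.items())
print(is_submodular(f, 3))  # True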
The submodular vertex-cover (SVC) problem generalizes VC by replacing the linear (modular) objective (2) with an arbitrary submodular objective,

    (SVC)  minimize    f(u)                                           (4)
           subject to  u_j + u_{j'} \ge 1   \forall \{j, j'\} \in E
                       u_j \in \{0, 1\}.

Iwata & Nagano [18] recently showed that when f(\cdot) \ge 0 a 2-approximation can be found in polynomial time and that this is the best constant-ratio bound achievable. It turns out that a half-integral relaxation u_j \in \{0, 1/2, 1\} (call this problem SVC-H), followed by upward rounding, gives a 2-approximation much like for standard VC. They also show how to transform any SVC-H instance into a bipartite instance of SVC (see below); this extends a classic result by Nemhauser & Trotter [28], allowing specialized combinatorial algorithms like [17] to solve the relaxation.
In the bipartite submodular vertex-cover (SVC-B) problem, the graph nodes V can be partitioned into sets J, K so the binary variables are u \in \{0,1\}^J, v \in \{0,1\}^K and we solve

    (SVC-B)  minimize    f(u) + g(v)                                  (5)
             subject to  u_j + v_k \ge 1   \forall \{j, k\} \in E
                         u_j, v_k \in \{0, 1\}   \forall j \in J, k \in K

where both f(\cdot) and g(\cdot) are submodular functions. This SVC-B formulation is a trivial extension of the construction in [18] (they assume g = f), and their proof of tractability extends easily to (5).
2.2 Solving Bipartite SVC with Min-Cut
It will be useful to note that if f and g above can be written in a special manner, SVC-B can be solved by fast s-t minimum cut instead of by [17, 15]. Suppose we have an SVC-B instance (J, K, E, f, g) where we can write submodular f and g as

    f(u) = \sum_{S \in S^0} w_S u_S,   and   g(v) = \sum_{S \in S^1} w_S v_S.    (6)

Here S^0 and S^1 are collections of subsets of J and K respectively, and typescript u_S denotes the product \prod_{j \in S} u_j throughout (as distinct from typescript u, which denotes a vector).

Proposition 1. If w_S \le 0 for all |S| \ge 2 in (6), then SVC-B reduces to s-t minimum cut.
Proof. We can define an equivalent problem over variables u_j and z_k = \bar{v}_k. With this substitution, the covering constraints become u_j \ge z_k. Since "g(v) submodular in v" implies "g(1 - v) submodular in v," letting \bar{g}(z) = g(\bar{z}) = g(v) means \bar{g}(z) is submodular as a function of z. Minimizing f(u) + \bar{g}(z) subject to u_j \ge z_k is equivalent to our original problem. Since u_j \ge z_k can be enforced by a large (submodular) penalty on assignment \bar{u}_j z_k, SVC-B is equivalent to

    minimize  f(u) + \bar{g}(z) + \sum_{(j,k) \in E} \rho \, \bar{u}_j z_k   where \rho = \infty.    (7)

When f and g take the form (6), we have \bar{g}(z) = \sum_{S \in S^1} w_S \bar{z}_S where \bar{z}_S denotes the product \prod_{k \in S} \bar{z}_k. If w_S \le 0 for all |S| \ge 2, we can build an s-t minimum cut graph corresponding to (7) by directly applying the constructions in [23, 10]. We can do this because each term has coefficient w_S \le 0 when written as u_1 \cdots u_{|S|} or \bar{z}_1 \cdots \bar{z}_{|S|}, i.e. either all complemented or all uncomplemented.
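The change of variables and the penalty argument can be checked numerically; this sketch (ours, with a large finite rho standing in for the infinite penalty) compares the constrained optimum of (5) with the unconstrained optimum of (7) on a tiny instance:

from itertools import product

J, K = [0, 1], [0, 1, 2]
E = [(0, 0), (0, 1), (1, 2)]                        # covering pairs (j, k)
def f(u): return -2.0 * u[0] * u[1] + 1.0 * u[0]    # submodular in u
def g(v): return -1.5 * v[1] * v[2] + 0.5 * v[0]    # submodular in v

def constrained_min():
    return min(f(u) + g(v)
               for u in product([0, 1], repeat=len(J))
               for v in product([0, 1], repeat=len(K))
               if all(u[j] + v[k] >= 1 for j, k in E))

def penalty_min(rho=1e6):
    def gbar(z):                                    # gbar(z) = g(v) with z_k = 1 - v_k
        return g(tuple(1 - zk for zk in z))
    return min(f(u) + gbar(z) + rho * sum((1 - u[j]) * z[k] for j, k in E)
               for u in product([0, 1], repeat=len(J))
               for z in product([0, 1], repeat=len(K)))

print(constrained_min(), penalty_min())             # both print the same value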
2.3 Transforming F(x) to SVC
To get a sense for how our transformation works, see Figure 1. The transformation is reminiscent of
the binary dual of a Constraint Satisfaction Problem (CSP) [37]. The vertex-cover construction of
[4] is actually a special linear (modular) case of our transformation (details in Proposition 2).
Figure 1: Left: factor graph F(x) = \sum_{i=1}^{7} a_i x_i - \sum_{j=1}^{3} b_j x_{P_j} \bar{x}_{Q_j} with, e.g., P_1 = {2, 3} and Q_1 = {4, 5, 6}. A small white square indicates a_i > 0, a black square a_i < 0. A hollow edge connecting x_i to factor j indicates i \in P_j, and a filled-in edge indicates i \in Q_j. Right: factor graph of our corresponding SVC instance. High-order factors of the original problem, shown with gray squares on the left, are transformed into variables of the SVC problem. Covering constraints are shown as dashed lines. Two pairwise factors are formed with coefficients w_{{1,3}} = -a_3 and w_{{1,2}} = a_4 + a_5, both \le 0.
Theorem 1. For any F(x) there exists an instance of SVC such that an optimum x* \in \{0,1\}^I for F can be computed from an optimal vertex-cover u* \in \{0,1\}^V.

Proof. First we give the construction for SVC instance (V, E, f). Introduce auxiliary binary variables u \in \{0,1\}^V where \bar{u}_j = x_{P_j} \bar{x}_{Q_j}, so that F(x, u) = \sum_{i \in I} a_i x_i - \sum_{j \in V} b_j \bar{u}_j. Because each b_j \ge 0, minimizing F(x) is equivalent to the 0-1 integer program with non-linear constraints

    minimize    F(x, u)
    subject to  \bar{u}_j \le x_{P_j} \bar{x}_{Q_j}   \forall j \in V.    (8)

Inequality (8) is sufficient if b_j \ge 0 because, for any fixed x, equality \bar{u}_j = x_{P_j} \bar{x}_{Q_j} holds for some u that minimizes F(x, u).
We try to formulate a minimization problem solely over u. As a consequence of (8) we have u_j = 0 \Rightarrow x_{P_j} = 1, x_{Q_j} = 0. (We use typescript x_S to denote the vector (x_i)_{i \in S}, whereas x_S denotes a product, i.e. a scalar value.) Notice that, when some P_j and Q_{j'} overlap, not all u \in \{0,1\}^V can be feasible with respect to assignments x \in \{0,1\}^I. For each i \in I, let us collect the cliques that i participates in: define sets J_i, K_i \subseteq V where J_i = { j | i \in P_j } and K_i = { j | i \in Q_j }. We show that u can be feasible if and only if u_{J_i} + u_{K_i} \ge 1 for all i \in I, where u_S denotes a product. In other words, u can be feasible if and only if, for each i,

    - u_j = 0 for some j \in J_i  \Rightarrow  u_k = 1  \forall k \in K_i
    - u_k = 0 for some k \in K_i  \Rightarrow  u_j = 1  \forall j \in J_i.    (9)

(\Rightarrow) If \bar{u}_j \le x_{P_j} \bar{x}_{Q_j} for all j \in V, then having u_{J_i} + u_{K_i} \ge 1 is necessary: if both u_{J_i} = 0 and u_{K_i} = 0 for any i, it would mean there exists j \in J_i and k \in K_i for which x_{P_j} = 1 and x_{Q_k} = 0, contradicting any unique assignment to x_i.

(\Leftarrow) If u_{J_i} + u_{K_i} \ge 1 for all i \in I, then we can always choose some x \in \{0,1\}^I for which every \bar{u}_j \le x_{P_j} \bar{x}_{Q_j}. It will be convenient to choose a minimum cost assignment for each x_i, subject to the constraints u_{J_i} = 0 \Rightarrow x_i = 1 and u_{K_i} = 0 \Rightarrow x_i = 0. If both u_{J_i} = u_{K_i} = 1 then x_i could be
either 0 or 1, so choose the better one, giving

    x(u)_i = { 0           if u_{K_i} = 0
             { 1           if u_{J_i} = 0                            (10)
             { [a_i < 0]   otherwise.

The assignment x(u) is feasible with respect to (8) because for any \bar{u}_j = 1 we have x(u)_{P_j} = 1 and x(u)_{Q_j} = 0.
We have completed the proof that u can be feasible if and only if u_{J_i} + u_{K_i} \ge 1. To express minimization of F solely in terms of u, first write (10) in the equivalent form

    x(u)_i = { u_{K_i}       if a_i < 0
             { 1 - u_{J_i}   otherwise.                              (11)

Again, this definition of x(u) minimizes F(x, u) over all x satisfying inequality (8). Use (11) to write the new SVC objective f(u) = F(x(u), u), which becomes
    f(u) = \sum_{i : a_i > 0} a_i (1 - u_{J_i}) + \sum_{i : a_i < 0} a_i u_{K_i} - \sum_{j \in V} b_j (1 - u_j)
         = \sum_{i : a_i > 0} -a_i u_{J_i} + \sum_{i : a_i < 0} a_i u_{K_i} + \sum_{j \in V} b_j u_j + const.    (12)

To collect coefficients in the first two summands of (12), we must group them by each unique clique that appears. We define the set S = { S \subseteq V | (\exists i : J_i = S) \vee (\exists i : K_i = S) } and write

    f(u) = \sum_{S \in S} w_S u_S + const    (13)

where

    w_S = \sum_{i : a_i > 0, J_i = S} -a_i  +  \sum_{i : a_i < 0, K_i = S} a_i   ( + b_j if S = {j} ).    (14)

Since the high-order terms u_S in (13) have non-positive coefficients w_S \le 0, f(u) is submodular [5]. Also note that for each i at most one of J_i or K_i contributes to the sum, so there are at most |S| \le |I| unique terms u_S with w_S \ne 0. If |S|, |V| \ll |I| then our SVC instance will be small.

Finally, to ensure (9) holds we add a covering constraint u_j + u_k \ge 1 whenever there exists i such that j \in J_i, k \in K_i. For this SVC instance, an optimal covering u minimizes F(x(u), u).
The construction in Theorem 1 suggests the entire minimization procedure below.

MINIMIZE-BY-SVC(F)    where F is a pseudo-boolean function in the form of (1)

    w_{{j}} := b_j   \forall j \in V
    for i \in I do
        if a_i > 0 then       w_{J_i} := w_{J_i} - a_i            (distribute a_i to high-order SVC coefficients)
        else if a_i < 0 then  w_{K_i} := w_{K_i} + a_i            (index sets J_i and K_i defined in Theorem 1)
        E := E \cup {{j, k}}  \forall j \in J_i, k \in K_i        (add covering constraints to enforce u_{J_i} + u_{K_i} \ge 1)
    let f(u) = \sum_{S \in S} w_S u_S                             (define SVC objective over clique indices V)
    u* := SOLVE-SVC(V, E, f)                                      (solve with BP, QPBO, Iwata, etc.)
    return x(u*)                                                  (decode the covering as in (10))
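For illustration, here is a compact Python rendering of this procedure (ours; the SOLVE-SVC step is replaced by exhaustive enumeration, so it only runs on toy instances). It builds the weights and covering constraints as above, decodes via (10), and checks the result against brute-force minimization of F, as Theorem 1 guarantees:

from itertools import product

def minimize_by_svc(a, cliques):
    # a: dict i -> a_i; cliques: dict j -> (P_j, Q_j, b_j) with b_j >= 0.
    I, V = sorted(a), sorted(cliques)
    Ji = {i: frozenset(j for j in V if i in cliques[j][0]) for i in I}
    Ki = {i: frozenset(j for j in V if i in cliques[j][1]) for i in I}
    w = {frozenset([j]): cliques[j][2] for j in V}           # w_{j} := b_j
    for i in I:                                              # distribute each a_i
        if a[i] > 0 and Ji[i]:
            w[Ji[i]] = w.get(Ji[i], 0.0) - a[i]
        elif a[i] < 0 and Ki[i]:
            w[Ki[i]] = w.get(Ki[i], 0.0) + a[i]
    E = {(j, k) for i in I for j in Ji[i] for k in Ki[i]}    # covering constraints
    def fsvc(u):                                             # f(u) = sum_S w_S u_S
        return sum(ws for S, ws in w.items() if all(u[j] for j in S))
    feasible = (dict(zip(V, bits)) for bits in product([0, 1], repeat=len(V)))
    u = min((uu for uu in feasible
             if all(uu[j] + uu[k] >= 1 for j, k in E)), key=fsvc)
    x = {}                                                   # decode as in (10)
    for i in I:
        if any(u[k] == 0 for k in Ki[i]):
            x[i] = 0
        elif any(u[j] == 0 for j in Ji[i]):
            x[i] = 1
        else:
            x[i] = int(a[i] < 0)
    return x

def F(x, a, cliques):
    return (sum(a[i] * x[i] for i in a)
            - sum(b for P, Q, b in cliques.values()
                  if all(x[i] for i in P) and not any(x[i] for i in Q)))

a = {1: 3.0, 2: -1.0, 3: 2.0, 4: -2.0}
cliques = {0: ({1, 2}, {3}, 4.0), 1: ({3}, {2, 4}, 3.0)}
x = minimize_by_svc(a, cliques)
brute = min(F(dict(zip(sorted(a), bits)), a, cliques)
            for bits in product([0, 1], repeat=len(a)))
assert F(x, a, cliques) == brute
print(x, F(x, a, cliques))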
One reviewer suggested an extension that scales better with the number of overlapping cliques. The idea is to formulate SVC over the elements of S rather than V. Specifically, let y \in \{0,1\}^S and use the submodular objective f(y) = \sum_{S \in S} ( w_S y_S + \sum_{j \in S} (b_j + 1) y_S \bar{y}_{{j}} ), where the inner sum ensures y_S = \prod_{j \in S} y_{{j}} at a local minimum because w_{{j}} \le b_j. For each unique pair {J_i, K_i}, add a covering constraint y_{J_i} + y_{K_i} \ge 1 (instead of O(|J_i| \cdot |K_i|) constraints). An optimal covering y* of S then gives an optimal covering of V by assigning u_j = y*_{{j}}. Here we use the original construction, and still report significant speedups. See [8] for discussion of efficient implementation, and an alternate proof of Theorem 1 based on LP relaxation.
2.4 Special Cases of Note
Proposition 2. If {P_j}_{j \in V} are disjoint and, separately, {Q_j}_{j \in V} are disjoint (equivalently each |J_i|, |K_i| \le 1), then the SVC instance in Theorem 1 reduces to standard VC.

Proof. Each S \in S in objective (13) must be S = {j} for some j \in V. The objective then becomes f(u) = \sum_{j \in V} w_{{j}} u_j + const, a form of standard VC.

Proposition 2 shows that the main result of [4] is a special case of our Theorem 1 when J_i = {j} and K_i = {k} with j, k determined by the two labelings being "fused". In Section 3, this generalization of [4] will allow us to apply a similar fusion-based algorithm to hierarchical clustering problems.
Proposition 3. If each particular j \in V has either P_j = {} or Q_j = {}, then the construction in Theorem 1 is an instance of SVC-B. Moreover, it is reducible to s-t minimum cut.

Proof. In this case J_i is disjoint from K_{i'} for any i, i' \in I, so the sets J = {j : |P_j| \ge 1} and K = {j : |Q_j| \ge 1} are disjoint. Since E contains pairs (j, k) with j \in J and k \in K, the graph (V, E) is bipartite. By the disjointness of any J_i and K_{i'}, the unique clique sets S can be partitioned into S^0 = {S \subseteq J | \exists i : J_i = S} and S^1 = {S \subseteq K | \exists i : K_i = S} so that (13) can be written as in Proposition 1 and thereby reduced to s-t minimum cut.

Corollary 1. If sets {P_j}_{j \in V} and {Q_j}_{j \in V} satisfy the conditions of Propositions 2 and 3, then minimizing F(x) reduces to an instance of VC-B and can be solved by bipartite maximum flow.

We should note that even though SVC has a 2-approximation algorithm [18], this does not give us a 2-approximation for minimizing F in general. Even if F(x) \ge 0 for all x, it does not imply f(u) \ge 0 for configurations of u that violate the covering constraints, as would be required.
3 Applications
Even though any pseudo-boolean function can be expressed in form (1), many interesting problems
would require an exponential number of terms to be expressed in that form. Only certain specific
applications will naturally have |V| \ll |I|, so this is the main limitation of our approach. There may be applications in high-order segmentation. For example, when P^n-Potts potentials [19] are incorporated into \alpha-expansion, the resulting expansion step contains high-order terms that are compact in this form; in the absence of pairwise CRF terms, Proposition 3 would apply.

The \alpha-expansion algorithm has also been extended to optimize the facility location objective [7]
commonly used for clustering (e.g. [24]). The resulting high-order terms inside the expansion step
[Figure 2 (plot): final energies normalized between 0 (best lower bound) and 1 (baseline ICM), plotted for \lambda = 1, 2, 4, 8, 16; series: ICM, BP, TRWS, MPLP, QPBO, QPBOP, QPBOI and SVC-ICM, SVC-BP, SVC-TRWS, SVC-MPLP, SVC-QPBO, SVC-QPBOP, SVC-QPBOI, SVC-Iwata; gaps annotated above each group: lb+5, +7300, +20000, +56000, +120000.]
Figure 2: Effectiveness of each algorithm as the strength of the high-order coefficients is increased by a factor \lambda \in {1, ..., 16}. For a fixed \lambda, the final energy of each algorithm was normalized between 0.0 (best lower bound) and 1.0 (baseline ICM energy); the true energy gap between lower bound and baseline is indicated at top, e.g. for \lambda = 1 the "lb+5" means ICM was typically within 5 of the lower bound.
also take the form (1) (in fact, Corollary 1 applies here); with no need to build the "full" high-order graph, this would allow \alpha-expansion to work as a fast alternative to the classic greedy algorithm
for facility location, very similar to the fusion-based algorithm in [4]. However, in Section 3.2 we
show that our generalized transformation allows for a novel way to optimize a hierarchical facility
location objective. We will use a recent geometric image parsing model [36] as a specific example.
First, Section 3.1 compares a number of methods on synthetic instances of energy (1).
3.1 Results on Synthetic Instances
Each instance is a function F(x) where x represents a 100 x 100 grid of binary variables with random unary coefficients a_i \in [-10, 10]. Each instance also has |J| = 50 high-order cliques with b_j \in [250\lambda, 500\lambda] (we will vary \lambda), where variable sets P_j and Q_j each cover a random n_j x n_j and m_j x m_j region respectively (here the region sizes n_j, m_j \in {10, ..., 15} are chosen randomly). If P_j and Q_j are not disjoint, then either P_j := P_j \ Q_j or Q_j := Q_j \ P_j, as determined by a coin flip.
We tested the following algorithms: BP [30], TRW-S [21], MPLP [33], QPBO [14], and extensions
QPBO-P and QPBO-I [32]. For BP we actually used the implementation provided by [21] which is
very fast but, we should note, does not support message-damping; convergence of BP may be more
reliable if this were supported. Algorithms were configured as follows: BP for 25 iterations (more
did not help); TRW-S for 800 iterations (epsilon 1); MPLP for 2000 initial iterations + 20 clusters
added + 100 iterations per tightening; QPBO-I with 5 random improve steps. We ran MPLP for a
particularly long time to ensure it had ample time to tighten and converge; indeed, it always yielded
the best lower bound. We also tested MINIMIZE-BY-SVC by applying each of these algorithms to
solve the resulting SVC problem, and in this case also tried the Iwata-Nagano construction [18].
To transform high-order potentials to quadratic, we report results using Type-II binary reduction [31]
because for TRW-S/MPLP it dominated the Type-I reduction in our experiments, and for BP and the
others it made no difference. This runs counter to the conventional use of "number of supermodular terms" as an estimate of difficulty: the Type-I reduction would generate one supermodular edge per high-order term, whereas Type-II generates |P_j| supermodular edges for each term (\sum_{i \in P_j} \bar{x}_i \bar{y}).
One minor detail is how to evaluate the "partial" labelings returned by QPBO and QPBO-P. In the case of minimizing F directly, we simply assigned such variables x_i = [a_i < 0]. In the case of MINIMIZE-BY-SVC we included all unlabeled nodes in the cover, which means a variable x_i with u_{J_i} and u_{K_i} all unlabeled will similarly be assigned x_i = [a_i < 0].
Figure 2 shows the relative performance of each algorithm, on average. When \lambda = 1 the high-order coefficients are relatively weak compared to the unary terms, so even ICM succeeds at finding a near-optimal energy. For larger \lambda the high-order terms become more important, and we make a number of observations:

    - ICM, BP, TRW-S, MPLP all perform much better when applied to the SVC problem.
    - QPBO-based methods do not perform better when applied to the SVC problem.
    - QPBO-I consistently gives good results; BP also gives good results if applied to SVC.
    - The Iwata-Nagano construction is effectively the same as QPBO applied to SVC.
We also observed that the TRW-S lower bound was the same with or without transformation to SVC, but convergence took many fewer iterations when applied to SVC. In principle, TRW on
binary problems solves the same LP relaxation as QPBO [22]. The TRW-S code finds much better
solutions because it uses the final messages as hints to decode a good solution, unlike for QPBO.
Table 1 gives typical running times for each of the cases in Figure 2 on a 2.66 GHz Intel Core2
processor. Code was written in C++, but the SVC transformation was not optimized at all. Still,
SVC-QPBOI is 20 times faster than QPBOI while giving similar energies on average. The overall results suggest that SVC-BP or SVC-QPBOI are the fastest ways to find a low-energy solution (bold in Table 1) on problems containing many conflicting high-order terms of the form (1). Running times were relatively consistent for all \lambda \ge 2.
Table 1: Typical running times of each algorithm. First row uses Type-II binary reduction on F, then directly runs each algorithm. Second row first transforms to SVC, does Type-II reduction, runs the algorithm, and decodes the result; times shown include all these steps.

                            BP       TRW-S    MPLP     QPBO     QPBO-P   QPBO-I   Iwata
    directly minimize F     22ms     670ms    25min    30ms     25sec    140ms    N/A
    MINIMIZE-BY-SVC(F)      5.2ms    19ms     80sec    5.4ms    99ms     7.2ms    5ms

3.2 Application: Hierarchical Model-Estimation / Clustering
In clustering and multi-model estimation, it is quite common to either explicitly constrain the number of clusters or, more relevant to our work, to penalize the number of clusters in a solution.
Penalizing the number of clusters is a kind of complexity penalty on the solution. Recent examples
include [24, 7, 26], but the basic idea has been used in many contexts over a long period. A classic
operations research problem with the same fundamental components is facility location: the clients
(data points) must be assigned to a nearby facility (cluster) but each facility costs money to open.
This can be thought of as a labeling problem, where each data point is a variable, and there is a label
for each cluster.
For hard optimization problems there is a particular algorithmic approach called fusion [27] or optimized crossover [1]. The basic idea is to take two candidate solutions (e.g. two attempts at clustering), and to "fuse" the best parts of each solution, effectively stitching them together. To see this more concretely, imagine a labeling problem where we wish to minimize E(l) where l = (l_i)_{i \in I} is a vector of label assignments. If l^0 is the first candidate labeling, and l^1 is the second candidate labeling, a fusion operation seeks a binary string x* such that the crossover labeling l(x) = (l_i^{x_i})_{i \in I} minimizes E(l(x)). In other words, x* identifies the best possible "stitching" of the two candidate solutions with respect to the energy.
In [4] we derived a fusion operation based on the greedy formulation of facility location, and found
that the subproblem reduced to minimum-weighted vertex-cover. We will now show that the fusion
operation for hierarchical facility location objectives requires minimizing an energy of the form (1),
which we have already shown can be transformed to a submodular vertex-cover problem. Givoni
et al. [12] recently proposed a message-passing scheme for hierarchical facility location, with experiments on synthetic and HIV strain data. We focus on more a computer vision-centric application:
detecting a hierarchy of lines and vanishing points in images using the geometric image parsing
objective proposed by Tretyak et al. [36].
The hierarchical energy proposed by [36] contains five "layers": edges, line segments, lines, vanishing points, and horizon. Each layer provides evidence for subsequent (higher) layers, and at each level there is a complexity cost that regulates how much evidence is needed to detect a line, to detect
a vanishing point, etc. For simplicity we only model edges, lines, and vanishing points, but our
fusion-based framework easily extends to the full model. The purpose of our experiments is, first and foremost, to demonstrate that MINIMIZE-BY-SVC speeds up inference and, secondly, to suggest that a hierarchical clustering framework based on fusion operations (similar to non-hierarchical
[4]) is an interesting and potentially worthwhile alternative to the greedy and local optimization used
in state-of-the-art methods like [36].
Let {y_i}_{i \in I} be a set of oriented edges y_i = (x_i, y_i, \theta_i) where (x, y) is position in the image and \theta is an angle; these bottom-level features are generated by a Canny edge detector. Let L be a set of candidate lines, and let V be a set of candidate vanishing points. These sets are built by randomly sampling: one oriented edge to generate each candidate line, and pairs of lines to generate each candidate vanishing point. Each line j \in L is associated with one vanishing point k_j \in V. (If a line passes close to multiple vanishing points, a copy of the line is made for each.) We seek a labeling l where l_i \in L \cup {\emptyset} identifies the line (and vanishing point) that edge i belongs to, or assigns the outlier label \emptyset. Let D_i(j) = dist_j(x_i, y_i) + dist_j(\theta_i) denote the spatial distance and angular deviation of edge y_i to line j, and let the outlier cost be D_i(\emptyset) = const. Similarly, let D_j = dist_j(k_j) be the distance of line j to its associated vanishing point projected onto the Gaussian sphere (see [36]). Finally let C_l and C_v denote positive constants that penalize the detection of a line and a vanishing
point respectively. The hierarchical energy we minimize is

    E(l) = \sum_{i \in I} D_i(l_i) + \sum_{j \in L} (C_l + D_j) \, \delta[\exists i : l_i = j] + \sum_{k \in V} C_v \, \delta[\exists i : k_{l_i} = k].    (15)
This energy penalizes the number of unique lines, and the number of unique vanishing points that labeling l depends on. Given two candidate labelings l^0, l^1, writing the fusion energy for (15) gives

    E(l(x)) = \sum_{i \in I} [ D_i^0 + (D_i^1 - D_i^0) x_i ] + \sum_{j \in L} (C_l + D_j)(1 - x_{P_j} \bar{x}_{Q_j}) + \sum_{k \in V} C_v (1 - x_{P_k} \bar{x}_{Q_k})    (16)

where P_j = { i | l_i^0 = j }, Q_j = { i | l_i^1 = j }, and P_k = { i | k_{l_i^0} = k }, Q_k = { i | k_{l_i^1} = k }. Notice that the sets {P_j} are disjoint with each other, but each P_j is nested in the subset P_{k_j}, so overall Proposition 2 does not apply, and so neither does the algorithm in [4].
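To illustrate how (16) instantiates form (1), the following sketch (ours; all inputs are made up) assembles the clique sets (P, Q) and rewards b from two candidate labelings; the resulting triples, together with the unary terms, could then be handed to MINIMIZE-BY-SVC:

def fusion_cliques(l0, l1, lines, vp_of, Cl, Cv, D_line):
    # l0, l1: labels per edge i (line id or None for outlier); vp_of: line -> vanishing
    # point; D_line: line -> D_j. Returns (P, Q, b) triples for eq (16): b = C_l + D_j
    # per line and b = C_v per vanishing point (additive constants dropped).
    cliques = []
    for j in lines:
        P = {i for i, l in enumerate(l0) if l == j}
        Q = {i for i, l in enumerate(l1) if l == j}
        cliques.append((P, Q, Cl + D_line[j]))
    for k in {vp_of[j] for j in lines}:
        P = {i for i, l in enumerate(l0) if l is not None and vp_of[l] == k}
        Q = {i for i, l in enumerate(l1) if l is not None and vp_of[l] == k}
        cliques.append((P, Q, Cv))
    return cliques

# Toy example: 4 edges, 2 candidate lines sharing one vanishing point.
print(fusion_cliques(l0=[0, 0, 1, None], l1=[0, 1, 1, 1],
                     lines=[0, 1], vp_of={0: 'v', 1: 'v'},
                     Cl=10.0, Cv=20.0, D_line={0: 1.0, 1: 2.0}))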
For each image we used 10,000 edges, generated 8,000 candidate lines and 150 candidate vanishing
points. We then generated 4 candidate labelings, each by allowing vanishing points to be detected
in randomized order, and their associated lines to be detected in greedy order, and then we fused
the labelings together by minimizing (16). Overall inference with QPBOI took 2-6 seconds per image, whereas SVC-QPBOI took 0.5-0.9 seconds per image, a relative speedup of 4-6 times.
The simplified model is enough to show that hierarchical clustering can be done in this new and
potentially powerful way. As argued in [27], fusion is a robust approach because it combines the
strengths, quite literally, of all methods used to generate candidates.
Figure 3: (Best seen in color.) Edge features color-coded by their detected vanishing point. Not
shown are the detected lines that make up the intermediate layer of inference (similar to [36]).
Images taken from York [9] and Eurasia [36] datasets.
Acknowledgements We thank Danny Tarlow for helpful discussion regarding MPLP, and an anonymous
reviewer for suggesting a more efficient way to enforce covering constraints(!). This work supported by NSERC
Discovery Grant R3584A02, Canadian Foundation for Innovation (CFI), and Early Researcher Award (ERA).
References
[1] Aggarwal, C.C., Orlin, J.B., & Tai, R.P. (1997) Optimized Crossover for the Independent Set Problem. Operations Research 45(2):226-234.
[2] Ahuja, R.K., Orlin, J.B., Stein, C. & Tarjan, R.E. (1994) Improved algorithms for bipartite network flow. SIAM Journal on Computing 23(5):906-933.
[3] Ahuja, R.K., Ergun, O., Orlin, J.B., & Punnen, A.P. (2002) A survey of very large-scale neighborhood search techniques. Discrete Applied Mathematics 123(1-3):75-202.
[4] Delong, A., Veksler, O. & Boykov, Y. (2012) Fast Fusion Moves for Multi-Model Estimation. European Conference on Computer Vision.
[5] Boros, E. & Hammer, P.L. (2002) Pseudo-Boolean Optimization. Discrete Applied Mathematics 123(1-3):155-225.
[6] Boykov, Y., Veksler, O., & Zabih, R. (2001) Fast Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(11):1222-1239.
[7] Delong, A., Osokin, A., Isack, H.N., & Boykov, Y. (2012) Fast Approximate Energy Minimization with Label Costs. International Journal of Computer Vision 96(1):1-27. Earlier version in CVPR 2010.
[8] Delong, A., Veksler, O., Osokin, A., & Boykov, Y. (2012) Minimizing Sparse High-Order Energies by Submodular Vertex-Cover. Technical Report, Western University.
[9] Denis, P., Elder, J., & Estrada, F. (2008) Efficient Edge-Based Methods for Estimating Manhattan Frames in Urban Imagery. European Conference on Computer Vision.
[10] Freedman, D. & Drineas, P. (2005) Energy minimization via graph cuts: settling what is possible. IEEE Conference on Computer Vision and Pattern Recognition.
[11] Gallagher, A.C., Batra, D., & Parikh, D. (2011) Inference for order reduction in Markov random fields. IEEE Conference on Computer Vision and Pattern Recognition.
[12] Givoni, I.E., Chung, C., & Frey, B.J. (2011) Hierarchical Affinity Propagation. Uncertainty in Artificial Intelligence.
[13] Gupta, R., Diwan, A., & Sarawagi, S. (2007) Efficient inference with cardinality-based clique potentials. International Conference on Machine Learning.
[14] Hammer, P.L., Hansen, P., & Simeone, B. (1984) Roof duality, complementation and persistency in quadratic 0-1 optimization. Mathematical Programming 28:121-155.
[15] Hochbaum, D.S. (2010) Submodular problems - approximations and algorithms. Arxiv preprint arXiv:1010.1945.
[16] Iwata, S., Fleischer, L. & Fujishige, S. (2001) A combinatorial, strongly polynomial-time algorithm for minimizing submodular functions. Journal of the ACM 48:761-777.
[17] Iwata, S. & Orlin, J.B. (2009) A simple combinatorial algorithm for submodular function minimization. ACM-SIAM Symposium on Discrete Algorithms.
[18] Iwata, S. & Nagano, K. (2009) Submodular Function Minimization under Covering Constraints. IEEE Symposium on Foundations of Computer Science.
[19] Kohli, P., Kumar, M.P. & Torr, P.H.S. (2007) P^3 & Beyond: Solving Energies with Higher Order Cliques. IEEE Conference on Computer Vision and Pattern Recognition.
[20] Kolmogorov, V. (2010) Minimizing a sum of submodular functions. Arxiv preprint arXiv:1006.1990.
[21] Kolmogorov, V. (2006) Convergent Tree-Reweighted Message Passing for Energy Minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10):1568-1583.
[22] Kolmogorov, V., & Wainwright, M.J. (2005) On the optimality of tree-reweighted max-product message-passing. Uncertainty in Artificial Intelligence.
[23] Kolmogorov, V. & Zabih, R. (2004) What Energy Functions Can Be Optimized via Graph Cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence 26(2):147-159.
[24] Komodakis, N., Paragios, N., & Tziritas, G. (2008) Clustering via LP-based Stabilities. Neural Information Processing Systems.
[25] Komodakis, N., & Paragios, N. (2009) Beyond pairwise energies: Efficient optimization for higher-order MRFs. IEEE Conference on Computer Vision and Pattern Recognition.
[26] Ladicky, L., Russell, C., Kohli, P., & Torr, P.H.S. (2010) Graph Cut based Inference with Co-occurrence Statistics. European Conference on Computer Vision.
[27] Lempitsky, V., Rother, C., Roth, S., & Blake, A. (2010) Fusion Moves for Markov Random Field Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(9):1392-1405.
[28] Nemhauser, G.L. and Trotter, L.E. (1975) Vertex packings: Structural properties and algorithms. Mathematical Programming 8(1):232-248.
[29] Osokin, A., & Vetrov, D. (2012) Submodular relaxations for MRFs with high-order potentials. HiPot: ECCV Workshop on Higher-Order Models and Global Constraints in Computer Vision.
[30] Pearl, J. (1988) Fusion, propagation, and structuring in belief networks. Artificial Intelligence 29(3):251-288.
[31] Rother, C., Kohli, P., Feng, W., & Jia, J. (2009) Minimizing sparse higher order energy functions of discrete variables. IEEE Conference on Computer Vision and Pattern Recognition.
[32] Rother, C., Kolmogorov, V., Lempitsky, V., & Szummer, M. (2007) Optimizing Binary MRFs via Extended Roof Duality. IEEE Conference on Computer Vision and Pattern Recognition.
[33] Sontag, D., Meltzer, T., Globerson, A., Jaakkola, T., & Weiss, Y. (2008) Tightening LP relaxations for MAP using message passing. Uncertainty in Artificial Intelligence.
[34] Tarlow, D., Givoni, I.E., & Zemel, R.S. (2010) HOP-MAP: Efficient message passing with high order potentials. International Conference on Artificial Intelligence and Statistics.
[35] Tarlow, D., & Zemel, R. (2012) Structured Output Learning with High Order Loss Functions. International Conference on Artificial Intelligence and Statistics.
[36] Tretyak, E., Barinova, O., Kohli, P., & Lempitsky, V. (2011) Geometric Image Parsing in Man-Made Environments. International Journal of Computer Vision 97(3):305-321.
[37] Tsang, E. (1993) Foundations of constraint satisfaction. Academic Press, London.
[38] Werner, T. (2008) High-arity Interactions, Polyhedral Relaxations, and Cutting Plane Algorithm for Soft Constraint Optimisation (MAP-MRF). IEEE Conference on Computer Vision and Pattern Recognition.
Nonparametric Bayesian
Inverse Reinforcement Learning
for Multiple Reward Functions
Jaedeug Choi and Kee-Eung Kim
Department of Computer Science
Korea Advanced Institute of Science and Technology
Daejeon 305-701, Korea
[email protected], [email protected]
Abstract
We present a nonparametric Bayesian approach to inverse reinforcement learning
(IRL) for multiple reward functions. Most previous IRL algorithms assume that
the behaviour data is obtained from an agent who is optimizing a single reward
function, but this assumption is hard to guarantee in practice. Our approach is
based on integrating the Dirichlet process mixture model into Bayesian IRL. We
provide an efficient Metropolis-Hastings sampling algorithm utilizing the gradient
of the posterior to estimate the underlying reward functions, and demonstrate that
our approach outperforms previous ones via experiments on a number of problem
domains.
1 Introduction
Inverse reinforcement learning (IRL) aims to find the agent's underlying reward function given the
behaviour data and the model of environment [1]. IRL algorithms often assume that the behaviour
data is from an agent who behaves optimally without mistakes with respect to a single reward function. From the Markov decision process (MDP) perspective, the IRL can be defined as the problem
of finding the reward function given the trajectory data of an optimal policy, consisting of state-action histories. Under this assumption, a number of studies on IRL have appeared in the literature [2, 3, 4, 5]. In addition, IRL has been applied to various practical problems that include inferring taxi drivers' route preferences from their GPS data [6], estimating patients' preferences to
determine the optimal timing of living-donor liver transplants [7], and implementing simulated users
to assess the quality of dialogue management systems [8].
In practice, the behaviour data is often gathered collectively from multiple agents whose reward
functions are potentially different from each other. The amount of data generated from a single
agent may be severely limited, and hence we may suffer from the sparsity of data if we try to infer
the reward function individually. Moreover, even when we have enough data from a single agent,
the reward function may change depending on the situation.
However, most of the previous IRL algorithms assume that the behaviour data is generated by a
single agent optimizing a fixed reward function, although there are a few exceptions that address
IRL for multiple reward functions. Dimitrakakis and Rothkopf [9] proposed a multi-task learning
approach, generalizing the Bayesian approach to IRL [4]. In this work, the reward functions are
individually estimated for each trajectory, which are assumed to share a common prior. Other than
the common prior assumption, there is no effort to group trajectories that are likely to be generated
from the same or similar reward functions. On the other hand, Babes-Vroman et al. [10] took a more
direct approach that combines EM clustering with IRL algorithm. The behaviour data are clustered
based on the inferred reward functions, where the reward functions are defined per cluster. However,
the number of clusters (hence the number of reward functions) has to be specified as a parameter in
order to use the approach.
In this paper, we present a nonparametric Bayesian approach using the Dirichlet process mixture
model in order to address the IRL problem with multiple reward functions. We develop an efficient
Metropolis-Hastings (MH) sampler utilizing the gradient of the reward function posterior to infer
reward functions from the behaviour data. In addition, after completing IRL on the behaviour data,
we can efficiently estimate the reward function for a new trajectory by computing the mean of the
reward function posterior given the pre-learned results.
2 Preliminaries
We assume that the environment is modeled as an MDP \langle S, A, T, R, \gamma, b_0 \rangle where: S is the finite set of states; A is the finite set of actions; T(s, a, s') is the state transition probability of changing to state s' from state s when action a is taken; R(s, a) is the immediate reward of executing action a in state s; \gamma \in [0, 1) is the discount factor; b_0(s) denotes the probability of starting in state s. For notational convenience, we use the vector r = [r_1, ..., r_D] to denote the reward function.^1

A policy is a mapping \pi : S \to A. The value of policy \pi is the expected discounted return of executing the policy, defined as V^\pi = E[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) | b_0, \pi ]. The value function of policy \pi for each state s is computed by V^\pi(s) = R(s, \pi(s)) + \gamma \sum_{s' \in S} T(s, \pi(s), s') V^\pi(s'), so that the value is calculated by V^\pi = \sum_{s \in S} b_0(s) V^\pi(s). Similarly, the Q-function is defined as Q^\pi(s, a) = R(s, a) + \gamma \sum_{s' \in S} T(s, a, s') V^\pi(s'). Given an MDP, the agent's objective is to execute an optimal policy \pi^* that maximizes the value function for all the states, which should satisfy the Bellman optimality equation: V^*(s) = \max_{a \in A} [ R(s, a) + \gamma \sum_{s' \in S} T(s, a, s') V^*(s') ].
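As a concrete reference for these definitions, this small Python sketch (ours, not from the paper) evaluates a deterministic policy exactly by solving the linear system V^\pi = R^\pi + \gamma T^\pi V^\pi:

import numpy as np

def policy_value(T, R, pi, gamma, b0):
    # T: (S,A,S) transition tensor; R: (S,A) rewards; pi: (S,) deterministic actions.
    # Returns (V_pi(s) for all s, scalar value under start distribution b0).
    S = T.shape[0]
    T_pi = T[np.arange(S), pi]          # row s is T(s, pi(s), .)
    R_pi = R[np.arange(S), pi]
    V = np.linalg.solve(np.eye(S) - gamma * T_pi, R_pi)
    return V, b0 @ V

# Toy 2-state, 2-action MDP (made-up numbers).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
V, v = policy_value(T, R, pi=np.array([0, 1]), gamma=0.95, b0=np.array([1.0, 0.0]))
print(V, v)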
We assume that the agent's behavior data is generated by executing an optimal policy with some unknown reward function(s) R, given as the set X of M trajectories, where the m-th trajectory is an H-step sequence of state-action pairs: X_m = {(s_{m,1}, a_{m,1}), (s_{m,2}, a_{m,2}), ..., (s_{m,H}, a_{m,H})}.^2
2.1 Bayesian Inverse Reinforcement Learning (BIRL)
Ramachandran and Amir [4] proposed a Bayesian approach to IRL with the assumption that the behaviour data is generated from a single reward function. The prior encodes the reward function preference and the likelihood measures the compatibility of the reward function with the data.

Assuming that the reward function entries are independently distributed, the prior is defined as P(r) = \prod_{d=1}^{D} P(r_d). We can use various distributions for the reward prior. For instance, the uniform distribution can be used if we have no knowledge or preference on rewards other than their range, and the normal or Laplace distributions can be used if we prefer rewards to be close to some specific values. The Beta distribution can also be used if we treat rewards as the parameter of the Bernoulli distribution, i.e. P(\delta_d = 1) = r_d with auxiliary binary random variable \delta_d [11].
The likelihood is defined as an independent exponential distribution, analogous to the softmax distribution over actions:

    P(X | r, \eta) = \prod_{m=1}^{M} \prod_{h=1}^{H} P(a_{m,h} | s_{m,h}, r, \eta) = \prod_{m=1}^{M} \prod_{h=1}^{H} \frac{ \exp(\eta Q^*(s_{m,h}, a_{m,h}; r)) }{ \sum_{a'} \exp(\eta Q^*(s_{m,h}, a'; r)) }    (1)

where \eta is the confidence parameter of choosing optimal actions and Q^*(\cdot, \cdot; r) denotes the optimal Q-function computed using reward function r.
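A minimal sketch (ours) of the likelihood (1), assuming the optimal Q-function for the candidate reward r has already been computed (e.g. by value iteration):

import numpy as np

def log_likelihood(trajectories, Q, eta):
    # trajectories: list of [(s, a), ...]; Q: (S, A) optimal Q-values for r;
    # eta: confidence parameter. Returns log P(X | r, eta) from Eqn. (1).
    log_pi = eta * Q - np.logaddexp.reduce(eta * Q, axis=1, keepdims=True)
    return sum(log_pi[s, a] for traj in trajectories for (s, a) in traj)

Q = np.array([[1.0, 0.2], [0.3, 0.7]])   # toy Q* for some reward r
X = [[(0, 0), (1, 1)], [(0, 0)]]
print(log_likelihood(X, Q, eta=2.0))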
For the sake of exposition, we assume that the reward function entries are independently and normally distributed with mean \mu and variance \sigma^2, so that the prior is defined as P(r | \mu, \sigma) = \prod_{d=1}^{D} N(r_d; \mu, \sigma), but our approach to be presented in later sections can be generalized to use many other distributions for the prior. The posterior over the reward functions is then formulated by
^1 D denotes the number of features. Note that we can assign individual reward values to every state-action pair by using |S||A| indicator functions for features.
^2 Although we assume that all trajectories are of length H for notational brevity, our formulation trivially extends to different lengths.
Algorithm 1: MH algorithm for DPM-BIRL

    Initialize c and {r_k}_{k=1}^{K}
    for t = 1 to MaxIter do
        for m = 1 to M do
            c'_m ~ P(c | c_{-m}, \alpha)
            if c'_m \notin c_{-m} then r_{c'_m} ~ P(r | \mu, \sigma)
            (c_m, r_{c_m}) <- (c'_m, r_{c'_m}) with prob. min{1, P(X_m | r_{c'_m}, \eta) / P(X_m | r_{c_m}, \eta)}
        for k = 1 to K do
            \epsilon ~ N(0, 1)
            r'_k <- r_k + (\tau^2 / 2) \nabla log f(r_k) + \tau \epsilon
            r_k <- r'_k with prob. min{1, ( f(r'_k) g(r'_k, r_k) ) / ( f(r_k) g(r_k, r'_k) )}

[Figure 1: Graphical model for BIRL.]
[Figure 2: Graphical model for DPM-BIRL.]
Bayes rule as follows:

    P(r | X, \eta, \mu, \sigma) \propto P(X | r, \eta) P(r | \mu, \sigma).    (2)

We can infer the reward function from the model by computing the posterior mean using a Markov chain Monte Carlo (MCMC) algorithm [4] or the maximum-a-posteriori (MAP) estimate using a gradient method [12]. Fig. 1 shows the graphical model used in BIRL.
3 Nonparametric Bayesian IRL for Multiple Reward Functions
In this section, we present our approach to IRL for multiple reward functions. We assume that each
trajectory in the behaviour data is generated by an agent with a fixed reward function. In other
words, we assume that the reward function does not change within a trajectory. However, the whole
trajectories are assumed be generated by one or more agents whose reward functions are distinct
from each other. We do not assume any information regarding which trajectory is generated by
which agent as well as the number of agents. Hence, the goal is to infer an unknown number of
reward functions from the unlabeled behaviour data.
A naive approach to this problem setting would be solving M separate and independent IRL problems by treating each trajectory as the sole behaviour data and employing one of the well-known
IRL algorithms designed for a single reward function. We can then use an unsupervised learning
method with the M reward functions as data points. However, this approach would suffer from the
sparsity of data, since each trajectory may not contain a sufficient amount of data to infer the reward
function reliably, or the number of trajectories may not be enough for the unsupervised learning
method to yield a meaningful result. Babes?-Vroman et al. [10] proposed an algorithm that combines
EM clustering with IRL algorithm. It clusters trajectories and assumes that all the trajectories in a
cluster are generated by a single reward function. However, as a consequence of using EM clustering, we need to specify the number of clusters (i.e. the number of distinct reward functions) as a
parameter.
We take a nonparametric Bayesian approach to IRL using the Dirichlet process mixture model. Our
approach has three main advantages. First, we do not need to specify the number of distinct reward
functions due to the nonparametric nature of our model. Second, we can encode our preference
or domain knowledge on the reward function into the prior since it is a Bayesian approach to IRL.
Third, we can acquire rich information from the behaviour data such as the distribution over the
reward functions.
3.1 Dirichlet Process Mixture Models
The Dirichlet process mixture (DPM) model [13] provides a nonparametric Bayesian framework for
clustering using mixture models with a countably infinite number of mixture components. The prior
of the mixing distribution is given by the Dirichlet process, which is a distribution over distributions
parameterized by base distribution G₀ and concentration parameter α. The DPM model for data {x_m}_{m=1}^{M}, using a set of latent parameters {θ_m}_{m=1}^{M}, can be defined as:

G | α, G₀ ~ DP(α, G₀),    θ_m | G ~ G,    x_m | θ_m ~ F(θ_m)

where G is the prior used to draw each θ_m and F(θ_m) is the parameterized distribution for data x_m. This is equivalent to the following form with K → ∞:

p | α ~ Dirichlet(α/K, . . . , α/K)
c_m | p ~ Multinomial(p_1, . . . , p_K)
θ_k ~ G₀
x_m | c_m, θ ~ F(θ_{c_m})    (3)

where p = {p_k}_{k=1}^{K} is the mixing proportion for the latent classes, c_m ∈ {1, . . . , K} is the class assignment of x_m so that c_m = k when x_m is assigned to class k, θ_k is the parameter of the data distribution for class k, and θ = {θ_k}_{k=1}^{K}.
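For intuition, the DP prior over assignments in Eqn. (3) can be sampled directly through its Chinese restaurant process representation. The sketch below is ours: an existing cluster k is chosen with probability proportional to its size n_k, and a new cluster with probability proportional to α.

```python
import numpy as np

def crp_assignments(M, alpha, rng=np.random.default_rng(0)):
    """Draw cluster assignments c_1..c_M from the Chinese restaurant process,
    the marginal over c implied by the DP prior as K -> infinity."""
    counts = []                       # n_k: number of items in each cluster
    c = np.empty(M, dtype=int)
    for m in range(M):
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)          # open a brand-new cluster
        else:
            counts[k] += 1
        c[m] = k
    return c
```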
3.2 DPM-BIRL for Multiple Reward Functions
We address IRL for multiple reward functions by extending BIRL with the DPM model. We place a Dirichlet process prior on the reward functions r_k. The base distribution G₀ is defined as the reward function prior, i.e. the product of the normal distribution for each reward entry, ∏_{d=1}^{D} N(r_{k,d}; μ, σ). The cluster assignment c_m = k indicates that the trajectory X_m belongs to the cluster k, which represents that the trajectory is generated by the agent with the reward function r_k. We can thus regard the behaviour data X = {X_1, . . . , X_M} as being drawn from the following generative process:

1. The cluster assignment c_m is drawn by the first two equations in Eqn. (3).
2. The reward function r_k is drawn from ∏_{d=1}^{D} N(r_{k,d}; μ, σ).
3. The trajectory X_m is drawn from P(X_m | r_{c_m}, η) in Eqn. (1).

Fig. 2 shows the graphical model of DPM-BIRL. The joint posterior of the cluster assignment c = {c_m}_{m=1}^{M} and the set of reward functions {r_k}_{k=1}^{K} is defined as:

P(c, {r_k}_{k=1}^{K} | X, η, μ, σ, α) = P(c | α) ∏_{k=1}^{K} P(r_k | X_{c(k)}, η, μ, σ)    (4)

where X_{c(k)} = {X_m | c_m = k for m = 1, . . . , M} and P(r_k | X, η, μ, σ) are taken from Eqn. (2).
The inference in DPM-BIRL can be done using the Metropolis-Hastings (MH) algorithm that samples each hidden variable in turn. First, note that we can safely assume that there are K distinct values of the c_m's, so that c_m ∈ {1, . . . , K} without loss of generality. The conditional distribution to sample c_m for the MH update can be defined as

P(c_m | c_{-m}, {r_k}_{k=1}^{K}, X, η, α) ∝ P(X_m | r_{c_m}, η) P(c_m | c_{-m}, α)

P(c_m | c_{-m}, α) ∝ n_{-m,c_j}  if c_m = c_j for some j,
P(c_m | c_{-m}, α) ∝ α           if c_m ≠ c_j for all j    (5)

where c_{-m} = {c_i | i ≠ m for i = 1, . . . , M}, P(X_m | r_{c_m}, η) is the likelihood defined in Eqn. (1), and n_{-m,c_j} = |{c_i = c_j | i ≠ m for i = 1, . . . , M}| is the number of trajectories, excluding X_m, assigned to the cluster c_j. Note that if the sampled c_m ≠ c_j for all j then X_m is assigned to a new cluster.
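A minimal sketch of the assignment proposal behind Eqn. (5) follows (function names and data layout are ours). The proposal draws from the prior part of Eqn. (5); the caller then accepts the move with the likelihood ratio shown in Alg. 1, as appropriate for a non-conjugate prior.

```python
import numpy as np

def propose_assignment(m, c, alpha, rng):
    """Propose c'_m: an existing cluster c_j with weight n_{-m,c_j},
    or a brand-new cluster with weight alpha (prior part of Eqn. (5))."""
    c = np.asarray(c)
    others = np.delete(c, m)
    ks, counts = np.unique(others, return_counts=True)
    weights = np.append(counts.astype(float), alpha)
    j = rng.choice(len(weights), p=weights / weights.sum())
    return int(ks[j]) if j < len(ks) else int(c.max()) + 1  # new cluster label
```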
The conditional distribution to sample r_k for the MH update is defined as

P(r_k | c, r_{-k}, X, η, μ, σ) ∝ P(X_{c(k)} | r_k, η) P(r_k | μ, σ)

where P(X_{c(k)} | r_k, η) is again the likelihood defined in Eqn. (1) and P(r_k | μ, σ) = ∏_{d=1}^{D} N(r_{k,d}; μ, σ).
In Alg. 1, we present the MH algorithm for DPM-BIRL that uses the above MH updates. The algorithm consists of two steps. The first step updates the cluster assignment c. We sample a new assignment c'_m from Eqn. (5). If c'_m is not in c_{-m}, i.e., c'_m ≠ c_j for all j, we draw a new reward function r_{c'_m} from the reward prior P(r | μ, σ). We then set c_m = c'_m with the acceptance probability min{1, P(X_m | r_{c'_m}, η) / P(X_m | r_{c_m}, η)}, since we are using a non-conjugate prior [13]. The second step updates the reward functions {r_k}_{k=1}^{K}. We sample a new reward function r'_k using the equation

r'_k = r_k + (δ²/2) ∇ log f(r_k) + δ ε

where ε is a sample from the standard normal distribution N(0, 1), δ is a non-negative scalar for the scaling parameter, and f(r_k) is the target distribution of the MH update, P(X_{c(k)} | r_k, η) P(r_k | μ, σ), which is the unnormalized posterior of the reward function r_k. We then set r_k = r'_k with the acceptance probability min{1, f(r'_k) g(r'_k, r_k) / (f(r_k) g(r_k, r'_k))} where

g(x, y) = (1 / (2πδ²)^{D/2}) exp( −(1/(2δ²)) ||x − y − (δ²/2) ∇ log f(x)||²₂ ).

This step is motivated by the Langevin algorithm [14], which exploits local information (i.e. the gradient) of f in order to move efficiently towards high-probability regions. This algorithm is known to be more efficient than random-walk MH algorithms. We can compute the gradient of f using the results of Choi and Kim [12].
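One reward update can be sketched as a single Langevin-flavoured MH step. The sketch is ours: log_f and grad_log_f are assumed helpers for the unnormalized log posterior and its gradient, and we write the proposal density g in the standard Langevin (MALA) form, whose normalizing constant cancels in the acceptance ratio.

```python
import numpy as np

def langevin_mh_step(r, log_f, grad_log_f, delta, rng):
    """One MH step with proposal r' = r + (delta^2/2) grad log f(r) + delta*eps."""
    def log_g(x, y):  # log density of proposing y from x (additive const dropped)
        diff = y - x - 0.5 * delta**2 * grad_log_f(x)
        return -np.dot(diff, diff) / (2.0 * delta**2)

    eps = rng.standard_normal(r.shape)
    r_new = r + 0.5 * delta**2 * grad_log_f(r) + delta * eps
    log_accept = log_f(r_new) + log_g(r_new, r) - log_f(r) - log_g(r, r_new)
    return r_new if np.log(rng.uniform()) < min(0.0, log_accept) else r
```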
3.3 Information Transfer to a New Trajectory
Suppose that we would like to infer the reward function of a new trajectory after we finish IRL on the behaviour data consisting of M trajectories. A naive approach would be to run IRL from scratch using all of the M + 1 trajectories. However, it would be more desirable to transfer the relevant information from the pre-computed IRL results. In order to do so, Babeş-Vroman et al. [10] use the weighted average of cluster reward functions, assuming that the new trajectory is generated from the same population as the behaviour data. Note that, as a direct result of using the DPM model, we can relax this assumption and allow the new trajectory to be generated by a novel reward function.
Given the cluster assignment c and the reward functions {r_k}_{k=1}^{K} computed from the behaviour data, the conditional prior of the reward function r for the new trajectory can be defined as:

P(r | c, {r_k}_{k=1}^{K}, μ, σ, α) = (α/(α+M)) P(r | μ, σ) + (1/(α+M)) Σ_{k=1}^{K} n_k δ(r − r_k)    (6)

where n_k = |{X_m | c_m = k for m = 1, . . . , M}| is the number of trajectories assigned to cluster k and δ(x) is the Dirac delta function. Running Alg. 1 on the behaviour data X, we already have a set of N samples {c^{(n)}, {r_k^{(n)}}_{k=1}^{K}}_{n=1}^{N} drawn from the joint posterior. The conditional posterior of r for the new trajectory X_new is then:

P(r | X_new, X, Θ) ∝ P(X_new | r, η) P(r | X, Θ)
  = P(X_new | r, η) ∫ P(r | c, {r_k}_{k=1}^{K}, μ, σ, α) dP(c, {r_k}_{k=1}^{K} | X, Θ)
  ≈ P(X_new | r, η) (1/N) Σ_{n=1}^{N} P(r | {c^{(n)}, {r_k^{(n)}}_{k=1}^{K}}, μ, σ, α)
  = P(X_new | r, η) [ (α/(α+M)) P(r | μ, σ) + (1/(α+M)) Σ_{n=1}^{N} Σ_{k=1}^{K} (n_k^{(n)}/N) δ(r − r_k^{(n)}) ]

where Θ = {η, μ, σ, α}.
We can then re-draw samples of r using the approximated posterior and take the sample average as the inferred reward function. However, we present a more efficient way of calculating the posterior mean of r without re-drawing the samples. Note that Eqn. (6) is a mixture of a continuous distribution P(r | μ, σ) with a number of point-mass distributions on {r_k}_{k=1}^{K}. If we approximate the continuous one by a point-mass distribution, i.e., P(r | μ, σ) ≈ δ(r̂), the posterior mean is analytically computable using the above approximation:

E[r | X_new, X, Θ] = ∫ r dP(r | X_new, X, Θ)
  ≈ (1/Z) [ α P(X_new | r̂, η) r̂ + Σ_{n=1}^{N} Σ_{k=1}^{K} (n_k^{(n)}/N) P(X_new | r_k^{(n)}, η) r_k^{(n)} ]    (7)

where Z is the normalizing constant. We choose r̂ = argmax_r P(X_new | r, η) P(r | μ, σ), which is the MAP estimate of the reward function for the new trajectory X_new only, ignoring the previous behaviour data X.
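Given the pooled posterior samples, evaluating Eqn. (7) is a single weighted average. A sketch with assumed helpers:

```python
def transfer_posterior_mean(r_hat, samples, lik_new, alpha):
    """Approximate posterior mean of Eqn. (7) for a new trajectory.

    r_hat: MAP reward for X_new alone; samples: list of (r_k, n_k/N) pairs
    pooled over the N posterior samples; lik_new(r): P(X_new | r, eta)."""
    w0 = alpha * lik_new(r_hat)
    num = w0 * r_hat
    Z = w0
    for r_k, weight in samples:
        w = weight * lik_new(r_k)
        num = num + w * r_k
        Z += w
    return num / Z
```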
5
0
2
4
6
8
10
# of trajectories per agent
12
0.9
0.8
0.7
0.8
2
4
6
8
10
# of trajectories per agent
12
0.7
2
4
6
8
10
# of trajectories per agent
12
EVD for the new trajectory
0.5
0.9
5
NMI
1
1
# of clusters
1
F?score
Average EVD
1.5
BIRL
EM?MLIRL(3)
EM?MLIRL(6)
EM?MLIRL(9)
DPM?BIRL(U)
DPM?BIRL(G)
4
3
2
2
4
6
8
10
# of trajectories per agent
12
1.5
1
0.5
0
2
4
6
8
10
12
# of trajectories per agent
Figure 3: Results with increasing number of trajectories per agent in the gridworld problem. DPMBIRL uses the uniform (U) and the standard normal (N) priors.
4 Experimental Results
We compared the performance of DPM-BIRL to the EM-MLIRL algorithm [10] and the baseline
algorithm which runs BIRL separately on each trajectory. The experiments consisted of two tasks:
The first task was finding multiple reward functions from the behaviour data with a number of
trajectories. The second task was inferring the reward function underlying a new trajectory, while
exploiting the results learned in the first task.
The performance of each algorithm was evaluated by the expected value difference (EVD), |V^{π*(r^A)}(r^A) − V^{π*(r^L)}(r^A)|, where r^A is the agent's ground-truth reward function, r^L is the learned reward function, π*(r) is the optimal policy induced by reward function r, and V^π(r) is the value of policy π measured using r. The EVD thus measures the performance difference between the agent's optimal policy and the optimal policy induced by the learned reward function. In the first task, we evaluated the EVD for the true and learned reward functions of each trajectory and computed the average EVD over the trajectories in the behaviour data. In the second task, we evaluated the EVD for the new trajectory. The clustering quality on the behaviour data was evaluated by F-score and normalized mutual information (NMI).
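For reference, the EVD reduces to two policy solves and two evaluations; solve and evaluate below are assumed helpers (e.g. value iteration and policy evaluation on the MDP):

```python
def evd(r_true, r_learned, solve, evaluate):
    """Expected value difference between the agent's optimal policy and the
    policy that is optimal for the learned reward, both evaluated on r_true."""
    pi_true = solve(r_true)          # pi*(r^A)
    pi_learned = solve(r_learned)    # pi*(r^L)
    return abs(evaluate(pi_true, r_true) - evaluate(pi_learned, r_true))
```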
In all the experiments, we assumed that the reward function was linearly parameterized, R(s, a) = Σ_{d=1}^{D} r_d φ_d(s, a), with feature functions φ_d : S × A → ℝ; hence r = [r_1, . . . , r_D].
4.1 Gridworld Problem
In order to extensively evaluate our approach, we first performed experiments on a small toy domain, an 8×8 gridworld, where each of the 64 cells corresponds to a state. The agent can move north, south, east, or west, but with probability 0.2 it fails and moves in a random direction. The initial state is randomly chosen from the states. The grid is partitioned into non-overlapping regions of size 2×2, and the feature function is defined by a binary indicator function for each region. Random instances of IRL with three reward functions were generated as follows: each element of r was sampled to have a non-zero value with probability 0.2, and the value was drawn from the uniform distribution between -1 and 1. We obtained trajectories of 40 time steps and measured the performance as we increased the number of trajectories per reward function.
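A sketch of this instance generator (parameter names are ours; the 16 features correspond to the 2×2 regions of the 8×8 grid):

```python
import numpy as np

def random_reward_instances(n_rewards=3, n_features=16, p_nonzero=0.2,
                            rng=np.random.default_rng(0)):
    """Sparse ground-truth rewards: each entry is non-zero with prob. 0.2,
    and non-zero values are drawn from Uniform(-1, 1)."""
    mask = rng.random((n_rewards, n_features)) < p_nonzero
    values = rng.uniform(-1.0, 1.0, size=(n_rewards, n_features))
    return mask * values
```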
Fig. 3 shows the averages and standard errors of the performance results over 10 problem instances.
The left four panels in the figure present the results for the first task of learning multiple reward
functions from the behaviour data. When the size of the behaviour data is small, the clustering
performances of both DPM-BIRL and EM-MLIRL were not good enough due to the sparsity of
data, hence their EVD results were similar to that of the baseline algorithm that independently runs
BIRL on each trajectory. However, as we increased the size of the data, both DPM-BIRL and EM-MLIRL achieved better EVD results than the baseline since they could utilize more information by
grouping the trajectories to infer the reward functions. As for EM-MLIRL, we set the parameter
K used for the maximum number of clusters to 3 (ground truth), 6 (2x), and 9 (3x). DPM-BIRL
achieved significantly better results than EM-MLIRL with all of the parameter settings, in terms of
EVD and clustering quality. The rightmost panel in the figure presents the results for the second task
of inferring the reward function for a new trajectory. DPM-BIRL clearly outperformed EM-MLIRL
since it exploits the rich information from the reward function posterior. The relatively large error
bars of the EM-MLIRL results are due to the local convergence inherent to EM clustering.
[Figure 4 appears here: average EVD (y-axis) against CPU time in seconds (x-axis) for EM-MLIRL(3/6/9), DPM-BIRL(U), and DPM-BIRL(G).]

Figure 4: CPU timing results in the gridworld problem.

[Figure 5 appears here: game screenshots; the left panel is annotated with the current time step and driving speed.]

Figure 5: Screenshots of Simulated-highway problem (left) and Mario Bros (right).
Table 1: Results in Simulated-highway problem.

                Average EVD   F-score      NMI          # of clusters   EVD for X_new
BIRL            0.52±0.05     n.a.         n.a.         n.a.            0.41±0.00
EM-MLIRL(3)     4.53±0.96     0.80±0.05    0.74±0.09    2.20±0.20       4.14±0.88
EM-MLIRL(6)     0.89±0.57     0.96±0.02    0.96±0.03    3.10±0.18       0.82±0.53
DPM-BIRL(U)     0.35±0.04     0.98±0.01    0.97±0.01    3.30±0.15       0.32±0.04
DPM-BIRL(N)     0.36±0.05     0.99±0.01    0.99±0.01    3.10±0.10       0.30±0.04
Fig. 4 compares the average CPU timing results of DPM-BIRL and EM-MLIRL with 10 trajectories per reward function. DPM-BIRL using Alg. 1 took much less time to converge than EM-MLIRL. This is mainly because, whereas EM-MLIRL performs full single-reward IRL multiple times in each iteration, DPM-BIRL takes a sample from the posterior by leveraging the gradient, which does not involve a full IRL.
4.2 Simulated-highway Problem
The second set of experiments was conducted in the Simulated-highway problem [15], where the agent drives on a three-lane road. The left panel in Fig. 5 shows a screenshot of the problem. The agent can move one lane left or right and drive at speed 2 or 3, but it fails to change lane with probability 0.2 and 0.4 at speeds 2 and 3, respectively. All the other cars on the road constantly
drive at speed 1 and do not change the lane. The reward function is defined by using 6 binary
feature functions: one function for indicating the agent's collision with other cars, 3 functions for indicating the agent's current lane, 2 functions for indicating the agent's current speed. We generated
three agents having different driving styles. The first one prefers driving at speed 3 in the left-most
lane and avoiding collisions. The second one prefers driving at speed 3 in the right-most lane and
avoiding collisions. The third one prefers driving at speed 2 and colliding with other cars. We
prepared 3 trajectories of 40 time steps per driver agent for the first task and 20 trajectories of 40
time steps yielded by a driver randomly chosen among the three for the second task.
Tbl. 1 presents the averages and standard errors of the results over 10 sets of behaviour data. DPM-BIRL significantly outperformed the others, while EM-MLIRL suffered from convergence to local optima.
4.3 Mario Bros.
For the third set of experiments, we used the open source simulator of the game Mario Bros, which
is a challenging problem due to its huge state space. The right panel in Fig. 5 is a screenshot of the
game. Mario can move left, move right, or jump. Mario's goal is to reach the end of the level by
traversing from left to right while collecting coins and avoiding or killing enemies. We used 8 binary
feature functions, each being an indicator for: Mario successfully reaching the end of the level;
Mario getting killed; Mario killing an enemy; Mario collecting a coin; Mario receiving damage by
an enemy; existence of a wall preventing Mario from moving in the current direction; Mario moving
to the right; Mario moving to the left. We collected the behaviour data from 4 players: The expert
player is good at both collecting coins and killing enemies. The coin collector likes to collect coins
but avoids killing enemies. The enemy killer likes to kill enemies but avoids collecting coins.
Table 2: Cluster assignments in Mario Bros. Each column is one trajectory, grouped by the player who generated it; each entry is the cluster assignment c_m.

               Expert player   Coin collector   Enemy killer   Speedy Gonzales
DPM-BIRL       1  1  1         1  2  2          3  3  4        5  5  5
EM-MLIRL(4)    1  1  1         1  1  2          2  2  1        3  3  3
EM-MLIRL(8)    1  1  1         1  2  2          3  3  1        3  3  3
Table 3: Results of DPM-BIRL in Mario Bros.

                     Reward function entry (r_{k,d})      Average feature counts
k from DPM-BIRL    d=enemy-killed   d=coin-collected    enemy-killed   coin-collected
1                      1.00             1.00               3.10           21.60
2                     -0.81             1.00               1.60           21.55
3                      1.00            -1.00               2.80            7.55
4                      1.00            -0.42               1.90            7.85
5                     -1.00            -1.00               0.55            6.75
The speedy Gonzales avoids both collecting coins and killing enemies. All the players commonly try to reach the end of the level while acting according to their own preferences. The behaviour data consisted of 3 trajectories per player. Since only a simulator of the environment is available instead of the complete model, we used relative entropy IRL [16], which is a model-free IRL algorithm.

Tbl. 2 presents the cluster assignment results. Each column represents a trajectory and the number denotes the cluster assignment c_m of trajectory X_m. For example, DPM-BIRL produced 5 clusters, and trajectories X_1, . . . , X_4 are assigned to cluster 1, representing the expert player. EM-MLIRL failed to group the trajectories in a way that aligns well with the players, even though we restarted it 100 times in order to mitigate convergence to bad local optima. On the other hand, DPM-BIRL was incorrect on only one trajectory, assigning a coin collector's trajectory to the expert player cluster. Tbl. 3 presents the reward function entries (r_{k,d}) learned from DPM-BIRL and the average
feature counts acquired by the players with the learned reward functions. For the sake of brevity,
we present only two important features (d=enemy-killed, coin-collected) that determine the playing
style. To compute each player?s feature counts, we executed an n-step lookahead policy yielded by
each reward function r k on the simulator in 20 randomly chosen levels. The reward function entries
align well with each playing style. For example, the cluster 2 represents the coin collector, and its
reward function entry for killing an enemy is negative but that for collecting a coin is positive.
As a demonstration, we implemented a small piece of software that visualizes the posterior probability of a gamer's behavior belonging to one of the clusters, including a new one. A demo video is provided as supplementary material.
5 Conclusion
We proposed a nonparametric Bayesian approach to IRL for multiple reward functions using the Dirichlet process mixture model, which extends the previous Bayesian approach to IRL that assumes a single reward function. Due to its nonparametric nature, our model can learn an appropriate number of reward functions from the behaviour data, and, being Bayesian, it facilitates incorporating domain knowledge on the reward function. We presented an efficient Metropolis-Hastings sampling algorithm that draws samples from the posterior of DPM-BIRL, leveraging the gradient of the posterior. We also provided an analytical way to compute the approximate posterior mean for the information transfer task. In addition, we showed that DPM-BIRL outperforms the previous approach in various problem domains.
Acknowledgments
This work was supported by National Research Foundation of Korea (Grant# 2012-007881), the
Defense Acquisition Program Administration and Agency for Defense Development of Korea (Contract# UD080042AD), and the SW Computing R&D Program of KEIT (2011-10041313) funded by
the Ministry of Knowledge Economy of Korea.
References
[1] Stuart Russell. Learning agents for uncertain environments (extended abstract). In Proceedings of COLT,
1998.
[2] Andrew Y. Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Proceedings of
ICML, 2000.
[3] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In Proceedings of UAI, 2007.
[4] Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In Proceedings of IJCAI,
2007.
[5] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse
reinforcement learning. In Proceedings of AAAI, 2008.
[6] Brian D. Ziebart, Andrew L. Maas, Anind K. Dey, and J. Andrew Bagnell. Navigate like a cabbie: probabilistic reasoning from observed context-aware behavior. In Proceedings of the international conference
on Ubiquitous computing, 2008.
[7] Zeynep Erkin, Matthew D. Bailey, Lisa M. Maillart, Andrew J. Schaefer, and Mark S. Roberts. Eliciting
patients? revealed preferences: An inverse Markov decision process approach. Decision Analysis, 7(4),
2010.
[8] Senthilkumar Chandramohan, Matthieu Geist, Fabrice Lefevre, and Olivier Pietquin. User simulation in
dialogue systems using inverse reinforcement learning. In Proceedings of Interspeech, 2011.
[9] Christos Dimitrakakis and Constantin A. Rothkopf. Bayesian multitask inverse reinforcement learning.
In Proceedings of the European Workshop on Reinforcement Learning, 2011.
[10] Monica Babeş-Vroman, Vukosi Marivate, Kaushik Subramanian, and Michael Littman. Apprenticeship learning about multiple intentions. In Proceedings of ICML, 2011.
[11] Peter Dayan and Geoffrey E. Hinton. Using expectation-maximization for reinforcement learning. Neural
Computation, 9(2), 1997.
[12] Jaedeug Choi and Kee-Eung Kim. MAP inference for Bayesian inverse reinforcement learning. In Proceedings of NIPS, 2011.
[13] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of
Computational and Graphical Statistics, 9(2), 2000.
[14] Gareth O. Roberts and Jeffrey S. Rosenthal. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1), 1998.
[15] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of ICML, 2004.
[16] Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. In
Proceedings of AISTATS, 2011.
Spiking and saturating dendrites differentially expand single neuron computation capacity.

Mark Humphries
INSERM U960; University of Manchester
29 rue d'Ulm, 75005 Paris; UK
[email protected]

Romain Cazé
INSERM U960, Paris Diderot, Paris 7, ENS
29 rue d'Ulm, 75005 Paris
[email protected]

Boris Gutkin
INSERM U960, CNRS, ENS
29 rue d'Ulm, 75005 Paris
[email protected]
Abstract
The integration of excitatory inputs in dendrites is non-linear: multiple excitatory inputs can produce a local depolarization departing from the arithmetic sum
of each input's response taken separately. If this depolarization is bigger than
the arithmetic sum, the dendrite is spiking; if the depolarization is smaller, the
dendrite is saturating. Decomposing a dendritic tree into independent dendritic
spiking units greatly extends its computational capacity, as the neuron then maps
onto a two layer neural network, enabling it to compute linearly non-separable
Boolean functions (lnBFs). How can these lnBFs be implemented by dendritic
architectures in practice? And can saturating dendrites equally expand computational capacity? To address these questions we use a binary neuron model and
Boolean algebra. First, we confirm that spiking dendrites enable a neuron to compute lnBFs using an architecture based on the disjunctive normal form (DNF).
Second, we prove that saturating dendrites as well as spiking dendrites enable
a neuron to compute lnBFs using an architecture based on the conjunctive normal form (CNF). Contrary to a DNF-based architecture, in a CNF-based architecture, dendritic unit tunings do not imply the neuron tuning, as has been observed
experimentally. Third, we show that one cannot use a DNF-based architecture
with saturating dendrites. Consequently, we show that an important family of
lnBFs implemented with a CNF-architecture can require an exponential number
of saturating dendritic units, whereas the same family implemented with either a
DNF-architecture or a CNF-architecture always require a linear number of spiking
dendritic units. This minimization could explain why a neuron spends energetic
resources to make its dendrites spike.
1 Introduction
Recent progress in voltage clamp techniques has enabled the recording of local membrane voltage in
dendritic branches, and this greatly changed our view of the potential for single neuron computation.
Experiments have shown that when the local dendritic membrane potential reaches a given threshold
a dendritic spike can be elicited [4, 13]. Based on this type of local dendritic non-linearity, it has
been suggested that a CA1 hippocampal pyramidal neuron comprises multiple independent nonlinear spiking units, summating at the soma, and is thus equivalent to a two layer artificial neural
network [12]. This idea is attractive, because this type of feed-forward network can implement any
Boolean function, in particular linearly non-separable Boolean functions (lnBFs), and thus radically
1
extends the computational power of a single neuron. By contrast, a seminal neuron model, the
McCulloch & Pitts unit [10], is restricted to linearly separable Boolean functions.
However attractive this idea is, it requires additional investigation. Indeed, spiking dendritic units may
enable the computation of lnBFs using an architecture, suggested in [9], where the dendritic tuning
implies the neuron tuning (see also Proposition 1). This relation between dendritic and neuron tuning
has not been confirmed experimentally; on the contrary it has been shown in vivo that dendritic
tuning does not imply the neuron tuning [6]: calcium imaging in vivo has shown that the local
calcium signal in dendrites can maximally increase for visual inputs that do not trigger
somatic spiking. We resolve this first issue here by showing how one can implement lnBFs with
spiking dendritic units, whose tunings do not formally imply the somatic tuning.
Moreover, the idea of a neuron implementing a two-layer network is based on spiking dendrites.
Dendritic non-linearities have a variety of shapes, and many neuron types may not have the capacity
to generate dendritic spikes. By contrast, all dendrites can saturate [1, 16, 2]. For instance, glutamate uncaging on cerebellar stellate cell dendrites and simultaneous somatic voltage recording of
these interneurons shows that multiple excitatory inputs on the same dendrite result in a somatic
depolarization smaller than the arithmetic sum of the quantal depolarizations [1]. This type of nonlinearity has been predicted from Rall's work [7], a model which explains saturation by an increase
in membrane conductance and a decrease in driving force. It is unknown whether local dendritic
saturation can also enhance the general computational capacity of a single neuron in the same way
as local dendritic spiking ? but, if so, this would make plausible the implementation of lnBFs in
potentially any type of neuron. In the present study we show that saturating dendritic units do also
enable the computation of lnBFs (see Proposition 2).
One can wonder why some dendrites support metabolically-expensive spiking if dendritic saturation
is sufficient to compute all Boolean functions. We tackle this issue in the second part of our study.
We show that a family of positive lnBFs may require an exponentially growing number of saturating dendritic units when the number of input variables grow linearly, whereas the same family of
Boolean functions requires a linearly growing number of spiking dendritic units. Consequently dendritic spikes may minimize the number of units necessary to implement all Boolean functions. Thus,
as the number of independent units (spiking or saturating) in a dendrite remains an open question [5], but is potentially small [14], it may turn out that certain Boolean functions are only implementable
using spiking dendrites.
2 Definitions

2.1 The binary two stage neuron
We introduce here a neuron model analogous to [12]. Our model is a binary two stage neuron, where X is a binary input vector of length n and y is a binary variable modelling the neuron output. First, inputs sum locally within each dendritic unit j given a local weight vector W_j; then they pass through a local transfer function F_j accounting for the dendritic non-linear behavior. Second, the outputs of the d dendritic subunits sum at the soma and pass through the somatic transfer function F_0. F_0 is a spiking transfer function whereas the F_j are either spiking or saturating transfer functions; these functions are described in the next section and are displayed on Figure 1A. Formally, the output y is computed with the following equation:

y = F_0( Σ_{j=1}^{d} F_j(W_j · X) )
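As a concrete illustration, a minimal Python sketch of the model follows (ours); the transfer functions are passed in as callables so that either unit type, defined in the next section, can be plugged in:

```python
import numpy as np

def two_stage_neuron(X, W, F_dend, F_soma):
    """Binary two stage neuron: y = F_soma(sum_j F_dend[j](W[j] . X)).

    X: binary input vector of length n; W: d x n array of local weights;
    F_dend: list of d dendritic transfer functions; F_soma: the somatic one."""
    dendritic_out = [F_dend[j](np.dot(W[j], X)) for j in range(len(F_dend))]
    return F_soma(sum(dendritic_out))
```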
2.2 Sub-linear and supra-linear transfer functions

A transfer function F takes as input a local weighted linear sum x and outputs F(x); this output depends on the type of transfer function, spiking or saturating, and on a single positive parameter Θ, the threshold of the transfer function. The two types of transfer functions are defined as follows:
Definition 1. Spiking transfer function

F_spk(x) = { 1 if x ≥ Θ
           { 0 otherwise
Table 1: Two examples of positive Boolean functions of 4 variables

x1  x2  x3  x4 | g(x1,x2,x3,x4) | h(x1,x2,x3,x4)
 0   0   0   0 |       0        |       0
 1   0   0   0 |       0        |       0
 0   1   0   0 |       0        |       0
 1   1   0   0 |       1        |       0
 0   0   1   0 |       0        |       0
 1   0   1   0 |       0        |       1
 0   1   1   0 |       0        |       1
 1   1   1   0 |       1        |       1
 0   0   0   1 |       0        |       0
 1   0   0   1 |       0        |       1
 0   1   0   1 |       0        |       1
 1   1   0   1 |       1        |       1
 0   0   1   1 |       1        |       0
 1   0   1   1 |       1        |       1
 0   1   1   1 |       1        |       1
 1   1   1   1 |       1        |       1
Definition 2. Saturating transfer function

F_sat(x) = { 1    if x ≥ Θ
           { x/Θ  otherwise
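Both transfer functions are one-liners; the short sketch below (ours) also checks numerically, with Θ = 2, the supra-/sub-linearity contrast formalized in Definition 3 below:

```python
def F_spk(x, theta):
    """Spiking transfer function (Definition 1)."""
    return 1.0 if x >= theta else 0.0

def F_sat(x, theta):
    """Saturating transfer function (Definition 2)."""
    return 1.0 if x >= theta else x / theta

# F_spk(1 + 1) = 1 > F_spk(1) + F_spk(1) = 0: supra-linear behaviour;
# F_sat(2 + 2) = 1 < F_sat(2) + F_sat(2) = 2: sub-linear behaviour.
assert F_spk(2, 2) > F_spk(1, 2) + F_spk(1, 2)
assert F_sat(4, 2) < F_sat(2, 2) + F_sat(2, 2)
```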
The difference between a spiking and a saturating transfer function is that F_spk(x) = 0 whereas F_sat(x) = x/Θ if x is below Θ. To formally characterize this difference we define here sub-linearity and supra-linearity of a transfer function F on a given interval I. These definitions are similar to the well-known notions of concavity and convexity:

Definition 3. F is supra-linear on I if and only if F(x_1 + x_2) > F(x_1) + F(x_2) for at least one (x_1, x_2) ∈ I².
F is sub-linear on I if and only if F(x_1 + x_2) < F(x_1) + F(x_2) for at least one (x_1, x_2) ∈ I².
F is strictly sub-linear (resp. supra-linear) on I if it is sub-linear (resp. supra-linear) but not supra-linear (resp. sub-linear) on I.

Note that these definitions also work when using n-tuples instead of couples on the interval (useful in Lemma 3).

Note that whenever Θ > 0, F_spk is both supra- and sub-linear on I = [0, +∞) whereas F_sat is strictly sub-linear on the same interval. F_sat is not supra-linear on I because F_sat(x_1 + x_2) ≤ F_sat(x_1) + F_sat(x_2) for all (x_1, x_2) ∈ I², by definition of F_sat. Moreover, F_sat is sub-linear on I because F_sat(a + b) = 1 and F_sat(a) + F_sat(b) = 2 for at least one (a, b) ∈ I² such that a ≥ Θ and b ≥ Θ. All in all, F_sat is strictly sub-linear on I.

Similarly to F_sat, F_spk is sub-linear on I because F_spk(a + b) = 1 and F_spk(a) + F_spk(b) = 2 for at least one (a, b) ∈ I² such that a ≥ Θ and b ≥ Θ. Moreover, F_spk is supra-linear because F_spk(c + d) = 1 and F_spk(c) + F_spk(d) = 0 for at least one (c, d) such that c < Θ and d < Θ but c + d ≥ Θ. All in all, F_spk is both sub-linear and supra-linear.
2.3 Boolean Algebra
In order to study the range of possible input-output mappings implementable by a two stage neuron we use Boolean functions, which can efficiently and formally describe all binary input-output mappings. Let us recall the definition of this extensively studied mathematical object [3, 17]:

Definition 4. A Boolean function of n variables is a function on {0, 1}^n into {0, 1}, where n is a positive integer.
In Table 1 the truth table for two Boolean functions g and h is presented. These Boolean functions are fully and uniquely defined by their truth table. Both g and h are positive lnBFs (see chapter 9 of [3] for an extensive study of linear separability); because of its importance we recall the definition of positive Boolean functions:

Definition 5. Let f be a Boolean function on {0, 1}^n. f is positive if and only if f(X) ≤ f(Z) for all (X, Z) ∈ {0, 1}^n × {0, 1}^n such that X ≤ Z (meaning that ∀i: x_i ≤ z_i).

We also recall the notion of implication, as it is important to observe that a dendritic input-output function (or tuning) may or may not imply the neuron's input-output function:
Definition 6. Let f and g be two Boolean functions.
f implies g ⟺ ( f(X) = 1 ⟹ g(X) = 1 for all X ∈ {0, 1}^n )

As will become clear, we can treat each dendritic unit as computing its own Boolean function on its inputs: for a unit's output to imply the whole neuron's output then means that if a unit outputs a 1, then the neuron outputs a 1.
In order to describe positive Boolean functions, it is useful to decompose them into positive terms and positive clauses:

Definition 7. Let X^(j) be a tuple of k < n positive integers referencing the different variables present in a term or a clause.
A positive term j is a conjunction of variables written as T_j(X) = ⋀_{i ∈ X^(j)} x_i.
A positive clause j is a disjunction of variables written as C_j(X) = ⋁_{i ∈ X^(j)} x_i.
A term (resp. clause) is prime if it is not implied by (resp. does not imply) any other term (resp. clause) in a disjunction (resp. conjunction) of multiple terms (resp. clauses).
These terms and clauses can then define the Disjunctive or Conjunctive Normal Form (DNF or CNF) expression of a Boolean function f, particularly:

Definition 8. A complete positive DNF is a disjunction of prime positive terms T:
DNF(f) := ⋁_{T_j ∈ T} ⋀_{i ∈ X^(j)} x_i

Definition 9. A complete positive CNF is a conjunction of prime positive clauses C:
CNF(f) := ⋀_{C_j ∈ C} ⋁_{i ∈ X^(j)} x_i

It has been shown that all positive Boolean functions can be expressed as a positive complete DNF ([3] Theorem 1.24); similarly, all positive Boolean functions can be expressed as a positive complete CNF. These complete positive DNFs or CNFs are the shortest possible DNF or CNF descriptions of positive Boolean functions. To clarify all these definitions let us introduce a series of examples built around g and h.
Example 1. Let us take X^(1) = (1, 2) and X^(2) = (3, 4). These tuples define two positive terms: T_1(X) = x_1 ∧ x_2, where T_1(X) = 1 only when x_1 = 1 and x_2 = 1 and T_1(X) = 0 otherwise; similarly T_2(X) = x_3 ∧ x_4, where T_2(X) = 1 only when x_3 = 1 and x_4 = 1. These tuples can also define two positive clauses: C_1(X) = x_1 ∨ x_2, where C_1(X) = 1 as soon as x_1 = 1 or x_2 = 1, and similarly C_2(X) = x_3 ∨ x_4, where C_2(X) = 1 as soon as x_3 = 1 or x_4 = 1. In the disjunction of terms T_1 ∨ T_2 the terms are prime because T_1(X) = 1 is not implied by T_2(X) = 1 for all X (and vice-versa). Similarly, in the conjunction of clauses C_1 ∧ C_2 the clauses are prime because C_1(X) = 1 does not imply that C_2(X) = 1 for all X (and vice-versa). T_1 ∨ T_2 is the complete positive DNF expression of g; alternatively, C_1 ∧ C_2 is the complete positive CNF expression of h. The truth tables of g and h are displayed in Table 1.
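The truth table of Table 1 can be reproduced directly from these expressions; a short sketch (ours):

```python
from itertools import product

T1 = lambda x: x[0] & x[1]   # term x1 AND x2
T2 = lambda x: x[2] & x[3]   # term x3 AND x4
C1 = lambda x: x[0] | x[1]   # clause x1 OR x2
C2 = lambda x: x[2] | x[3]   # clause x3 OR x4

g = lambda x: T1(x) | T2(x)  # complete positive DNF of g
h = lambda x: C1(x) & C2(x)  # complete positive CNF of h

for x in product((0, 1), repeat=4):  # enumerate all 16 input vectors
    print(x, g(x), h(x))
```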
3 Results
We first prove here that a two stage neuron with a sufficient number of only spiking or only saturating dendritic units can implement all positive Boolean functions, particularly lnBFs like g and
h, whereas a classic McCulloch & Pitts unit is restricted to linearly separable Boolean functions.
Moreover, we present two construction architectures for building a two stage neuron implementing
a positive Boolean function based on its complete DNF or CNF expression. Finally we show that
the DNF-based architecture is only possible with spiking dendritic units and not with saturating
dendritic units.
Figure 1: Modeling dendritic spikes, dendritic saturations, and their impact on computation capacity. (A) Two types of transfer functions for a unit j, with height normalized to 1 and a variable threshold Θ_j; the input is the local weighted sum W_j · X and the output is y_j. (A1) A spiking transfer function models somatic spikes and dendritic spikes. (A2) A saturating transfer function models dendritic saturations. (B) From left to right: a unit implementing the term T(X) = x_1 ∧ x_2, and two units implementing the clause C(X) = x_3 ∨ x_4; circles are synaptic weights, and squares give the threshold and the type of transfer function (spk: spiking, sat: saturating). (C) Two architectures to implement all positive Boolean functions in a two stage neuron: the d dendritic units correspond to all the terms of a DNF (left) or to all the clauses of a CNF (right); the somatic unit respectively implements an AND or an OR logic operation.
3.1 Computation of positive Boolean functions using non-linear dendritic units
Lemma 1. A two stage neuron with non-negative synaptic weights and increasing transfer functions necessarily implements positive Boolean functions.

Proof. Let f be the Boolean function representing the input-output mapping of a two stage neuron, and take two binary vectors X and Z such that X ≤ Z. We have, for all j ∈ {1, 2, . . . , d}, non-negative local weights w_{i,j} ≥ 0; thus for a given dendritic unit j we have:

w_{i,j} x_i ≤ w_{i,j} z_i.

We can sum these inequalities over all i, and the F_j are increasing transfer functions, thus:

F_j(W_j · X) ≤ F_j(W_j · Z).

We can sum the d inequalities corresponding to every dendritic unit, and F_0 is an increasing transfer function, thus:

f(X) ≤ f(Z).
Lemma 2. A term (resp. a clause) can be implemented by a unit with a supra-linear (resp. sub-linear) transfer function.

Proof. We need to provide the parameter sets of a transfer function implementing a term (resp. a clause) with the constraint that the transfer function is supra-linear (resp. sub-linear). Indeed, a supra-linear transfer function (like the spiking transfer function) with the parameter set w_i = 1 if i ∈ X^(j), w_i = 0 otherwise, and Θ = card(X^(j)) implements the term T_j. A sub-linear transfer function (like the saturating transfer function) with the parameter set w_i = 1 if i ∈ X^(j), w_i = 0 otherwise, and Θ = 1 implements the clause C_j. These implementations are illustrated by examples in Figure 1B and in the sketch below.
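The sketch (ours) spells out the two parameter sets from the proof on binary inputs:

```python
def term_unit(X, idx):
    """Spiking unit for the term over variables in idx:
    w_i = 1 on idx, Theta = card(idx)."""
    return 1 if sum(X[i] for i in idx) >= len(idx) else 0

def clause_unit(X, idx):
    """Saturating unit for the clause over variables in idx:
    w_i = 1 on idx, Theta = 1; on binary inputs this equals the OR."""
    s = sum(X[i] for i in idx)
    return 1 if s >= 1 else s

X = (1, 1, 0, 1)                                     # example input
print(term_unit(X, (0, 1)), clause_unit(X, (2, 3)))  # -> 1 1
```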
Lemma 3. A term (resp. a clause) cannot be implemented by a unit with a strictly sub-linear (resp. supra-linear) transfer function.

Proof. We prove this lemma for a term; the proof is similar for a clause. Let T_j be the term defined by X^(j), with card(X^(j)) ≥ 2. First, for all input vectors X such that x_i = 1 with i ∈ X^(j) and x_{k≠i} = 0, we have T_j(X) = 0, implying that F(W · X) = F(w_i x_i) = 0. One can sum all these elements to obtain the following equality:

Σ_{i ∈ X^(j)} F(w_i x_i) = 0.

Second, for the input vector X such that x_i = 1 for all i ∈ X^(j), we have T_j(X) = 1, implying that F(Σ_{i ∈ X^(j)} w_i x_i) = 1. Putting the two pieces together we obtain:

F( Σ_{i ∈ X^(j)} w_i x_i ) > Σ_{i ∈ X^(j)} F(w_i x_i).

This inequality shows that on the tuple of points (w_i x_i | i ∈ X^(j)) defining a term, F must be supra-linear; therefore, by Definition 3, F cannot be both strictly sub-linear and implement a term.
Using these Lemmas we show the possible and impossible implementation architectures of positive
Boolean functions in two-layer neuron models using either spiking or saturating dendritic units.
Proposition 1. A two stage neuron with non-negative synaptic weights and a sufficient number of
dendritic units with spiking transfer functions can implement only and all positive Boolean functions
based on their positive complete DNF
Proof. A two stage neuron can only compute positive Boolean functions (Lemma 1). All positive
Boolean functions can be expressed as a positive complete DNF; because a spiking dendritic unit
has a supra-linear transfer function it can implement all possible terms (Lemma 2). Therefore a two
stage neuron model without inhibition can implement only and all positive Boolean functions with
as many dendritic units as there are terms in the functions' positive complete DNF. This architecture
is represented on Figure 1C (left).
Informally, this simply means that a dendrite is a pattern detector: if a pattern is present in the
input then the dendritic unit elicits a dendritic spike. This architecture has been repeatedly invoked
by theoreticians [8] and experimentalists ([9] in supplementary material) to suggest that dendritic
spikes increase a neuron's computational capacity. With this architecture, however, the dendritic transfer function, if it is viewed as a Boolean function, formally implies the neuron's input-output
mapping. This has not been confirmed experimentally yet.
Proposition 2. A two stage neuron with non-negative synaptic weights and a sufficient number of
dendritic units with spiking or saturating transfer functions can implement only and all positive
Boolean functions based on their positive complete CNF
Proof. A two stage neuron can only compute positive Boolean functions (Lemma 1). All positive
Boolean functions can be expressed as a positive complete CNF; because a spiking or a saturating dendritic unit has a sub-linear transfer function they both can implement all possible clauses
(Lemma 2). Therefore a two stage neuron model without inhibition can implement only and all positive Boolean functions with as many dendritic units as there are clauses in the functions' positive
complete CNF. This architecture is represented on Figure 1C (right).
To our knowledge, this implementation architecture has not yet been proposed in the neuroscience
literature. It shows that saturations can increase the computational power of a neuron as much as
dendritic spikes. It also shows that another implementation architecture is possible using spiking
dendritic units. Using this architecture, the dendritic units' transfer functions do not imply the
somatic output. This independence of dendritic and somatic response to inputs has been observed in
Layer 2/3 neurons [6].
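To make the two constructions concrete, the following sketch (ours) builds the DNF-based neuron for g with spiking units and the CNF-based neuron for h with saturating units, and checks both against the definitions of g and h:

```python
from itertools import product

def spk(x, theta): return 1 if x >= theta else 0
def sat(x, theta): return 1 if x >= theta else x / theta

def g_dnf(x):  # spiking units for the terms x1x2 and x3x4; somatic OR (Theta = 1)
    units = [spk(x[0] + x[1], 2), spk(x[2] + x[3], 2)]
    return spk(sum(units), 1)

def h_cnf(x):  # saturating units for the clauses x1+x2 and x3+x4; somatic AND (Theta = 2)
    units = [sat(x[0] + x[1], 1), sat(x[2] + x[3], 1)]
    return spk(sum(units), 2)

for x in product((0, 1), repeat=4):
    assert g_dnf(x) == ((x[0] and x[1]) or (x[2] and x[3]))
    assert h_cnf(x) == ((x[0] or x[1]) and (x[2] or x[3]))
```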
Proposition 3. A two stage neuron with non-negative synaptic weights and only dendritic units with
saturating transfer functions cannot implement a positive Boolean function based on its complete
DNF
Proof. The transfer function of a saturating dendritic unit is strictly sub-linear, therefore this unit
cannot implement a term (Lemma 3).
This result suggests that spiking dendritic units are more flexible than saturating dendritic units; they allow the computation of Boolean functions through either DNF- or CNF-based architectures (illustrated in Figure 2), whereas saturating units are restricted to CNF-based architectures.
3.2 Implementation of a family of positive lnBFs using either spiking or saturating dendrites
Figure 2: Implementation of two linearly non-separable Boolean functions using CNF-based or DNF-based architectures. Four parameter sets of two-stage neuron models: circles are synaptic weights, and squares give the threshold and the unit type (spk: spiking, sat: saturating). These parameter sets implement (A1/A2) g or (B1/B2) h, two lnBFs depicted in Table 1, using: (A1/B1) a DNF-based architecture and spiking dendritic units only; (A2/B2) a CNF-based architecture and saturating dendritic units only.
The Boolean functions g and h form a family of Boolean functions we call feature binding problems
in reference to [8]. In this section we show how this family can be implemented using either a DNF-based or CNF-based architecture. For some Boolean functions, the DNF and CNF grow at different
rates as a function of the number of variables [3, 11]. This is the case when g and h are defined for
n input variables.
Example 2. Let's define g by the complete positive DNF expression Φ:

Φ(g(x_1, z_1, . . . , x_n, z_n)) := x_1 z_1 ∨ x_2 z_2 ∨ · · · ∨ x_n z_n

The same function g has a unique complete positive CNF expression; let's call it Ψ. The clauses of Ψ are exactly those elementary disjunctions of n variables that involve one variable out of each of the pairs {x_1, z_1}, {x_2, z_2}, . . . , {x_n, z_n}. Thus Ψ has 2^n clauses.

Example 3. Let's define h by the complete positive CNF expression Ψ′:

Ψ′(h(x_1, z_1, . . . , x_n, z_n)) := (x_1 ∨ z_1)(x_2 ∨ z_2) . . . (x_n ∨ z_n)

The same function h has a unique complete positive DNF expression; let's call it Φ′. The terms of Φ′ are exactly those elementary conjunctions of n variables that involve one variable out of each of the pairs {x_1, z_1}, {x_2, z_2}, . . . , {x_n, z_n}. Thus Φ′ has 2^n terms.
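The exponential growth of g's CNF is easy to see by enumeration; a sketch (ours):

```python
from itertools import product

def cnf_clauses_of_g(n):
    """Clauses of the complete positive CNF of g = x1 z1 OR ... OR xn zn:
    one variable chosen from each pair {x_i, z_i}."""
    return [tuple(("x" if pick == 0 else "z") + str(i + 1)
                  for i, pick in enumerate(choice))
            for choice in product((0, 1), repeat=n)]

for n in (2, 3, 4):
    print(n, len(cnf_clauses_of_g(n)))  # 4, 8, 16: the 2^n growth
```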
Table 2 shows the number of necessary units for g and h depending on the chosen architecture. From
Propositions 1 and 2, it is immediately clear that spiking dendritic units always give access to the
Table 2: Number of necessary units

Boolean function    # of terms in DNF    # of clauses in CNF
g                   n                    2^n
h                   2^n                  n
minimal possible two-stage neuron implementation. A neuron with spiking dendritic units can thus
implement g with n units using DNF-based and h with n units using CNF-based architectures; but
saturating units, restricted to CNF-based architectures, can only implement g with 2^n units.
4 Discussion
The main result of our study is that dendritic saturations can play a computational role that is as important as dendritic spikes: saturating dendritic units enable a neuron to compute lnBFs (as shown in
Proposition 2). The same Proposition shows that a neuron can compute lnBFs decomposed according to the CNF using spiking dendritic units; with this architecture, dendritic tuning does not imply
the somatic tuning to inputs. Moreover, we demonstrated that an important family of lnBFs formed
by g and h can be implemented in a two stage neuron using either spiking or saturating dendritic
units. We also showed that lnBFs cannot be implemented in a two stage neuron using a DNF-based
architecture with only dendritic saturating units (Proposition 3).
These results nicely separate the implications of saturating and spiking dendritic units in single neuron computation. On the one hand, spiking dendritic units are a more flexible basis for computation,
as they can be employed in two different implementation architectures (Propositions 1 and 2), where dendritic tunings (the dendritic unit transfer functions) may or may not imply the tuning of the whole
neuron. The latter may explain why dendrites can have a tuning different from the whole neuron as
has been observed in Layer 2/3 pyramidal cells of the visual cortex [6]. On the other hand, saturating
dendritic units can enhance single neuron computation through implementing all positive Boolean
functions (Proposition 2), while reducing the energetic costs associated with the active ion channels
required for dendritic spikes [4, 13].
For an infinite number of dendritic units, saturating and spiking units lead to the same increase
in computation capacity; for a finite number of dendritic units our results suggest that spiking
dendritic units could have advantages over saturating dendritic units. In the second part of our study
we showed that a family of lnBFs can be described by an expression containing an exponential
or a linear number of elements. Namely, the lnBFs defined by g or h can be implemented with
a linear number of spiking dendritic units whereas for g a neuronal implementation using only
saturations requires an exponential number of saturating dendritic units. Consequently, spiking
dendritic units may allow the minimization of dendritic units necessary to implement this family of
Boolean functions.
The Boolean functions g and h formalize feature binding problems [8] which are important and
challenging computations (see [15] for review). Some single neuron solutions to feature binding
problems have been proposed in [8], but restricted to DNF-based architectures; our results thus
generalize and extend this study by proposing alternative CNF-based solutions. Moreover, we show
that this alternative architecture enables the solution of an important family of binding problems
with a linear number of spiking dendritic units. Thus we have proposed more efficient solutions to a
family of challenging computations.
Because of their elegance and simplicity stemming from Boolean algebra, we believe our results
are applicable to more complex situations. They can be extended to continuous transfer functions,
which are more biologically plausible; in this case the notions of sub-linearity and supra-linearity
are replaced by concavity and convexity. Moreover, all the parameters used here for proofs and
examples are integer-valued but the same proofs and examples are easily extendable to continuous
steady-state rate models where parameters are real-valued. In conclusion, our results have a solid
formal basis, moreover, they both explain recent experimental findings and suggest a new way to
implement Boolean functions using saturating as well as spiking dendritic units.
References
[1] T. Abrahamsson, L. Cathala, K. Matsui, R. Shigemoto, and D.A. DiGregorio. Thin Dendrites of Cerebellar Interneurons Confer Sublinear Synaptic Integration and a Gradient of Short-Term Plasticity. Neuron, 73(6):1159-1172, March 2012.
[2] S. Cash and R. Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22(2):383-394, February 1999.
[3] Y. Crama and P.L. Hammer. Boolean Functions: Theory, Algorithms, and Applications (Encyclopedia of Mathematics and its Applications). Cambridge University Press, 2011.
[4] S. Gasparini, M. Migliore, and J.C. Magee. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. The Journal of Neuroscience, 24(49):11046-11056, December 2004.
[5] M. Hausser and B.W. Mel. Dendrites: bug or feature? Current Opinion in Neurobiology, 13(3):372-383, June 2003.
[6] H. Jia, N.L. Rochefort, X. Chen, and A. Konnerth. Dendritic organization of sensory input to cortical neurons in vivo. Nature, 464(7293):1307-1312, 2010.
[7] C. Koch. Biophysics of computation: information processing in single neurons. Oxford University Press, New York, 1999.
[8] R. Legenstein and W. Maass. Branch-Specific Plasticity Enables Self-Organization of Nonlinear Computation in Single Neurons. Journal of Neuroscience, 31(30):10787-10802, July 2011.
[9] A. Losonczy, J.K. Makara, and J.C. Magee. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature, 452(7186):436-441, March 2008.
[10] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology, 52(1-2):99-115; discussion 73-97, January 1943.
[11] P.B. Miltersen, J. Radhakrishnan, and I. Wegener. On converting CNF to DNF. Theoretical Computer Science, 347:325-335, November 2005.
[12] P. Poirazi, T. Brannon, and B.W. Mel. Pyramidal neuron as two-layer neural network. Neuron, 37(6):989-999, March 2003.
[13] A. Polsky, B.W. Mel, and J. Schiller. Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7(6):621-627, June 2004.
[14] M.W.H. Remme, M. Lengyel, and B.S. Gutkin. Democracy-independence trade-off in oscillating dendrites and its implications for grid cells. Neuron, 66(3):429-437, May 2010.
[15] A.L. Roskies. The Binding Problem. Neuron, 24:7-9, 1999.
[16] K. Vervaeke, A. Lorincz, Z. Nusser, and R.A. Silver. Gap Junctions Compensate for Sublinear Dendritic Integration in an Inhibitory Network. Science, 335(6076):1624-1628, March 2012.
[17] I. Wegener. Complexity of Boolean Functions. Wiley-Teubner, 1987.
Clustering Sparse Graphs
Yudong Chen
Department of Electrical and Computer Engineering
The University of Texas at Austin
Austin, TX 78712
[email protected]
Sujay Sanghavi
Department of Electrical and Computer Engineering
The University of Texas at Austin
Austin, TX 78712
[email protected]
Huan Xu
Mechanical Engineering Department
National University of Singapore
Singapore 117575, Singapore
[email protected]
Abstract
We develop a new algorithm to cluster sparse unweighted graphs ? i.e. partition
the nodes into disjoint clusters so that there is higher density within clusters, and
low across clusters. By sparsity we mean the setting where both the in-cluster and
across cluster edge densities are very small, possibly vanishing in the size of the
graph. Sparsity makes the problem noisier, and hence more difficult to solve.
Any clustering involves a tradeoff between minimizing two kinds of errors: missing edges within clusters and present edges across clusters. Our insight is that in
the sparse case, these must be penalized differently. We analyze our algorithm?s
performance on the natural, classical and widely studied ?planted partition? model
(also called the stochastic block model); we show that our algorithm can cluster
sparser graphs, and with smaller clusters, than all previous methods. This is seen
empirically as well.
1 Introduction
This paper proposes a new algorithm for the following task: given a sparse undirected unweighted
graph, partition the nodes into disjoint clusters so that the density of edges within clusters is higher
than the edges across clusters. In particular, we are interested in settings where even within clusters the edge density is low, and the density across clusters is an additive (or small multiplicative)
constant lower.
Several large modern datasets and graphs are sparse; examples include the web graph, social graphs
of various social networks, etc. Clustering naturally arises in these settings as a means/tool for
community detection, user profiling, link prediction, collaborative filtering etc. More generally,
there are several clustering applications where one is given as input a set of similarity relationships,
but this set is quite sparse. Unweighted sparse graph clustering corresponds to a special case in
which all similarities are either ?1? or ?0?.
As has been well-recognized, sparsity complicates clustering, because it makes the problem noisier.
Just for intuition, imagine a random graph where every edge has a (potentially different) probability
pij (which can be reflective of an underlying clustering structure) of appearing in the graph. Consider
now the edge random variable, which is 1 if there is an edge, and 0 else. Then, in the sparse graph
setting of small pij → 0, the mean of this variable is pij but its standard deviation is √pij, which
can be much larger. This problem gets worse as pij gets smaller. Another parameter governing
problem difficulty is the size of the clusters; smaller clusters are easier to lose in the noise.
Our contribution: We propose a new algorithm for sparse unweighted graph clustering. Clearly,
there will be two kinds of deviations (i.e. errors) between the given graph and any candidate clustering: missing edges within clusters, and present edges across clusters. Our key realization is that for
sparse graph clustering, these two types of error should be penalized differently. Doing so gives us
a combinatorial optimization problem; our algorithm is a particular convex relaxation of the same,
based on the fact that the cluster matrix is low-rank (we elaborate below). Our main analytical
result in this paper is theoretical guarantees on its performance for the classical planted partition
model [10], also called the stochastic block-model [1, 22], for random clustered graphs. While this
model has a rich literature (e.g., [4, 7, 10, 20]), we show that our algorithm outperforms (up to
at most log factors) every existing method in this setting (i.e. it recovers the true clustering for a
bigger range of sparsity and cluster sizes). Both the level of sparsity and the number and sizes of
the clusters are allowed to be functions of n, the total number of nodes. In fact, we show that in a
sense we are close to the boundary at which ?any? spectral algorithm can be expected to work. Our
simulation study confirms our theoretical finding that the proposed method is effective in clustering
sparse graphs and outperforms existing methods.
The rest of the paper is organized as follows: Section 1.1 provides an overview of related work;
Section 2 presents both the precise algorithm, and the idea behind it; Section 3 presents the main
results ? analytical results on the planted partition / stochastic block model ? which are shown to
outperform existing methods; Section 4 provides simulation results; and finally, the proof of main
theoretic results is outlined in Section 5.
1.1 Related Work
The general field of clustering, or even graph clustering, is too vast for a detailed survey here; we
focus on the most related threads, and therein too primarily on work which provides theoretical
?cluster recovery? guarantees on the resulting algorithms.
Correlation clustering: As mentioned above, every candidate clustering will have two kinds of errors; correlation clustering [2] weighs them equally, thus the objective is to find the clustering which
minimizes just the total number of errors. This is an NP-hard problem, and [2] develops approximation algorithms. Subsequently, there has been much work on devising alternative approximation
algorithms for both the weighted and unweighted cases, and for both agreement and disagreement
objectives [12, 13, 3, 9]. Approximations based on LP relaxation [11] and SDP relaxation [25, 19],
followed by rounding, have also been developed. All of this line of work is on worst-case guarantees. We emphasize that while we do convex relaxation as well, we do not do rounding; rather, our
convex program itself yields an optimal clustering.
Planted partition model / Stochastic block model: This is a natural and classic model for studying
graph clustering in the average case, and is also the setting for our performance guarantees. Our
results are directly comparable to work here; we formally define this setting in section 3 and present
a detailed comparison, after some notation and our theorem, in section 3 below.
Sparse and low-rank matrix decomposition: It has recently been shown [8, 6] that, under certain
conditions, it is possible to recover a low-rank matrix from sparse errors of arbitrary magnitude; this
has even been applied to graph clustering [17]. Our algorithm turns out to be a weighted version
of sparse and low-rank matrix decomposition, with different elements of the sparse part penalized
differently, based on the given input. To our knowledge, ours is the first paper to study any weighted
version; in that sense, while our weights have a natural motivation in our setting, our results are
likely to have broader implications, for example robust versions of PCA when not all errors are
created equal, but have a corresponding prior.
2 Algorithm
Idea: Our algorithm is a convex relaxation of a natural combinatorial objective for the sparse clustering problem. We now briefly motivate this objective, and then formally describe our algorithm.
Recall that we want to find a clustering (i.e. a partition of the nodes) such that in-cluster connectivity
is denser than across-cluster connectivity. Said differently, we want a clustering that has a small
number of errors, where an error is either (a) an edge between two nodes in different clusters, or
(b) a missing edge between two nodes in the same cluster. A natural (combinatorial) objective is to
minimize a weighted combination of the two types of errors.
The correlation clustering setup [2] gives equal weights to the two types of errors. However, for
sparse graphs, this will yield clusters with a very small number of nodes. This is because there is
sparsity both within clusters and across clusters; grouping nodes in the same cluster will result in a
lot of errors of type (b) above, without yielding corresponding gains in errors of type (a) ? even when
they may actually be in the same cluster. This can be very easily seen: suppose, for example, the
?true? clustering has two clusters with equal size, and the in-cluster and across-cluster edge density
are both less than 1/4. Then, when both errors are weighted equally, the clustering which puts every
node in a separate cluster will have lower cost than the true clustering.
To get more meaningful solutions, we penalize the two types of errors differently. In particular,
sparsity means that we can expect many more errors of type (b) in any solution, and hence we
should give this (potentially much) smaller weight than errors of type (a). Our crucial insight is that
we can know what kind of error will (potentially) occur on any given edge from the given adjacency
matrix itself. In particular, if aij = 1 for some pair i, j, when in any clustering it will either have no
error, or an error of type (a); it will never be an error of type (b). Similarly if aij = 0 then it can only
be an error of type (b), if at all. Our algorithm is a convex relaxation of the combinatorial problem of
finding the minimum cost clustering, with the cost for an error on edge i, j determined based on the
value of aij . Perhaps surprisingly, this simple idea yields better results than the extensive literature
already in place for planted partitions.
We proceed by representing the given adjacency matrix A as the sum of two matrices A = Y + S, where we would like Y to be a cluster matrix, with yij = 1 if and only if i, j are in the same cluster, and 0 otherwise (footnotes 1, 2). S is the corresponding error matrix as compared to the given A, and has values of +1, -1 and 0.
We now make a cost matrix C ∈ R^{n×n} based on the insight above; we choose two values cA and cAc and set cij = cA if the corresponding aij = 1, and cij = cAc if aij = 0. However, diagonal cii = 0. With this setup, we have
Combinatorial Objective:
    min_{Y,S}  ‖C ⊙ S‖_1                                    (1)
    s.t.  Y + S = A,
          Y is a cluster matrix.
Here C ⊙ S denotes the matrix obtained via element-wise product between the two matrices C, S, i.e. (C ⊙ S)ij = cij sij. Also ‖·‖_1 denotes the element-wise ℓ1 norm (i.e. sum of absolute values of elements).
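For concreteness, a minimal Python sketch of building the cost matrix C from a given adjacency matrix (illustrative only; cA and cAc are left as free parameters here, with Theorem 1 below suggesting specific values):

import numpy as np

def build_cost_matrix(A, c_A, c_Ac):
    """Assign weight c_A to observed edges (a_ij = 1) and c_Ac to
    non-edges (a_ij = 0); diagonal costs are zero."""
    C = np.where(A == 1, c_A, c_Ac).astype(float)
    np.fill_diagonal(C, 0.0)
    return C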
Algorithm: Our algorithm involves solving a convex relaxation of this combinatorial objective, by replacing the "Y is a cluster matrix" constraint with (i) constraints 0 ≤ yij ≤ 1 for all elements i, j, and (ii) a nuclear norm (footnote 3) penalty ‖Y‖_* in the objective. The latter encourages Y to be low-rank, and is based on the well-established insight that the cluster matrix (being a block-diagonal collection of 1's) is low-rank. Thus we have our algorithm:
Sparse Graph Clustering:
    min_{Y,S}  ‖Y‖_* + ‖C ⊙ S‖_1                            (2)
    s.t.  0 ≤ yij ≤ 1, ∀ i, j,
          Y + S = A.                                         (3)
Once Ŷ is obtained, check if it is a cluster matrix (say e.g. via an SVD, which will also reveal cluster membership if it is). If it is not, any one of several rounding/aggregation ideas can be used empirically. Our theoretical results provide sufficient conditions under which the optimum of the convex program is integral and a clustering, with no rounding required. Section 3 in the supplementary material provides details on fast implementation for large matrices; this is one reason
Footnote 1: In this paper we will assume the convention that aii = 1 and yii = 1 for all nodes i.
Footnote 2: In other words, Y is the adjacency matrix of a graph consisting of disjoint cliques.
Footnote 3: The nuclear norm of a matrix is the sum of its singular values.
we did not include a semidefinite constraint on Y in our algorithm. Our algorithm has two positive parameters, cA and cAc; we defer discussion on how to choose them until after our main result.
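The paper's fast solver is described in its supplementary material; purely as an illustration of the kind of splitting method one could apply to the program (2)-(3), here is a minimal ADMM-style sketch in Python (not the authors' implementation; step size and iteration count are illustrative assumptions). It eliminates S = A − Y and alternates a nuclear-norm proximal step with an elementwise weighted soft-threshold plus box projection:

import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def cluster_sparse_graph(A, C, rho=1.0, iters=200):
    """ADMM-style sketch for  min ||Y||_* + ||C . (A - Y)||_1
    subject to 0 <= Y <= 1 (here S = A - Y has been eliminated)."""
    n = A.shape[0]
    Z = np.zeros((n, n)); U = np.zeros((n, n))
    for _ in range(iters):
        Y = svt(Z - U, 1.0 / rho)                   # nuclear-norm prox
        V = Y + U
        # elementwise prox of c_ij |a_ij - z|: soft-threshold toward A
        Z = A - np.sign(A - V) * np.maximum(np.abs(A - V) - C / rho, 0.0)
        Z = np.clip(Z, 0.0, 1.0)                    # box constraint
        U = U + Y - Z                               # dual update
    return Z

The elementwise step uses the closed-form prox of c|a − z|, soft-thresholding each entry toward the observed value a; this is exactly where the input-dependent weights cA and cAc enter.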
Comments: Based on the given A and these values, the optimal Ŷ may or may not be a cluster matrix. If Ŷ is a cluster matrix, then clearly it minimizes the combinatorial objective above. Additionally, it is not hard to see (proof in the supplementary material) that its performance is "monotone", in the sense that adding edges "aligned with" Ŷ cannot result in a different optimum, as summarized in the following lemma. This shows that, in the terminology of [19, 4, 14], our method is robust under a classical semi-random model where an adversary can add edges within clusters and remove edges between clusters.
Lemma 1. Suppose Ŷ is the optimum of Formulation (2) for a given A. Suppose now we arbitrarily change some edges of A to obtain Ã, by (a) choosing some edges such that ŷij = 1 but aij = 0, and making ãij = 1, and (b) choosing some edges where ŷij = 0 but aij = 1, and making ãij = 0. Then, Ŷ is also an optimum of Formulation (2) with Ã as the input.
Our theoretical guarantees characterize when the optimal Ŷ will be a cluster matrix, and recover
the clustering, in a natural classical problem setting called the planted partition model [10]. These
theoretical guarantees also provide guidance on how one would pick parameter values in practice;
we thus defer discussion on parameter picking until after we present our main theorem.
3 Performance Guarantees
In this section we provide analytical performance guarantees for our algorithm under a natural and
classical graph clustering setting: (a generalization of) the planted partition model [10]. We first
describe the model, and then our results.
(Generalized) Planted partition model: Consider a random graph generated as follows: the n nodes are partitioned into r disjoint clusters, which we will refer to as the "true" clusters. Let K be the minimum cluster size. For every pair of nodes i, j that belong to the same cluster, edge (i, j) is present in the graph with probability that is at least p̄, while for every pair where the nodes are in different clusters the edge is present with probability at most q̄. We call this model the "generalized" planted partition because we allow for clusters to be different sizes, and the edge probabilities also
to be different (but uniformly bounded as mentioned). The objective is to find the partition, given
the random graph generated from it.
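For concreteness, a minimal sketch of sampling from the uniform version of this model (equal-size clusters; the convention aii = 1 follows footnote 1; names and structure are illustrative assumptions):

import numpy as np

def planted_partition(n, r, p, q, rng=None):
    """Sample an adjacency matrix from the (uniform) planted partition
    model: n nodes, r equal-size clusters, in-cluster edge prob. p,
    cross-cluster edge prob. q. Returns (A, labels)."""
    assert n % r == 0, "sketch assumes r divides n"
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(r), n // r)
    same = labels[:, None] == labels[None, :]
    P = np.where(same, p, q)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1); A = A + A.T        # symmetrize
    np.fill_diagonal(A, 1)                # convention: a_ii = 1
    return A, labels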
Recall that A is the given adjacency matrix, and let Y* be the matrix corresponding to the true clusters as above, i.e. y*ij = 1 if and only if i, j are in the same true cluster, and 0 otherwise. Our result below establishes conditions under which our algorithm, specifically the convex program (2)-(3), yields this Y* as the unique optimum (without any further need for rounding etc.) with high probability (w.h.p.). Throughout the paper, with high probability means with probability at least 1 − c0 n^{−10} for some absolute constant c0.
Theorem 1. Suppose we choose
    cA = (1/(16 √n log n)) min{ √((1 − q̄)/q̄), √(n/log⁴ n) },  and  cAc = (1/(16 √n log n)) min{ √(p̄/(1 − p̄)), 1 }.
Then (Y*, A − Y*) is the unique optimal solution to Formulation (2) w.h.p. provided q̄ ≤ 1/4, and
    (p̄ − q̄)/√p̄ ≥ c1 (√n/K) log² n,
where c1 is an absolute positive constant.
Our theorem quantifies the tradeoff between the two quantities governing the hardness of a planted partition problem (the difference in edge densities p̄ − q̄, and the minimum cluster size K) required for our algorithm to succeed, i.e. to recover the planted partition without any error. Note that here p, q and K are allowed to scale with n. We now discuss and remark on our result, and then compare its performance to past approaches and theoretical results in Table 1.
Note that we need K to be Ω(√n log² n). This will be achieved only when p̄ − q̄ is a constant that does not change with n; indeed in this extreme our theorem becomes a "dense graph" result, matching e.g. the scaling in [17, 19]. If (p̄ − q̄)/p̄ decreases with n, corresponding to a sparser regime, then the minimum size of K required will increase.
A nice feature of our work is that we need p̄ − q̄ to be large only as compared to √p̄; several other existing results (see Table 1) require a lower bound (as a function only of n, or n, K) on p̄ − q̄ itself. This allows us to guarantee recovery for much sparser graphs than all existing results. For example, when K is Θ(n), p̄ and p̄ − q̄ can be as small as Θ(log⁴ n/n). This scaling is close to optimal: if p̄ < (log n)/n then each cluster will be almost surely disconnected, and if p̄ − q̄ = o(1/n), then on average a node has equally many neighbours in its own cluster and in another cluster; both are ill-posed situations in which one cannot hope to recover the underlying clustering. When K = Θ(√n log² n), p̄ and p̄ − q̄ can be Θ(n log⁴ n/K²), while the previous best result for this regime requires at least Ω(n²/K³) [20].
Parameters: Our algorithm has two parameters: cA and cAc. The theorem provides a way to choose their values, assuming we know the values of the bounds p̄, q̄. To estimate these from data, we can use the following rule of thumb; our empirical results are based on this rule. If all the clusters have equal size K, it is easy to verify that the first eigenvalue of E[A − I] is K(p − q) − p + nq with multiplicity 1, the second eigenvalue is K(p − q) − p with multiplicity n/K − 1, and the third eigenvalue is −p with multiplicity n − n/K [16]. We thus have the following rule of thumb:
1. Compute the eigenvalues of A − I, denoted λ1, . . . , λn (in decreasing order).
2. Let r = arg max_{i=1,...,n−1} (λi − λi+1). Set K = n/r.
3. Solve for p and q from the equations
   K(p − q) − p + nq = λ1,
   K(p − q) − p = λ2.
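A minimal Python sketch of this rule of thumb (interpreting the eigengap in step 2 as the largest consecutive difference of the decreasingly sorted spectrum; that reading, and the function name, are assumptions):

import numpy as np

def estimate_params(A):
    """Rule-of-thumb estimates of (K, p, q) from the spectrum of A - I,
    assuming r equal-size clusters with K > 1 (steps 1-3 above)."""
    n = A.shape[0]
    lam = np.sort(np.linalg.eigvalsh(A - np.eye(n)))[::-1]  # descending
    r = int(np.argmax(lam[:-1] - lam[1:])) + 1              # largest gap
    K = n // r
    q = (lam[0] - lam[1]) / n          # from lam1 - lam2 = n*q
    p = (lam[1] + K * q) / (K - 1)     # from K(p - q) - p = lam2
    return K, p, q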
Table 1: Comparison with literature. This table shows the lower-bound requirements on K and p − q that existing literature needs for exact recovery of the planted partitions/clusters. Note that this table is under the assumption that every cluster is of size K, and the edge densities are uniformly p and q (for within and across clusters respectively). As can be seen, our algorithm achieves a better p − q scaling than every other result. And, we achieve a better K scaling than every other result except Shamir [23], Oymak & Hassibi [21] and Giesen & Mitsche [15]; we are off by at most a log² n factor from each of these. Perhaps more importantly, we use a completely different algorithmic approach from all of the others.
Paper | Min. cluster size K | Density difference p − q
Boppana [5] | n/2 | Ω(√(p log n / n))
Jerrum & Sorkin [18] | n/2 | Ω(1/n^{1/6−ε})
Condon & Karp [10] | Ω(n) | Ω(1/n^{1/2−ε})
Carson & Impagliazzo [7] | n/2 | Ω(√p/(√n log n))
Feige & Kilian [14] | n/2 | Ω(1/√(n log n))
Shamir [23] | Ω(√n log n) | Ω(√(n log n)/K)
McSherry [20] | Ω(n^{2/3}) | Ω(√(p n²/K³))
Giesen & Mitsche [15] | Ω(√n) | Ω(max{√(n log n)/K, √n/K})
Bollobás [4] | Ω(n/log^{1/8} n) | Ω(max{√(q log n/n), (log n)/n})
This paper | Ω(√n log² n) | Ω(√(p n) log² n/K)
Oymak & Hassibi [21] | Ω(√n) | Ω(√(q n log n)/K)
[Figure 1 plot panels (a) and (b): p versus q curves for Our method, SLINK, Spectral, and L+S.]
Figure 1: (a) Comparison of our method with Single-Linkage clustering (SLINK), spectral clustering, and the low-rank-plus-sparse (L+S) approach. The area above each curve is the set of (p, q) values for which a method successfully recovers the underlying true clustering. (b) More detailed results for
the area in the box in (a). The experiments are conducted on synthetic data with n = 1000 nodes
and r = 5 clusters with equal size K = 200.
4 Empirical Results
We perform experiments on synthetic data, and compare with other methods. We generate a graph using the planted partition model with n = 1000 nodes, r = 5 clusters with equal size K = 200, and p, q ∈ [0, 1]. We apply our method to the data, where we use the fast solver described in the supplementary material. We estimate p and q using the heuristic described in Section 3, and choose the weights cA and cAc according to the main theorem (footnote 4). Due to numerical accuracy, the output Ŷ of our algorithm may not be integer, so we do the following simple rounding: compute the mean ȳ of the entries of Ŷ, and round each entry of Ŷ to 1 if it is greater than ȳ, and 0 otherwise. We measure the error by ‖Y* − round(Ŷ)‖_1, which is simply the number of misclassified pairs. We say our method succeeds if it misclassifies less than 0.1% of the pairs.
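The rounding and error metric just described are simple to express in code; a minimal sketch (function names are hypothetical):

import numpy as np

def round_and_score(Y_hat, Y_star):
    """Threshold Y_hat at the mean of its entries, then count
    misclassified pairs against the true cluster matrix Y_star."""
    Y_round = (Y_hat > Y_hat.mean()).astype(int)
    errors = np.abs(Y_star - Y_round).sum()   # ||Y* - round(Y_hat)||_1
    return Y_round, errors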
For comparison, we consider three alternative methods: (1) Single-Linkage clustering (SLINK) [24], which is a hierarchical clustering method that merges the most similar clusters in each iteration. We use the difference of neighbours, namely ‖Ai· − Aj·‖_1, as the distance measure of nodes i and j, and output when SLINK finds a clustering with r = 5 clusters. (2) A spectral clustering method [26], where we run SLINK on the top r = 5 singular vectors of A. (3) The low-rank-plus-sparse approach [17, 21], followed by the same rounding scheme. Note the first two methods assume knowledge of r, which is not available to our method. Success is measured in the same way as above.
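For the spectral baseline (2), a minimal sketch using SciPy's single-linkage routines (the default Euclidean metric and the function names here are assumptions; the original experiments may differ in details):

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def spectral_slink(A, r):
    """Single-linkage clustering on the top-r singular vectors of A."""
    U, _, _ = np.linalg.svd(A)
    Z = linkage(U[:, :r], method='single')
    return fcluster(Z, t=r, criterion='maxclust')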
For each q, we find the smallest p for which a method succeeds, and average over 20 trials. The
results are shown in Figure 1(a), where the area above each curves corresponds to the range of
feasible (p, q) for each method. It can be seen that our method subsumes all others, in that we
succeed for a strictly larger range of (p, q). Figure 1(b) shows more detailed results for sparse graphs
(p ? 0.3, q ? 0.1), for which SLINK and trace-norm-plus unweighted `1 completely fail, while our
method significantly outperforms the spectral method, the only alternative method that works in this
regime.
5 Proof of Theorem 1
Footnote 4: We point out that searching for the best cA and cAc while keeping cA/cAc fixed might lead to better performance, which we do not pursue here.
Overview: Let S* ≜ A − Y*. The proof consists of two main steps: (a) developing a new approximate dual certificate condition, i.e. a set of stipulations which, if satisfied by any matrix W, would
guarantee the optimality of (Y*, S*), and (b) constructing a W that satisfies these stipulations with
high probability. While at a high level these two steps have been employed in several papers on
sparse and low-rank matrix decomposition, our analysis is different because it relies critically on
the specific clustering setting we are in. Thus, even though we are looking at a potentially more
involved setting with input-dependent weights on the sparse matrix regularizer, our proof is much
simpler than several others in this space. Also, existing proofs do not cover our setting.
Preliminaries: Define support sets Ω ≜ support(S*) and R ≜ support(Y*). Their complements are Ω^c and R^c respectively. Due to the constraints (3) in our convex program, if (Y* + Δ, S* − Δ) is a feasible solution to the convex program (2), then it has to be that Δ ∈ D, where
    D ≜ {M ∈ R^{n×n} | ∀(i, j) ∈ R: −1 ≤ mij ≤ 0;  ∀(i, j) ∈ R^c: 0 ≤ mij ≤ 1}.
Thus we only need to execute steps (a), (b) above for optimality over this restricted set of deviations.
Finally, we define the (now standard) projection operators: P_Ω(M) is the matrix whose (i, j)-th entry is mij if (i, j) ∈ Ω, and 0 else. Let the SVD of Y* be U0 Σ0 U0^T (notice that Y* is a symmetric positive semidefinite matrix), and let P_{T⊥}(M) ≜ (I − U0 U0^T) M (I − U0 U0^T) be the projection of M onto the space of matrices whose columns and rows are orthogonal to those of Y*, and P_T(M) ≜ M − P_{T⊥}(M).
Step (a) - Dual certificate condition: The following proposition provides a sufficient condition for the optimality of (Y*, S*).
Proposition 1 (New Dual Certificate Conditions for Clustering). If there exists a matrix W ∈ R^{n×n} and a positive number ε obeying the following conditions
1. ‖P_{T⊥}(W)‖ ≤ 1.
2. ‖P_T(W)‖_∞ ≤ (ε/2) min{cAc, cA}.
3. ⟨P_Ω(U0 U0^T + W), Δ⟩ ≥ (1 + ε) ‖P_Ω(C ⊙ Δ)‖_1, ∀ Δ ∈ D.
4. ⟨P_{Ω^c}(U0 U0^T + W), Δ⟩ ≥ −(1 − ε) ‖P_{Ω^c}(C ⊙ Δ)‖_1, ∀ Δ ∈ D.
then (Y*, S*) is the unique optimal solution to the convex program (2).
The proof is in the supplementary material; it also involves several steps unique to our clustering
setup here.
Step (b) - Dual certificate constructions: We now construct a W , and show that it satisfies the
conditions in Proposition 1 w.h.p. (but not always, and this is key to its simple construction). To keep
the notation light, we consider the standard planted partition model, where the edge probabilities are
uniform; that is, for every pair of nodes in the same cluster, there is an edge between them with
probability p ≡ p̄, and for every pair where the nodes are in different clusters, the edge is present
with probability q ≡ q̄. It is straightforward to adapt the proof to the general case with non-uniform
edge probabilities. We define W ≜ W1 + W2, where
    W1 ≜ −P_Ω(U0 U0^T) + Σ_{m=1}^{r} ((1 − p)/p) (1/km) 1_{Rm ∩ Ω^c},
    W2 ≜ (1 + ε) C ⊙ S* + (cAc (1 − p)/p) 1_{R ∩ Ω^c} − (cA q/(1 − q)) 1_{R^c ∩ Ω^c},
with Rm the index set of the m-th cluster, km its size, and 1_X the 0/1 indicator matrix of the set X.
Intuitively speaking, the idea is that W1 and W2 are zero mean random matrices, so they are likely
to have small norms. To prove Theorem 1, it remains to show that W satisfies the desired conditions
w.h.p.; this is done below, with proof in the supplementary, and is much simpler than similar proofs
in the sparse-plus-low-rank literature.
Proposition 2. Under the assumptions of Theorem 1, with high probability, W satisfies the conditions in Proposition 1 with ε = (2 log² n/K) √(n/p).
6 Conclusion
We presented a convex optimization formulation, essentially a weighted version of low-rank matrix decomposition, to address graph clustering where the graph is sparse. We showed that under a wide range of problem parameters, the proposed method is guaranteed to recover the correct clustering. In fact, our theoretical analysis shows that the proposed method outperforms, i.e., succeeds under less restrictive conditions, every existing method in this setting. Simulation studies also validate the efficiency and effectiveness of the proposed method.
This work is motivated by the analysis of large-scale social networks, where, inherently, even actors (nodes) within the same cluster are likely to lack connections. As such, immediate goals for future work include faster algorithm implementations, as well as developing effective postprocessing schemes (e.g., rounding) when the obtained solution is not an exact cluster matrix.
Acknowledgments
S. Sanghavi would like to acknowledge NSF grants 0954059 and 1017525, and ARO grant
W911NF1110265. The research of H. Xu is partially supported by the Ministry of Education of
Singapore through NUS startup grant R-265-000-384-133.
References
[1] P. Holland, K.B. Laskey, and S. Leinhardt. Stochastic blockmodels: Some first steps. Social Networks, 5:109-137, 1983.
[2] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine Learning, 56(1):89-113, 2004.
[3] H. Becker. A survey of correlation clustering. Available online at http://www1.cs.columbia.edu/~hila/clustering.pdf, 2005.
[4] B. Bollobás and A.D. Scott. Max cut for random graphs with a planted partition. Combinatorics, Probability and Computing, 13(4-5):451-474, 2004.
[5] R.B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Foundations of Computer Science, 1987, 28th Annual Symposium on, pages 280-285. IEEE, 1987.
[6] E.J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Arxiv preprint arXiv:0912.3599, 2009.
[7] T. Carson and R. Impagliazzo. Hill-climbing finds random planted bisections. In Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, pages 903-909. Society for Industrial and Applied Mathematics, 2001.
[8] V. Chandrasekaran, S. Sanghavi, P.A. Parrilo, and A. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572-596, 2011.
[9] M. Charikar, V. Guruswami, and A. Wirth. Clustering with qualitative information. In Foundations of Computer Science, 2003, 44th Annual IEEE Symposium on, pages 524-533. IEEE, 2003.
[10] A. Condon and R.M. Karp. Algorithms for graph partitioning on the planted partition model. Random Structures and Algorithms, 18(2):116-140, 2001.
[11] E. Demaine and N. Immorlica. Correlation clustering with partial information. Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques, pages 71-80, 2003.
[12] E.D. Demaine, D. Emanuel, A. Fiat, and N. Immorlica. Correlation clustering in general weighted graphs. Theoretical Computer Science, 361(2):172-187, 2006.
[13] D. Emanuel and A. Fiat. Correlation clustering: minimizing disagreements on arbitrary weighted graphs. Algorithms-ESA 2003, pages 208-220, 2003.
[14] U. Feige and J. Kilian. Heuristics for semirandom graph problems. Journal of Computer and System Sciences, 63(4):639-671, 2001.
[15] J. Giesen and D. Mitsche. Bounding the misclassification error in spectral partitioning in the planted partition model. In Graph-Theoretic Concepts in Computer Science, pages 409-420. Springer, 2005.
[16] J. Giesen and D. Mitsche. Reconstructing many partitions using spectral techniques. In Fundamentals of Computation Theory, pages 433-444. Springer, 2005.
[17] A. Jalali, Y. Chen, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. Arxiv preprint arXiv:1104.4803, 2011.
[18] M. Jerrum and G.B. Sorkin. The metropolis algorithm for graph bisection. Discrete Applied Mathematics, 82(1-3):155-175, 1998.
[19] C. Mathieu and W. Schudy. Correlation clustering with noisy input. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 712-728. Society for Industrial and Applied Mathematics, 2010.
[20] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001, 42nd IEEE Symposium on, pages 529-537. IEEE, 2001.
[21] S. Oymak and B. Hassibi. Finding dense clusters via "low rank + sparse" decomposition. Arxiv preprint arXiv:1104.5186, 2011.
[22] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic block model. Technical Report 791, Statistics Department, UC Berkeley, 2010.
[23] R. Shamir and D. Tsur. Improved algorithms for the random cluster graph model. Random Structures & Algorithms, 31(4):418-449, 2007.
[24] R. Sibson. SLINK: an optimally efficient algorithm for the single-link cluster method. The Computer Journal, 16(1):30-34, 1973.
[25] C. Swamy. Correlation clustering: maximizing agreements via semidefinite programming. In Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, pages 526-527. Society for Industrial and Applied Mathematics, 2004.
[26] U. Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416, 2007.
Learning in the Vestibular System:
Simulations of Vestibular Compensation
Using Recurrent Back-Propagation
Thomas J. Anastasio
University of Illinois
Beckman Institute
405 N. Mathews Ave.
Urbana, IL 61801
Abstract
Vestibular compensation is the process whereby normal functioning is
regained following destruction of one member of the pair of peripheral
vestibular receptors. Compensation was simulated by lesioning a dynamic
neural network model of the vestibulo-ocular reflex (VOR) and retraining it
using recurrent back-propagation. The model reproduced the pattern of VOR
neuron activity experimentally observed in compensated animals, but only if
connections heretofore considered uninvolved were allowed to be plastic.
Because the model incorporated nonlinear units, it was able to reconcile
previously conflicting, linear analyses of experimental results on the dynamic
properties of VOR neurons in normal and compensated animals.
1 VESTIBULAR COMPENSATION
Vestibular compensation is one of the oldest and most well studied paradigms in motor
learning. Although it is neurophysiologically well described, the adaptive mechanisms
underlying vestibular compensation, and its effects on the dynamics of vestibular
responses, are still poorly understood. The purpose of this study is to gain insight into
the compensatory process by simulating it as learning in a recurrent neural network
model of the vestibulo-ocular reflex (VOR).
The VOR stabilizes gaze by producing eye rotations that counterbalance head
rotations. It is mediated by brainstem neurons in the vestibular nuclei (VN) that relay
head velocity signals from vestibular sensory afferent neurons to the motoneurons of
the eye muscles (Wilson and Melvill Jones 1979). The VOR circuitry also processes
the canal signals, stretching out their time constants by four times before transmitting
this signal to the motoneurons. This process of time constant lengthening is known as
velocity storage (Raphan et al. 1979).
The VOR is a bilaterally symmetric structure that operates in push-pull. The VN are
linked bilaterally by inhibitory commissural connections. Removal of the vestibular
receptors from one side (hemilabyrinthectomy) unbalances the system, resulting in
continuous eye movement that occurs in the absence of head movement, a condition
known as spontaneous nystagmus. Such a lesion also reduces VOR sensitivity (gain)
and eliminates velocity storage. Compensatory restoration of VOR occurs in stages
(Fetter and Zee 1988). It begins by quickly eliminating spontaneous nystagmus, and
continues by increasing VOR gain. Curiously, velocity storage never recovers.
2 NETWORK ARCHITECTURE
The horizontal VOR is modeled as a three-layered neural network (Figure 1). All of
the units are nonlinear, passing their weighted input sums through the sigmoidal
squashing function. This function bounds unit responses between zero and one. Input
units represent afferents from the left (lhc) and right (rhc) horizontal semicircular
canal receptors. Output units correspond to motoneurons of the lateral (lr) and medial
(mr) rectus muscles of the left eye. Interneurons in the VN are represented by hidden
units on the left (lvn1, lvn2) and right (rvn1, rvn2) sides of the model brainstem. Bias
units stand for non-vestibular inputs, on the left (lb) and right (rb) sides.
Network connectivity reflects the known anatomy of mammalian VOR (Wilson and
Melvill Jones 1979). Vestibular commissures are modeled as recurrent connections
between hidden units on opposite sides. All connection weights to the hidden units are
plastic, but those to the outputs are initially fixed, because it is generally believed that
synaptic plasticity occurs only at the VN level in vestibular compensation (Galiana et
al. 1984). Fixed hidden-to-output weights have a crossed, reciprocal pattern.
3 TRAINING THE NORMAL NETWORK
The simulations began by training the network shown in Figure I, with both vestibular
inputs intact (normal network), to produce the VOR with velocity storage (Anastasio
1991). The network was trained using recurrent back-propagation (Williams and
Zipser 1989). The input and desired output sequences correspond to the canal afferent
signals and motoneuron eye-velocity commands that would produce the VOR response
to two impulse head rotational accelerations, one to the left and the other to the right.
One input (rhc) and desired output (lr) sequence is shown in Figure 2A (dotted and
dashed, respectively). Those for lhc and mr (not shown) are identical but inverted.
The desired output responses are equal in amplitude to the inputs, producing VOR
Figure 1. Recurrent Neural Network Model of the Horizontal Vestibulo-Ocular Reflex (VOR). lhc, rhc: left and right horizontal semicircular canal afferents; lvn1, lvn2, rvn1, rvn2: vestibular nucleus neurons on left and right sides of model brainstem; lr, mr: lateral and medial rectus muscles of left eye; lb, rb: left and right non-vestibular inputs. This and subsequent figures redrawn from Anastasio (in press).
eye movements that would perfectly counterbalance head movements. The output responses decay more slowly than the input responses, reflecting velocity storage. Between head movements, both desired outputs have the same spontaneous firing rate of 0.50. With output spontaneous rates (SRs) balanced, no push-pull eye velocity command is given and, consequently, no VOR eye movement would be made.
The normal network learns the VOR transformation after about 4,000 training sequence presentations (passes). The network develops reciprocal connections from input to hidden units, as in the actual VOR (Wilson and Melvill Jones 1979). Inhibitory recurrent connections form an integrating (lvn1, rvn1) and a non-integrating (lvn2, rvn2) pair of hidden units (Anastasio 1991). The integrating pair subserve velocity storage in the network. They have strong mutual inhibition and exert net positive feedback on themselves. The non-integrating pair have almost no mutual inhibition.
4 SIMULATING VESTIBULAR COMPENSATION
After the normal network is constructed, with both inputs intact, vestibular
compensation can be simulated by removing the input from one side and retraining
with recurrent back-propagation. Left hemilabyrinthectomy produces deficits in the
model that correspond to those observed experimentally. The responses of output unit
lr acutely (i.e. immediately) following left input removal are shown in Figure 2A. The SR of lr (solid) is greatly increased above normal (dashed); that of mr (not shown) is decreased by the same amount. This output SR imbalance would result in eye movement to the left in the absence of head movement (spontaneous nystagmus). The gain of the outputs is greatly decreased. This is due to removal of one half of the network input, and to the SR imbalance forcing the output units into the low-gain extremes of the squashing function. Velocity storage is also eliminated by left input removal, due to events at the hidden unit level (see below).
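As a sketch of how such a lesion could be expressed in code (continuing the illustrative network above; treating column 0 as the lhc input is an assumption about ordering):

import numpy as np

def lesion_left_canal(W_in):
    """Simulated left hemilabyrinthectomy: silence the lhc input by
    zeroing its weights; retraining then proceeds with this input
    permanently absent."""
    W_lesioned = W_in.copy()
    W_lesioned[:, 0] = 0.0
    return W_lesioned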
During retraining, the time course of simulated compensation is similar to that
[Figure 2 plot panels A-D: unit responses versus network cycles at 0, 200, 900, and 6,700 passes.]
Figure 2. Simulated Compensation in the VOR Neural Network Model. Response of lr (solid) is shown at each stage of compensation: A, acutely (i.e. immediately) following the lesion; B, after spontaneous nystagmus has been eliminated; C, after VOR gain has been largely restored; D, after full recovery of VOR. Desired response of lr (dashed) shown in all plots. Intact input from rhc (dotted) shown in A only.
observed experimentally (Fetter and Zee 1988). Spontaneous nystagmus is eliminated
after 200 passes, as the SRs of the output units are brought back to their normal level
(Figure 2B). Output unit gain is largely restored by 900 passes, but time constant
remains close to that of the inputs (Figure 2C). At this stage, VOR gain would have increased substantially, but its time constant would remain low, indicating loss of velocity storage. This stage approximates the extent of experimentally observed compensation (ibid.). Completely restoring the normal VOR, with full velocity storage, requires over seven times more retraining (Figure 2D).
The responses of the hidden units during each stage of simulated compensation are
shown in Figure 3A and 3C. Average hidden unit SR and gain are shown as dotted
lines in Figure 3A and 3C, respectively. Acutely following left input removal (AC stage), the SRs of left (dashed) and right (solid) hidden units decrease and increase, respectively (Figure 3A). One left hidden unit (lvn1) is actually silenced. Hidden unit gain at AC stage is greatly reduced bilaterally (Figure 3C), as for the outputs.
At the point where spontaneous nystagmus is eliminated (NE stage), hidden unit SRs are balanced bilaterally, and none of the units are spontaneously silent (Figure 3A). When VOR gain is largely restored (GR stage, corresponding to experimentally observed compensation), the gains of the hidden units have substantially increased (Figure 3C). At GR stage, average hidden unit SR has also increased but the bilateral SR balance has been strictly maintained (Figure 3A). A comparison with experimental data (Yagi and Markham 1984; Newlands and Perachio 1990) reveals that the behavior of hidden units in the model does not correspond to that observed for real VN neurons in compensated animals. Rather than having bilateral SR balance, the average SR of VN neurons in compensated animals is lower on the lesion side and higher on the intact side. Moreover, many lesion-side VN neurons are permanently silenced. Also, rather than substantially recovering gain, the gains of VN neurons in compensated animals increase little from their low values acutely following the lesion.
The network model adopts its particular (and unphysiological) solution to vestibular compensation because, with fixed connection weights to the outputs, compensation can be brought about only by changes in hidden unit behavior. Thus, output SRs will be balanced only if hidden SRs are balanced, and output gain will increase only if hidden gain increases. The discrepancy between model and actual VN neuron data suggests that compensation cannot rely solely on synaptic plasticity at the VN level.
5 RELAXING CONSTRAINTS
A better match between model and experimental VN neuron data can be achieved by
rerunning the compensation simulation with modifiable weights at all allowed network
connections (Figure 1). Bias-to-output and hidden-to-output synaptic weights, which were previously fixed, are now made plastic. These extra degrees of freedom give the adapting network greater flexibility in achieving compensation, and release it from a strict dependency upon the behavior of the hidden units. The time course of compensation in the all-weights-modifiable example is similar to the previous case (Figure 2), but each stage is reached after fewer passes.
[Figure 3 plot panels A-D: hidden unit spontaneous rate (A, B) and gain (C, D) at stages NM, AC, NE, and GR.]
Figure 3. Behavior of Hidden Units at Various Stages of Compensation in the VOR Neural Network Model. Spontaneous rate (SR, A and B) and gain (C and D) are shown for networks with hidden layer weights only modifiable (A and C) or with all weights modifiable (B and D). Normal average SR (A and B) and gain (C and D) shown as dotted lines. NM, normal stage; AC, acutely following lesion; NE, after spontaneous nystagmus is eliminated; GR, after VOR gain is largely restored.
The behavior of the hidden units in the all-weights simulation more closely matches
that of actual VN neurons in compensated animals (Figure 3B and 3D). At NE stage, even though spontaneous nystagmus is eliminated, there remains a large bilateral imbalance in hidden unit SR, and one lesion-side hidden unit (lvn1) is silenced (Figure 3B). At GR stage, hidden unit gain has increased only modestly from the low acute level (Figure 3D), and the bilateral SR imbalance persists, with lvn1 still essentially spontaneously silent (Figure 3B). This modeling result constitutes a testable
prediction that synaptic plasticity is occurring at the motoneuron as well as at the VN
level in vestibular compensation.
6 NETWORK DYNAMICS
In the all-weights simulation at GR stage, as well as in compensated animals, some lesion-side VN neurons are silenced. Hidden unit lvn1 is silenced by its inhibitory commissural interaction with rvn1, which in the normal network allowed the pair to
form an integrating, recurrent loop. Silencing of lvn1 breaks the commissural loop
and consequently eliminates velocity storage in the network. VN neuron silencing
could also account for the loss of velocity storage in the real, compensated VOR.
Loss of velocity storage in the model, in response to step head rotational acceleration
stimuli, is shown in Figure 4. The output step response that would be expected given
the longer VOR time constant is shown for lr in Figure 4A (dashed). The response of
mr (not shown) is identical but inverted. Instead of expressing the longer VOR time
constant, the actual step response of lr in the all-weights compensated network at GR
stage (Figure 4A, dotted) has a rise time constant that is equal to the canal time
constant, indicating complete loss of velocity storage. This is due to the behavior of
the hidden units. The step responses of the integrating pair of hidden units in the
compensated network at GR stage are shown in Figure 4B (lvn1, lower dotted; rvn1,
upper dotted). Velocity storage is eliminated because lvn1 is silenced, and this breaks
the commissural loop that supports integration in the network.
Paradoxically, in the normal network with all hidden units spontaneously active, the
output step response rise time constant is also equal to that of the canal afferents, again
indicating a loss of velocity storage. This is shown for lr from the normal network in
Figure 4A (solid). The step responses of the hidden units in the normal network are
shown in Figure 4B (lvn1, dashed; rvn1, solid). Unit lvn1, which is spontaneously
active in the normal network, is quickly driven into cut-off by the step stimulus. This
breaks the commissural loop and eliminates velocity storage, accounting for the short
rise time constants of hidden and output units network wide.
This result can explain some conflicting experimental findings concerning the
[Figure 4 appears here: panels A and B plot unit step responses against NETWORK CYCLES (roughly 0-60); only axis-tick and legend residue survived extraction.]
Figure 4. Responses of Units to Step Head Rotational Acceleration Stimuli in VOR
Neural Network Model. A, expected response of lr with VOR time constant (dashed),
and actual responses of lr in normal (solid) and all-weights compensated (dotted)
networks. B, response of lvn1 (dashed) and rvn1 (solid) in normal network, and of
lvn1 (lower dotted) and rvn1 (upper dotted) in all-weights compensated network.
dynamics of VN neurons in normal and compensated animals. Using sinusoidal
stimuli, the time constants of VN neurons were found to be lower in compensated than
in normal gerbils (Newlands and Perachio 1990). In contrast, using step stimuli, no
difference in rise time constants was found for VN neurons in normal as compared to
compensated cats (Yagi and Markham 1984).
Rather than being a species difference, the disagreement may involve the type of
stimulus used. Step accelerations are intense stimuli that can drive VN neurons to
extreme levels. In response to a step in their off-directions, many VN neurons in
normal cats were observed to cut-off (ibid.). As shown in Figure 4, this would
disrupt commissural interactions and reduce velocity storage and VN neuron rise time
constants, just as if these neurons were silenced as they are in compensated animals. In
fact, VN neuron rise time constants were observed to be low in both normal and
compensated cats (ibid.). In contrast, sinusoidal stimuli at an intensity that does not
cause widespread VN neuron cut-off would not be expected to disrupt velocity storage
in normal animals.
Acknowledgements
This work was supported by a grant from the Whitaker Foundation.
References
Anastasio TJ (1991) Neural network models of velocity storage in the horizontal
vestibulo-ocular reflex. Biol Cybern 64:187-196
Anastasio TJ (in press) Simulating vestibular compensation using recurrent back-propagation. Biol Cybern
Fetter M, Zee DS (1988) Recovery from unilateral labyrinthectomy in rhesus monkey.
J Neurophysiol 59:370-393
Galiana HL, Flohr H, Melvill Jones G (1984) A reevaluation of intervestibular nuclear
coupling: its role in vestibular compensation. J Neurophysiol 51:242-259
Newlands SD, Perachio AA (1990) Compensation of horizontal canal related activity
in the medial vestibular nucleus following unilateral labyrinth ablation in the
decerebrate gerbil. I. type I neurons. Exp Brain Res 82:359-372
Raphan Th, Matsuo V, Cohen B (1979) Velocity storage in the vestibulo-ocular reflex
arc (VOR). Exp Brain Res 35:229-248
Williams RJ, Zipser D (1989) A learning algorithm for continually running fully
recurrent neural networks. Neural Comp 1:270-280
Wilson VI, Melvill Jones G (1979) Mammalian vestibular physiology. Plenum Press,
New York
Yagi T, Markham CH (1984) Neural correlates of compensation after hemilabyrinthectomy. Exp Neurol 84:98-108
Proximal Newton-type methods for convex
optimization
Jason D. Lee* and Yuekai Sun* (*equal contributors)
Institute for Computational and Mathematical Engineering
Stanford University, Stanford, CA
{jdl17,yuekai}@stanford.edu
Michael A. Saunders
Department of Management Science and Engineering
Stanford University, Stanford, CA
[email protected]
Abstract
We seek to solve convex optimization problems in composite form:
    minimize_{x ∈ R^n}  f(x) := g(x) + h(x),
where g is convex and continuously differentiable and h : R^n → R is a convex
but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle
such convex but nonsmooth objective functions. We prove such methods are globally convergent and achieve superlinear rates of convergence in the vicinity of an
optimal solution. We also demonstrate the performance of these methods using
problems of relevance in machine learning and statistics.
1 Introduction
Many problems of relevance in machine learning, signal processing, and high dimensional statistics
can be posed in composite form:
    minimize_{x ∈ R^n}  f(x) := g(x) + h(x),    (1)
where g : R^n → R is a convex, continuously differentiable loss function, and h : R^n → R is a
convex, continuous, but not necessarily differentiable penalty function. Such problems include: (i)
the lasso [23], (ii) multitask learning [14], and (iii) trace-norm matrix completion [6].
We describe a family of Newton-type methods tailored to these problems that achieve superlinear
rates of convergence subject to standard assumptions. These methods can be interpreted as generalizations of the classic proximal gradient method that use the curvature of the objective function to
select a search direction.
1.1 First-order methods
The most popular methods for solving convex optimization problems in composite form are firstorder methods that use proximal mappings to handle the nonsmooth part. SpaRSA is a generalized
spectral projected gradient method that uses a spectral step length together with a nonmonotone line
search to improve convergence [24]. TRIP by Kim et al. also uses a spectral step length but selects
search directions using a trust-region strategy [12]. TRIP performs comparably with SpaRSA and
the projected Newton-type methods we describe later.
A closely related family of methods is the set of optimal first-order methods, also called accelerated first-order methods, which achieve ε-suboptimality within O(1/√ε) iterations [22]. The two
most popular methods in this family are Auslender and Teboulle's method [1] and the Fast Iterative
Shrinkage-Thresholding Algorithm (FISTA), by Beck and Teboulle [2]. These methods have been
implemented in the software package TFOCS and used to solve problems that commonly arise in
statistics, machine learning, and signal processing [3].
1.2 Newton-type methods
There are three classes of methods that generalize Newton-type methods to handle nonsmooth objective functions. The first are projected Newton-type methods for constrained optimization [20]. Such
methods cannot handle nonsmooth objective functions; they tackle problems in composite form via
constraints of the form h(x) ≤ τ. PQN is an implementation that uses a limited-memory quasi-Newton update and has both excellent empirical performance and theoretical properties [19, 18].
The second class of these methods by Yu et al. [25] uses a local quadratic approximation to the
smooth part of the form
    Q(x) := f(x) + sup_{g ∈ ∂f(x)} gᵀd + ½ dᵀHd,
where ∂f(x) denotes the subdifferential of f at x. These methods achieve state-of-the-art performance on many problems of relevance, such as ℓ1-regularized logistic regression and ℓ2-regularized
support vector machines.
This paper focuses on proximal Newton-type methods that were previously studied in [16, 18] and
are closely related to the methods of Fukushima and Mine [10] and Tseng and Yun [21]. Both use
search directions Δx that are solutions to subproblems of the form
    minimize_d  ∇g(x)ᵀd + ½ dᵀHd + h(x + d),
where H is a positive definite matrix that approximates the Hessian ∇²g(x). Fukushima and Mine
choose H to be a multiple of the identity, while Tseng and Yun set some components of the search
direction Δx to be zero to obtain a (block) coordinate descent direction. Proximal Newton-type
methods were first studied empirically by Mark Schmidt in his Ph.D. thesis [18].
The methods GLMNET [9] (ℓ1-regularized regression), LIBLINEAR [26] (ℓ1-regularized classification), QUIC and recent work by Olsen et al. [11, 15] (sparse inverse covariance estimation) are
special cases of proximal Newton-type methods. These methods are considered state-of-the-art for
their specific applications, often outperforming generic methods by orders of magnitude. QUIC and
LIBLINEAR also achieve a quadratic rate of convergence, although these results rely crucially on
the structure of the ℓ1 norm and do not generalize to generic nonsmooth regularizers.
The quasi-Newton splitting method developed by Becker and Fadili is equivalent to a proximal
quasi-Newton method with rank-one Hessian approximation [4]. In this case, they can solve the
subproblem via the solution of a single variable root finding problem, making their method significantly more efficient than a generic proximal Newton-type method.
The methods described in this paper are a special case of cost approximation (CA), a class of methods developed by Patriksson [16]. CA requires a CA function φ and selects search directions via subproblems of the form
    minimize_d  g(x) + φ(x + d) − φ(x) + h(x + d) − ∇g(x)ᵀd.
Cost approximation attains a linear convergence rate. Our methods are equivalent to using the CA function φ(x) := ½ xᵀHx. We refer to [16] for details about cost approximation and its convergence
analysis.
2 Proximal Newton-type methods
We seek to solve convex optimization problems in composite form:
    minimize_{x ∈ R^n}  f(x) := g(x) + h(x).    (2)
We assume g : R^n → R is a closed, proper convex, continuously differentiable function, and its gradient ∇g is Lipschitz continuous with constant L1; i.e.,
    ‖∇g(x) − ∇g(y)‖ ≤ L1 ‖x − y‖
for all x and y in R^n. h : R^n → R is a closed and proper convex but not necessarily everywhere differentiable function whose proximal mapping can be evaluated efficiently. We also assume the optimal value, f*, is attained at some optimal solution x*, not necessarily unique.
2.1 The proximal gradient method
The proximal mapping of a convex function h at x is
    prox_h(x) = argmin_y  h(y) + ½ ‖y − x‖².
Proximal mappings can be interpreted as generalized projections because if h is the indicator function of a convex set, then prox_h(x) is the projection of x onto the set.
The classic proximal gradient method for composite optimization uses proximal mappings to handle
the nonsmooth part of the objective function and can be interpreted as minimizing the nonsmooth
function h plus a simple quadratic approximation to the smooth function g during every iteration:
    x_{k+1} = prox_{t_k h}(x_k − t_k ∇g(x_k))
            = argmin_y  ∇g(x_k)ᵀ(y − x_k) + (1/(2 t_k)) ‖y − x_k‖² + h(y),
where t_k denotes the k-th step length. We can also interpret the proximal gradient step as a generalized gradient step
    G_f(x) = prox_h(x − ∇g(x)) − x.    (3)
G_f(x) = 0 if and only if x minimizes f, so ‖G_f(x)‖ generalizes the smooth first-order measure of optimality ‖∇f(x)‖.
Many state-of-the-art methods for problems in composite form, such as SpaRSA and the optimal
first-order methods, are variants of this method. Our method uses a Newton-type approximation in
lieu of the simple quadratic to achieve faster convergence.
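For concreteness, here is a minimal NumPy sketch of the proximal gradient iteration applied to the lasso, where prox_{th} is coordinate-wise soft-thresholding. The function names, the fixed step length 1/L1, and the random problem data are our own illustrative choices, not part of this paper or any package it describes.

```python
import numpy as np

def soft_threshold(x, t):
    # prox of t * ||.||_1: shrink each coordinate toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_g, prox_h, x0, step, n_iter=500):
    """Proximal gradient method: x+ = prox_{t h}(x - t * grad g(x))."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_h(x - step * grad_g(x), step)
    return x

# Example: lasso, g(x) = 0.5 * ||Ax - b||^2, h(x) = lam * ||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.5
grad_g = lambda x: A.T @ (A @ x - b)
L1 = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad g
prox = lambda x, t: soft_threshold(x, lam * t)
x_hat = proximal_gradient(grad_g, prox, np.zeros(100), 1.0 / L1)
```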
2.2 The proximal Newton iteration
Definition 2.1 (Scaled proximal mappings). Let h be a convex function and H, a positive definite matrix. Then the scaled proximal mapping of h at x is defined to be
    prox_h^H(x) := argmin_y  h(y) + ½ ‖y − x‖²_H.    (4)
Proximal Newton-type methods use the iteration
    x_{k+1} = x_k + t_k Δx_k,    (5)
    Δx_k := prox_h^{H_k}(x_k − H_k^{-1} ∇g(x_k)) − x_k,    (6)
where t_k > 0 is the k-th step length, usually determined using a line search procedure, and H_k is an approximation to the Hessian of g at x_k. We can interpret the search direction Δx_k as a step to the minimizer of the nonsmooth function h plus a local quadratic approximation to g because
    prox_h^{H_k}(x_k − H_k^{-1} ∇g(x_k)) = argmin_y  h(y) + ½ ‖(y − x_k) + H_k^{-1} ∇g(x_k)‖²_{H_k}
                                         = argmin_y  ∇g(x_k)ᵀ(y − x_k) + ½ (y − x_k)ᵀ H_k (y − x_k) + h(y).    (7)
Hence, the search direction solves the subproblem
    Δx_k = argmin_d  ∇g(x_k)ᵀd + ½ dᵀH_k d + h(x_k + d)
         = argmin_d  Q_k(d) + h(x_k + d).
To simplify notation, we shall drop the subscripts and say x+ = x + tΔx in lieu of x_{k+1} = x_k + t_k Δx_k when discussing a single iteration.
Lemma 2.2 (Search direction properties). If H is a positive definite matrix, then the search direction Δx = argmin_d Q(d) + h(x + d) satisfies:
    f(x+) ≤ f(x) + t (∇g(x)ᵀΔx + h(x + Δx) − h(x)) + O(t²),    (8)
    ∇g(x)ᵀΔx + h(x + Δx) − h(x) ≤ −ΔxᵀHΔx.    (9)
Lemma 2.2 implies the search direction is a descent direction for f because we can substitute (9) into (8) to obtain
    f(x+) ≤ f(x) − t ΔxᵀHΔx + O(t²).
We use a quasi-Newton approximation to the Hessian and a first-order method to solve the subproblem for a search direction, although the user is free to use a method of his or her choice. Empirically,
we find that inexact solutions to the subproblem yield viable descent directions.
We use a backtracking line search to select a step length t that satisfies a sufficient descent condition:
    f(x+) ≤ f(x) + α t λ,    (10)
    λ := ∇g(x)ᵀΔx + h(x + Δx) − h(x),    (11)
where α ∈ (0, 0.5). This sufficient descent condition is motivated by our convergence analysis but it also seems to perform well in practice.
Lemma 2.3 (Step length conditions). Suppose H ⪰ mI for some m > 0 and ∇g is Lipschitz continuous with constant L1. Then the step lengths
    t ≤ min{ 1, (2m/L1)(1 − α) }    (12)
satisfy the sufficient descent condition (10).
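A backtracking search enforcing (10) takes only a few lines. The sketch below assumes the caller supplies the composite objective f, the descent quantity λ from (11), and a trial direction; the shrink factor 0.5 and the iteration cap are our own illustrative choices.

```python
def backtracking_line_search(f, x, dx, lam, alpha=1e-4, beta=0.5, max_iter=50):
    """Shrink t until f(x + t*dx) <= f(x) + alpha * t * lam (condition (10))."""
    t, fx = 1.0, f(x)
    for _ in range(max_iter):
        if f(x + t * dx) <= fx + alpha * t * lam:
            return t
        t *= beta  # backtrack
    return t
```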
Algorithm 1 A generic proximal Newton-type method
Require: x_0 in dom f
1: repeat
2:    Update H_k using a quasi-Newton update rule
3:    z_k ← prox_h^{H_k}(x_k − H_k^{-1} ∇g(x_k))
4:    Δx_k ← z_k − x_k
5:    Conduct backtracking line search to select t_k
6:    x_{k+1} ← x_k + t_k Δx_k
7: until stopping conditions are satisfied
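To make Algorithm 1 concrete, the sketch below instantiates it in NumPy with an exact Hessian and an inner proximal gradient loop for the subproblem (7). The helper names and the fixed inner iteration count are our own simplifications, not the PNOPT implementation, which uses L-BFGS Hessian approximations and solves the subproblem inexactly.

```python
import numpy as np

def solve_subproblem(grad, H, x, prox_h, inner_iter=100):
    """Approximately solve (7): minimize_d grad'd + 0.5 d'H d + h(x + d),
    using proximal gradient steps on the quadratic model."""
    step = 1.0 / np.linalg.norm(H, 2)        # 1 / Lipschitz constant of the model
    d = np.zeros_like(x)
    for _ in range(inner_iter):
        y = (x + d) - step * (grad + H @ d)  # gradient step on the model
        d = prox_h(y, step) - x              # prox step, re-centered at x
    return d

def proximal_newton(g, grad_g, hess_g, h, prox_h, x0, n_iter=50,
                    alpha=1e-4, beta=0.5):
    """Algorithm 1 with exact Hessians and a backtracking line search."""
    x = x0.copy()
    for _ in range(n_iter):
        grad, H = grad_g(x), hess_g(x)
        dx = solve_subproblem(grad, H, x, prox_h)
        lam = grad @ dx + h(x + dx) - h(x)   # descent quantity (11)
        t = 1.0
        while g(x + t * dx) + h(x + t * dx) > g(x) + h(x) + alpha * t * lam:
            t *= beta                        # sufficient descent (10)
        x = x + t * dx
    return x
```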
3 Convergence analysis
3.1 Global convergence
We assume our Hessian approximations are sufficiently positive definite; i.e., H_k ⪰ mI, k = 1, 2, . . . for some m > 0. This assumption guarantees the existence of step lengths that satisfy the sufficient decrease condition.
Lemma 3.1 (First-order optimality conditions). Suppose H is a positive definite matrix. Then x is a minimizer of f if and only if the search direction is zero at x; i.e.,
    0 = argmin_d  Q(d) + h(x + d).
The global convergence of proximal Newton-type methods results from the fact that the search
directions are descent directions and if our Hessian approximations are sufficiently positive definite,
then the step lengths are bounded away from zero.
Theorem 3.2 (Global convergence). Suppose H_k ⪰ mI, k = 1, 2, . . . for some m > 0. Then the sequence {x_k} generated by a proximal Newton-type method converges to a minimizer of f.
3.2 Convergence rate
If g is twice-continuously differentiable and we use the second order Taylor approximation as our local quadratic approximation to g, then we can prove {x_k} converges Q-quadratically to the optimal solution x*. We assume in a neighborhood of x*: (i) g is strongly convex with constant m; i.e.,
    ∇²g(x) ⪰ mI,  x ∈ N(x*),
where N(x*) := {x : ‖x − x*‖ ≤ ε}; and (ii) ∇²g is Lipschitz continuous with constant L2.
This convergence analysis is similar to that of Fukushima and Mine [10] and Patriksson [16]. First, we state two lemmas: (i) step lengths of unity satisfy the sufficient descent condition after sufficiently many iterations, and (ii) the backward step is nonexpansive.
Lemma 3.3. Suppose (i) ∇²g ⪰ mI and (ii) ∇²g is Lipschitz continuous with constant L2. If we let H_k = ∇²g(x_k), k = 1, 2, . . . , then the step length t_k = 1 satisfies the sufficient decrease condition (10) for k sufficiently large.
We can characterize the solution of the subproblem using the first-order optimality conditions for (4). Let y denote prox_h^H(x − H^{-1}∇g(x)); then
    H(x − H^{-1}∇g(x) − y) ∈ ∂h(y),
or equivalently
    [H − ∇g](x) ∈ [H + ∂h](y).
Let R(x) and S(x) denote ((1/m)(H + ∂h))^{-1}(x) and (1/m)(H − ∇g)(x) respectively, where m is the smallest eigenvalue of H. Then
    y = [H + ∂h]^{-1} [H − ∇g](x) = R ∘ S(x).
Lemma 3.4. Suppose R(x) = ((1/m)(H + ∂h))^{-1}(x), where H is positive definite. Then R is firmly nonexpansive; i.e., for x and y in dom f, R satisfies
    (R(x) − R(y))ᵀ(x − y) ≥ ‖R(x) − R(y)‖².
We note that x* is a fixed point of R ∘ S; i.e., R ∘ S(x*) = x*, so we can express ‖y − x*‖ as
    ‖y − x*‖ = ‖R ∘ S(x) − R ∘ S(x*)‖ ≤ ‖S(x) − S(x*)‖.
Theorem 3.5. Suppose (i) ∇²g ⪰ mI and (ii) ∇²g is Lipschitz continuous with constant L2. If we let H_k = ∇²g(x_k), k = 1, 2, . . . , then {x_k} converges to x* Q-quadratically; i.e.,
    ‖x_{k+1} − x*‖ / ‖x_k − x*‖² ≤ c.
We can also use the fact that the proximal Newton method converges quadratically to prove a proximal quasi-Newton method converges superlinearly. We assume the quasi-Newton Hessian approximations satisfy the Dennis-Moré criterion [7]:
    ‖(H_k − ∇²g(x*)) (x_{k+1} − x_k)‖ / ‖x_{k+1} − x_k‖ → 0.    (13)
We first prove two lemmas: (i) step lengths of unity satisfy the sufficient descent condition after
sufficiently many iterations and (ii) the proximal quasi-Newton step is close to the proximal Newton
step.
Lemma 3.6. Suppose g is twice-continuously differentiable and the eigenvalues of H_k, k = 1, 2, . . . are bounded; i.e., there exist M ≥ m > 0 such that mI ⪯ H_k ⪯ MI. If {H_k} satisfy the Dennis-Moré criterion, then the unit step length satisfies the sufficient descent condition (10) after sufficiently many iterations.
Lemma 3.7. Suppose H and H̃ are positive definite matrices with bounded eigenvalues; i.e., mI ⪯ H ⪯ MI and mI ⪯ H̃ ⪯ MI. Let Δx and Δx̃ denote the search directions generated using H and H̃ respectively; i.e.,
    Δx = prox_h^H(x − H^{-1}∇g(x)) − x,
    Δx̃ = prox_h^{H̃}(x − H̃^{-1}∇g(x)) − x.
Then these two search directions satisfy
    ‖Δx − Δx̃‖ ≤ sqrt((1 + c(H, H̃)) / m) ‖(H − H̃)Δx‖^{1/2} ‖Δx‖^{1/2},
where c is a constant that depends on H and H̃.
Theorem 3.8. Suppose g is twice-continuously differentiable and the eigenvalues of H_k, k = 1, 2, . . . are bounded. If {H_k} satisfy the Dennis-Moré criterion, then the sequence {x_k} converges to x* Q-superlinearly; i.e.,
    ‖x_{k+1} − x*‖ / ‖x_k − x*‖ → 0.
4 Computational experiments
4.1 PNOPT: Proximal Newton OPTimizer
PNOPT¹ is a MATLAB package that uses proximal Newton-type methods to minimize convex objective functions in composite form. PNOPT can build BFGS and L-BFGS approximations to the Hessian (the user can also supply a Hessian approximation) and uses our implementation of SpaRSA or an optimal first order method to solve the subproblem for a search direction.
PN OPT uses an early stopping condition for the subproblem solver based on two ideas: (i) the
subproblem should be solved to a higher accuracy if Qk is a good approximation to g and (ii) near a
solution, the subproblem should be solved almost exactly to achieve fast convergence.
We thus require that the solution y_k* to the k-th subproblem (7) satisfy
    ‖G_{Q_k+h}(y_k*)‖ ≤ η_k ‖G_f(y_k*)‖,    (14)
where G_f(x) denotes the generalized gradient step at x (3) and η_k is a forcing term. We choose forcing terms based on the agreement between g and the previous quadratic approximation Q_{k−1} to g. We set η_1 := 0.5 and
    η_k := min{ 0.5, ‖∇g(x_k) − ∇Q_{k−1}(x_k)‖ / ‖∇g(x_k)‖ },  k = 2, 3, . . .    (15)
This choice measures the agreement between ∇g(x_k) and ∇Q_{k−1}(x_k) and is borrowed from a choice of forcing terms for inexact Newton methods described by Eisenstat and Walker [8]. Empirically, we find that this choice avoids "oversolving" the subproblem and yields desirable convergence behavior.
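The forcing-term rule (15) is a one-liner in code. The sketch below assumes gradient vectors are available as NumPy arrays and that the caller has already formed ∇Q_{k−1}(x_k) = ∇g(x_{k−1}) + H_{k−1}(x_k − x_{k−1}); the helper name is ours.

```python
import numpy as np

def forcing_term(grad_g_xk, grad_Q_prev_xk, eta_max=0.5):
    """Forcing term (15): how poorly the previous model predicted the gradient."""
    disagreement = np.linalg.norm(grad_g_xk - grad_Q_prev_xk)
    return min(eta_max, disagreement / np.linalg.norm(grad_g_xk))
```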
We compare the performance of PNOPT, our implementation of SpaRSA, and the TFOCS implementations of Auslender and Teboulle's method (AT) and FISTA on ℓ1-regularized logistic regression and Markov random field structure learning. We used the following settings:
1. PNOPT: We use an L-BFGS approximation to the Hessian with L = 50 and set the sufficient decrease parameter to α = 0.0001. To solve the subproblem, we use the TFOCS implementation of FISTA.
¹ PNOPT is available at www.stanford.edu/group/SOL/software/pnopt.html.
[Figure 1 appears here: panels (a) and (b) plot log(f − f*) against iteration count and wall-clock time (sec) for FISTA, AT, PN100, PN15, and SpaRSA; only legend and axis-tick residue survived extraction.]
Figure 1: Figures 1a and 1b compare two variants of proximal Newton-type methods with SpaRSA and TFOCS on the MRF structure learning problem.
2. SpaRSA: We use a nonmonotone line search with a 10 iteration memory and also set the sufficient decrease parameter to α = 0.0001. Our implementation of SpaRSA is included in PNOPT as the default solver for the subproblem.
3. AT/FISTA: We set tfocsOpts.restart = -inf to turn on adaptive restarting and use default values for the rest of the settings.
These experiments were conducted on a machine running the 64-bit version of Ubuntu 12.04 with
an Intel Core i7 870 CPU and 8 GB RAM.
4.2 Markov random field structure learning
We seek the maximum likelihood estimates of the parameters of a Markov random field (MRF)
subject to a group elastic-net penalty on the estimates. The objective function is given by
    minimize_θ  −∑_{(r,j)∈E} θ_{rj}(x_r, x_j) + log Z(θ) + ∑_{(r,j)∈E} ( λ1 ‖θ_{rj}‖_2 + λ2 ‖θ_{rj}‖_F² ).    (16)
x_r is a k-state variable; x_j is an l-state variable, and each parameter block θ_{rj} is a k × l matrix that is associated with an edge in the MRF. We randomly generate a graphical model with |V| = 12 and n = 300. The edges are sampled uniformly with p = 0.3. The parameters of the non-zero edges are sampled from a N(0, 1) distribution.
The group elastic-net penalty regularizes the solution and promotes solutions with a few non-zero groups θ_{rj} corresponding to edges of the graphical model [27]. The regularization parameters were set to λ1 = sqrt(n log |V|) and λ2 = 0.1 λ1. These parameter settings are shown to be model selection consistent under certain irrepresentable conditions [17].
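The nonsmooth part of (16) enters a proximal Newton method only through its proximal mapping, which is block-separable and has a closed form derived from the optimality condition of the prox subproblem: group soft-thresholding followed by a shrinkage for the squared Frobenius term. A minimal sketch, applied independently to each block θ_{rj}, with our own naming:

```python
import numpy as np

def prox_group_elastic_net(theta_rj, t, lam1, lam2):
    """prox of t * (lam1 * ||theta||_2 + lam2 * ||theta||_F^2) for one block.

    Solves min_Y 0.5*||Y - theta||_F^2 + t*lam1*||Y||_F + t*lam2*||Y||_F^2;
    the block norm is the Frobenius norm of the vectorized block.
    """
    norm = np.linalg.norm(theta_rj)
    if norm <= t * lam1:
        return np.zeros_like(theta_rj)       # whole block is thresholded away
    return theta_rj * (1.0 - t * lam1 / norm) / (1.0 + 2.0 * t * lam2)
```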
The algorithms for solving (16) require evaluating the value and gradient of the smooth part. For a discrete graphical model without special structure, the smooth part requires O(k^{|V|}) operations to evaluate, where k is the number of states per variable. Thus even for our small example, where k = 3 and |V| = 12, function and gradient evaluations dominate the computational expense required to solve (16).
We see that for maximum likelihood learning in graphical models, it is important to minimize the
number of function evaluations. Proximal Newton-type methods are well-suited to solve such problems because the main computational expense is shifted to solving the subproblems that do not
require function evaluations.
[Figure 2 appears here: panels (a) and (b) plot relative suboptimality against function evaluations and wall-clock time (sec) for AT, FISTA, SpaRSA, and PN; only legend and axis-tick residue survived extraction.]
Figure 2: Comparison of proximal Newton-type methods with SpaRSA and TFOCS on ℓ1-regularized logistic regression.
4.3 ℓ1-regularized logistic regression
Given training data (x_i, y_i), i = 1, 2, . . . , n, ℓ1-regularized logistic regression trains a classifier via the solution of the convex optimization problem
    minimize_{w ∈ R^p}  (1/n) ∑_{i=1}^{n} log(1 + exp(−y_i wᵀx_i)) + λ‖w‖_1    (17)
for a set of parameters w in R^p. The regularization term ‖w‖_1 avoids overfitting the training data and promotes sparse solutions. λ trades off goodness-of-fit against model complexity.
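The smooth part of (17) and its derivatives are cheap to write down. Below is a NumPy sketch of the loss, gradient, and Hessian that a proximal Newton method would use on this problem; the prox of λ‖·‖_1 is the soft-thresholding operator from Section 2. The helper name is ours.

```python
import numpy as np

def logistic_parts(w, X, y):
    """Value, gradient, and Hessian of g(w) = (1/n) sum log(1 + exp(-y_i w'x_i))."""
    n = len(y)
    z = -y * (X @ w)
    value = np.logaddexp(0.0, z).mean()   # stable log(1 + exp(z))
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid(z) = d/dz log(1 + exp(z))
    grad = X.T @ (-y * p) / n
    D = p * (1.0 - p)                     # per-sample curvature weights
    hess = (X.T * D) @ X / n
    return value, grad, hess
```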
We use the dataset gisette, a handwritten digits dataset from the NIPS 2003 feature selection challenge. The dataset is available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets. We train our classifier using the original training set consisting of 6000 examples, starting at w = 0. λ was chosen to match the value reported in [26], where it was chosen by five-fold cross validation on the training set.
The gisette dataset is quite dense (3 million nonzeros in the 6000 × 5000 design matrix) and the evaluation of the log-likelihood requires many expensive exp/log operations. We see in Figure 2 that PNOPT outperforms the other methods because the computational expense is shifted to solving the subproblems, whose objective functions are cheap to evaluate.
5 Conclusion
Proximal Newton-type methods are natural generalizations of first-order methods that account for
curvature of the objective function. They share many of the desirable characteristics of traditional
first-order methods for convex optimization problems in composite form and achieve superlinear
rates of convergence subject to standard assumptions. These methods are especially suited to problems with expensive function evaluations because the main computational expense is shifted to solving subproblems that do not require function evaluations.
6 Acknowledgements
We wish to thank Trevor Hastie, Nick Henderson, Ernest Ryu, Ed Schmerling, Carlos Sing-Long,
and Walter Murray for their insightful comments.
References
[1] A. Auslender and M. Teboulle, Interior gradient and proximal methods for convex and conic optimization,
SIAM J. Optim., 16 (2006), pp. 697?725.
[2] A. Beck and M. Teboulle , A fast iterative shrinkage-thresholding algorithm for linear inverse problems,
SIAM J. Imaging Sci., 2 (2009), pp. 183?202.
[3] S. R. Becker, M. J. Candès, and M. C. Grant, Templates for convex cone problems with applications to
sparse signal recovery, Math. Program. Comput., 3 (2011), pp. 1?54.
[4] S. Becker and J. Fadili, A quasi-Newton proximal splitting method, NIPS, Lake Tahoe, California, 2012.
[5] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.
[6] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math, 9
(2009), pp. 717?772.
[7] J. E. Dennis, Jr. and J. J. Moré, A characterization of superlinear convergence and its application to
quasi-Newton methods, Math. Comp., 28, (1974), pp. 549?560.
[8] S. C. Eisenstat and H. F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci.
Comput., 17 (1996), pp. 16?32.
[9] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani, Pathwise coordinate optimization, Ann. Appl. Stat.
(2007), pp. 302?332
[10] M. Fukushima and H. Mine, A generalized proximal point algorithm for certain non-convex minimization
problems, Internat. J. Systems Sci., 12 (1981), pp. 989?1000.
[11] C. J. Hsieh, M. A. Sustik, P. Ravikumar, and I. S. Dhillon, Sparse inverse covariance matrix estimation
using quadratic approximation, NIPS, Granada, Spain, 2011.
[12] D. Kim, S. Sra, and I. S. Dhillon, A scalable trust-region algorithm with applications to mixed-norm
regression, ICML, Haifa, Israel, 2010.
[13] Y. Nesterov, Gradient methods for minimizing composite objective function, CORE discussion paper,
2007.
[14] G. Obozinski, B. Taskar, and M. I. Jordan, Joint covariate selection and joint subspace selection for
multiple classification problems, Stat. Comput. (2010), pp. 231?252
[15] P. Olsen, F. Oztoprak, J. Nocedal, S. Rennie, Newton-like methods for sparse inverse covariance estimation, NIPS, Lake Tahoe, California, 2012.
[16] M. Patriksson, Nonlinear Programming and Variational Inequality Problems, Kluwer Academic Publishers, The Netherlands, 1999.
[17] P. Ravikumar, M. J. Wainwright and J. D. Lafferty, High-dimensional Ising model selection using ℓ1-regularized logistic regression, Ann. Statist. (2010), pp. 1287-1319.
[18] M. Schmidt, Graphical Model Structure Learning with l1-Regularization, Ph.D. Thesis (2010), University of British Columbia
[19] M. Schmidt, E. van den Berg, M. P. Friedlander, and K. Murphy, Optimizing costly functions with simple
constraints: a limited-memory projected quasi-Newton algorithm, AISTATS, Clearwater Beach, Florida,
2009.
[20] M. Schmidt, D. Kim, and S. Sra, Projected Newton-type methods in machine learning, in S. Sra, S.
Nowozin, and S. Wright, editors, Optimization for Machine Learning, MIT Press (2011).
[21] P. Tseng and S. Yun, A coordinate gradient descent method for nonsmooth separable minimization, Math.
Prog. Ser. B, 117 (2009), pp. 387?423.
[22] P. Tseng, On accelerated proximal gradient methods for convex-concave optimization, submitted to
SIAM J. Optim. (2008).
[23] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. Ser. B Stat. Methodol., 58
(1996), pp. 267?288.
[24] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, Sparse reconstruction by separable approximation,
IEEE Trans. Signal Process., 57 (2009), pp. 2479?2493.
[25] J. Yu, S. V. N. Vishwanathan, S. Günter, and N. N. Schraudolph, A Quasi-Newton Approach to Nonsmooth
Convex Optimization, ICML, Helsinki, Finland, 2008.
[26] G. X. Yuan, C. H. Ho and C. J. Lin, An improved GLMNET for `1-regularized logistic regression and
support vector machines, National Taiwan University, Tech. Report 2011.
[27] R. H. Zou and T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B
Stat. Methodol., 67 (2005), pp. 301?320.
Deep Neural Networks Segment Neuronal
Membranes in Electron Microscopy Images
Dan C. Cireșan*
IDSIA
USI-SUPSI
Lugano 6900
[email protected]
Alessandro Giusti
IDSIA
USI-SUPSI
Lugano 6900
[email protected]
Jürgen Schmidhuber
IDSIA
USI-SUPSI
Lugano 6900
[email protected]
Luca M. Gambardella
IDSIA
USI-SUPSI
Lugano 6900
[email protected]
Abstract
We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To
segment biological neuron membranes, we use a special type of deep artificial
neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it.
The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and
extract features with increasing levels of abstraction. The output layer produces
a calibrated probability for each class. The classifier is trained by plain gradient
descent on a 512 × 512 × 30 stack with known ground truth, and tested on a
stack of the same size (ground truth unknown to the authors) by the organizers of
the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in
all three considered metrics, i.e. rand error, warping error and pixel error. For
pixel error, our approach is the only one outperforming a second human observer.
1 Introduction
How is the brain structured? The recent field of connectomics [2] is developing high-throughput
techniques for mapping connections in nervous systems, one of the most important and ambitious
goals of neuroanatomy. The main tool for studying connections at the neuron level is serial-section
Transmitted Electron Microscopy (ssTEM), resolving individual neurons and their shapes. After
preparation, a sample of neural tissue is typically sectioned into 50-nanometer slices; each slice is
then recorded as a 2D grayscale image with a pixel size of about 4 × 4 nanometers (see Figure 1).
The visual complexity of the resulting stacks makes them hard to handle. Reliable automated segmentation of neuronal structures in ssTEM stacks so far has been infeasible. A solution of this
problem, however, is essential for any automated pipeline reconstructing and mapping neural connections in 3D. Recent advances in automated sample preparation and imaging make this increasingly
* webpage: http://www.idsia.ch/~ciresan
Figure 1: Left: the training stack (one slice shown). Right: corresponding ground truth; black lines
denote neuron membranes. Note complexity of image appearance.
urgent, as they enable acquisition of huge datasets [6, 21], whose manual analysis is simply
unfeasible.
Our solution is based on a Deep Neural Network (DNN) [12, 13] used as a pixel classifier. The
network computes the probability of a pixel being a membrane, using as input the image intensities
in a square window centered on the pixel itself. An image is then segmented by classifying all of
its pixels. The DNN is trained on a different stack with similar characteristics, in which membranes
were manually annotated.
DNN are inspired by convolutional neural networks introduced in 1980 [16], improved in the 1990s
[25], refined and simplified in the 2000s [5, 33], and brought to their full potential by making them
both large and deep [12, 13]. Lately, DNN proved their efficiency on data sets extending from
handwritten digits (MNIST) [10, 12], handwritten characters [11] to 3D toys (NORB) [13] and faces
[35]. Training huge nets requires months or even years on CPUs, where high data transfer latency
prevented multi-threading code from saving the situation. Our fast GPU implementation [10, 12]
overcomes this problem, speeding up single-threaded CPU code by up to two orders of magnitude.
Many other types of learning classifiers have been applied to segmentation of TEM images, where
different structures are not easily characterized by intensity differences, and structure boundaries
are not correlated with high image gradients, due to noise and many confounding micro-structures.
In most binary segmentation problems, classifiers are used to compute one or both of the following probabilities: (a) probability of a pixel belonging to each class; (b) probability of a boundary
dividing two adjacent pixels. Segmentation through graph cuts [7] uses (a) as the unary term, and
(b) as the binary term. Some use an additional term to account for the expected geometry of neuron
membranes[23].
We compute pixel probabilities only (point (a) above), and directly obtain a segmentation by mild
smoothing and thresholding, without using graph cuts. Our main contribution lies therefore in the
classifier itself. Others have used off-the-shelf random forest classifiers to compute unary terms of
neuron membranes [22], or SVMs to compute both unary and binary terms for segmenting mitochondria [28, 27]. The former approach uses haar-like features and texture histograms computed on
a small region around the pixel of interest, whereas the latter uses sophisticated rotational [17] and
ray [34] features computed on superpixels [3]. Feature selection mirrors the researcher?s expectation of which characteristics of the image are relevant for classification, and has a large impact on
classification accuracy. In our approach, we bypass such problems, using raw pixel values as inputs.
Due to their convolutional structure, the first layers of the network automatically learn to compute
meaningful features during training.
The main contribution of the paper is a practical state-of-the-art segmentation method for neuron
membranes in ssTEM data, described in Section 2. It outperforms existing methods as validated
in Section 3. The contribution is particularly meaningful because our approach does not rely on
problem-specific postprocessing: fruitful application to different biomedical segmentation problems
is therefore likely.
Figure 2: Overview of our approach (see text).
2 Methods
For each pixel we consider two possible classes, membrane and non-membrane. The DNN classifier (Section 2.1) computes the probability of a pixel p being of the former class, using as input the raw intensity values of a square window centered on p with an edge of w pixels, w being an odd number to enforce symmetry. When a pixel is close to the image border, its window will include pixels outside the image boundaries; such pixels are synthesized by mirroring the pixels in the actual image across the boundary (see Figure 2).
The classifier is first trained using the provided training images (Section 2.2). After training, to segment a test image, the classifier is applied to all of its pixels, thus generating a map of membrane probabilities, i.e., a new real-valued image the size of the input image. Binary membrane segmentation is obtained by mild postprocessing techniques discussed in Section 2.3, followed by thresholding.
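In code, the mirrored border and the per-pixel windows can be produced with a single padding call. Here is a minimal NumPy sketch of how a trained classifier (any callable returning a membrane probability per window) could be applied to a whole image; the names and the naive per-pixel loop are ours, not the authors' batched GPU implementation.

```python
import numpy as np

def segment_image(image, classify_window, w):
    """Classify every pixel of a 2D image using a (w x w) window, w odd.

    Border windows are completed by mirroring the image across its
    boundary, as described above.
    """
    r = w // 2
    padded = np.pad(image, r, mode="reflect")   # mirror the borders
    prob = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + w, j:j + w]   # centered on pixel (i, j)
            prob[i, j] = classify_window(window)
    return prob
```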
2.1 DNN architecture
A DNN [13] consists of a succession of convolutional, max-pooling and fully connected layers. It
is a general, hierarchical feature extractor that maps raw pixel intensities of the input image into a
feature vector to be classified by several fully connected layers. All adjustable parameters are jointly
optimized through minimization of the misclassification error over the training set.
Each convolutional layer performs a 2D convolution of its input maps with a square filter. The
activations of the output maps are obtained by summing the convolutional responses which are
passed through a nonlinear activation function.
The biggest architectural difference between our DNN and earlier CNN [25] is the use of max-pooling layers [30, 32, 31] instead of sub-sampling layers. Their outputs are given by the maximum activation over non-overlapping square regions. Max-pooling layers are fixed, non-trainable layers which select
the most promising features. The DNN also have many more maps per layer, and thus many more
connections and weights.
After 1 to 4 stages of convolutional and max-pooling layers several fully connected layers further
combine the outputs into a 1D feature vector. The output layer is always a fully connected layer
with one neuron per class (two in our case). Using a softmax activation function for the last layer
guarantees that each neuron's output activation can be interpreted as the probability of a particular
input image belonging to that class.
2.2 Training
To train the classifier, we use all available slices of the training stack, i.e., 30 images with a 512 × 512
resolution. For each slice, we use all membrane pixels as positive examples (on average, about
50000), and the same amount of pixels randomly sampled (without repetitions) among all nonmembrane pixels. This amounts to 3 million training examples in total, in which both classes are
equally represented.
As is often the case in TEM images (but not in other modalities such as phase-contrast microscopy), the appearance of structures is not affected by their orientation. We take advantage of
this property, and synthetically augment the training set at the beginning of each epoch by randomly
mirroring each training instance, and/or rotating it by ±90°.
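This augmentation amounts to applying a random axis-aligned symmetry to each square training window at the start of every epoch. A minimal sketch with our own helper name; the 50/50 mirror probability and the uniform choice among rotations are illustrative assumptions, not stated above.

```python
import numpy as np

def random_symmetry(window, rng):
    """Randomly mirror and/or rotate a square training window by +/-90 degrees."""
    if rng.random() < 0.5:
        window = np.fliplr(window)      # mirror
    k = rng.choice([-1, 0, 1])          # rotate by -90, 0, or +90 degrees
    return np.rot90(window, k)
```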
2.3 Postprocessing of network outputs
Because each class is equally represented in the training set but not in the testing data, the network
outputs cannot be directly interpreted as probability values; instead, they tend to severely overestimate the membrane probability. To fix this issue, a polynomial function post-processor is applied to
the network outputs.
To compute its coefficients, a network N is trained on 20 slices of the training volume Ttrain and
tested on the remaining 10 slices of the same volume (Ttest , for which ground truth is available). We
compare all outputs obtained on Ttest (a total of 2.6 million instances) to ground truth, to compute
the transformation relating the network output value and the actual probability of being a membrane;
for example, we measure that, among all pixels of Ttest which were classified by N as having a 50%
probability of being membrane, only about 18% have in fact such a ground truth label; the reason
being the different prevalence of membrane instances in Ttrain (i.e. 50%) and in Ttest (roughly 20%).
The resulting function is well approximated by a monotone cubic polynomial, whose coefficients
are computed by least-squares fitting. The same function is then used to calibrate the outputs of all
trained networks.
After calibration (a grayscale transformation in image processing terms), network outputs are spatially smoothed by a 2-pixel-radius median filter. This results in regularized membrane boundaries after thresholding.
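The calibration step can be reproduced with a least-squares cubic fit from binned network outputs to empirical ground-truth frequencies, followed by median filtering. A sketch with our own names and bin count; SciPy's median filter is one convenient choice, and monotonicity of the fitted cubic is assumed rather than enforced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def fit_calibration(outputs, labels, bins=50):
    """Fit a cubic mapping raw network outputs to empirical membrane frequency."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(outputs, edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    freq = np.array([labels[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(bins)])
    ok = ~np.isnan(freq)
    return np.polyfit(centers[ok], freq[ok], 3)   # cubic, least squares

def calibrate(prob_map, coeffs):
    """Apply the fitted polynomial, then smooth with a small median filter."""
    cal = np.clip(np.polyval(coeffs, prob_map), 0.0, 1.0)
    return median_filter(cal, size=5)   # 5x5 window ~ 2-pixel-radius neighborhood
```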
2.4 Foveation and nonuniform sampling
We experimented with two related techniques for improving the network performance by manipulating its input data, namely foveation and nonuniform sampling (see Figure 3).
Foveation is inspired by the structure of human photoreceptor topography [14], and has recently been
shown to be very effective for improving nonlocal-means denoising algorithms [15]. It imposes a
spatially-variant blur on the input window pixels, such that full detail is kept in the central section
(fovea), while the peripheral parts are defocused by means of a convolution with a disk kernel, to
remove fine details. The network, whose task is to classify the center pixel of the window, is then
forced to disregard such peripheral fine details, which are most likely irrelevant, while still retaining
the general structure of the window (context).
Figure 3: Input windows with w = 65, from the training set. First row shows original window
(Plain); other rows show effects of foveation (Fov), nonuniform sampling (Nu), and both (Fov+Nu).
Samples on the left and right correspond to instances of class Membrane and Non-membrane, respectively. The leftmost image illustrates how a checkerboard pattern is affected by such transformations.
Nonuniform sampling is motivated by the observation that (in this and other applications) larger
window sizes w generally result in significant performance improvements. However, a large w
results in much bigger networks, which take longer to train and, at least in theory, require larger
amounts of training data to retain their generalization ability. With nonuniform sampling, image
pixels are directly mapped to neurons only in the central part of the window; elsewhere, their source
pixels are sampled with decreasing resolution as the distance from the window center increases. As
a result, the image in the window is deformed in a fisheye-like fashion, and covers a larger area of
the input image with fewer neurons.
Simultaneously applying both techniques is a way of exploiting data at multiple resolutions: fine at the center, coarse in the periphery of the window.
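Both input transformations are simple image-resampling operations. The sketch below implements the nonuniform (fisheye-like) sampling as a radial coordinate warp from a larger source window, and the foveation as a radius-dependent blur; the warp exponent, the blur schedule, and all names are our own stand-ins, since the exact parameters are not spelled out above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def foveate(window, fovea_radius, sigmas=(1.0, 2.0, 3.0)):
    """Keep the center sharp; blur increasingly with distance from the center."""
    r = window.shape[0] // 2
    yy, xx = np.indices(window.shape) - r
    dist = np.hypot(yy, xx)
    out = window.astype(float).copy()
    for sigma in sigmas:                       # rings with growing blur
        blurred = gaussian_filter(window.astype(float), sigma)
        band = dist > fovea_radius * sigma
        out[band] = blurred[band]
    return out

def fisheye(source, out_size, exponent=1.5):
    """Resample a large source window onto a smaller grid, sampling densely
    at the center and sparsely toward the periphery."""
    r_out, r_src = out_size // 2, source.shape[0] // 2
    yy, xx = np.indices((out_size, out_size)) - r_out
    rad, ang = np.hypot(yy, xx), np.arctan2(yy, xx)
    src = r_src * np.minimum(rad / r_out, 1.0) ** exponent
    coords = [src * np.sin(ang) + r_src, src * np.cos(ang) + r_src]
    return map_coordinates(source.astype(float), coords, order=1, mode="nearest")
```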
2.5 Averaging outputs of multiple networks
We observed that large networks with different architectures often exhibit significant output differences for many image parts, despite being trained on the same data. This suggests that these
powerful and flexible classifiers exhibit relatively large variance but low bias. It is therefore reasonable to attempt to reduce such variance by averaging the calibrated outputs of several networks with
different architectures.
This was experimentally verified. The submissions obtained by averaging the outputs of multiple
large networks scored significantly better in all metrics than the single networks.
3 Experimental results
All experiments are performed on a computer with a Core i7 950 3.06GHz processor, 24GB of RAM,
and four GTX 580 graphics cards. A GPU implementation [12] accelerates the forward propagation
and back propagation routines by a factor of 50.
We validate our approach on the publicly-available dataset [9] provided by the organizers of the ISBI
2012 EM Segmentation Challenge [1], which represents two portions of the ventral nerve cord of a
Drosophila larva. The dataset is composed of two 512 × 512 × 30 stacks, one used for training, one for testing. Each stack covers a 2 × 2 × 1.5 μm volume, with a resolution of 4 × 4 × 50 nm/pixel.
For the training stack, a manually annotated ground truth segmentation is provided. For the testing
stack, the organizers obtained (but did not distribute) two manual segmentations by different expert
neuroanatomists. One is used as ground truth, the other to evaluate the performance of a second
human observer and provide a meaningful comparison for the algorithms' performance.
A segmentation of the testing stack is evaluated through an automated online system, which computes three error metrics in relation to the hidden ground truth:
Rand error: defined as 1 − F_rand, where F_rand represents the F1 score of the Rand index [29], which
measures the accuracy with which pixels are associated to their respective neurons.
Warping error: a segmentation metric designed to account for topological disagreements [19];
it accounts for the number of neuron splits and mergers required to obtain the candidate
segmentation from ground truth.
Pixel error: defined as 1 − F_pixel, where F_pixel represents the F1 score of pixel similarity.
The automated system accepts a stack of grayscale images, representing membrane probability values for each pixel; the stack is thresholded using 9 different threshold values, obtaining 9 binary
stacks. For each of the stacks, the system computes the error measures above, and returns the minimum error.
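The pixel-error part of this protocol is straightforward to reproduce: threshold the probability stack at several levels and keep the best F1-based score. A minimal sketch with our own names; the Rand and warping errors require connected-component and topology machinery that is omitted here.

```python
import numpy as np

def pixel_error(prob_stack, truth_stack, thresholds=np.linspace(0.1, 0.9, 9)):
    """1 - F1 of membrane pixels, minimized over candidate thresholds."""
    best = 1.0
    membrane = truth_stack.astype(bool)
    for th in thresholds:
        pred = prob_stack >= th
        tp = np.logical_and(pred, membrane).sum()
        if tp == 0:
            continue
        precision = tp / pred.sum()
        recall = tp / membrane.sum()
        f1 = 2 * precision * recall / (precision + recall)
        best = min(best, 1.0 - f1)
    return best
```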
Pixel error is clearly not a suitable indicator of segmentation quality in this context, and is reported
mostly for reference. Rand and Warping error metrics have various strengths and weaknesses, without clear consensus in favor of any. The former tends to provide a more consistent measure but
penalizes even slightly misplaced borders, which would not be problematic in most practical applications. The latter has a more intuitive interpretation, but completely disregards non-topological
errors.
We train four networks N1, N2, N3 and N4, with slightly different architectures, and window sizes
w = 65 (for N1, N2, N3) and w = 95 (for N4); all networks use foveation and nonuniform sampling,
Figure 4: Above, from left to right: part of a source image from the test set; corresponding calibrated
outputs of networks N1, N2, N3 and N4; average of such outputs; average after filtering. Below, the
performance of each network, as well as the significantly better performance due to averaging their
outputs. All results are computed after median filtering (see text).
except N3, which uses neither. As the input window size increases, the network depth also increases
because we keep the convolutional filter sizes small. The architecture of N4 is the deepest, and is
reported in Table 1.
Training time for one epoch varies from approximately 170 minutes for N1 (w = 65) to 340 minutes
for N4 (w = 95). All nets are trained for 30 epochs, which leads to a total training time of several
days. However, once networks are trained, application to new images is relatively fast: classifying the 8 million pixels comprising the whole testing stack takes 10 to 30 minutes on four GPUs.
Such implementation is currently being further optimized (with foreseen speedups of one order of
magnitude at least) in view of application to huge, terapixel-class datasets [6, 21].
Table 1: 11-layer architecture for network N4, w = 95.

Layer | Type            | Maps and neurons         | Kernel size
0     | input           | 1 map of 95x95 neurons   | -
1     | convolutional   | 48 maps of 92x92 neurons | 4x4
2     | max pooling     | 48 maps of 46x46 neurons | 2x2
3     | convolutional   | 48 maps of 42x42 neurons | 5x5
4     | max pooling     | 48 maps of 21x21 neurons | 2x2
5     | convolutional   | 48 maps of 18x18 neurons | 4x4
6     | max pooling     | 48 maps of 9x9 neurons   | 2x2
7     | convolutional   | 48 maps of 6x6 neurons   | 4x4
8     | max pooling     | 48 maps of 3x3 neurons   | 2x2
9     | fully connected | 200 neurons              | 1x1
10    | fully connected | 2 neurons                | 1x1
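For readers who want to reproduce the topology, here is a minimal PyTorch sketch of Table 1. This is our reconstruction, not the authors' code (theirs is a custom GPU implementation), and the choice of tanh activations is an assumption.

```python
import torch.nn as nn

# Sketch of the N4 topology from Table 1 (activations assumed to be tanh).
n4 = nn.Sequential(
    nn.Conv2d(1, 48, kernel_size=4), nn.Tanh(),   # 95 -> 92
    nn.MaxPool2d(2),                              # 92 -> 46
    nn.Conv2d(48, 48, kernel_size=5), nn.Tanh(),  # 46 -> 42
    nn.MaxPool2d(2),                              # 42 -> 21
    nn.Conv2d(48, 48, kernel_size=4), nn.Tanh(),  # 21 -> 18
    nn.MaxPool2d(2),                              # 18 -> 9
    nn.Conv2d(48, 48, kernel_size=4), nn.Tanh(),  # 9 -> 6
    nn.MaxPool2d(2),                              # 6 -> 3
    nn.Flatten(),
    nn.Linear(48 * 3 * 3, 200), nn.Tanh(),
    nn.Linear(200, 2),                            # membrane / non-membrane
)
```

Note how each max-pooling layer halves the map size, so the 95-pixel input window shrinks to 3x3 maps before the fully connected layers.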
The outputs of four such networks are shown in Figure 4, along with their performance after filtering.
By averaging the outputs of all networks, results improve significantly. The final result for one slice
of the test stack is shown in Figure 5.
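A sketch of this combination step, under our assumptions (NumPy/SciPy, and a 3-pixel median window, which the text does not specify):

```python
import numpy as np
from scipy.ndimage import median_filter

def combine(prob_maps):
    # prob_maps: list of calibrated (30, 512, 512) probability stacks,
    # one per network. Average them, then median-filter each slice.
    avg = np.mean(prob_maps, axis=0)
    return np.stack([median_filter(s, size=3) for s in avg])  # size assumed
```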
Our results are compared to competing methods in Table 2.
Since our pure pixel classifier method aims at minimizing pixel error, Rand and warping errors are
only minimized as a side effect; they are never explicitly accounted for during segmentation. In contrast,
some competing segmentation approaches adopt post-processing techniques that directly optimize the Rand error.
Figure 5: Left: slice 16 of the test stack. Right: corresponding output.
Table 2: Results of our approach and competing algorithms. For comparison, the first two rows
report the performance of the second human observer and of a simple thresholding approach.

Group                  | Rand error [×10⁻³] | Warping error [×10⁻⁶] | Pixel error [×10⁻³]
Second Human Observer  | 27  | 344   | 67
Simple Thresholding    | 445 | 15522 | 222
Our approach           | 48  | 434   | 60
Laptev et al. [24] (1) | 65  | 556   | 83
Laptev et al. [24] (2) | 70  | 525   | 79
Sumbul et al.          | 76  | 646   | 65
Liu et al. [26] (1)    | 84  | 1602  | 134
Kaynig et al. [23]     | 84  | 1124  | 157
Liu et al. [26] (2)    | 89  | 1134  | 78
Kamentsky et al. [20]  | 90  | 1512  | 100
Burget et al. [8]      | 139 | 2641  | 102
Tan et al. [36]        | 153 | 685   | 88
Bas et al. [4]         | 162 | 1613  | 109
Iftikhar et al. [18]   | 230 | 16156 | 150
Nevertheless, their results are inferior. Such post-processing techniques, which unlike our general classifier are specific to this particular problem, could be successfully
applied to fine-tune our outputs, improving results further. Preliminary results in this direction are
encouraging: the problem-specific post-processing techniques of [20] and [24], operating on our segmentation, reduce the Rand error measure to 36×10⁻³ and 32×10⁻³, respectively. Further research
along these lines is planned for the near future.
4 Discussion and conclusions
The main strength of our approach to neuronal membrane segmentation in EM images lies in a
deep and wide neural network trained by online back-propagation to become a very powerful pixel
classifier with superhuman pixel-error rate, made possible by an optimized GPU implementation
more than 50 times faster than equivalent code on standard microprocessors.
Our approach outperforms all other approaches in the competition, despite not even being tailored
to this particular segmentation task. Instead, the DNN acts as a generic image classifier, using raw
pixel intensities as inputs, without ad-hoc post-processing. This opens interesting perspectives on
applying similar techniques to other biomedical image segmentation tasks.
Acknowledgments
This work was partially supported by the Supervised Deep / Recurrent Nets SNF grant, Project Code
140399.
References
[1] Segmentation of neuronal structures in EM stacks challenge - ISBI 2012. http://tinyurl.com/d2fgh7g.
[2] The Open Connectome Project. http://openconnectomeproject.org.
[3] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels. Technical Report 149300, EPFL, June 2010.
[4] Erhan Bas, Mustafa G. Uzunbas, Dimitris Metaxas, and Eugene Myers. Contextual grouping in a concept: a multistage decision strategy for EM segmentation. In Proc. of ISBI 2012 EM Segmentation Challenge.
[5] Sven Behnke. Hierarchical Neural Networks for Image Interpretation, volume 2766 of Lecture Notes in Computer Science. Springer, 2003.
[6] Davi D. Bock, Wei-Chung A. Lee, Aaron M. Kerlin, Mark L. Andermann, Greg Hood, Arthur W. Wetzel, Sergey Yurgenson, Edward R. Soucy, Hyon S. Kim, and R. Clay Reid. Network anatomy and in vivo physiology of visual cortical neurons. Nature, 471(7337):177-182, 2011.
[7] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11):1222-1239, 2001.
[8] Radim Burget, Vaclav Uher, and Jan Masek. Trainable Segmentation Based on Local-level and Segment-level Feature Extraction. In Proc. of ISBI 2012 EM Segmentation Challenge.
[9] Albert Cardona, Stephan Saalfeld, Stephan Preibisch, Benjamin Schmid, Anchi Cheng, Jim Pulokas, Pavel Tomancak, and Volker Hartenstein. An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol, 8(10):e1000502, 2010.
[10] Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207-3220, 2010.
[11] Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber. Convolutional neural network committees for handwritten character classification. In International Conference on Document Analysis and Recognition, pages 1250-1254, 2011.
[12] Dan Claudiu Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In International Joint Conference on Artificial Intelligence, pages 1237-1242, 2011.
[13] Dan Claudiu Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642-3649, 2012.
[14] C.A. Curcio, K.R. Sloan, R.E. Kalina, and A.E. Hendrickson. Human photoreceptor topography. The Journal of Comparative Neurology, 292(4):497-523, 1990.
[15] A. Foi and G. Boracchi. Foveated self-similarity in nonlocal image filtering. In Proceedings of SPIE, volume 8291, page 829110, 2012.
[16] Kunihiko Fukushima. Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193-202, 1980.
[17] G. González, F. Fleuret, and P. Fua. Learning rotational features for filament detection. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1582-1589. IEEE, 2009.
[18] Saadia Iftikhar and Afzal Godil. The Detection of Neuronal Structures using a Patch-based Multi-features and Support Vector Machines Learning Algorithm. In Proc. of ISBI 2012 EM Segmentation Challenge.
[19] Viren Jain, Benjamin Bollmann, Mark Richardson, Daniel R. Berger, Moritz Helmstaedter, Kevin L. Briggman, Winfried Denk, Jared B. Bowden, John M. Mendenhall, Wickliffe C. Abraham, Kristen M. Harris, N. Kasthuri, Ken J. Hayworth, Richard Schalek, Juan Carlos Tapia, Jeff W. Lichtman, and H. Sebastian Seung. Boundary Learning by Optimization with Topological Constraints. In CVPR, pages 2488-2495. IEEE, 2010.
[20] Lee Kamentsky. Segmentation of EM images of neuronal structures using CellProfiler. In Proc. of ISBI 2012 EM Segmentation Challenge.
[21] Bobby Kasthuri. Mouse Visual Cortex Dataset in the Open Connectome Project. http://openconnectomeproject.org/Kasthuri11/.
[22] V. Kaynig, T. Fuchs, and J. Buhmann. Geometrical consistent 3D tracing of neuronal processes in ssTEM data. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2010, pages 209-216, 2010.
[23] V. Kaynig, T. Fuchs, and J.M. Buhmann. Neuron geometry extraction by perceptual grouping in ssTEM images. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2902-2909. IEEE, 2010.
[24] Dmitry Laptev, Alexander Vezhnevets, Sarvesh Dwivedi, and Joachim Buhmann. Segmentation of Neuronal Structures in EM stacks. In Proc. of ISBI 2012 EM Segmentation Challenge.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[26] Ting Liu, Mojtaba Seyedhosseini, Elizabeth Jurrus, and Tolga Tasdizen. Neuron Segmentation in EM Images using Series of Classifiers and Watershed Tree. In Proc. of ISBI 2012 EM Segmentation Challenge.
[27] A. Lucchi, K. Smith, R. Achanta, G. Knott, and P. Fua. Supervoxel-Based Segmentation of Mitochondria in EM Image Stacks With Learned Shape Features. Medical Imaging, IEEE Transactions on, (99):1-1, 2012.
[28] A. Lucchi, K. Smith, R. Achanta, V. Lepetit, and P. Fua. A fully automated approach to segmentation of irregularly shaped cellular structures in EM images. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2010, pages 463-471, 2010.
[29] W.M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850, 1971.
[30] Maximilian Riesenhuber and Tomaso Poggio. Hierarchical models of object recognition in cortex. Nat. Neurosci., 2(11):1019-1025, 1999.
[31] Dominik Scherer, Andreas Müller, and Sven Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks, 2010.
[32] Thomas Serre, Lior Wolf, and Tomaso Poggio. Object recognition with features inspired by visual cortex. In Proc. of Computer Vision and Pattern Recognition Conference, 2005.
[33] Patrice Y. Simard, Dave Steinkraus, and John C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, pages 958-963, 2003.
[34] K. Smith, A. Carleton, and V. Lepetit. Fast ray features for learning irregular shapes. In Computer Vision, 2009 IEEE 12th International Conference on, pages 397-404. IEEE, 2009.
[35] Daniel Strigl, Klaus Kofler, and Stefan Podlipnig. Performance and scalability of GPU-based convolutional neural networks. In 18th Euromicro Conference on Parallel, Distributed, and Network-Based Processing, 2010.
[36] Xiao Tan and Changming Sun. Membrane extraction using two-step classification and post-processing. In Proc. of ISBI 2012 EM Segmentation Challenge.
To appear in: Neural Information Processing Systems (NIPS),
Lake Tahoe, Nevada. December 3-6, 2012.
Efficient and direct estimation of a neural subunit model for sensory coding

Brett Vintch, Andrew D. Zaharia, J. Anthony Movshon, Eero P. Simoncelli
Center for Neural Science, and Howard Hughes Medical Institute
New York University, New York, NY 10003
[email protected]
Abstract
Many visual and auditory neurons have response properties that are well explained
by pooling the rectified responses of a set of spatially shifted linear filters. These
filters cannot be estimated using spike-triggered averaging (STA). Subspace methods such as spike-triggered covariance (STC) can recover multiple filters, but require substantial amounts of data, and recover an orthogonal basis for the subspace
in which the filters reside rather than the filters themselves. Here, we assume a
linear-nonlinear-linear-nonlinear (LN-LN) cascade model in which the first linear stage is a set of shifted ("convolutional") copies of a common filter, and the
first nonlinear stage consists of rectifying scalar nonlinearities that are identical
for all filter outputs. We refer to these initial LN elements as the "subunits" of
the receptive field. The second linear stage then computes a weighted sum of the
responses of the rectified subunits. We present a method for directly fitting this
model to spike data, and apply it to both simulated and real neuronal data from
primate V1. The subunit model significantly outperforms STA and STC in terms
of cross-validated accuracy and efficiency.
1 Introduction
Advances in sensory neuroscience rely on the development of testable functional models for the
encoding of sensory stimuli in neural responses. Such models require procedures for fitting their
parameters to data, and should be interpretable in terms both of sensory function and of the biological
elements from which they are made. The most common models in the visual and auditory literature
are based on linear-nonlinear (LN) cascades, in which a linear stage serves to project the highdimensional stimulus down to a one-dimensional signal, where it is then nonlinearly transformed
to drive spiking. LN models are readily fit to data, and their linear operators specify the stimulus
selectivity and invariance of the cell. The weights of the linear stage may be loosely interpreted
as representing the efficacy of synapses, and the nonlinearity as a transformation from membrane
potential to firing rate.
For many visual and auditory neurons, responses are not well described by projection onto a single
linear filter, but instead reflect a combination of several filters. In the cat retina, the responses of Y
cells have been described by linear pooling of shifted rectified linear filters, dubbed "subunits" [1, 2].
Similar behaviors are seen in guinea pig [3] and monkey retina [4]. In the auditory nerve, responses
are described as computing the envelope of the temporally filtered sound waveform, which can be
computed via summation of squared quadrature filter responses [5]. In primary visual cortex (V1),
simple cells are well described using LN models [6, 7], but complex cell responses are more like a
superposition of multiple spatially shifted simple cells [8], each with the same orientation and spatial
frequency preference [9]. Although the description of complex cells is often reduced to a sum of
two squared filters in quadrature [10], more recent experiments indicate that these cells (and indeed
most "simple" cells) require multiple shifted filters to fully capture their responses [11, 12, 13].
Intermediate nonlinearities are also required to describe the response properties of some neurons
in V2 to stimuli such as angles [14] and depth edges [15].
Each of these examples is consistent with a canonical but constrained LN-LN model, in which the
first linear stage consists of convolution with one (or a few) filters, and the first nonlinear stage
is point-wise and rectifying. The second linear stage then pools the responses of these "subunits"
using a weighted sum, and the final nonlinearity converts this to a firing rate. Hierarchical stacks of
this type of "generalized complex cell" model have also been proposed for machine vision [16, 17].
What is lacking is a method for validating this model by fitting it directly to spike data.
A widely used procedure for fitting a simple LN model to neural data is reverse correlation [18, 19].
The spike-triggered average of a set of Gaussian white noise stimuli provides an unbiased estimate
of the linear kernel. In a subunit model, the initial linear stage projects the stimulus into a multidimensional subspace, which can be estimated using spike-triggered covariance (STC) [20, 21].
This has been used successfully for fly motion neurons [22], vertebrate retina [23], and primary visual cortex [24, 11]. But this method relies on a Gaussian stimulus ensemble, requires a substantial
amount of data, and recovers only a set of orthogonal axes for the response subspace, not the underlying biological filters. More general methods based on information maximization alleviate some of
the stimulus restrictions [25] but strongly limit the dimensionality of the recoverable subspace and
still produce only a basis for the subspace.
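For reference, the two estimators discussed here reduce to a few lines; this is a standard textbook implementation written by us, not code from the paper.

```python
import numpy as np

def sta_stc(X, y):
    # X: (T, D) stimulus matrix; y: spike count per frame.
    sta = (y @ X) / y.sum()                # spike-triggered average
    Xc = X - sta
    C = (Xc.T * y) @ Xc / y.sum()          # spike-triggered covariance
    evals, evecs = np.linalg.eigh(C)       # eigenvalues sorted ascending
    return sta, evals, evecs               # extreme eigenvectors = candidate axes
```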
Here, we develop a specific subunit model and a maximum likelihood procedure to estimate its
parameters from spiking data. We fit the model to both simulated and real V1 neuronal data, demonstrating that it is substantially more accurate for a given amount of data than the current state-of-the-art V1 model, which is based on STC [11], and that it produces biologically interpretable filters.
2 Subunit model
We assume that neural responses arise from a weighted sum of the responses of a set of nonlinear
subunits. Each subunit applies a linear filter to its input (which can be either the raw stimulus, or
the responses arising from a previous stage in a hierarchical cascade), and transforms the filtered
response using a memoryless rectifying nonlinearity. A critical simplification is that the subunit
filters are related by a fixed transformation; here, we assume they are spatially translated copies of
a common filter, and thus the population of subunits can be viewed as computing a convolution.
For example, the subunits of a V1 complex cell could be simple cells in V1 that share the same
orientation and spatial frequency preference, but differ in spatial location, as originally proposed by
Hubel & Wiesel [8, 9]. We also assume that all subunits use the same rectifying nonlinearity. The
response to input defined over two discrete spatial dimensions and time, x(i, j, t), is written as:
$$\hat{r}(t) = \sum_{m,n} w_{m,n} \, f_{\vec{\theta}}\!\left( \sum_{i,j,\tau} k(m,n,\tau) \, x(i-m,\, j-n,\, t-\tau) \right) + \ldots + b, \qquad (1)$$
where k is the subunit filter, f_θ is a point-wise function parameterized by the vector θ, w_{n,m} are the
spatial weights, and b is an additive baseline. The ellipsis indicates that we allow for multiple
subunit channels, each with its own filter, nonlinearity, and pooling weights. We interpret r̂(t) as a
"generator potential" (e.g., a time-varying membrane voltage), which is converted to a firing rate by
another rectifying nonlinearity.
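As an illustration of Eq. (1) for a single channel, here is a minimal NumPy sketch written by us; the variable names, the use of scipy's fftconvolve, and the half-squaring rectifier are our choices, and the sign/shift conventions are only approximate.

```python
import numpy as np
from scipy.signal import fftconvolve

def subunit_response(x, k, w, f, b=0.0):
    # x: stimulus movie (T, H, W); k: subunit kernel (tau, h, w);
    # w: spatial pooling map over subunit positions; f: pointwise rectifier.
    g = fftconvolve(x, k, mode='valid')    # shared-kernel subunit drives
    return np.tensordot(f(g), w, axes=([1, 2], [0, 1])) + b

x = np.random.randn(100, 16, 16)           # toy white-noise stimulus
k = np.random.randn(8, 8, 8)
w = np.ones((9, 9)) / 81.0                 # uniform pooling over positions
rate = subunit_response(x, k, w, f=lambda s: np.maximum(s, 0.0) ** 2)
```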
The subunit model of Eq. (1) may be seen as a specific instance of a subspace model, in which the
input is initially projected onto a linear subspace. Bialek and colleagues introduced spike-triggered
covariance as a means of recovering such subspaces [20, 22]. Specifically, eigenvector analysis
of the covariance matrix of the spike-triggered input ensemble exposes orthogonal axes for which
the spike-triggered ensemble has a variance that differs significantly from that of the raw input
ensemble. These axes may be separated into those along which variance is greater (excitatory) or
less than (suppressive) that of the input. Figure 1 demonstrates what happens when STC is applied
to a simulated complex cell with 15 spatially shifted subunits. The response of this model cell is
Figure 1: Spike-triggered covariance analysis of a simulated V1 complex cell. (a) The model output
is formed by summing the rectified responses of multiple linear filter kernels which are shifted and
scaled copies of a canonical form. (b) The shifted filters lie along a manifold in stimulus space
(four shown), and are not mutually orthogonal in general. STC recovers an orthogonal basis for
a low-dimensional subspace that contains this manifold by finding the directions in stimulus space
along which spikes are elicited or suppressed. (c) STC analysis of this model cell returns a variable
number of filters dependent upon the amount of acquired data. A modest amount of data typically
reveals two strong STC eigenvalues (top), whose eigenvectors form a quadrature (90-degree phase-shifted) pair and span the best-fitting plane for the set of shifted model filters. These will generally
have tuning properties (orientation, spatial frequency) similar to the true model filters. However, the
manifold does not generally lie in a two-dimensional subspace [26], and a larger data set reveals
additional eigenvectors (bottom) that serve to capture the deviations from the $\vec{e}_{1,2}$ plane. Due to
the constraint of mutual orthogonality, these filters are usually not localized and they have tuning
properties that differ from true model filters.
$\hat{r}(t) = \sum_i w_i \lfloor \vec{k}_i \cdot \vec{x}(t) \rfloor^2$, where the $\vec{k}_i$ are shifted filters, $w$ weights filters by position, and $\vec{x}$ is
the stimulus vector. The recovered STC axes span the same subspace as the shifted model filters, but
there are fewer of them, and the enforced orthogonality of eigenvectors means that they are generally
not a direct match to any of the model filters. This has also been observed in filters extracted from
physiological data [11, 12]. Although one may follow the STC analysis by indirectly identifying a
localized filter whose shifted copies span the recovered subspace [11, 13], the reliance on STC still
imposes the stimulus limitations and data requirements mentioned above.
3 Direct subunit model estimation
A generic subspace method like STC does not exploit the specific structure of the subunit model.
We therefore developed an estimation procedure explicitly tailored for this type of computation. We
first introduce a piecewise-linear parameterization of the subunit nonlinearity:
$$f(s) = \sum_{l} \alpha_l \, T_l(s), \qquad (2)$$
where the α_l scale a small set of overlapping "tent" functions, T_l(·), that represent localized portions
of f(·) (we find that a dozen or so basis functions are typically sufficient to provide the needed
flexibility). Incorporating this into the model response of Eq. (1) allows us to fold the second linear
pooling stage and the subunit nonlinearity into a single sum:
$$\hat{r}(t) = \sum_{m,n,l} w_{m,n} \, \alpha_l \, T_l\!\left( \sum_{i,j,\tau} k(m,n,\tau) \, x(i-m,\, j-n,\, t-\tau) \right) + \ldots + b. \qquad (3)$$
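A minimal sketch (ours) of the tent parameterization in Eq. (2); the spacing and number of basis functions are assumptions, since the text only says "a dozen or so".

```python
import numpy as np

def tent_basis(s, centers):
    # Piecewise-linear "tent" functions on equally spaced centers (assumed).
    d = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(s[..., None] - centers) / d)

centers = np.linspace(-3, 3, 12)                 # "a dozen or so" functions
alpha = np.random.randn(12)
s = np.linspace(-4, 4, 200)
f_of_s = tent_basis(s, centers) @ alpha          # f(s) = sum_l alpha_l T_l(s)
```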
The model is now partitioned into two linear stages, separated by the fixed nonlinear functions T_l(·).
In the first, the stimulus is convolved with k, and in the second, the nonlinear responses are summed
with a set of weights that are separable in the indices l and n, m. The partition motivates the use of an
iterative coordinate descent scheme: the linear weights of each portion are optimized in alternation,
while the other portion is held constant. For each step, we minimized the mean square error between
the observed firing rate of a cell and the firing rate predicted by the model. For models that include
two subunit channels we optimize over both channels simultaneously (see section 3.3 for comments
regarding two-channel initialization).
3.1 Estimating the convolutional subunit kernel
The first coordinate descent leg optimizes the convolutional subunit kernel, k, using gradient descent
while fixing the subunit nonlinearity and the final linear pooling. Because the tent basis functions
are fixed and piecewise linear, the gradient is easily determined. This property also ensures that
the descent is locally convex: assuming that updating k does not cause any of the linear subunit
responses to jump between the localized tent functions representing f , then the optimization is linear
and the objective function is quadratic. In practice, the full gradient descent path causes the linear
subunit responses to move slowly across bins of the piecewise nonlinearity. However, we include
a regularization term to impose smoothness on the nonlinearity (see below), and this yields a well-behaved minimization problem for k.
3.2 Estimating the subunit nonlinearities and linear subunit pooling
The second leg of coordinate descent optimizes the subunit nonlinearity (more specifically, the
weights on the tent functions, α_l) and the subunit pooling, w_{n,m}. As described above, the objective
is bilinear in α_l and w_{n,m} when k is fixed. Estimating both α_l and w_{n,m} can be accomplished with
alternating least-squares, which assures convergence to a (local) minimum [27]. We also include
two regularization terms in the objective function. The first ensures smoothness in the nonlinearity
f , by penalizing the square of the second derivative of the function in the least-squares fit. This
smooth nonlinearity helps to guarantee that the optimization of k is well behaved, even where finite
data sets leave the function poorly constrained. We also include a cross-validated ridge prior for
the pooling weights to bias w_{n,m} toward zero. The filter kernel k can also be regularized to ensure
smoothness, but for the examples shown here we did not find the need to include such a term.
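A sketch, under our assumptions, of one round of the alternating-least-squares update for the bilinear (w, α) problem. Here G holds the tent-function outputs of the subunit responses; the smoothness penalty on f is omitted for brevity, and only ridge terms are shown.

```python
import numpy as np

def als_step(G, r, w, alpha, ridge=1e-3):
    # G: (T, M, L) tent outputs per time, subunit position, basis function;
    # r: (T,) observed rates; w: (M,) pooling weights; alpha: (L,) tent weights.
    Aw = np.einsum('tml,l->tm', G, alpha)          # design matrix for w
    w = np.linalg.solve(Aw.T @ Aw + ridge * np.eye(Aw.shape[1]), Aw.T @ r)
    Aa = np.einsum('tml,m->tl', G, w)              # design matrix for alpha
    alpha = np.linalg.solve(Aa.T @ Aa + 1e-6 * np.eye(Aa.shape[1]), Aa.T @ r)
    return w, alpha
```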
3.3 Model initialization
Our objective function is non-convex and contains local minima, so the selection of initial parameter
values may affect the solution. We found that initializing our two-channel subunit model to have
a positive pooling function for one channel and a negative pooling function for the second channel
allowed the optimization of the second channel to proceed much more quickly. This is probably
due in part to a suppressive channel that is much weaker than the excitatory channel in general.
We initialized the nonlinearity to half-wave rectification for the excitatory channel and full-wave rectification for the suppressive channel.
To initialize the convolutional filter we use a novel technique that we term "convolutional STC". The
subunit model describes a receptive field as the linear combination of nonlinear kernel responses that
spatially tile the stimulus. Thus, the contribution of each localized patch of stimulus (of a size equal
to the subunit kernel) is the same, up to a scale factor set by the weighting used in the subsequent
pooling stage. As such, we compute an STC analysis on the union of all localized patches of stimuli.
For each subunit location, {m, n}, we extract the local stimulus values in a window, g_{m,n}(i, j), the
size of the convolutional kernel, and append them vertically in a "local" stimulus matrix. As an initial
guess for the pooling weights, we weight each of these blocks by a Gaussian spatial profile, chosen
to roughly match the size of the receptive field. We also generate a vector containing the vertical
concatenation of copies of the measured spike train, $\vec{r}$ (one copy for each subunit location).
$$\begin{pmatrix} w_{1,1}\,X_{g_{1,1}(i,j)} \\ w_{1,2}\,X_{g_{1,2}(i,j)} \\ \vdots \end{pmatrix} \rightarrow X_{\mathrm{loc}}\,; \qquad \begin{pmatrix} \vec{r} \\ \vec{r} \\ \vdots \end{pmatrix} \rightarrow \vec{r}_{\mathrm{loc}}. \qquad (4)$$
After performing STC analysis on the localized stimulus matrix, we use the first (largest variance)
eigenvector to initialize the subunit kernel of the excitatory channel, and the last (lowest variance)
eigenvector to initialize the kernel of the suppressive channel. In practice, we find that this initialization greatly reduces the number of iterations, and thus the run time, of the optimization procedure.
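The following sketch shows our reading of the "convolutional STC" initialization for a purely spatial kernel (the actual kernels are spatiotemporal); the patch extraction and weighting details are our assumptions.

```python
import numpy as np

def conv_stc_init(x, spikes, p, gauss_w):
    # x: (T, H, W) stimulus; spikes: (T,) counts; p: patch size;
    # gauss_w: (H-p+1, W-p+1) Gaussian spatial profile over subunit positions.
    T, H, W = x.shape
    rows, wts = [], []
    for m in range(H - p + 1):
        for n in range(W - p + 1):
            rows.append(x[:, m:m+p, n:n+p].reshape(T, -1) * gauss_w[m, n])
            wts.append(spikes)
    X = np.vstack(rows)                    # "local" stimulus matrix
    r = np.concatenate(wts)                # repeated spike train
    sta = (r[:, None] * X).sum(0) / r.sum()
    C = ((X - sta).T * r) @ (X - sta) / r.sum()
    evals, evecs = np.linalg.eigh(C)
    # Largest-variance axis -> excitatory seed; smallest -> suppressive seed.
    return evecs[:, -1].reshape(p, p), evecs[:, 0].reshape(p, p)
```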
Figure 2: Model fitting performance for simulated V1 neurons. Shown are correlation coefficients
for the subunit model (black circles) and the Rust-STC model (blue squares) [11], computed on both
the training data (open), and on a holdout test set (closed). Spike counts for each presented stimulus
frame are drawn from a Poisson distribution. Shaded regions indicate ±1 s.d. over 5 simulation runs.
(a) "Simple" cell, with spike rate determined by the half-wave-rectified and squared response of a
single oriented linear filter. (b) "Complex" cell, with rate determined by a sum of squared Gabor
filters arranged in spatial quadrature. Insets show estimated filters for the subunit (top) and Rust-STC (bottom) models with ten seconds (400 frames; left) and 20 minutes (48,000 frames; right) of
data.
4 Experiments
We fit the subunit model to physiological data sets in 3 different primate cortical areas: V1, V2, and
MT. The model is able to explain a significant amount of variance for each of these areas, but for
illustrative purposes we show here only data for V1. Initially, we use simulated V1 cells to compare
the performance of the subunit model to that of the Rust-STC model [11], which is based upon STC
analysis.
4.1 Simulated V1 data
We simulated the responses of canonical V1 simple cells and complex cells in response to white
noise stimuli. Stimuli consisted of a 16x16 spatial array of pixels whose luminance values were
set to independent ternary white noise sequences, updated every 25 ms (or 40 Hz). The simulated
cells use spatiotemporally oriented Gabor filters: The simple cell has one even-phase filter and a
half-squaring output nonlinearity while the complex cell has two filters (one even and one odd)
whose squared responses are combined to give a firing rate. Spike counts are drawn from a Poisson
distribution, and overall rates are scaled so as to yield an average of 40 ips (i.e. one spike per time
bin).
For consistency with the analysis of the physiological data, we fit the simulated data using a subunit
model with two subunit channels (even though the simulated cells only possess an excitatory channel). When fitting the Rust-STC model, we followed the procedure described in [11]. Briefly, after
the STA and STC filters are estimated, they are weighted according to their predictive power and
combined in excitatory and suppressive pools, E and S (we use cross-validation to determine the
number of filters to use for each pool). These two pooled responses are then combined using a joint
output nonlinearity: $\hat{r}(t)_{\mathrm{Rust}} = \alpha + \beta\,(E^{\gamma} - \delta S^{\zeta})\,/\,(E^{\gamma} + \epsilon S^{\zeta} + 1)$. The parameters $\{\alpha, \beta, \gamma, \delta, \epsilon, \zeta\}$
are optimized to minimize the mean squared error between observed spike counts and the model rate.
Model performances, measured as the correlation between the model rate and spike count, are shown
in Figure 2. In low data regimes, both models perform nearly perfectly on the training data, but
poorly on separate test data not used for fitting, a clear indication of over-fitting. But as the data
set increases in size, the subunit model rapidly improves, reaching near-perfect performance for
modest spike counts. The Rust-STC model also improves, but much more slowly; it requires more
than an order of magnitude more data to achieve the same performance as the subunit model. This
Figure 3: Two-channel subunit model fit to physiological data from a macaque V1 cell. (a) Fitted
parameters for the excitatory (top row) and suppressive (bottom row) channels, including the space-time subunit filters (8 grayscale images, corresponding to different time frames), the nonlinearity,
and the spatial weighting function w_{n,m} that is used to combine the subunit responses. (b) A raster
showing spiking responses to 20 repeated presentations of an identical stimulus with the average
spike count (black) and model prediction (blue) plotted above. (c) Simulated models (subunit model:
blue, Rust-STC model: purple) and measured (black) responses to drifting sinusoidal gratings.
inefficiency is more pronounced for the complex cell, because the simple cell is fully explained by
the STA filter, which can be estimated much more reliably than the STC filters for small amounts of
data. We conclude that directly fitting the subunit model is much more efficient in the use of data
than using STC to estimate a subspace model.
4.2 Physiological data from macaque V1
We presented spatio-temporal pixel noise to 38 cells recorded from V1 in anesthetized macaques (see
[11] for details of experimental design). The stimulus was a 16x16 grid with luminance values set
by independent ternary white noise sequences refreshed at 40 Hz. For 21 neurons we also presented
20 repeats of a sequence of 1000 stimulus frames as a validation set. The model filters were assumed
to respond over a 200 ms (8 frame) causal time window in which the stimulus most strongly affected
the firing of the neurons, and thus, model responses were derived from a stimulus vector with 2048
dimensions (16x16x8).
Figure 3 shows the fit of a 2-channel subunit model to data from a typical V1 cell. Figure 3a
illustrates the subunit kernels and their associated nonlinearities and spatial pooling maps, for both
the excitatory channel (top row) and the suppressive channel (bottom row). The two channels
show clear but opposing direction selectivity, starting at a latency of 50 ms. The fact that this cell
is complex is reflected in two aspects of the model parameters. First, the model shows a symmetric,
full-wave rectifying nonlinearity for the excitatory channel. Second, the final linear pooling for this
channel is diffuse over space, eliciting a response that is invariant to the exact spatial position and
phase of the stimulus.
For this particular example the model fits well. For the cross-validated set of repeated stimuli (which
have the same structure as for the fitting data), on average the model correlates with each trial's firing
rate with an r-value of 0.54. A raster of spiking responses to twenty repetitions of a 5 s stimulus
is depicted in Fig. 3b, along with the average firing rate and the model prediction, which are well
matched. The model can also capture the direction selectivity of this cell's response to moving
sinusoidal gratings (whose spatial and temporal frequency are chosen to best drive the cell) (Fig.
3c). The subunit model acceptably fits most of the cells we recorded in V1. Moreover, fit quality
is not correlated with modulation index (r = 0.08; n.s.), suggesting that the model captures the
behavior of both simple and complex cells equally well.
The fitted subunit model also significantly outperforms the Rust-STC model in terms of predicting
responses to novel data. Figure 4a shows the performance of the Rust-STC and subunit models for
21 V1 neurons, for both training data and test data on single trials.
Figure 4: Model performance comparisons on physiological data. (a) Subunit model performance
vs. Rust-STC model for V1 data. Training accuracy is computed for a single variable-length sequence extracted from the fitting data. Test accuracy is computed on the average response to 20
repeats of a 25 s stimulus. (b) Subunit model performance vs. an "oracle" model for V1 data (see
text). Each point represents the average accuracy in predicting responses to each of 20 repeated stimuli. The oracle model uses the average spike count over the other 19 repeats as a prediction. Inset:
Ratio of subunit-to-oracle performance. Error bars indicate ±1 s.d. (c) Subunit model performance
on test data, as a function of the total number of recorded spikes.
For the training data, the Rust-STC model performs significantly better than the subunit model (Figure 4a; ⟨r_Rust⟩ = 0.81,
⟨r_subunit⟩ = 0.33; p ≪ 0.005). However, this is primarily due to over-fitting: visual inspection
of the STC kernels for most cells reveals very little structure. For test data (that was not included
in the data used to fit the models), the subunit model exhibits significantly better performance than
the Rust-STC model (⟨r_Rust⟩ = 0.16, ⟨r_subunit⟩ = 0.27; p ≪ 0.005). This is primarily due
to over-fitting in the STC analysis. For a stimulus composed of a 16x16 pixel grid with 8 frames,
the spike-triggered covariance matrix contains over 2 million parameters. For the same stimulus, a
subunit model with two channels and an 8x8x8 subunit kernel has only about 1200 parameters.
The subunit model performs well when compared to the Rust-STC model, but we were interested in
obtaining a more absolute measure of performance. Specifically, no purely stimulus-driven model
can be expected to explain the response variability seen across repeated presentations of the same
stimulus. We can estimate an upper bound on stimulus-driven model performance by implementing
an empirical "oracle" model that uses the average response over all but one of a set of repeated
stimulus trials to predict the response on the remaining trial. Over the 21 neurons with repeated
stimulus data, we found that the subunit model achieved, on average, 76% of the performance of the
oracle model (Figure 4b). Moreover, the cells that were least well fit by the subunit model were also
the cells that responded only weakly to the stimulus (Figure 4c). We conclude that, for most cells,
the fitted subunit model explains a significant fraction of the response that can be explained by any
stimulus-driven model.
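The oracle bound is easy to state in code; a minimal sketch (ours) of the leave-one-out procedure described above:

```python
import numpy as np

def oracle_r(trials):
    # trials: (n_reps, T) spike counts over repeats of the same stimulus.
    n = trials.shape[0]
    rs = []
    for i in range(n):
        pred = trials[np.arange(n) != i].mean(0)   # mean of remaining trials
        rs.append(np.corrcoef(pred, trials[i])[0, 1])
    return np.mean(rs)
```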
5 Discussion
Subunits have been proposed as a qualitative description of many types of receptive fields in sensory
systems [2, 28, 8, 11, 12], and have enjoyed a recent renewal of interest by the modeling community
[13, 29]. Here we have described a new parameterized canonical subunit model that can be applied
to an arbitrary set of inputs (either a sensory stimulus, or a population of afferents from a previous
stage of processing), and we have developed a method for directly estimating the parameters of this
model from measured spiking data. Compared with STA or STC, the model fits are more accurate
for a given amount of data, less sensitive to the choice of stimulus ensemble, and more interpretable
in terms of biological mechanism.
For V1, we have applied this model directly to the visual stimuli, adopting the simplifying assumption that subcortical pathways faithfully relay the image data to V1. Higher visual areas build their
responses on the afferent inputs arriving from lower visual areas, and we have applied this subunit
model to such neurons by first simulating the responses of a population of the afferent V1 neurons,
and then optimizing a subunit model that best maps these afferent responses to the spiking responses
observed in the data. Specifically, for neurons in area V2, we model the afferent V1 population as
a collection of simple cells that tile visual space. The V1 filters are chosen to uniformly cover the
space of orientations, scales, and positions [30]. We also include four different phases. For neurons
in area MT (V5), we use an afferent V1 population that also includes direction selective subunits, because the projections from V1 to MT are known to be sensitive to the direction of visual motion [31].
Specifically, the V1 filters are a rotation-invariant set of 3-dimensional, space-space-time steerable
filters [32]. We fit these models to neural responses to textured stimuli that varied in contrast and
local orientation content (for MT, the local elements also drift over time). Our preliminary results
show that the subunit model outperforms standard models for these higher order areas as well.
We are currently working to refine and generalize the subunit model in a number of ways. The
mean squared error objective function, while computationally appealing, does not accurately reflect
the noise properties of real neurons, whose variance changes with their mean rate. A likelihood
objective function, based on a Poisson or similar spiking model, can improve the accuracy of the
fitted model, but it does so at a cost to the simplicity of model estimation (e.g. Alternating Least
Squares can no longer be used to solve the bilinear problem). Real neurons also possess other forms
of nonlinearities, such as local gain control, which has been observed in neurons throughout the visual and
auditory systems [33]. We are exploring means by which this functionality can be included directly
in the model framework (e.g. [11]), while retaining the tractability of the parameter estimation.
Acknowledgments
This work was supported by the Howard Hughes Medical Institute, and by NIH grant EY04440.
References
[1] H. B. Barlow and W. R. Levick. The mechanism of directionally selective units in rabbit's retina. The Journal of Physiology, 178(3):477, June 1965.
[2] S. Hochstein and R. M. Shapley. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells, 1976.
[3] J. B. Demb, K. Zaghloul, L. Haarsma, and P. Sterling. Bipolar cells contribute to nonlinear spatial summation in the brisk-transient (Y) ganglion cell in mammalian retina. The Journal of Neuroscience, 21(19):7447-7454, 2001.
[4] J.D. Crook, B.B. Peterson, O.S. Packer, F.R. Robinson, J.B. Troy, and D.M. Dacey. Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina. The Journal of Neuroscience, 28(44):11277-11291, 2008.
[5] P.X. Joris, C.E. Schreiner, and A. Rees. Neural processing of amplitude-modulated sounds. Physiol. Rev., 84:541-577, 2004.
[6] J. P. Jones and L. A. Palmer. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58(6):1187-1211, 1987.
[7] G. C. DeAngelis, I. Ohzawa, and R. D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. Journal of Neurophysiology, 69(4):1091-1117, 1993.
[8] D. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106-154, 1962.
[9] J. A. Movshon, I. D. Thompson, and D. J. Tolhurst. Receptive field organization of complex cells in the cat's striate cortex. The Journal of Physiology, 283(1):79-99, 1978.
[10] E. H. Adelson and J. R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284-299, 1985.
[11] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945-956, June 2005.
[12] X. Chen, F. Han, M. M. Poo, and Y. Dan. Excitatory and suppressive receptive field subunits in awake monkey primary visual cortex (V1). Proceedings of the National Academy of Sciences, 104(48):19120-19125, November 2007.
[13] T. Lochmann, T. Blanche, and D. A. Butts. Construction of direction selectivity in V1: from simple to complex cells. Computational and Systems Neuroscience (CoSyNe), 2011.
[14] M. Ito and H. Komatsu. Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. The Journal of Neuroscience, 24(13):3313-3324, 2004.
[15] C. E. Bredfeldt, J. C. A. Read, and B. G. Cumming. A quantitative explanation of responses to disparity-defined edges in macaque V2. Journal of Neurophysiology, 101(2):701-713, 2009.
[16] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193-202, 1980.
[17] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2:1019-1025, 1999.
[18] E. De Boer. Reverse correlation I. A heuristic introduction to the technique of triggered correlation with application to the analysis of compound systems. Proc. Kon. Nederl. Akad. Wet., 1968.
[19] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199-213, 2001.
[20] R. D. R. V. Steveninck and W. Bialek. Real-Time Performance of a Movement-Sensitive Neuron in the Blowfly Visual System: Coding and Information Transfer in Short Spike Sequences. Proceedings of the Royal Society B: Biological Sciences, 234(1277):379-414, September 1988.
[21] O. Schwartz, J. W. Pillow, N.C. Rust, and E.P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 6(4):13-13, February 2006.
[22] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26(3):695-702, June 2000.
[23] O. Schwartz, E. J. Chichilnisky, and E. P. Simoncelli. Characterizing neural gain control using spike-triggered covariance. Advances in Neural Information Processing Systems, 1:269-276, 2002.
[24] J. Touryan, B. Lau, and Y. Dan. Isolation of relevant visual features from random stimuli for cortical complex cells. The Journal of Neuroscience, 22(24):10811-10818, 2002.
[25] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Computation, 16(2):223-250, 2004.
[26] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Trans. Signal Processing, 59(10):4735-4744, October 2011.
[27] M. Ahrens, L. Paninski, and M. Sahani. Inferring input nonlinearities in neural encoding models. Network: Computation in Neural Systems, 19(1):35-67, 2008.
[28] J. D. Victor and R. M. Shapley. The nonlinear pathway of Y ganglion cells in the cat retina. The Journal of General Physiology, 74(6):671-689, December 1979.
[29] M. Eickenberg, R. J. Rowekamp, M. Kouh, and T. O. Sharpee. Characterizing responses of translation-invariant neurons to natural stimuli: maximally informative invariant dimensions. Neural Computation, 24(9):2384-2421, September 2012.
[30] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In Image Processing, 1995. Proceedings., International Conference on, 3:444-447 vol. 3, 1995.
[31] J. A. Movshon and W. T. Newsome. Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys. The Journal of Neuroscience, 16(23):7733-7741, 1996.
[32] E. P. Simoncelli and D. J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743-761, March 1998.
[33] M. Carandini and D. J. Heeger. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51-62, November 2011.
|
4,136 | 4,743 |
The Coloured Noise Expansion and Parameter Estimation of Diffusion Processes
Simo Särkkä
Aalto University
Department of Biomedical Engineering and Computational Science
Rakentajanaukio 2, 02150 Espoo
[email protected]
Simon M.J. Lyons
School of Informatics
University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB
[email protected]
Amos J. Storkey
School of Informatics
University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB
[email protected]
Abstract
Stochastic differential equations (SDE) are a natural tool for modelling systems
that are inherently noisy or contain uncertainties that can be modelled as stochastic
processes. Crucial to the process of using SDE to build mathematical models
is the ability to estimate parameters of those models from observed data. Over
the past few decades, significant progress has been made on this problem, but
we are still far from having a definitive solution. We describe a novel method
of approximating a diffusion process that we show to be useful in Markov chain
Monte-Carlo (MCMC) inference algorithms. We take the ?white? noise that drives
a diffusion process and decompose it into two terms. The first is a ?coloured
noise? term that can be deterministically controlled by a set of auxilliary variables.
The second term is small and enables us to form a linear Gaussian ?small noise?
approximation. The decomposition allows us to take a diffusion process of interest
and cast it in a form that is amenable to sampling by MCMC methods. We explain
why many state-of-the-art inference methods fail on highly nonlinear inference
problems, and we demonstrate experimentally that our method performs well in
such situations. Our results show that this method is a promising new tool for use
in inference and parameter estimation problems.
1 Introduction
Diffusion processes are a flexible and useful tool in stochastic modelling. Many important real world
systems are currently modelled and best understood in terms of stochastic differential equations in
general and diffusions in particular. Diffusions have been used to model prices of financial instruments [1], chemical reactions [2], firing patterns of individual neurons [3], weather patterns [4] and
fMRI data [5, 6, 7] among many other phenomena.
The analysis of diffusions dates back to Feller and Kolmogorov, who studied them as the scaling
limits of certain Markov processes (see [8]). The theory of diffusion processes was revolutionised
by Itô, who interpreted a diffusion process as the solution to a stochastic differential equation [9,
10]. This viewpoint allows one to see a diffusion process as the randomised counterpart of an
ordinary differential equation. One can argue that stochastic differential equations are the natural
tool for modelling continuously evolving systems of real valued quantities that are subject to noise
or stochastic influences.
The classical approach to mathematical modelling starts with a set of equations that describe the
evolution of a system of interest. These equations are governed by a set of input parameters (for
example particle masses, reaction rates, or more general constants of proportionality) that determine
the behaviour of the system. For practical purposes, it is of considerable interest to solve the inverse
problem. Given the output of some system, what can be said about the parameters that govern it?
In the present setting, we observe data which we hypothesize are generated by a diffusion. We would
like to know what the nature of this diffusion is. For example, we may begin with a parametric model
of a physical system, with a prior distribution over the parameters. In principle, one can apply Bayes'
theorem to deduce the posterior distribution. In practice, this is computationally prohibitive: it is
necessary to solve a partial differential equation known as the Fokker-Planck equation (see [11]) in
order to find the transition density of the diffusion of interest. This solution is rarely available in
closed form, and must be computed numerically.
In this paper, we propose a novel approximation for a nonlinear diffusion process X. One heuristic
way of thinking about a diffusion is as an ordinary differential equation that is perturbed by white
noise. We demonstrate that one can replace the white noise by a "coloured" approximation without
inducing much error. The nature of the coloured noise expansion method enables us to control the
behaviour of the diffusion over various length-scales. This allows us to produce samples from the
diffusion process that are consistent with observed data. We use these samples in a Markov chain
Monte-Carlo (MCMC) inference algorithm.
The main contributions of this paper are:
• Novel development of a method for sampling from the time-t marginal distribution of a diffusion process based on a "coloured" approximation of white noise.
• Demonstration that this approximation is a powerful and scalable tool for making parameter estimation feasible for general diffusions at minimal cost.
The paper is structured as follows: in Section 2, we describe the structure of our problem. In
Section 3 we conduct a brief survey of existing approaches to the problem. In Section 4, we discuss
the coloured noise expansion and its use in controlling the behaviour of a diffusion process. Our
inference algorithm is described in Section 5. We describe some numerical experiments in Section 6,
and future work is discussed in Section 7.
2 Parametric Diffusion Processes
In this section we develop the basic notation and formalism for the diffusion processes used in this
work. First, we assume our data are generated by observing a k-dimensional diffusion process with dynamics
dX_t = a_θ(X_t) dt + B_θ dW_t,    X_0 ∼ p(x_0),    (1)
where the initial condition is drawn from some known distribution. Observations are assumed to
occur at times t_1, …, t_n, with t_i − t_{i−1} := T_i. We require that a_θ : R^k → R^k is sufficiently regular to guarantee the existence of a unique strong solution to (1), and we assume B_θ ∈ R^{k×d}. Both terms depend on a set of potentially unknown parameters θ ∈ R^{d_θ}. We impose a prior distribution p(θ) on the parameters. The driving noise W is a d-dimensional Brownian motion, and the equation is interpreted in the Itô sense. Observations are subject to independent Gaussian perturbations centered at the true value of X. That is,
Y_{t_i} = X_{t_i} + ε_{t_i},    ε_{t_i} ∼ N(0, Σ_i).    (2)
We use the notation X to refer to the entire sample path of the diffusion, and X_t to denote the value of the process at time t. We will also employ the shorthand Y_{1:n} = {Y_{t_1}, …, Y_{t_n}}.
Many systems can be modelled using the form (1). Such systems are particularly relevant in physics
and natural sciences. In situations where this is not explicitly the case, one can often hope to reduce a
diffusion to this form via the Lamperti transform. One can almost always accomplish this in the univariate case, but the multivariate setting is somewhat more involved. Aït-Sahalia [12] characterises
the set of multivariate diffusions to which this transform can be applied.
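As a concrete point of reference for the forward model (1)-(2), the following sketch (our own illustration, not from the original text) simulates a one-dimensional diffusion by the basic Euler-Maruyama scheme and then corrupts regularly spaced states with Gaussian noise; the drift, diffusion coefficient and all numerical settings are arbitrary assumptions.

import numpy as np

def euler_maruyama(a, B, x0, T, n_steps, rng):
    # Simulate dX_t = a(X_t) dt + B dW_t on [0, T] with the Euler-Maruyama scheme.
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(size=B.shape[1]) * np.sqrt(dt)
        x = x + a(x) * dt + B @ dW
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(0)
a = lambda x: 2.0 * x * (1.0 - x**2)     # illustrative nonlinear drift (a double well)
B = np.array([[1.0]])                    # state-independent diffusion coefficient, k = d = 1
path = euler_maruyama(a, B, x0=[0.0], T=20.0, n_steps=4000, rng=rng)
# Noisy observations at times 1, 2, ..., 20, as in equation (2), with variance 0.25.
Y = path[::200][1:] + rng.normal(scale=np.sqrt(0.25), size=(20, 1))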
3 Background Work
Most approaches to parameter estimation of diffusion processes rely on the Monte-Carlo approximation. Beskos et al. [13, 14] employ a method based on rejection sampling to estimate parameters
without introducing any discretisation error. Golightly and Wilkinson [15] extend the work of Chib
et al. [16] and Durham and Gallant [17] to construct a Gibbs sampler that can be applied to the
parameter estimation problem.
Roughly speaking, Gibbs samplers that exist in the literature alternate between drawing samples
from some representation of the diffusion process X conditional on parameters θ, and samples from θ conditional on the current sample path of X. Note that draws from X must be consistent with the observations Y_{1:n}.
The usual approach to the consistency issue is to make a proposal by conditioning a linear diffusion
to hit some neighbourhood of the observation Y_k, then to make a correction via rejection sampling [18] or a Metropolis-Hastings [16] step. However, as the inter-observation time grows, the
qualitative difference between linear and nonlinear diffusions gets progressively more pronounced,
and the rate of rejection grows accordingly. Figure 1 shows the disparity between a sample from a
nonlinear process and a sample from the linear proposal. One can see that the target sample path is
constrained to stay near the mode ξ = 2.5, whereas the proposal can move more freely. One should expect to make many proposals before finding one that "behaves" like a typical draw from the true
process.
[Figure 1: two panels plotting X(t) against t; panel (a) "Nonlinear sample path and proposal", panel (b) "Sample path with noisy observations".]
Figure 1: (a) Sample path of a double well process (see equation (18)) with α = 2, ξ = 2.5, B = 2 (blue line). Current Gibbs samplers use linear proposals (dashed red line) with a rejection step to draw conditioned nonlinear paths. In this case, the behaviour of the proposal is very different to that of the target, and the rate of rejection is high. (b) Sample path of a double well process (solid blue line) with noisy observations (red dots). We use this as an initial dataset on which to test our algorithm. Parameters are α = 2, ξ = 1, B = 1. Observation errors have variance Σ = .25.
For low-dimensional inference problems, algorithms that employ sequential Monte-Carlo (SMC)
methods [19, 20] typically yield good results. However, unlike the Gibbs samplers mentioned
above, SMC-based methods often do not scale well with dimension. The number of particles that
one needs to maintain a given accuracy is known to scale exponentially with the dimension of the
problem [21].
Aït-Sahalia [12, 22] uses a deterministic technique based on Edgeworth expansions to approximate
the transition density. Other approaches include variational methods [23, 24] that can compute
continuous time Gaussian process approximations to more general stochastic differential systems,
as well as various non-linear Kalman filtering and smoothing based approximations [25, 26, 27].
4 Coloured Noise Expansions and Brownian Motion
We now introduce a method of approximating a nonlinear diffusion that allows us to gain a considerable amount of control over the behaviour of the process. Similar methods have been used
for stratified sampling of diffusion processes [28] and the solution of stochastic partial differential
equations [29]. One of the major challenges of using MCMC methods for parameter estimation
in the present context is that it is typically very difficult to draw samples from a diffusion process
conditional on observed data. If one only knows the initial condition of a diffusion, then it is straightforward to simulate a sample path of the process. However, simulating a sample path conditional on
both initial and final conditions is a challenging problem.
Our approximation separates the diffusion process X into the sum of a linear and nonlinear component. The linear component of the sum allows us to condition the approximation to fit observed data
more easily than in conventional methods. On the other hand, the nonlinear component captures
the "gross" variation of a typical sample path. In this section, we fix a generic time interval [0, T], though one can apply the same derivation for any given interval T_i = t_i − t_{i−1}.
Heuristically, one can think of the random process that drives the process defined in equation (1) as
white noise. In our approximation, we project this white noise into an N-dimensional subspace of L²[0, T], the Hilbert space of square-integrable functions defined on the interval [0, T]. This gives a "coloured noise" process that approaches white noise asymptotically as N → ∞. The coloured noise process is then used to drive an approximation of (1). We can choose the space into which to project the white noise in such a way that we will gain some control over its behaviour. This is analogous to the way that Fourier analysis allows us to manipulate properties of signals.
Recall that a standard Brownian motion on the interval [0, T] is a one-dimensional Gaussian process with zero mean and covariance function k(s, t) = min{s, t}. By definition of the Itô integral, we can write
W_t = ∫_0^t dW_s = ∫_0^T I_{[0,t]}(s) dW_s.    (3)
Suppose {φ_i}_{i≥1} is an orthonormal basis of L²[0, T]. We can interpret the indicator function in (3) as an element of L²[0, T] and expand it in terms of the basis functions as follows:
I_{[0,t]}(s) = Σ_{i=1}^∞ ⟨I_{[0,t]}(·), φ_i(·)⟩ φ_i(s) = Σ_{i=1}^∞ (∫_0^t φ_i(u) du) φ_i(s).    (4)
Substituting (4) into (3), we see that
W_t = Σ_{i=1}^∞ (∫_0^T φ_i(s) dW_s) ∫_0^t φ_i(u) du.    (5)
We will employ the shorthand Z_i = ∫_0^T φ_i(s) dW_s. Since the functions {φ_i} are deterministic and orthonormal, we know from standard results of Itô calculus that the random variables {Z_i} are i.i.d. standard normal.
The infinite series in equation (5) can be truncated after N terms to derive an approximation, W̃_t, of Brownian motion. Taking the derivative with respect to time, the result is a "coloured" approximation of white noise, taking the form
dW̃_t/dt = Σ_{i=1}^N Z_i φ_i(t).    (6)
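To make the truncation in (5)-(6) concrete, here is a small sketch (ours, not from the paper) that rebuilds an approximate Brownian path from N standard normal coefficients, using the Fourier cosine basis that appears later in equation (8); the grid size and N are arbitrary choices.

import numpy as np

T, N = 1.0, 50
rng = np.random.default_rng(1)
Z = rng.normal(size=N)        # Z_i = int_0^T phi_i(s) dW_s are i.i.d. standard normal

def phi_integral(i, t):
    # int_0^t phi_i(u) du for phi_i(u) = sqrt(2/T) cos((2i - 1) pi u / (2T))
    w = (2 * i - 1) * np.pi / (2 * T)
    return np.sqrt(2.0 / T) * np.sin(w * t) / w

t = np.linspace(0.0, T, 200)
W_tilde = sum(Z[i - 1] * phi_integral(i, t) for i in range(1, N + 1))   # truncated (5)
# As N grows, W_tilde converges to a Brownian path; Var(W_tilde(T)) approaches T.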
The multivariate approximation is similar. We separate a d-dimensional Brownian motion into one-dimensional components and decompose the individual components as in (6). In principle, one can choose a different value of N for each component of the Brownian motion, but for ease of exposition we do not do so here. We can substitute this approximation into equation (1), which gives
dX^NL_t/dt = a_θ(X^NL_t) + B_θ Σ_{i=1}^N Φ_i(t) Z_i,    X^NL_0 ∼ p(x_0),    (7)
where Φ_i(t) is the diagonal d × d matrix with entries (φ_{i1}(t), …, φ_{id}(t)), and Z_i = (Z_{i1}, …, Z_{id})^T.
This derivation is useful because equation (7) gives us an alternative to the Euler-Maruyama discretisation for sampling approximately from the time-t marginal distribution of a diffusion process. We draw coefficients Z_{ij} from a standard normal distribution, and solve the appropriate vector-valued ordinary differential equation. While the Euler discretisation is the de facto standard method for numerical approximation of SDE, other methods do exist. Kloeden and Platen [30] discuss higher order methods such as the stochastic Runge-Kutta scheme [31].
In the Euler-Maruyama approximation, one discretises the driving Brownian motion into increments W_{t_i} − W_{t_{i−1}} = √(T_i) Z_i. One must typically employ a fine discretisation to get a good approximation to the true diffusion process. Empirically, we find that one needs far fewer Gaussian inputs Z_i for an accurate representation of X_T using the coloured noise approximation. This more parsimonious representation has advantages. For example, Corlay and Pagès [28] employ related ideas to conduct stratified sampling of a diffusion process.
The coefficients Z_i are also more amenable to interpretation than the Gaussian increments in the Euler-Maruyama expansion. Suppose we have a one-dimensional process in which we use the Fourier cosine basis
φ_k(t) = √(2/T) cos((2k − 1)πt / 2T).    (8)
If we change Z_1 while holding the other coefficients fixed, we will typically see a change in the large-scale behaviour of the path. On the other hand, a change in Z_N will typically result in a change to the small-scale oscillations in the path. The separation of behaviours across coefficients gives us a means to obtain fine-grained control over the behaviour of a diffusion process within a Metropolis-Hastings algorithm.
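A minimal sketch of the sampler implied by equation (7), in one dimension with the cosine basis (8) and a fixed-step fourth-order Runge-Kutta solver; the double-well drift, the initial condition and all numerical settings below are illustrative assumptions rather than the authors' implementation.

import numpy as np

T, N = 1.0, 7
rng = np.random.default_rng(2)
Z = rng.normal(size=N)

def phi(i, t):
    # Fourier cosine basis (8): phi_i(t) = sqrt(2/T) cos((2i - 1) pi t / (2T))
    return np.sqrt(2.0 / T) * np.cos((2 * i - 1) * np.pi * t / (2 * T))

def coloured_drift(t, x):
    # Right-hand side of equation (7) in one dimension: a(x) + B * sum_i phi_i(t) Z_i
    noise = sum(phi(i, t) * Z[i - 1] for i in range(1, N + 1))
    return a(x) + B * noise

def rk4(f, x0, T, n_steps):
    # Classical fixed-step fourth-order Runge-Kutta for dx/dt = f(t, x).
    dt, x, t = T / n_steps, float(x0), 0.0
    for _ in range(n_steps):
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt * k1 / 2)
        k3 = f(t + dt / 2, x + dt * k2 / 2)
        k4 = f(t + dt, x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return x

a = lambda x: 2.0 * x * (2.5**2 - x**2)   # double-well drift (18) with alpha = 2, xi = 2.5
B = 2.0
x_NL_T = rk4(coloured_drift, x0=0.5, T=T, n_steps=200)
# Changing Z[0] shifts the large-scale behaviour of the path; changing Z[-1] mainly
# perturbs its fast oscillations, which is what gives fine-grained control in MCMC.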
We can improve our approximation by attempting to correct for the fact that we truncated the sum in equation (6). Instead of simply discarding the terms Z_i φ_i for i > N, we attempt to account for their effect as follows. We assume the existence of some "correction" process X^C such that X = X^NL + X^C. We know that the dynamics of X satisfy
dX_t = a_θ(X^NL_t + X^C_t) dt + B_θ dW_t.    (9)
Taylor expanding the drift term around X^NL, we see that to first order,
dX_t ≈ [a_θ(X^NL_t) + J_a(X^NL_t) X^C_t] dt + B_θ dW_t
     = [a_θ(X^NL_t) dt + B_θ dW̃_t] + J_a(X^NL_t) X^C_t dt + B_θ (dW_t − dW̃_t).    (10)
Here, J_a(x) is the Jacobian matrix of the function a evaluated at x. This motivates the use of a linear time-dependent approximation to the correction process. We will refer to this linear approximation as X^L. The dynamics of X^L satisfy
dX^L_t = J_a(X^NL_t) X^L_t dt + B_θ dR_t,    X^L_0 = 0,    (11)
where the driving noise is the "residual" term R = W − W̃. Conditional on X^NL, X^L is a linear Gaussian process, and equation (11) can be solved in semi-closed form. First, we compute a numerical approximation to the solution of the homogeneous matrix-valued equation
dΨ(t)/dt = J_a(X^NL_t) Ψ(t),    Ψ(0) = I_n.    (12)
One can compute Ψ^{−1}(t) in a similar fashion via the relationship dΨ^{−1}/dt = −Ψ^{−1}(dΨ/dt)Ψ^{−1}. We then have
X^L_t = Ψ(t) ∫_0^t Ψ(u)^{−1} B dR_u = Ψ(t) ∫_0^t Ψ(u)^{−1} B dW_u − Σ_{i=1}^N [Ψ(t) ∫_0^t Ψ(u)^{−1} B Φ_i(u) du] Z_i.    (13)
It follows that X^L has mean 0 and covariance
k(s, t) = Ψ(s) [∫_0^{s∧t} Ψ(u)^{−1} B B^T (Ψ(u)^{−1})^T du] Ψ(t)^T − Σ_{i=1}^N [Ψ(s) ∫_0^s Ψ(u)^{−1} B Φ_i(u) du] [Ψ(t) ∫_0^t Ψ(u)^{−1} B Φ_i(u) du]^T.    (14)
The process X^NL is designed to capture the most significant nonlinear features of the original diffusion X, while the linear process X^L corrects for the truncation of the sum (6), and can be understood using tools from the theory of Gaussian processes. One can think of the linear term as the result of a "small-noise" expansion about the nonlinear trajectory. Small-noise techniques have been applied to diffusions in the past [11], but the method described above has the advantage of being inherently nonlinear. In the supplement to this paper, we show that X̃ = X^NL + X^L converges to X in L²[0, T] as N → ∞ under the assumption that a is Lipschitz continuous. If the drift function is linear, then X̃ = X regardless of the choice of N.
5 Parameter Estimation
In this section, we describe a novel modification of the Gibbs sampler that does not suffer the drawbacks of the linear proposal strategy. In Section 6, we demonstrate that for highly nonlinear problems
it will perform significantly better than standard methods because of the nonlinear component of our
approximation.
Suppose for now that we make a single noiseless observation at time t_1 = T (for ease of notation, we will assume that observations are uniformly spaced through time with t_{i+1} − t_i = T, though this is not necessary). Our aim is to sample from the posterior distribution
p(θ, Z_{1:N} | X^NL_1 + X^L_1 = Y_1) ∝ N(Y_1 | X^NL_1, k_1(T, T)) N(Z_{1:N}) p(θ).    (15)
We adopt the convention that N(· | μ, Σ) represents the normal distribution with mean μ and covariance Σ, whereas N(·) represents the standard normal distribution. Note that we have left the dependence of k_1 on Z and θ implicit. The right-hand side of this expression allows us to evaluate the posterior up to proportionality; hence it can be targeted with a Metropolis-Hastings sampler.
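A hedged sketch of that evaluation, in Python; x_nl_and_var is a hypothetical helper standing in for any routine (such as the ones sketched above) that returns X^NL_1 and k_1(T, T) for given θ and Z, and log_prior is an arbitrary user-supplied log prior density.

import numpy as np

def log_posterior(theta, Z, y1, x_nl_and_var, log_prior):
    # Unnormalized log of equation (15): N(y1 | X^NL_1, k_1) * N(Z_{1:N}) * p(theta).
    x_nl, k11 = x_nl_and_var(theta, Z)      # nonlinear mean and correction variance
    log_lik = -0.5 * ((y1 - x_nl)**2 / k11 + np.log(2 * np.pi * k11))
    log_z = -0.5 * np.sum(Z**2) - 0.5 * len(Z) * np.log(2 * np.pi)
    return log_lik + log_z + log_prior(theta)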
With multiple observations, the situation is similar. However, we now have a set of Gaussian inputs Z^(i) for each transition X̃_i | X̃_{i−1}. If we attempt to update θ and {Z^(i)}_{i≤n} all at once, the rate of rejection will be unacceptably high. For this reason, we update each Z^(i) in turn, holding θ and the other Gaussian inputs fixed. We draw Z^(i)* from the proposal distribution, and compute X^NL*_i with initial condition Y_{i−1}. We also compute the covariance k_i*(T, T) of the linear correction. The acceptance probability for this update is
α = 1 ∧ [N(Y_i | X^NL*_i, k_i*(T, T)) N(Z^(i)*_{1:N}) p(Z^(i)*_{1:N} → Z^(i)_{1:N})] / [N(Y_i | X^NL_i, k_i(T, T)) N(Z^(i)_{1:N}) p(Z^(i)_{1:N} → Z^(i)*_{1:N})].    (16)
After updating the Gaussian inputs, we make a global update for the θ parameter. The acceptance probability for this move is
α = 1 ∧ [p(θ*) p(θ* → θ)] / [p(θ) p(θ → θ*)] × ∏_{i=1}^n N(Y_i | X^NL*_i, k_i*(T, T)) / N(Y_i | X^NL_i, k_i(T, T)),    (17)
where X^NL*_i and k_i*(T, T) are computed using the proposed value θ*.
We noted earlier that when j is large, Z_j governs the small-time oscillations of the diffusion process. One should not expect to gain much information about the value of Z_j when we have large inter-observation times. We find this to be the case in our experiments: the posterior distribution of Z_{j:N} approaches a spherical Gaussian distribution when j > 3. For this reason, we employ a Gaussian random walk proposal in Z_1 with stepsize σ_RW = .45, and proposals for Z_{2:N} are drawn independently from the standard normal distribution.
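Putting (16) and (17) together, one sweep of the sampler might look as follows. This is an illustrative sketch, not the authors' code: `transition` is a hypothetical helper returning X^NL at the end of an interval together with k(T, T), θ is taken to be scalar, observations are treated as noiseless, and X_0 is fixed at 0 for brevity.

import numpy as np

def norm_logpdf(x, mean, var):
    return -0.5 * ((x - mean)**2 / var + np.log(2 * np.pi * var))

def sweep(theta, Zs, Y, transition, log_prior, rng, step_rw=0.45, step_theta=0.1):
    # One sweep: per-transition Z updates (equation (16)), then a global theta
    # move (equation (17)).
    n, N = len(Y), len(Zs[0])
    for i in range(n):
        x_init = Y[i - 1] if i > 0 else 0.0   # noiseless observations; X_0 fixed at 0
        Z_new = Zs[i].copy()
        Z_new[0] += step_rw * rng.normal()    # random walk on Z_1 ...
        Z_new[1:] = rng.normal(size=N - 1)    # ... independent N(0,1) draws for Z_2..Z_N
        x_old, k_old = transition(theta, Zs[i], x_init)
        x_new, k_new = transition(theta, Z_new, x_init)
        # In (16), the independence-proposal terms for Z_2..Z_N and the symmetric
        # random-walk term for Z_1 cancel, leaving only the prior ratio for Z_1.
        log_a = (norm_logpdf(Y[i], x_new, k_new) - 0.5 * Z_new[0]**2) \
              - (norm_logpdf(Y[i], x_old, k_old) - 0.5 * Zs[i][0]**2)
        if np.log(rng.uniform()) < log_a:
            Zs[i] = Z_new
    theta_new = theta + step_theta * rng.normal()     # symmetric random walk on theta
    log_a = log_prior(theta_new) - log_prior(theta)
    for i in range(n):
        x_init = Y[i - 1] if i > 0 else 0.0
        x_old, k_old = transition(theta, Zs[i], x_init)
        x_new, k_new = transition(theta_new, Zs[i], x_init)
        log_a += norm_logpdf(Y[i], x_new, k_new) - norm_logpdf(Y[i], x_old, k_old)
    if np.log(rng.uniform()) < log_a:                 # equation (17)
        theta = theta_new
    return theta, Zs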
In the presence of observation noise, we proceed roughly as before. Recall that we make observations Y_i = X_i + ε_i. We draw proposals Z^(i)*_{1:N} and ε_i*. The initial condition for X^NL_i is now Y_{i−1} − ε_{i−1}. However, one must make an important modification to the algorithm. Suppose we propose an update of X̃_i and it is accepted. If we subsequently propose an update for X̃_{i+1} and it is rejected, then the initial condition for X̃_{i+1} will be inconsistent with the current state of the chain (it will be Y_i − ε_i instead of Y_i − ε_i*). For this reason, we must propose joint updates for (X̃_i, ε_i, X̃_{i+1}). If the variance of the observation noise is high, it may be more efficient to target the joint posterior distribution p(θ, {Z^(i)_{1:N}, X^L_i} | Y_{1:n}).
6 Numerical Experiments
The double-well diffusion is a widely-used benchmark for nonlinear inference problems [24, 32,
33, 34]. It has been used to model systems that exhibit switching behaviour or bistability [11, 35].
It possesses nonlinear features that are sufficient to demonstrate the shortcomings of some existing
inference methods, and how our approach overcomes these issues. The dynamics of the process are
given by
dX_t = αX_t(ξ² − X_t²) dt + B dW_t.    (18)
The process X has a bimodal stationary distribution, with modes at x = ±ξ. The parameter α governs the rate at which sample trajectories are "pushed" toward either mode. If B is small in comparison to α, mode-switching occurs relatively rarely.
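A quick illustrative check of this bistability (our own sketch; the zero-crossing count is a crude proxy for mode switches, and all parameter values are arbitrary):

import numpy as np

def count_mode_switches(alpha, xi, B, T=200.0, n_steps=40000, seed=0):
    # Euler-Maruyama simulation of (18); sign changes of the path proxy mode switches.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x, switches = xi, 0                    # start in the right-hand well at +xi
    for _ in range(n_steps):
        x_new = x + alpha * x * (xi**2 - x**2) * dt + B * np.sqrt(dt) * rng.normal()
        switches += int(np.sign(x_new) != np.sign(x))
        x = x_new
    return switches

print(count_mode_switches(2.0, 1.0, B=1.0))   # frequent switching
print(count_mode_switches(2.0, 1.0, B=0.3))   # rare switching when B is small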
Figure 1(b) shows a trajectory of a double-well diffusion over 20 units of time, with observations at times {1, 2, …, 20}. We used the parameters α = 2, ξ = 1, B = 1. The variance of the observation noise was set to Σ = .25.
As we mentioned earlier, particle MCMC performs well in low-dimensional inference problems. For this reason, the results of a particle MCMC inference algorithm (with N = 1,000 particles) are used as "ground truth". Our algorithm used N = 3 Gaussian inputs with a linear correction. We used the Fourier cosine series (8) as an orthonormal basis. We compare our Gibbs sampler to that of Golightly and Wilkinson [15], for which we use an Euler discretisation with stepsize Δt = .05. Each algorithm drew 70,000 samples from the posterior distribution, moving through the parameter space in a Gaussian random walk. We placed an exponential(4) prior on α and an exponential(1) prior on ξ and B.
For this particular choice of parameters, both Gibbs samplers give a good approximation to the true
posterior. Figure 2 shows histograms of the marginal posterior distributions of (α, ξ, B) for each
algorithm.
[Figure 2: three histogram panels, (a) p(α | Y_{1:20}), (b) p(ξ | Y_{1:20}), (c) p(B | Y_{1:20}).]
Figure 2: Marginal posterior distributions for (α, ξ, B) conditional on observed data. The solid black line is the output of a particle MCMC method, taken as ground truth. The broken red line is the output of the linear proposal method, and the broken and dotted blue line is the density estimate from the coloured noise expansion method. We see that both methods give a good approximation to the ground truth.
Gibbs samplers that have been used in the past rely on making proposals by conditioning a linear diffusion to hit a target, and subsequently accepting or rejecting those proposals. Over short
timescales, or for problems that are not highly nonlinear, this can be an effective strategy. However,
as the timescale increases, the proposal and target become quite dissimilar (see Figure 1(a)).
For our second experiment, we simulate a double well process with (α, ξ, B) = (2, 2.5, 2). We make noisy observations with t_i − t_{i−1} = 3 and Σ = .1. The algorithms target the posterior distribution over α, with ξ and B fixed at their true values. From our previous discussion, one might expect the linear proposal strategy to perform poorly in this more nonlinear setting. This is indeed the case. As in the previous experiment, we used a linear proposal Gibbs sampler with Euler stepsize dt = 0.05. In the "path update" stage, fewer than .01% of proposals were accepted. On the other hand, the coloured noise expansion method used N = 7 Gaussian inputs with a linear correction and was able to approximate the posterior accurately. Figure 3 shows histograms of the results. Note the different scaling of the rightmost plot.
[Figure 3: three histogram panels, (a) Particle MCMC, (b) Coloured noise expansion method, (c) Linear proposal method.]
Figure 3: p(α | Y_{1:10}, ξ, B) after ten observations with a relatively large inter-observation time. We drew data from a double well process with (α, ξ, B) = (2, 2.5, 2). The coloured noise expansion method matches the ground truth, whereas the linear proposal method is inconsistent with the data.
7 Discussion and Future Work
We have seen that the standard linear proposal/correction strategy can fail for highly nonlinear problems. Our inference method avoids the linear correction step, instead targeting the posterior over
input variables directly. With regard to computational efficiency, it is difficult to give an authoritative analysis because both our method and the linear proposal method are complex, with several
parameters to tune. In our experiments, the algorithms terminated in a roughly similar length of time
(though no serious attempt was made to optimise the runtime of either method).
With regard to our method, several questions remain open. The accuracy of our algorithm depends
on the choice of basis functions {φ_i}. At present, it is not clear how to make this choice optimally
in the general setting. In the linear case, it is possible to show that one can achieve the accuracy
of the Karhunen-Loève decomposition, which is theoretically optimal. One can also set the error at
a single time t to zero with a judicious choice of a single basis function. We aim to present these
results in a paper that is currently under preparation.
We used a Taylor expansion to compute the covariance of the correction term. However, it may
be fruitful to use more sophisticated ideas, collectively known as statistical linearisation methods.
In this paper, we restricted our attention to processes with a state-independent diffusion coefficient
so that the covariance of the correction term could be computed. We may be able to extend this
methodology to processes with state-dependent noise; certainly one could achieve this by taking a 0-th order Taylor expansion about X^NL. Whether it is possible to improve upon this idea is a matter
for further investigation.
Acknowledgments
Simon Lyons was supported by Microsoft Research, Cambridge.
References
[1] R.C. Merton. Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4:141–183, 1973.
[2] D.T. Gillespie. The chemical Langevin equation. Journal of Chemical Physics, 113(1):297–306, 2000.
[3] G. Kallianpur. Weak convergence of stochastic neuronal models. Stochastic Methods in Biology, 70:116–145, 1987.
[4] H.A. Dijkstra, L.M. Frankcombe, and A.S. von der Heydt. A stochastic dynamical systems view of the Atlantic Multidecadal Oscillation. Philosophical Transactions of the Royal Society A, 366:2543–2558, 2008.
[5] L. Murray and A. Storkey. Continuous time particle filtering for fMRI. Advances in Neural Information Processing Systems, 20:1049–1056, 2008.
[6] J. Daunizeau, K.J. Friston, and S.J. Kiebel. Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models. Physica D, pages 2089–2118, 2009.
[7] L.M. Murray and A.J. Storkey. Particle smoothing in continuous time: A fast approach via density estimation. IEEE Transactions on Signal Processing, 59:1017–1026, 2011.
[8] W. Feller. An Introduction to Probability Theory and its Applications, Volume II. Wiley, 1971.
[9] I. Karatzas and S.E. Shreve. Brownian Motion and Stochastic Calculus. Springer, 1991.
[10] B. Øksendal. Stochastic Differential Equations. Springer, 2007.
[11] C.W. Gardiner. Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. Springer-Verlag, 1983.
[12] Y. Aït-Sahalia. Closed-form likelihood expansions for multivariate diffusions. The Annals of Statistics, 36(2):906–937, 2008.
[13] A. Beskos, O. Papaspiliopoulos, and G.O. Roberts. Monte-Carlo maximum likelihood estimation for discretely observed diffusion processes. Annals of Statistics, 37:223–245, 2009.
[14] A. Beskos, O. Papaspiliopoulos, G.O. Roberts, and P. Fearnhead. Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:333–382, 2006.
[15] A. Golightly and D.J. Wilkinson. Bayesian inference for nonlinear multivariate diffusion models observed with error. CSDA, 52:1674–1693, 2008.
[16] S. Chib, M.K. Pitt, and N. Shepard. Likelihood-based inference for diffusion models. Working Paper, 2004. http://www.nuff.ox.ac.uk/economics/papers/2004/w20/chibpittshephard.pdf.
[17] G.B. Durham and A.R. Gallant. Numerical techniques for maximum likelihood estimation of continuous-time diffusion processes (with comments). Journal of Business and Economic Statistics, 20:297–338, 2002.
[18] A. Beskos, O. Papaspiliopoulos, and G.O. Roberts. Retrospective exact simulation of diffusion sample paths with applications. Bernoulli, 12(6):1077, 2006.
[19] D. Rimmer, A. Doucet, and W.J. Fitzgerald. Particle filters for stochastic differential equations of nonlinear diffusions. Technical report, Cambridge University Engineering Department, 2005.
[20] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov Chain Monte Carlo methods. Journal of the Royal Statistical Society, 72:1–33, 2010.
[21] C. Snyder, T. Bengtsson, P. Bickel, and J. Anderson. Obstacles to high-dimensional particle filtering. Monthly Weather Review, 136(12):4629–4640, 2008.
[22] Y. Aït-Sahalia. Maximum likelihood estimation of discretely sampled diffusions: a closed-form approximation approach. Econometrica, 70:223–262, 2002.
[23] C. Archambeau, D. Cornford, M. Opper, and J. Shawe-Taylor. Gaussian process approximations of stochastic differential equations. JMLR: Workshop and Conference Proceedings, 1:1–16, 2007.
[24] C. Archambeau, M. Opper, Y. Shen, D. Cornford, and J. Shawe-Taylor. Variational inference for diffusion processes. In Advances in Neural Information Processing Systems 20 (NIPS 2007), 2008.
[25] S. Särkkä. On unscented Kalman filtering for state estimation of continuous-time nonlinear systems. IEEE Transactions on Automatic Control, 52:1631–1641, 2007.
[26] A.H. Jazwinski. Stochastic Processes and Filtering Theory, volume 63. Academic Press, 1970.
[27] H. Singer. Nonlinear continuous time modeling approaches in panel research. Statistica Neerlandica, 62(1):29–57, 2008.
[28] S. Corlay and G. Pagès. Functional quantization based stratified sampling methods. Arxiv preprint arXiv:1008.4441, 2010.
[29] W. Luo. Wiener chaos expansion and numerical solutions of stochastic partial differential equations. PhD thesis, California Institute of Technology, 2006.
[30] P.E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, 1999.
[31] A.F. Bastani and S.M. Hosseini. A new adaptive Runge-Kutta method for stochastic differential equations. Journal of Computational and Applied Mathematics, 206:631–644, 2007.
[32] Y. Shen, C. Archambeau, D. Cornford, M. Opper, J. Shawe-Taylor, and R. Barillec. A comparison of variational and Markov chain Monte Carlo methods for inference in partially observed stochastic dynamic systems. Journal of Signal Processing Systems, 61(1):51–59, 2010.
[33] H. Singer. Parameter estimation of nonlinear stochastic differential equations: simulated maximum likelihood versus extended Kalman filter and Itô-Taylor expansion. Journal of Computational and Graphical Statistics, 11(4):972–995, 2002.
[34] M. Opper, A. Ruttor, and G. Sanguinetti. Approximate inference in continuous time Gaussian-jump processes. Advances in Neural Information Processing Systems, 23:1831–1839, 2010.
[35] N.G. van Kampen. Stochastic Processes in Physics and Chemistry. North Holland, 2007.
|
4743 |@word (bag-of-words feature vector omitted)
|
4,137 | 4,744 |
A latent factor model for highly multi-relational data
Nicolas Le Roux
INRIA - SIERRA Project Team,
École Normale Supérieure, Paris, France
[email protected]
Rodolphe Jenatton
CMAP, UMR CNRS 7641,
École Polytechnique, Palaiseau, France
[email protected]
Antoine Bordes
Heudiasyc, UMR CNRS 7253,
Université de Technologie de Compiègne, France
[email protected]
Guillaume Obozinski
INRIA - SIERRA Project Team,
École Normale Supérieure, Paris, France
[email protected]
Abstract
Many data such as social networks, movie preferences or knowledge bases are
multi-relational, in that they describe multiple relations between entities. While
there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches
tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across
different relations. We illustrate the performance of our approach on standard
tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of
our model to learn efficient and semantically meaningful verb representations.
1 Introduction
Statistical Relational Learning (SRL) [7] aims at modeling data consisting of relations between
entities. Social networks, preference data from recommender systems, relational databases used for
the semantic web or in bioinformatics, illustrate the diversity of applications in which such modeling
has a potential impact.
Relational data typically involve different types of relations between entities or attributes. These
entities can be users in the case of social networks or recommender systems, words in the case of
lexical knowledge bases, or genes and proteins in the case of bioinformatics ontologies, to name
a few. For binary relations, the data is naturally represented as a so-called multi-relational graph
consisting of nodes associated with entities and of different types of edges between nodes corresponding to the different types of relations. Equivalently the data consists of a collection of triplets
of the form (subject, relation, object), listing the actual relationships where we will call subject and
object respectively the first and second term of a binary relation. Relational data typically cumulates
many difficulties. First, a large number of relation types, some being significantly more represented
than others and possibly concerning only subsets of entities; second, the data is typically noisy and
incomplete (missing or incorrect relationships, redundant entities); finally most datasets are large
scale with up to millions of entities and billions of links for real-world knowledge bases.
Besides relational databases, SRL can also be used to model natural language semantics. A standard
way of representing the meaning of language is to identify entities and relations in texts or speech
utterances and to organize them. This can be conducted at various scales, from the word or sentence
level (e.g. in parsing or semantic role labeling) to a collection of texts (e.g. in knowledge extraction).
SRL systems are a useful tool there, as they can automatically extract high level information from
the collected data by building summaries [22], sense categorization lexicons [11], ontologies [20],
etc. Progress in SRL would be likely to lead to advances in natural language understanding.
In this paper, we introduce a model for relational data and apply it to multi-relational graphs and
to natural language. In assigning high probabilities to valid relations and low probabilities to all
the others, this model extracts meaningful representations of the various entities and relations in
the data. Unlike other factorization methods (e.g. [15]), our model is probabilistic which has the
advantage of accounting explicitly for the uncertainties in the data. Besides, thanks to a sparse
distributed representation of relation types, our model can handle data with a significantly larger
number of relation types than was considered so far in the literature (a crucial aspect for natural
language data). We empirically show that this approach ties or beats state-of-the-art algorithms on
various benchmarks of link prediction, a standard test-bed for SRL methods.
2 Related work
A branch of relational learning, motivated by applications such as collaborative filtering and link
prediction in networks, models relations between entities as resulting from intrinsic latent attributes
of these entities.1 Work in what we will call relational learning from latent attributes (RLA) focused
mostly on the problem of modeling a single relation type as opposed to trying to model simultaneously a collection of relations which can themselves be similar. As reflected by several formalisms
proposed for relational learning [7], it is the latter multi-relational learning problem which is needed
to model efficiently large scale relational databases. The fact that relations can be similar or related
suggests that a superposition of independently learned models for each relation would be highly
inefficient especially since the relationships observed for each relation are extremely sparse.
RLA often translates into learning an embedding of the entities, which corresponds algebraically to
a matrix factorization problem (typically the matrix of observed relationships). A natural extension
to learning multiple relations consists in stacking the matrices to be factorized and applying classical
tensor factorization methods such as CANDECOMP / PARAFAC [25, 8]. This approach, which induces
inherently some sharing of parameters between both different terms and different relations, has been
applied successfully [8] and has inspired some probabilistic formulations [4].
Another natural extension to learning several relations simultaneously can be to share the common
embedding of the entities across relations via collective matrix factorization as proposed in RESCAL
[15] and other related work [18, 23].
The simplest form of latent attribute that can be associated to an entity is a latent class: the resulting
model is the classical stochastic blockmodel [26, 17]. Several clustering-based approaches have
been proposed for multi-relational learning: [9] considered a non-parametric Bayesian extension
of the stochastic blockmodel allowing one to automatically infer the number of latent clusters; [14, 28] refined this to allow entities to have a mixed cluster membership; [10] introduced clustering in
Markov-Logic networks; [24] used a non-parametric Bayesian clustering of entities embedded in a
collective matrix factorization formulation. To share parameters between relations, [9, 24, 14, 28]
and [10] build models that cluster not only entities but relations as well.
With the same aim of reducing the number of parameters, the Semantic Matching Energy model
(SME) of [2] embeds relations as a vector from the same space as the entities and models likely
relationships by an energy combining together binary interactions between the relation vector and
each of the vectors encoding the two terms.
In terms of scalability, RESCAL [15], which has been shown to achieve state of the art performance
on several relation datasets, has recently been applied to the knowledge base YAGO [16] thereby
showing its ability to scale well on data with very large numbers of entities, although the number
of relations modeled remained moderate (less than 100). As for SME [2], its modeling of relations
by vectors allowed it to scale to several thousands of relations. Scalability can be also an issue for
nonparametric Bayesian models (e.g. [9, 24]) because of the cost of inference.
¹ This is called Statistical Predicate Invention by [10].
3 Relational data modeling
We consider relational data consisting of triplets that encode the existence of a relation between two entities that we will call the subject and the object. Specifically, we consider a set of n_s subjects {S_i}_{i∈⟦1;n_s⟧} along with n_o objects {O_k}_{k∈⟦1;n_o⟧}, which are related by some of n_r relations {R_j}_{j∈⟦1;n_r⟧}. A triplet encodes that the relation R_j holds between the subject S_i and the object O_k, which we will write R_j(S_i, O_k) = 1. We will therefore refer to a triplet also as a relationship.
A typical example which we will discuss in greater detail is in natural language processing, where a triplet (S_i, R_j, O_k) corresponds to the association of a subject and a direct object through a transitive verb. The goal is to learn a model of the relations to reliably predict unseen triplets. For instance, one might be interested in finding a likely relation R_j based only on the subject and object (S_i, O_k).
4 Model description
In this work, we formulate the problem of learning a relation as a matrix factorization problem. Following a rationale underlying several previous approaches [15, 24], we consider a model in which entities are embedded in R^p and relations are encoded as bilinear operators on the entities. More precisely, we assume that the n_s subjects (resp. n_o objects) are represented by vectors of R^p, stored as the columns of the matrix S ≜ [s_1, …, s_{n_s}] ∈ R^{p×n_s} (resp. as the columns of O ≜ [o_1, …, o_{n_o}] ∈ R^{p×n_o}). Each of the p-dimensional representations s_i, o_k will have to be learned. The relations are represented by a collection of matrices (R_j)_{1≤j≤n_r}, with R_j ∈ R^{p×p}, which together form a three-dimensional tensor.
We consider a model of the probability of the event R_j(S_i, O_k) = 1. Assuming first that s_i and o_k are fixed, our model is derived from a logistic model P[R_j(S_i, O_k) = 1] ≜ σ(η_ik^(j)), with σ(t) ≜ 1/(1 + e^{−t}). A natural form for η_ik^(j) is a linear function of the tensor product s_i ⊗ o_k, which we can write η_ik^(j) = ⟨s_i, R_j o_k⟩, where ⟨·, ·⟩ is the usual inner product in R^p. If we think now of learning s_i, R_j and o_k for all (i, j, k) simultaneously, this model learns together the matrices R_j and optimal embeddings s_i, o_k of the entities so that the usual logistic regressions based on s_i ⊗ o_k predict well the probability of the observed relationships. This is the initial model considered in [24], and it matches the model considered in [16] if the least-squares loss is substituted for the logistic loss. We will refine this model in two ways: first by redefining the term η_ik^(j) as a function η_ik^(j) ≜ E(s_i, R_j, o_k) taking into account the different orders of interactions between s_i, o_k and R_j; second, by parameterizing the relations R_j by latent "relational" factors that reduce the overall number of parameters of the model.
4.1 A multiple order log-odds ratio model
One way of thinking about the probability of occurrence of a specific relationship corresponding to
the triplet $(S_i, R_j, O_k)$ is as resulting (a) from the marginal propensity of the individual entities $S_i$, $O_k$
to enter relations and the marginal propensity of the relation $R_j$ to occur, (b) from 2-way interactions
of $(S_i, R_j)$ and $(R_j, O_k)$, corresponding to entities tending to occur as the left or right term
of a relation, (c) from 2-way interactions of pairs of entities $(S_i, O_k)$ that overall tend to have more
relations together, and (d) from the 3-way dependencies between $(S_i, R_j, O_k)$.

In NLP, we often refer to these as respectively unigram, bigram and trigram terms, a terminology
which we will reuse in the rest of the paper. We therefore design $E(s_i, R_j, o_k)$ to account for these
interactions of various orders, retaining only terms involving $R_j$.²
In particular, introducing new parameters $y, y', z, z' \in \mathbb{R}^p$, we define $\eta_{ik}^{(j)} = E(s_i, R_j, o_k)$ as
$$E(s_i, R_j, o_k) \triangleq \langle y, R_j y' \rangle + \langle s_i, R_j z \rangle + \langle z', R_j o_k \rangle + \langle s_i, R_j o_k \rangle, \qquad (1)$$
where $\langle y, R_j y' \rangle$, $\langle s_i, R_j z \rangle + \langle z', R_j o_k \rangle$ and $\langle s_i, R_j o_k \rangle$ are the uni-, bi- and trigram terms. This
parametrization is redundant in general, given that $E(s_i, R_j, o_k)$ is of the form $\langle (s_i + z), R_j (o_k + z') \rangle + b_j$; it is however useful in the context of a regularized model (see Section 5).
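To make this scoring function concrete, the following is a minimal NumPy sketch of Eq. (1); the function and variable names (energy, y2 standing for y', z2 for z') are ours and purely illustrative.

import numpy as np

def energy(s_i, R_j, o_k, y, y2, z, z2):
    # Multiple-order log-odds score E(s_i, R_j, o_k) of Eq. (1):
    # unigram + bigram + trigram terms, all involving R_j.
    uni = y @ R_j @ y2                      # <y, R_j y'>
    bi = s_i @ R_j @ z + z2 @ R_j @ o_k     # <s_i, R_j z> + <z', R_j o_k>
    tri = s_i @ R_j @ o_k                   # <s_i, R_j o_k>
    return uni + bi + tri

def prob(eta):
    return 1.0 / (1.0 + np.exp(-eta))       # sigma(eta) = P[R_j(S_i, O_k) = 1]

p = 5
rng = np.random.default_rng(0)
s_i, o_k, y, y2, z, z2 = (rng.standard_normal(p) for _ in range(6))
R_j = rng.standard_normal((p, p))
print(prob(energy(s_i, R_j, o_k, y, y2, z, z2)))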
2 This is motivated by the fact that we are primarily interested in modeling the relation terms, and that it is not necessary to introduce all terms to fully parameterize the model.

4.2 Sharing parameters across relations through latent factors
When learning a large number of relations, the number of observations for many relations can be
quite small, leading to a risk of overfitting. Sutskever et al. [24] addressed this issue with a nonparametric Bayesian model inducing clustering of both relations and entities. SME [2] proposed to
embed relations as vectors of $\mathbb{R}^p$, like entities, to tackle problems with hundreds of relation types.
With a similar motivation, to decrease the overall number of parameters, instead of using a general
parameterization of the matrices $R_j$ as in RESCAL [16], we require that all $R_j$ decompose over a
common set of $d$ rank-one matrices $\{\Theta_r\}_{1 \le r \le d}$ representing some canonical relations:
$$R_j = \sum_{r=1}^{d} \alpha_{jr}\, \Theta_r, \quad \text{for some sparse } \alpha^j \in \mathbb{R}^d, \quad \text{and} \quad \Theta_r = u_r v_r^\top \ \text{ for } u_r, v_r \in \mathbb{R}^p. \qquad (2)$$
The combined effect of (a) the sparsity of the decomposition and (b) the fact that $d \ll n_r$ leads
to sharing parameters across relations. Further, constraining $\Theta_r$ to be the outer product $u_r v_r^\top$ also
speeds up all computations relying on linear algebra, as illustrated in the sketch below.
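A small NumPy sketch of the decomposition in Eq. (2) and of the resulting speed-up for the trigram term; all names and dimensions here are illustrative assumptions, not part of the model specification.

import numpy as np

p, d, nr = 5, 4, 10
rng = np.random.default_rng(1)
U = rng.standard_normal((p, d))       # columns u_r
V = rng.standard_normal((p, d))       # columns v_r
alpha = rng.standard_normal((nr, d))  # sparse in practice (l1-constrained)

def relation_matrix(j):
    # R_j = sum_r alpha[j, r] * u_r v_r^T  (Eq. 2), i.e. R_j = U diag(alpha_j) V^T.
    return U @ np.diag(alpha[j]) @ V.T

# The trigram term <s, R_j o> never needs the p x p matrix explicitly:
# <s, R_j o> = sum_r alpha[j, r] (s . u_r)(v_r . o), an O(pd) computation.
s, o = rng.standard_normal(p), rng.standard_normal(p)
j = 3
direct = s @ relation_matrix(j) @ o
fast = alpha[j] @ ((U.T @ s) * (V.T @ o))
assert np.allclose(direct, fast)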
5 Regularized formulation and optimization
Denoting $\mathcal{P}$ (resp. $\mathcal{N}$) the set of indices of positively (resp. negatively) labeled relations, the likelihood we seek to maximize is
$$\mathcal{L} \triangleq \prod_{(i,j,k)\in\mathcal{P}} P[R_j(S_i, O_k) = 1] \ \times \prod_{(i',j',k')\in\mathcal{N}} P[R_{j'}(S_{i'}, O_{k'}) = 0].$$
The log-likelihood is thus $\log(\mathcal{L}) = \sum_{(i,j,k)\in\mathcal{P}} \eta_{ik}^{(j)} - \sum_{(i,j,k)\in\mathcal{P}\cup\mathcal{N}} \log(1 + \exp(\eta_{ik}^{(j)}))$, with
$\eta_{ik}^{(j)} = E(s_i, R_j, o_k)$. To properly normalize the terms appearing in (1) and (2), we carry out
the minimization of the negative log-likelihood over a specific constraint set, namely
$$\min_{\substack{S, O, \{\alpha^j\}, \{\Theta_r\},\\ y, y', z, z'}} -\log(\mathcal{L}), \quad \text{with} \quad \begin{cases} \forall j,\ \|\alpha^j\|_1 \le \lambda, \quad \Theta_r = u_r v_r^\top,\\ z = z', \quad O = S,\\ s^i, o^k, y, y', z, u_r \text{ and } v_r \text{ in the ball } \{w;\ \|w\|_2 \le 1\}. \end{cases}$$
We chose to constrain $\alpha$ in $\ell_1$-norm based on preliminary experiments suggesting that it led to better
results than regularization in $\ell_2$-norm. The regularization parameter $\lambda \ge 0$ controls the sparsity of
the relation representations in (2). The equality constraints induce a representation shared between
subjects and objects, which was shown to improve the model in preliminary experiments. Given that
the model is conditional on a pair $(s_i, o_k)$, only a single scale parameter, namely $\alpha_{jr}$, is
necessary in the product $\alpha_{jr} \langle s_i, \Theta_r o_k \rangle$, which motivates all the Euclidean unit-ball constraints.
5.1 Algorithmic approach
Given the large scale of the problems we are interested in (e.g., $|\mathcal{P}| \approx 10^6$), and since we can
project efficiently onto the constraint set (both the projections onto the $\ell_1$- and $\ell_2$-norm balls can be
performed in linear time [1]), our optimization problem lends itself well to a stochastic projected
gradient algorithm [3].

In order to speed up the optimization, we use several practical tricks. First, we consider a stochastic
gradient descent scheme with mini-batches containing 100 triplets. Second, we use step sizes of the
form $a/(1+k)$, with $k$ the iteration number and $a$ a scalar (common to all parameters) optimized
over a logarithmic grid on a validation set.³
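For concreteness, here is a minimal sketch of the two projection operators this projected-gradient scheme relies on. Note that this l1 projection is the O(d log d) sort-based variant rather than the linear-time method the text refers to via [1], and the demo values below are arbitrary.

import numpy as np

def project_l2_ball(w, radius=1.0):
    # Euclidean projection onto {w : ||w||_2 <= radius}.
    n = np.linalg.norm(w)
    return w if n <= radius else w * (radius / n)

def project_l1_ball(v, radius):
    # Euclidean projection onto {v : ||v||_1 <= radius}, sort-based variant.
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Schematic projected step for one parameter block, with step size a/(1+k):
rng = np.random.default_rng(0)
alpha_j, grad = rng.standard_normal(8), rng.standard_normal(8)
a, k, lam = 0.1, 3, 2.0
alpha_j = project_l1_ball(alpha_j - (a / (1 + k)) * grad, lam)
u_r = project_l2_ball(rng.standard_normal(5) * 3.0)
print(np.abs(alpha_j).sum() <= lam + 1e-9, np.linalg.norm(u_r) <= 1.0 + 1e-9)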
Additionally, we cannot treat the NLP application (see Sec. 8) as a standard tensor factorization
problem. Indeed, in that case, we only have access to the positively labeled triplets $\mathcal{P}$. Following [2],
we generate elements of $\mathcal{N}$ by considering triplets of the form $\{(i, j', k)\}$, $j' \neq j$, for each $(i, j, k) \in \mathcal{P}$. In practice, for each positive triplet, we sample a number of artificial negative triplets containing
the same subject and object as the positive triplet but different verbs. This allowed us to change
the problem into a multiclass one where the goal was to correctly classify the "positive" verb, in
competition with the "negative" ones.

3 The code is available under an open-source license from http://goo.gl/TGYuh.
The standard approach for this problem is to use a multinomial logistic function. However, such
a function is highly sensitive to the particular choice of negative verbs, and using all the verbs as
negatives would be too costly. Another, more robust approach consists in using the likelihood
function defined above, where we try to classify the positive verbs as valid relationships and the negative ones as invalid relationships. Further, this approximation to the multinomial logistic function
is asymptotically unbiased.
Finally, we observed that it was advantageous to down-weight the influence of the negative verbs to
avoid swamping the influence of the positive ones.
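A minimal sketch of this negative-verb sampling; the function name and counts are ours, and the down-weighting mentioned above would be applied when these negatives enter the loss.

import random

def sample_negative_triplets(pos, n_verbs, n_neg, rng=None):
    # For a positive (subject, verb, object) triplet, draw artificial negatives
    # that keep the subject and object but swap in a different verb (Sec. 5.1).
    rng = rng or random.Random(0)
    i, j, k = pos
    negs = []
    while len(negs) < n_neg:
        j_neg = rng.randrange(n_verbs)
        if j_neg != j:
            negs.append((i, j_neg, k))
    return negs

print(sample_negative_triplets((4, 2, 9), n_verbs=50, n_neg=5))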
6 Relation to other models
Our model is closely related to several other models. First, if d is large, the parameters of the Rj are
decoupled and the RESCAL model is retrieved (up to a change of loss function).
Second, our model is also related to classical tensor factorization models such as PARAFAC, which approximates the tensor $[R_k(S_i, O_j)]_{i,j,k}$ in the least-square sense by a low-rank tensor $\hat{H}$ of the form
$\sum_{r=1}^{d} \alpha_r \otimes \beta_r \otimes \gamma_r$ for $(\alpha_r, \beta_r, \gamma_r) \in \mathbb{R}^{n_r} \times \mathbb{R}^{n_s} \times \mathbb{R}^{n_o}$. The parameterization of all $R_j$ as linear combinations of $d$ rank-one matrices is in fact equivalent to constraining the tensor $\mathcal{R} = \{R_j\}_{j \in [1;n_r]}$ to
be the low-rank tensor $\mathcal{R} = \sum_{r=1}^{d} \alpha_r \otimes u_r \otimes v_r$. As a consequence, the tensor of all trigram terms⁴
can also be written as $\sum_{r=1}^{d} \alpha_r \otimes \beta_r \otimes \gamma_r$ with $\beta_r = S^\top u_r$ and $\gamma_r = O^\top v_r$. This shows that our
model is a particular form of tensor factorization which reduces to PARAFAC (up to a change of loss
function) when $p$ is sufficiently large.
Finally, the approach considered in [2] seems a priori quite different from ours, in particular since
relations are there embedded as vectors of $\mathbb{R}^p$, like the entities, as opposed to matrices of
$\mathbb{R}^{p \times p}$ in our case. This choice can be detrimental when modeling complex relation patterns, as we show
in Section 7. In addition, no parameterization of the model of [2] is able to handle both bigram and
trigram interactions as we propose.
7 Application to multi-relational benchmarks
We report in this section the performance of our model evaluated on standard tensor-factorization
datasets, which we first briefly describe.
7.1 Datasets
Kinships. Australian tribes are renowned among anthropologists for the complex relational structure of their kinship systems. This dataset, created by [6], focuses on the Alyawarra, a tribe from
Central Australia. 104 tribe members were asked to provide the kinship terms they used for one another. This results in a graph of 104 entities and 26 relation types, each of them depicting a different
kinship term, such as Adiadya or Umbaidya. See [6] or [9] for more details.
UMLS. This dataset contains data from the Unified Medical Language System semantic work
gathered by [12]. It consists of a graph with 135 entities and 49 relation types. The entities
are high-level concepts like "Disease or Syndrome", "Diagnostic Procedure", or "Mammal". The
relations represent verbs depicting causal influence between concepts, like "affect" or "cause".
Nations. This dataset groups 14 countries (Brazil, China, Egypt, etc.) with 56 binary relation
types representing interactions among them, like "economic aid", "treaties" or "rel diplomacy", and
111 features describing each country, which we treated as 111 additional entities interacting with
the countries through an additional "has feature" relation.⁵ See [21] for details.
4 Other terms can be decomposed in a similar way.
5 The resulting new relationships were only used for training, and not considered at test time.
Dataset   Metric               Our approach     RESCAL [16]   MRC [10]         SME [2]
Kinships  Area under PR curve  0.946 ± 0.005    0.95          0.84             0.907 ± 0.008
          Log-likelihood       -0.029 ± 0.001   N/A           -0.045 ± 0.002   N/A
UMLS      Area under PR curve  0.990 ± 0.003    0.98          0.98             0.983 ± 0.003
          Log-likelihood       -0.002 ± 0.0003  N/A           -0.004 ± 0.001   N/A
Nations   Area under PR curve  0.909 ± 0.009    0.84          0.75             0.883 ± 0.02
          Log-likelihood       -0.202 ± 0.008   N/A           -0.311 ± 0.022   N/A

Table 1: Comparisons of the performance obtained by our approach, RESCAL [16], MRC [10] and
SME [2] over three standard datasets. The results are computed by 10-fold cross-validation.
7.2 Results
These three datasets are relatively small-scale and contain only a few relation types (on the order of
tens). Since our model is primarily designed to handle a large number of relation types (see Sec. 4.2),
this setting is not the most favorable one for evaluating the potential of our approach. As reported in Table 1,
our method nonetheless yields performance better than or equal to previous state-of-the-art
techniques, both in terms of area under the precision-recall curve (AUC) and log-likelihood (LL).
The results displayed in Table 1 are computed by 10-fold cross-validation⁶, averaged over 10 random
splits of the datasets (90% for cross-validation and 10% for testing). We chose to compare our model
with RESCAL [16], MRC [10] and SME [2] because, to the best of our knowledge, they achieved the best
published results on these benchmarks in terms of AUC and LL.

Interestingly, the trigram term from (1) is essential to obtain good performance on Kinships (with
the trigram term removed, we obtain 0.16 in AUC and -0.14 in LL), thus showing the need for
modeling 3-way interactions in complex relational data. Moreover, and as expected due to the low
number of relations, the value of $\lambda$ selected by cross-validation is quite large ($\lambda = n_r \times d$), and
as a consequence does not lead to sparsity in (2). Results on this dataset also exhibit the benefit of
modeling relations with matrices instead of vectors, as does SME [2].

Zhu [28] recently reported results on Nations and Kinships evaluated in terms of area under the
receiver-operating-characteristic curve instead of the area under the precision-recall curve that we display
in Table 1. With this other metric, our model obtains 0.953 on Nations and 0.992 on Kinships and
hence outperforms Zhu's approach, which achieves 0.926 and 0.962 respectively.
8 Learning semantic representations of verbs
By providing an approach to model the relational structure of language, SRL can be of great use for
learning natural language semantics. Hence, this section proposes an application of our method on
text data from Wikipedia for learning a representation of words, with a focus on verbs.
8.1 Experimental setting
Data. We collected this data in two stages. First, the SENNA software⁷ [5] was used to perform part-of-speech tagging, chunking, lemmatization⁸ and semantic role labeling on approximately 2,000,000
Wikipedia articles. This data was then filtered to keep only sentences whose syntactic structure was (subject, verb, direct object), with each term of the triplet being a single word from the
WordNet lexicon [13]. Subjects and direct objects ended up being all single nouns, whose dictionary size is 30,605. The total number of relations in this dataset (i.e. the number of verbs) is 4,547:
this is much larger than for previously published multi-relational benchmarks. We kept 1,000,000
such relationships to build a training set, 50,000 for a validation set and 250,000 for testing. All triplets
are unique and we made sure that all words appearing in the validation or test sets also occur in
the training set.⁹
6 The values of $\lambda$, $d$ and $p$ are searched in $n_r \times d \times \{0.05, 0.1, 0.5, 1\}$, $\{100, 200, 500\}$ and $\{10, 25, 50\}$.
7 Available from ronan.collobert.com/senna/.
8 Lemmatization was carried out using NLTK (nltk.org) and transforms a word into its base form.
9 The data set is available under an open-source license from http://goo.gl/TGYuh.
                synonyms not considered            best synonyms considered
                median/mean rank   p@5    p@20    median/mean rank   p@5    p@20
Our approach    50 / 195.0         0.78   0.95    19 / 96.7          0.89   0.98
SME [2]         56 / 199.6         0.77   0.95    19 / 99.2          0.89   0.98
Bigram          48 / 517.4         0.72   0.83    17 / 157.7         0.87   0.95

Table 2: Performance obtained on the NLP dataset by our approach, SME [2] and a bigram model.
Details about the statistics of the table are given in the text.
Practical training setup. During the training phase, we optimized various parameters over the validation set, namely, the size $p \in \{25, 50, 100\}$ of the representations, the dimension $d \in \{50, 100, 200\}$ of the latent decompositions (2), the value of the regularization parameter $\lambda$ as a
fraction $\{1, 0.5, 0.1, 0.05, 0.01\}$ of $n_r \times d$, the step size in $\{0.1, 0.05, 0.01\}$ and the weighting of the
negative triplets. Moreover, to speed up the training, we gradually increased the number of sampled
negative verbs (cf. Section 5.1) from 25 up to 50, which had the effect of refining the training.
8.2 Results
Verb prediction. We first consider a direct evaluation of our approach based on the test set of
250,000 instances by measuring how well we predict a relevant and meaningful verb given a pair
(subject, direct object). To this end, for each test relationship, we rank all verbs using our probability
estimates given the pair (subject, direct object). Table 2 displays our results with two kinds of metrics,
namely, (1) the rank of the correct verb and (2) the fraction of test examples for which the correct
verb is ranked in the top z% of the list. The latter criterion is referred to as p@z. In order to
evaluate whether some language semantics is captured by the representations, we also consider a less
conservative approach where, instead of focusing on the correct verb only, we measure the minimum
rank achieved over its set of synonyms obtained from WordNet. Our method is compared with that
of SME [2], which was shown to scale well on data with large sets of relations, and with a bigram
model, which estimates the probabilities of the pairs (subject, verb) and (verb, direct object).
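As an illustration of these metrics, a small NumPy sketch with our own naming; ties are broken pessimistically by counting strictly better-scored verbs.

import numpy as np

def rank_metrics(scores, gold, z_values=(5, 20)):
    # scores: (n_test, n_verbs) model scores; gold: (n_test,) correct verb ids.
    # Returns median/mean rank of the correct verb and p@z, the fraction of
    # test triplets whose correct verb lies in the top z% of the ranked list.
    n_test, n_verbs = scores.shape
    gold_scores = scores[np.arange(n_test), gold]
    ranks = 1 + (scores > gold_scores[:, None]).sum(axis=1)  # rank 1 = best
    out = {"median": np.median(ranks), "mean": ranks.mean()}
    for z in z_values:
        out[f"p@{z}"] = (ranks <= n_verbs * z / 100.0).mean()
    return out

rng = np.random.default_rng(3)
print(rank_metrics(rng.standard_normal((100, 4547)), rng.integers(0, 4547, 100)))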
The first observation is that the task of verb prediction can be quite well addressed by a simple model
based on 2-way interactions, as shown by the good median rank obtained by the bigram model. This
is confirmed by the mild influence of the trigram term on the performance of our model. On this data,
we found that using bigram interactions in our energy function was essential to achieve good
predictions. However, the drop in the mean rank between our approach and the bigram-only model
still indicates that many examples do need a richer model to be correctly handled. By comparison,
we tend to consistently match or improve upon the performance of SME. Remarkably, model
selection led to the choice of $\lambda = 0.1 \times n_r \times d$, for which the coefficients $\alpha$ of the representations (2)
are sparse in the sense that they are dominated by a few large values (e.g., the top 2% of the largest values
of $\alpha$ account for about 25% of the total $\ell_1$-norm $\|\alpha\|_1$).
[Figure: two precision-recall panels, "Predicting class 4" (left) and "Predicting classes 3 and 4" (right), each showing curves for our approach, SME, Collobert et al., and the best WordNet measure.]
Figure 1: Precision-recall curves for the task of lexical similarity classification. The curves are
computed based on different similarity measures between verbs, namely, our approach, SME [2],
Collobert et al. [5] and the best (out of three) WordNet similarity measure [13]. Details about the
task can be found in the text.
                      AUC (class 4)   AUC (classes 3&4)
Our approach          0.40            0.54
SME [2]               0.21            0.36
Collobert et al. [5]  0.31            0.48
Best WordNet [19]     0.40            0.59

Table 3: Performance obtained on a task of lexical similarity classification [27], where we compare
our approach, SME [2], Collobert et al.'s word embeddings [5] and the best (out of 3) WordNet
Similarity measure [19] using area under the precision-recall curve. Details are given in the text.
Lexical similarity classification. Our method learns latent representations for verbs and imposes
some structure on them via shared parameters, as shown in Section 4.2. This should lead to similar
representations for similar verbs. We consider the task of lexical similarity classification described
in [27] to evaluate this hypothesis. Their dataset consists of 130 pairs of verbs labeled by humans
with a score in {0, 1, 2, 3, 4}. Higher scores mean a stronger semantic similarity between the verbs
composing the pair. For instance, (divide, split) is labeled 4, while (postpone, show)
has a score of 0.
Based on the pairwise Euclidean distances¹⁰ between our learned verb representations $R_j$, we try to
predict the class 4 (and also the "merged" classes {3, 4}) under the assumption that the smaller
the distance between $R_i$ and $R_j$, the more likely the pair $(i, j)$ should be labeled 4. We compare
to representations learnt by [2] on the same training data, to the word embeddings of [5] (which are
considered as efficient features in Natural Language Processing), and to three similarity measures
provided by WordNet Similarity [19]. For the latter, we only display the best one, named "path",
which is built by counting the number of nodes along the shortest path between the senses in the
"is-a" hierarchies of WordNet.

We report our results as precision-recall curves in Figure 1 and the corresponding areas
under the curve (AUC) in Table 3. Even though we tend to miss the first few pairs, we compare
favorably to [2] and [5], and our AUC is close to the reference established by WordNet Similarity.
Our method is capable of encoding meaningful semantic embeddings for verbs, even though it has
been trained on noisy, automatically collected data and in spite of the fact that it was not our primary
goal that distances in parameter space should satisfy any particular condition. Performance might be improved
by training on cleaner triplets, such as those collected by [11].
9 Conclusion
Designing methods capable of handling large numbers of linked relations seems necessary for modeling
the wealth of relations underlying the semantics of any real-world problem. We tackle
this problem by using a shared representation of relations naturally suited to multi-relational data,
in which entities have a unique representation shared between relation types, and we propose
that relations themselves decompose over latent "relational" factors. This new approach ties or beats
state-of-the-art models on both standard relational learning problems and an NLP task. The decomposition of relations over latent factors allows a significant reduction of the number of parameters
and is motivated both by computational and statistical reasons. In particular, our approach is quite
scalable both with respect to the number of relations and to the number of data samples.

One might wonder about the relative importance of the various terms in our formulation. Interestingly, though the presence of the trigram term was crucial in the tensor factorization problems, it
played a marginal role in the NLP experiment, where most of the information was contained in the
bigram and unigram terms.

Finally, we believe that exploring the similarities of the relations through an analysis of the latent
factors could provide some insight into the structures shared between different relation types.
Acknowledgments
This work was partially funded by the Pascal2 European Network of Excellence. NLR and RJ are
supported by the European Research Council (resp., SIERRA-ERC-239993 & SIPA-ERC-256919).
10 Other distances could of course be considered; we chose the Euclidean metric for simplicity.
References
[1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2011.
[2] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. A semantic matching energy function for learning with multi-relational data. Machine Learning, 2012. To appear.
[3] L. Bottou and Y. LeCun. Large scale online learning. In Advances in Neural Information Processing Systems, volume 16, pages 217–224, 2004.
[4] W. Chu and Z. Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. Journal of Machine Learning Research - Proceedings Track, 5:89–96, 2009.
[5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493–2537, 2011.
[6] W. Denham. The detection of patterns in Alyawarra nonverbal behavior. PhD thesis, 1973.
[7] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). The MIT Press, 2007.
[8] R. A. Harshman and M. E. Lundy. Parafac: parallel factor analysis. Comput. Stat. Data Anal., 18(1):39–72, Aug. 1994.
[9] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proc. of AAAI, pages 381–388, 2006.
[10] S. Kok and P. Domingos. Statistical predicate invention. In Proceedings of the 24th International Conference on Machine Learning, pages 433–440, 2007.
[11] A. Korhonen, Y. Krymolowski, and T. Briscoe. A large subcategorization lexicon for natural language processing applications. In Proceedings of LREC, 2006.
[12] A. T. McCray. An upper level ontology for the biomedical domain. Comparative and Functional Genomics, 4:80–88, 2003.
[13] G. Miller. WordNet: a Lexical Database for English. Communications of the ACM, 38(11):39–41, 1995.
[14] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems 22, pages 1276–1284, 2009.
[15] M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th Intl Conf. on Mach. Learn., pages 809–816, 2011.
[16] M. Nickel, V. Tresp, and H.-P. Kriegel. Factorizing YAGO: scalable machine learning for linked data. In Proc. of the 21st Intl Conf. on WWW, pages 271–280, 2012.
[17] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001.
[18] A. Paccanaro and G. Hinton. Learning distributed representations of concepts using linear relational embedding. IEEE Trans. on Knowl. and Data Eng., 13:232–244, 2001.
[19] T. Pedersen, S. Patwardhan, and J. Michelizzi. WordNet::Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38–41, 2004.
[20] H. Poon and P. Domingos. Unsupervised ontology induction from text. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 296–305, 2010.
[21] R. J. Rummel. Dimensionality of nations project: Attributes of nations and behavior of nation dyads. In ICPSR data file, pages 1950–1965, 1999.
[22] D. Shen, J.-T. Sun, H. Li, Q. Yang, and Z. Chen. Document summarization using conditional random fields. In Proc. of the 20th Intl Joint Conf. on Artif. Intel., pages 2862–2867, 2007.
[23] A. P. Singh and G. J. Gordon. Relational learning via collective matrix factorization. In Proc. of SIGKDD'08, pages 650–658, 2008.
[24] I. Sutskever, R. Salakhutdinov, and J. Tenenbaum. Modelling relational data using Bayesian clustered tensor factorization. In Adv. in Neur. Inf. Proc. Syst. 22, 2009.
[25] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.
[26] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397), 1987.
[27] D. Yang and D. M. W. Powers. Verb similarity on the taxonomy of WordNet. In Proceedings of GWC-06, pages 121–128, 2006.
[28] J. Zhu. Max-margin nonparametric latent feature models for link prediction. In Proceedings of the 29th Intl Conference on Machine Learning, 2012.
A Scalable CUR Matrix Decomposition Algorithm:
Lower Time Complexity and Tighter Bound
Shusen Wang and Zhihua Zhang
College of Computer Science & Technology
Zhejiang University
Hangzhou, China 310027
{wss,zhzhang}@zju.edu.cn
Abstract
The CUR matrix decomposition is an important extension of the Nyström approximation to a general matrix. It approximates any data matrix in terms of a small number of its columns and rows. In this paper we propose a novel randomized CUR
algorithm with an expected relative-error bound. The proposed algorithm has the
advantages over the existing relative-error CUR algorithms that it possesses tighter
theoretical bound and lower time complexity, and that it can avoid maintaining the
whole data matrix in main memory. Finally, experiments on several real-world
datasets demonstrate significant improvement over the existing relative-error algorithms.
1 Introduction
Large-scale matrices emerging from stocks, genomes, web documents, web images and videos everyday bring new challenges in modern data analysis. Most efforts have been focused on manipulating, understanding and interpreting large-scale data matrices. In many cases, matrix factorization
methods are employed to construct compressed and informative representations to facilitate computation and interpretation. A principled approach is the truncated singular value decomposition
(SVD) which finds the best low-rank approximation of a data matrix. Applications of SVD such as
eigenface [20, 21] and latent semantic analysis [4] have been illustrated to be very successful.
However, the basis vectors resulting from SVD have little concrete meaning, which makes it very
difficult for us to understand and interpret the data in question. An example in [10, 19] has well
shown this viewpoint; that is, the vector $[(1/2)\,\mathrm{age} - (1/\sqrt{2})\,\mathrm{height} + (1/2)\,\mathrm{income}]$, the sum of the
significant uncorrelated features from a dataset of people's features, is not particularly informative.
The authors of [17] have also claimed: "it would be interesting to try to find basis vectors for all
experiment vectors, using actual experiment vectors and not artificial bases that offer little insight."
Therefore, it is of great interest to represent a data matrix in terms of a small number of actual
columns and/or actual rows of the matrix.
The CUR matrix decomposition provides such techniques, and it has been shown to be very useful
in high dimensional data analysis [19]. Given a matrix $A$, the CUR technique selects a subset of
columns of $A$ to construct a matrix $C$ and a subset of rows of $A$ to construct a matrix $R$, and
computes a matrix $U$ such that $\hat{A} = CUR$ best approximates $A$. The typical CUR algorithms [7,
8, 10] work in a two-stage manner. Stage 1 is a standard column selection procedure, and Stage 2
does row selection from $A$ and $C$ simultaneously. Thus Stage 2 is more complicated than Stage 1.
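To fix ideas, a minimal NumPy sketch of this generic CUR reconstruction follows; the column and row indices are drawn uniformly at random purely for illustration, whereas the algorithms discussed below choose them much more carefully. When the chosen columns and rows span the column and row spaces of A, the reconstruction is exact, which the rank-20 example illustrates.

import numpy as np

def cur(A, col_idx, row_idx):
    # Generic CUR reconstruction: given chosen column and row indices,
    # set C = A[:, cols], R = A[rows, :], and U = C^+ A R^+ (Moore-Penrose).
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 80))  # rank-20
C, U, R = cur(A, col_idx=rng.choice(80, 30, replace=False),
              row_idx=rng.choice(100, 40, replace=False))
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print(f"relative Frobenius error: {err:.2e}")  # ~0 for this rank-20 example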
The CUR matrix decomposition problem is widely studied in the literature [7, 8, 9, 10, 12, 13, 16,
18, 19, 22]. Perhaps the most widely known work on the CUR problem is [10], in which the authors
devised a randomized CUR algorithm called the subspace sampling algorithm. In particular, the
algorithm achieves a $(1+\epsilon)$ relative-error ratio with high probability (w.h.p.).
Unfortunately, all the existing CUR algorithms require a large number of columns and rows to be
chosen. For example, for an $m \times n$ matrix $A$ and a target rank $k \ll \min\{m, n\}$, the state-of-the-art CUR algorithm, the subspace sampling algorithm in [10], requires exactly $O(k^4\epsilon^{-6})$
rows or $O(k\epsilon^{-4}\log^2 k)$ rows in expectation to achieve a $(1+\epsilon)$ relative-error ratio w.h.p. Moreover,
the computational cost of this algorithm is at least the cost of the truncated SVD of $A$, that is,
$O(\min\{mn^2, nm^2\})$.¹ The algorithms are therefore impractical for large-scale matrices.
In this paper we develop a CUR algorithm which beats the state-of-the-art algorithm in both theory
and experiments. In particular, we show in Theorem 5 a novel randomized CUR algorithm with
lower time complexity and tighter theoretical bound in comparison with the state-of-the-art CUR
algorithm in [10].
The rest of this paper is organized as follows. Section 2 lists the notation used throughout. Section 3 introduces several existing column selection
algorithms and the state-of-the-art CUR algorithm. Section 4 describes and analyzes our novel
CUR algorithm. Section 5 empirically compares our proposed algorithm with the state-of-the-art
algorithm.
2 Notations
For a matrix $A = [a_{ij}] \in \mathbb{R}^{m \times n}$, let $a^{(i)}$ be its $i$-th row and $a_j$ be its $j$-th column. Let $\|A\|_1 = \sum_{i,j} |a_{ij}|$ be the $\ell_1$-norm, $\|A\|_F = (\sum_{i,j} a_{ij}^2)^{1/2}$ be the Frobenius norm, and $\|A\|_2$ be the spectral
norm. Moreover, let $I_m$ denote an $m \times m$ identity matrix, and $0_{mn}$ denote an $m \times n$ zero matrix.

Let $A = U_A \Sigma_A V_A^T = \sum_{i=1}^{\rho} \sigma_{A,i}\, u_{A,i} v_{A,i}^T = U_{A,k} \Sigma_{A,k} V_{A,k}^T + U_{A,k\perp} \Sigma_{A,k\perp} V_{A,k\perp}^T$ be the
SVD of $A$, where $\rho = \mathrm{rank}(A)$, and $U_{A,k}$, $\Sigma_{A,k}$, and $V_{A,k}$ correspond to the top $k$ singular values.
We denote $A_k = U_{A,k} \Sigma_{A,k} V_{A,k}^T$. Furthermore, let $A^\dagger = V_{A,\rho} \Sigma_{A,\rho}^{-1} U_{A,\rho}^T$ be the Moore-Penrose
inverse of $A$ [1].
3 Related Work
Section 3.1 introduces several relative-error column selection algorithms related to this work. Section 3.2 describes the state-of-the-art CUR algorithm in [10]. Section 3.3 discusses the connection
between the column selection problem and the CUR problem.
3.1 Relative-Error Column Selection Algorithms
Given a matrix $A \in \mathbb{R}^{m \times n}$, column selection is the problem of selecting $c$ columns of $A$ to construct
$C \in \mathbb{R}^{m \times c}$ so as to minimize $\|A - CC^\dagger A\|_F$. Since there are $\binom{n}{c}$ possible choices of constructing $C$,
selecting the best subset is a hard problem. In recent years, many polynomial-time approximate
algorithms have been proposed, among which we are particularly interested in the algorithms with
relative-error bounds; that is, with $c$ ($\ge k$) columns selected from $A$, there is a constant $\eta$ such that
$$\|A - CC^\dagger A\|_F \le \eta \|A - A_k\|_F.$$
We call $\eta$ the relative-error ratio. We now present some recent results related to this work.
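A small NumPy sketch that evaluates this relative-error ratio for an arbitrary column subset; the subset here is random, and selecting it well is the whole problem.

import numpy as np

def column_selection_error(A, col_idx, k):
    # Relative-error ratio ||A - C C^+ A||_F / ||A - A_k||_F for C = A[:, col_idx].
    C = A[:, col_idx]
    resid = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ak = (U[:, :k] * s[:k]) @ Vt[:k]          # best rank-k approximation
    return resid / np.linalg.norm(A - Ak)

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
print(column_selection_error(A, rng.choice(40, 10, replace=False), k=5))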
We first introduce a recently developed deterministic algorithm called the dual set sparsification
proposed in [2, 3]. We show their results in Lemma 1. Furthermore, this algorithm is a building
block of some more powerful algorithms (e.g., Lemma 2), and our novel CUR algorithm also relies
on this algorithm. We attach the algorithm in Appendix A.
Lemma 1 (Column Selection via Dual Set Sparsification Algorithm). Given a matrix $A \in \mathbb{R}^{m \times n}$
of rank $\rho$ and a target rank $k$ ($< \rho$), there exists a deterministic algorithm to select $c$ ($> k$) columns
of $A$ and form a matrix $C \in \mathbb{R}^{m \times c}$ such that
$$\|A - CC^\dagger A\|_F \le \sqrt{1 + \frac{1}{(1 - \sqrt{k/c})^2}}\ \|A - A_k\|_F.$$
Moreover, the matrix $C$ can be computed in $T_{V_{A,k}} + O(mn + nck^2)$, where $T_{V_{A,k}}$ is the time needed
to compute the top $k$ right singular vectors of $A$.

1 Although some partial SVD algorithms, such as Krylov subspace methods, require only $O(mnk)$ time, they are all numerically unstable. See [15] for more discussions.
There are also a variety of randomized column selection algorithms achieving relative-error bounds
in the literature: [3, 5, 6, 10, 14].
A randomized algorithm in [2] selects only $c = \frac{2k}{\epsilon}(1 + o(1))$ columns to achieve the expected
relative-error ratio $(1+\epsilon)$. The algorithm is based on the approximate SVD via random projection [15], the dual set sparsification algorithm [2], and the adaptive sampling algorithm [6]. Here we
present the main results of this algorithm in Lemma 2. Our proposed CUR algorithm is motivated
by and relies on this algorithm.
Lemma 2 (Near-Optimal Column Selection Algorithm). Given a matrix $A \in \mathbb{R}^{m \times n}$ of rank $\rho$, a
target rank $k$ ($2 \le k < \rho$), and $0 < \epsilon < 1$, there exists a randomized algorithm to select at most
$$c = \frac{2k}{\epsilon}\big(1 + o(1)\big)$$
columns of $A$ to form a matrix $C \in \mathbb{R}^{m \times c}$ such that
$$\mathbb{E}^2\|A - CC^\dagger A\|_F \le \mathbb{E}\|A - CC^\dagger A\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2,$$
where the expectations are taken w.r.t. $C$. Furthermore, the matrix $C$ can be computed in $O((mnk + nk^3)\epsilon^{-2/3})$.
3.2 The Subspace Sampling CUR Algorithm
Drineas et al. [10] proposed a two-stage randomized CUR algorithm which has a relative-error
bound w.h.p. Given a matrix $A \in \mathbb{R}^{m \times n}$ and a target rank $k$, in the first stage the algorithm
chooses exactly $c = O(k^2\epsilon^{-2}\log\delta^{-1})$ columns (or $c = O(k\epsilon^{-2}\log k \log\delta^{-1})$ in expectation) of
$A$ to construct $C \in \mathbb{R}^{m \times c}$; in the second stage it chooses exactly $r = O(c^2\epsilon^{-2}\log\delta^{-1})$ rows (or
$r = O(c\epsilon^{-2}\log c \log\delta^{-1})$ in expectation) of $A$ and $C$ simultaneously to construct $R$ and $U$. With
probability at least $1 - \delta$, the relative-error ratio is $1 + \epsilon$. The computational cost is dominated by
the truncated SVD of $A$ and $C$.

Though the algorithm is $\epsilon$-optimal with high probability, it requires too many rows to get chosen: at
least $r = O(k\epsilon^{-4}\log^2 k)$ rows in expectation. In this paper we seek to devise an algorithm with a
milder requirement on column and row numbers.
3.3 Connection between Column Selection and CUR Matrix Decomposition
The CUR problem has a close connection with the column selection problem. As aforementioned,
the first stage of existing CUR algorithms is simply a column selection procedure. However, the
second stage is more complicated. If the second stage is naively solved by a column selection
algorithm on $A^T$, then the error ratio will be at least $(2 + \epsilon)$.

For a relative-error CUR algorithm, the first stage seeks to bound the construction error ratio
$\frac{\|A - CC^\dagger A\|_F}{\|A - A_k\|_F}$, while the second stage seeks to bound $\frac{\|A - CC^\dagger A R^\dagger R\|_F}{\|A - CC^\dagger A\|_F}$ given $C$. Actually, the first
stage is a special case of the second stage, where $C = A_k$. Given a matrix $A$, if an algorithm solving the second stage results in a bound $\frac{\|A - CC^\dagger A R^\dagger R\|_F}{\|A - CC^\dagger A\|_F} \le \eta$, then this algorithm also solves the
column selection problem for $A^T$ with an $\eta$ relative-error ratio. Thus the second stage of CUR is a
generalization of the column selection problem.
4 Main Results
In this section we introduce our proposed CUR algorithm. We call it the fast CUR algorithm because
it has lower time complexity compared with SVD. We describe it in Algorithm 1 and give a theoretical analysis in Theorem 5. Theorem 5 relies on Lemma 2 and Theorem 4, and Theorem 4 relies on
Theorem 3. Theorem 3 is a generalization of [6, Theorem 2.1], and Theorem 4 is a generalization
of [2, Theorem 5].
3
Algorithm 1 The Fast CUR Algorithm.
1: Input: a real matrix $A \in \mathbb{R}^{m \times n}$, target rank $k$, $\epsilon \in (0, 1]$, target column number $c = \frac{2k}{\epsilon}(1 + o(1))$, target row number $r = \frac{2c}{\epsilon}(1 + o(1))$;
2: // Stage 1: select $c$ columns of $A$ to construct $C \in \mathbb{R}^{m \times c}$
3: Compute an approximate truncated SVD via random projection such that $A_k \approx \tilde{U}_k \tilde{\Sigma}_k \tilde{V}_k$;
4: Construct $U_1 \leftarrow$ columns of $(A - \tilde{U}_k \tilde{\Sigma}_k \tilde{V}_k)$; $V_1 \leftarrow$ columns of $\tilde{V}_k^T$;
5: Compute $s_1 \leftarrow$ Dual Set Spectral-Frobenius Sparsification Algorithm $(U_1, V_1, c - 2k/\epsilon)$;
6: Construct $C_1 \leftarrow A\,\mathrm{Diag}(s_1)$, and then delete the all-zero columns;
7: Residual matrix $D \leftarrow A - C_1 C_1^\dagger A$;
8: Compute sampling probabilities: $p_i = \|d_i\|_2^2 / \|D\|_F^2$, $i = 1, \ldots, n$;
9: Sample $c_2 = 2k/\epsilon$ columns from $A$ with probabilities $\{p_1, \ldots, p_n\}$ to construct $C_2$;
10: // Stage 2: select $r$ rows of $A$ to construct $R \in \mathbb{R}^{r \times n}$
11: Construct $U_2 \leftarrow$ columns of $(A - \tilde{U}_k \tilde{\Sigma}_k \tilde{V}_k)^T$; $V_2 \leftarrow$ columns of $\tilde{U}_k^T$;
12: Compute $s_2 \leftarrow$ Dual Set Spectral-Frobenius Sparsification Algorithm $(U_2, V_2, r - 2c/\epsilon)$;
13: Construct $R_1 \leftarrow \mathrm{Diag}(s_2)\,A$, and then delete the all-zero rows;
14: Residual matrix $B \leftarrow A - A R_1^\dagger R_1$; compute $q_j = \|b^{(j)}\|_2^2 / \|B\|_F^2$, $j = 1, \ldots, m$;
15: Sample $r_2 = 2c/\epsilon$ rows from $A$ with probabilities $\{q_1, \ldots, q_m\}$ to construct $R_2$;
16: return $C = [C_1, C_2]$, $R = [R_1^T, R_2^T]^T$, and $U = C^\dagger A R^\dagger$.
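Step 3 of Algorithm 1 computes an approximate truncated SVD via random projection. A minimal NumPy sketch in the spirit of [15] follows; the oversampling amount is our own illustrative choice.

import numpy as np

def randomized_svd(A, k, oversample=10):
    # Approximate rank-k SVD via Gaussian random projection:
    # sketch Y = A @ Omega, orthonormalize, then do a small exact SVD.
    m, n = A.shape
    Omega = np.random.default_rng(0).standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A Omega)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]    # U_k, sigma_k, V_k^T

A = np.random.default_rng(1).standard_normal((500, 200))
Uk, sk, Vtk = randomized_svd(A, k=10)
print(np.linalg.norm(A - (Uk * sk) @ Vtk) / np.linalg.norm(A))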
4.1 Adaptive Sampling
The relative-error adaptive sampling algorithm is established in [6, Theorem 2.1]. The algorithm
is based on the following idea: after selecting a proportion of columns from $A$ to form $C_1$ by
an arbitrary algorithm, the algorithm randomly samples additional $c_2$ columns according to the
residual $A - C_1 C_1^\dagger A$. Boutsidis et al. [2] used the adaptive sampling algorithm to decrease the
residual of the dual set sparsification algorithm and obtained a $(1+\epsilon)$ relative-error bound. Here
we prove a new bound for the adaptive sampling algorithm. Interestingly, this new bound is a
generalization of the original one in [6, Theorem 2.1]. In other words, Theorem 2.1 of [6] is a direct
corollary of our following theorem, in which $C = A_k$ is set.
Theorem 3 (The Adaptive Sampling Algorithm). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a matrix $C \in \mathbb{R}^{m \times c}$ such that $\mathrm{rank}(C) = \mathrm{rank}(CC^\dagger A) = \rho$ ($\rho \le c \le n$), we let $R_1 \in \mathbb{R}^{r_1 \times n}$ consist of $r_1$
rows of $A$, and define the residual $B = A - A R_1^\dagger R_1$. Additionally, for $i = 1, \ldots, m$, we define
$$p_i = \|b^{(i)}\|_2^2 / \|B\|_F^2.$$
We further sample $r_2$ rows i.i.d. from $A$, in each trial of which the $i$-th row is chosen with probability
$p_i$. Let $R_2 \in \mathbb{R}^{r_2 \times n}$ contain the $r_2$ sampled rows and let $R = [R_1^T, R_2^T]^T \in \mathbb{R}^{(r_1+r_2) \times n}$. Then
the following inequality holds:
$$\mathbb{E}\|A - CC^\dagger A R^\dagger R\|_F^2 \le \|A - CC^\dagger A\|_F^2 + \frac{\rho}{r_2}\,\|A - A R_1^\dagger R_1\|_F^2,$$
where the expectation is taken w.r.t. $R_2$.
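A minimal NumPy sketch of the sampling step in Theorem 3 (our naming; R1 here is an arbitrary initial set of rows):

import numpy as np

def adaptive_row_sample(A, R1, r2, rng=None):
    # Adaptive sampling: draw r2 extra rows of A with probabilities
    # proportional to the squared row norms of B = A - A R1^+ R1.
    rng = rng or np.random.default_rng(0)
    B = A - A @ np.linalg.pinv(R1) @ R1
    p = (B ** 2).sum(axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=r2, replace=True, p=p)  # i.i.d. trials
    return np.vstack([R1, A[idx]])

A = np.random.default_rng(2).standard_normal((50, 30))
R = adaptive_row_sample(A, R1=A[:5], r2=8)
print(R.shape)  # (13, 30)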
4.2 The Fast CUR Algorithm
Based on the dual set sparsification algorithm of Lemma 1 and the adaptive sampling algorithm
of Theorem 3, we develop a randomized algorithm to solve the second stage of the CUR problem. We
present the results of the algorithm in Theorem 4. Theorem 5 of [2] is a special case of the following
theorem where $C = A_k$.
Theorem 4 (The Fast Row Selection Algorithm). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a matrix $C \in \mathbb{R}^{m \times c}$ such that $\mathrm{rank}(C) = \mathrm{rank}(CC^\dagger A) = \rho$ ($\rho \le c \le n$), and a target rank $k$ ($\le \rho$), the
proposed randomized algorithm selects $r = \frac{2\rho}{\epsilon}(1 + o(1))$ rows of $A$ to construct $R \in \mathbb{R}^{r \times n}$, such
that
$$\mathbb{E}\|A - CC^\dagger A R^\dagger R\|_F^2 \le \|A - CC^\dagger A\|_F^2 + \epsilon\|A - A_k\|_F^2,$$
where the expectation is taken w.r.t. $R$. Furthermore, the matrix $R$ can be computed in $O((mnk + mk^3)\epsilon^{-2/3})$ time.
Based on Lemma 2 and Theorem 4, here we present the main theorem for the fast CUR algorithm.
Table 1: A summary of the datasets.

Dataset   Type           Size           Source
Redrock   natural image  18000 × 4000   http://www.agarwala.org/efficient_gdc/
Arcene    biology        10000 × 900    http://archive.ics.uci.edu/ml/datasets/Arcene
Dexter    bag of words   20000 × 2600   http://archive.ics.uci.edu/ml/datasets/Dexter
Theorem 5 (The Fast CUR Algorithm). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a positive integer $k \ll \min\{m, n\}$, the fast CUR algorithm (described in Algorithm 1) randomly selects $c = \frac{2k}{\epsilon}(1 + o(1))$
columns of $A$ to construct $C \in \mathbb{R}^{m \times c}$ with the near-optimal column selection algorithm of Lemma 2,
and then selects $r = \frac{2c}{\epsilon}(1 + o(1))$ rows of $A$ to construct $R \in \mathbb{R}^{r \times n}$ with the fast row selection
algorithm of Theorem 4. Then we have
$$\mathbb{E}\|A - CUR\|_F = \mathbb{E}\|A - C(C^\dagger A R^\dagger)R\|_F \le (1+\epsilon)\|A - A_k\|_F.$$
Moreover, the algorithm runs in time $O\big(mnk\epsilon^{-2/3} + (m+n)k^3\epsilon^{-2/3} + mk^2\epsilon^{-2} + nk^2\epsilon^{-4}\big)$.
Since $k, c, r \ll \min\{m, n\}$ by the assumptions, the time complexity of the fast CUR algorithm
is lower than that of the SVD of $A$. This is the main reason why we call it the fast CUR algorithm.
Another advantage of this algorithm is that it avoids loading the whole $m \times n$ data matrix $A$ into main
memory. None of the three steps, the randomized SVD, the dual set sparsification algorithm, and the
adaptive sampling algorithm, requires loading the whole of $A$ into memory. The most memory-expensive operation throughout the fast CUR algorithm is computing the Moore-Penrose inverses
of $C$ and $R$, which requires maintaining an $m \times c$ matrix or an $r \times n$ matrix in memory. In
comparison, the subspace sampling algorithm requires loading the whole matrix into memory to
compute its truncated SVD.
5 Empirical Comparisons
In this section we provide empirical comparisons among the relative-error CUR algorithms on several datasets. We report the relative-error ratio and the running time of each algorithm on each data
set. The relative-error ratio is defined by
$$\text{Relative-error ratio} = \frac{\|A - CUR\|_F}{\|A - A_k\|_F},$$
where $k$ is a specified target rank.
We conduct experiments on three datasets, including natural image, biology data, and bags of words.
Table 1 briefly summarizes some information about the datasets. Redrock is a large natural image.
Arcene and Dexter are both from the UCI datasets [11]. Arcene is a biology dataset with 900
instances and 10000 attributes. Dexter is a bag of words dataset with a 20000-vocabulary and 2600
documents. Each dataset is actually represented as a data matrix, upon which we apply the CUR
algorithms.
We implement all the algorithms in MATLAB 7.10.0. We conduct experiments on a workstation
with 12 Intel Xeon 3.47GHz CPUs, 12GB memory, and the Ubuntu 10.04 system. According to the
analysis in [10] and in this paper, $k$, $c$, and $r$ should be integers far smaller than $m$ and $n$. For each dataset
and each algorithm, we set $k = 10$, $20$, or $50$, and $c = \alpha k$, $r = \alpha c$, where $\alpha$ ranges over each set of
experiments. We repeat each set of experiments 20 times and report the average and the standard
deviation of the error ratios. The results are depicted in Figures 1, 2, and 3.

The results show that the fast CUR algorithm has a much lower relative-error ratio than the subspace
sampling algorithm. The experimental results match our theoretical analyses in Section 4 well. As
for the running time, the fast CUR algorithm is more efficient when $c$ and $r$ are small. When $c$ and
$r$ become large, the fast CUR algorithm becomes less efficient. This is because the time complexity
of the fast CUR algorithm is linear in $\epsilon^{-4}$ and large $c$ and $r$ imply small $\epsilon$. However, the purpose
of CUR is to select a small number of columns and rows from the data matrix, that is, $c \ll n$ and
$r \ll m$. So we are not interested in the cases where $c$ and $r$ are large compared with $n$ and $m$, say
$k = 20$ and $\alpha = 10$.
[Figure: for each k, one running-time panel (time in seconds vs. α) and one construction-error panel (relative error ratio vs. α, Frobenius norm), comparing Subspace Sampling (Exactly), Subspace Sampling (Expected), and Fast CUR.]
(a) k = 10, c = αk, and r = αc. (b) k = 20, c = αk, and r = αc. (c) k = 50, c = αk, and r = αc.
Figure 1: Empirical results on the Redrock data set.
[Figure: for each k, one running-time panel (time in seconds vs. α) and one construction-error panel (relative error ratio vs. α, Frobenius norm), comparing Subspace Sampling (Exactly), Subspace Sampling (Expected), and Fast CUR.]
(a) k = 10, c = αk, and r = αc. (b) k = 20, c = αk, and r = αc. (c) k = 50, c = αk, and r = αc.
Figure 2: Empirical results on the Arcene data set.
[Figure: for each k, one running-time panel (time in seconds vs. α) and one construction-error panel (relative error ratio vs. α, Frobenius norm), comparing Subspace Sampling (Exactly), Subspace Sampling (Expected), and Fast CUR.]
(a) k = 10, c = αk, and r = αc. (b) k = 20, c = αk, and r = αc. (c) k = 50, c = αk, and r = αc.
Figure 3: Empirical results on the Dexter data set.
6 Conclusions
In this paper we have proposed a novel randomized algorithm for the CUR matrix decomposition
problem. This algorithm is faster, more scalable, and more accurate than the state-of-the-art algorithm, i.e., the subspace sampling algorithm. Our algorithm requires only $c = 2k\epsilon^{-1}(1 + o(1))$
columns and $r = 2c\epsilon^{-1}(1 + o(1))$ rows to achieve a $(1+\epsilon)$ relative-error ratio. To achieve the same
relative-error bound, the subspace sampling algorithm requires $c = O(k\epsilon^{-2}\log k)$ columns and
$r = O(c\epsilon^{-2}\log c)$ rows selected from the original matrix. Our algorithm also beats the subspace
sampling algorithm in time complexity. Our algorithm costs $O(mnk\epsilon^{-2/3} + (m+n)k^3\epsilon^{-2/3} + mk^2\epsilon^{-2} + nk^2\epsilon^{-4})$ time, which is lower than the $O(\min\{mn^2, m^2n\})$ of the subspace sampling algorithm when $k$ is small. Moreover, our algorithm enjoys another advantage of avoiding loading the
whole data matrix into main memory, which also makes it more scalable. Finally, the
empirical comparisons have also demonstrated the effectiveness and efficiency of our algorithm.
A The Dual Set Sparsification Algorithm
For the sake of completeness, we attach the dual set sparsification algorithm here and describe
some implementation details. The dual set sparsification algorithms are deterministic algorithms
established in [2]. The fast CUR algorithm calls the dual set spectral-Frobenius sparsification algorithm [2, Lemma 13] in both stages. We show this algorithm in Algorithm 2 and its bounds in
Lemma 6.
Lemma 6 (Dual Set Spectral-Frobenius Sparsification). Let U = {x_1, · · · , x_n} ⊂ R^l (l < n) contain the columns of an arbitrary matrix X ∈ R^(l×n). Let V = {v_1, · · · , v_n} ⊂ R^k (k < n) be a decomposition of the identity, i.e. Σ_{i=1}^n v_i v_i^T = I_k. Given an integer r with k < r < n, Algorithm 2 deterministically computes a set of weights s_i ≥ 0 (i = 1, · · · , n), at most r of which are non-zero, such that

    λ_k( Σ_{i=1}^n s_i v_i v_i^T ) ≥ (1 − √(k/r))²   and   tr( Σ_{i=1}^n s_i x_i x_i^T ) ≤ ‖X‖_F².
Algorithm 2 Deterministic Dual Set Spectral-Frobenius Sparsification Algorithm.
1: Input: U = {x_i}_{i=1}^n ⊂ R^l (l < n); V = {v_i}_{i=1}^n ⊂ R^k, with Σ_{i=1}^n v_i v_i^T = I_k (k < n); k < r < n;
2: Initialize: s_0 = 0_{n×1}, A_0 = 0_{k×k};
3: Compute ‖x_i‖_2² for i = 1, · · · , n, and then compute δ_U = (Σ_{i=1}^n ‖x_i‖_2²) / (1 − √(k/r));
4: for τ = 0 to r − 1 do
5:   Compute the eigenvalue decomposition of A_τ;
6:   Find an index j in {1, · · · , n} and compute a weight t > 0 such that
       δ_U⁻¹ ‖x_j‖_2² ≤ t⁻¹ ≤ [v_j^T (A_τ − (L_τ + 1)I_k)⁻² v_j] / [φ(L_τ + 1, A_τ) − φ(L_τ, A_τ)] − v_j^T (A_τ − (L_τ + 1)I_k)⁻¹ v_j,
     where φ(L, A) = Σ_{i=1}^k (λ_i(A) − L)⁻¹, and L_τ = τ − √(rk);
7:   Update the j-th component of s_τ and A_τ: s_{τ+1}[j] = s_τ[j] + t, A_{τ+1} = A_τ + t v_j v_j^T;
8: end for
9: return s = ((1 − √(k/r)) / r) · s_r.
The weights s_i can be computed deterministically in O(rnk² + nl) time.
Here we would like to mention the implementation of Algorithm 2, which is not described in detail by [2]. In each iteration the algorithm performs one eigenvalue decomposition: A_τ = WΛW^T (A_τ is guaranteed to be positive semi-definite in each iteration). Since

    (A_τ − αI_k)^q = W Diag((λ_1 − α)^q, · · · , (λ_k − α)^q) W^T,

we can efficiently compute (A_τ − (L_τ + 1)I_k)^q based on the eigenvalue decomposition of A_τ. With the eigenvalues at hand, φ(L, A_τ) can also be computed directly.
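The following NumPy sketch illustrates one way the greedy loop of Algorithm 2 could be implemented using this eigendecomposition trick. It stores V row-wise as an n×k matrix, searches indices linearly, and picks an arbitrary valid t (the midpoint of the admissible interval for t⁻¹); it is a didactic sketch and is not tuned to reach the stated O(rnk² + nl) complexity.

    import numpy as np

    def dual_set_sparsification(X, V, r):
        l, n = X.shape
        k = V.shape[1]
        assert k < r < n
        s = np.zeros(n)
        A = np.zeros((k, k))
        col_norms = np.sum(X ** 2, axis=0)                  # ||x_i||_2^2
        delta_U = col_norms.sum() / (1.0 - np.sqrt(k / r))

        def phi(L, lam):                                    # phi(L, A) from line 6
            return np.sum(1.0 / (lam - L))

        for tau in range(r):
            L_tau = tau - np.sqrt(r * k)
            lam, W = np.linalg.eigh(A)                      # step 5
            for j in range(n):                              # step 6: linear search
                y = W.T @ V[j]
                d1 = lam - (L_tau + 1.0)
                upper = (np.sum(y ** 2 / d1 ** 2)
                         / (phi(L_tau + 1.0, lam) - phi(L_tau, lam))
                         - np.sum(y ** 2 / d1))
                lower = col_norms[j] / delta_U
                if lower <= upper:                          # a valid (j, t) exists [2]
                    t = 2.0 / (lower + upper)               # any t with lower <= 1/t <= upper
                    s[j] += t                               # step 7
                    A += t * np.outer(V[j], V[j])
                    break
        return (1.0 - np.sqrt(k / r)) / r * s               # step 9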
Acknowledgments
This work has been supported in part by the Natural Science Foundations of China (No. 61070239),
the Google visiting faculty program, and the Scholarship Award for Excellent Doctoral Student
granted by Ministry of Education.
References
[1] Adi Ben-Israel and Thomas N.E. Greville. Generalized Inverses: Theory and Applications. Second Edition. Springer, 2003.
[2] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. CoRR, abs/1103.0995, 2011.
[3] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS '11, pages 305–314, 2011.
[4] Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
[5] Amit Deshpande and Luis Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS '10, pages 329–338, 2010.
[6] Amit Deshpande, Luis Rademacher, Santosh Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2(2006):225–247, 2006.
[7] Petros Drineas. Pass-efficient algorithms for approximating large matrices. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 223–232, 2003.
[8] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 36(1):184–206, 2006.
[9] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6:2153–2175, 2005.
[10] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, September 2008.
[11] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[12] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations. Linear Algebra and Its Applications, 261:1–21, 1997.
[13] S. A. Goreinov, N. L. Zamarashkin, and E. E. Tyrtyshnikov. Pseudo-skeleton approximations by matrices of maximal volume. Mathematical Notes, 62(4):619–623, 1997.
[14] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pages 1207–1214. SIAM, 2012.
[15] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[16] John Hopcroft and Ravi Kannan. Computer Science Theory for the Information Age. 2012.
[17] Finny G. Kuruvilla, Peter J. Park, and Stuart L. Schreiber. Vector algebra in the analysis of genome-wide expression data. Genome Biology, 3:research0011.1, 2002.
[18] Lester Mackey, Ameet Talwalkar, and Michael I. Jordan. Divide-and-conquer matrix factorization. In Advances in Neural Information Processing Systems 24. 2011.
[19] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
[20] L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America A, 4(3):519–524, Mar 1987.
[21] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[22] Eugene E. Tyrtyshnikov. Incomplete cross approximation in the mosaic-skeleton method. Computing, 64:367–380, 2000.
|
4745 |@word mild:1 trial:1 repository:1 briefly:1 faculty:1 polynomial:1 norm:12 proportion:1 loading:4 nd:1 seek:3 decomposition:15 q1:1 mention:1 nystr:2 tr:1 contains:2 selecting:3 document:2 interestingly:1 existing:5 si:4 luis:2 john:1 numerical:1 informative:2 update:1 mackey:1 selected:2 ubuntu:1 provides:1 completeness:1 characterization:1 org:1 zhang:1 height:1 mathematical:1 c2:5 direct:1 become:1 symposium:4 ik:5 focs:2 prove:1 introduce:2 manner:1 expected:20 p1:1 little:2 actual:3 cpu:1 xti:1 ua:7 becomes:1 moreover:5 notation:1 israel:1 emerging:1 developed:1 finding:1 sparsification:14 impractical:1 pseudo:1 exactly:21 rm:17 qm:1 lester:1 grant:1 harshman:1 sinop:1 positive:2 ak:12 doctoral:1 china:2 studied:1 factorization:2 projective:1 range:1 zhejiang:1 acknowledgment:1 block:1 implement:1 definite:1 procedure:3 empirical:6 projection:2 word:4 get:1 close:1 selection:20 arcene:5 www:1 deterministic:4 demonstrated:1 focused:1 m2:1 insight:1 target:9 construction:10 mosaic:1 recognition:1 particularly:3 wang:2 solved:1 zamarashkin:2 susan:1 decrease:1 sirovich:1 principled:1 complexity:7 skeleton:2 algebra:2 ali:1 upon:1 efficiency:1 basis:2 drineas:8 hopcroft:1 stock:1 represented:1 america:1 muthukrishnan:1 mn2:2 fast:35 describe:2 monte:1 artificial:1 solv:1 widely:2 solve:1 say:1 compressed:2 advantage:3 rr:1 eigenvalue:4 propose:1 reconstruction:3 maximal:1 uci:4 achieve:4 academy:1 ismail:2 frobenius:15 kv:3 everyday:1 requirement:1 r1:7 rademacher:2 ben:1 tk:1 develop:2 rt2:2 solves:1 attribute:1 human:1 eigenface:1 education:1 require:2 generalization:4 tighter:3 im:1 extension:1 hold:1 ic:2 great:1 matthew:1 purpose:1 bag:3 schreiber:1 avoid:1 pn:1 dexter:5 corollary:1 tvj:1 adiag:1 improvement:1 zju:1 rank:16 talwalkar:1 hangzhou:1 a0:1 w:1 manipulating:1 selects:5 interested:2 agarwala:1 among:2 dual:14 aforementioned:1 tyrtyshnikov:3 art:7 special:2 initialize:1 santosh:1 construct:17 once:1 sampling:58 biology:4 park:1 stuart:1 report:2 richard:1 modern:1 randomly:2 simultaneously:2 national:1 ab:1 interest:1 shusen:1 joel:1 mahoney:4 introduces:2 nl:1 kt:1 accurate:1 partial:1 vely:1 conduct:2 incomplete:1 divide:1 theoretical:4 delete:2 mk:3 instance:1 column:47 xeon:1 ar:9 cost:4 deviation:1 subset:4 successful:1 too:1 chooses:2 dumais:1 st:1 randomized:11 siam:6 probabilistic:1 michael:5 concrete:1 na:1 dicrete:1 cognitive:1 american:1 return:2 student:1 vi:5 try:1 complicated:2 asuncion:1 om:2 minimize:1 efficiently:1 correspond:1 ofthe:1 none:1 carlo:1 cc:16 randomness:1 boutsidis:3 deshpande:2 turk:1 e2:1 di:1 workstation:1 cur:71 sampled:1 petros:7 dataset:5 organized:1 actually:2 improved:2 though:1 mar:1 furthermore:4 stage:21 hand:1 web:2 tropp:1 google:1 aj:1 perhaps:1 facilitate:1 building:1 moore:2 semantic:2 illustrated:1 generalized:1 demonstrate:1 performs:1 bring:1 interpreting:1 image:3 meaning:1 nck:1 novel:5 recently:1 empirically:1 rl:1 volume:3 interpretation:1 approximates:2 martinsson:1 interpret:1 significant:2 rr1:1 base:1 recent:2 claimed:1 inequality:1 devise:1 analyzes:1 additional:1 ministry:1 george:1 employed:1 venkatesan:1 semi:1 ing:1 match:1 faster:1 offer:1 cross:1 devised:1 award:1 va:7 scalable:3 expectation:7 iteration:2 represent:1 kernel:1 c1:5 singular:3 source:1 rest:1 posse:1 archive:2 sr:1 effectiveness:1 jordan:1 call:4 integer:3 near:4 iii:1 variety:1 xj:1 idea:1 cn:1 qj:1 motivated:1 expression:1 guruswami:1 gb:1 granted:1 effort:1 peter:1 matlab:1 useful:1 detailed:1 http:3 neuroscience:1 per:1 mnk:5 discrete:1 
gunnar:1 achieving:1 ravi:2 v1:3 sum:1 year:1 deerwester:1 run:1 inverse:3 powerful:1 soda:1 throughout:1 vn:1 appendix:1 summarizes:1 rnk:1 bound:15 guaranteed:1 annual:4 alex:1 sake:1 dominated:1 u1:2 nathan:1 min:5 vempala:1 ameet:1 optical:1 according:2 describes:2 kirby:1 s1:2 indexing:1 taken:3 vjt:3 discus:1 needed:1 end:1 operation:1 magdon:2 apply:1 v2:2 spectral:6 original:2 thomas:2 denotes:1 top:2 running:11 clustering:1 log2:2 maintaining:2 scholarship:1 amit:2 conquer:1 approximating:2 society:2 malik:2 question:1 rt:1 visiting:1 september:1 subspace:46 unstable:1 reason:1 kannan:2 index:1 goreinov:2 ratio:23 kemal:1 nc:1 difficult:1 unfortunately:1 frank:1 implementation:2 twenty:1 datasets:8 pentland:1 truncated:5 beat:2 arbitrary:2 specified:1 connection:3 nm2:1 established:2 krylov:1 rt1:2 scott:1 challenge:1 program:1 including:1 memory:7 video:1 natural:3 attach:2 residual:5 mn:2 technology:1 imply:1 review:1 understanding:1 literature:2 eugene:1 relative:34 interesting:1 age:2 foundation:3 zhzhang:1 s0:1 viewpoint:1 uncorrelated:1 pi:3 row:28 summary:1 repeat:1 supported:1 enjoys:1 aij:3 understand:1 wide:1 eigenfaces:1 face:1 ghz:1 vocabulary:1 world:1 xn:1 genome:3 computes:2 gram:1 author:2 adaptive:8 far:1 income:1 approximate:5 ml:2 xi:3 landauer:1 latent:2 why:1 table:2 additionally:1 pseudoskeleton:1 adi:1 excellent:1 constructing:2 diag:1 tva:2 vj:2 main:7 whole:5 s2:2 edition:1 x1:1 intel:1 furnas:1 rr2:1 christos:2 deterministically:2 third:1 theorem:24 rk:1 r2:8 exists:2 consist:1 corr:1 nk:3 depicted:1 halko:1 simply:1 penrose:2 zhihua:1 u2:2 springer:1 relies:4 acm:2 identity:2 hard:1 typical:1 wt:2 lemma:12 called:2 pas:1 svd:13 experimental:1 select:5 college:1 people:1 avoiding:2
|
4,139 | 4,746 |
MCMC for continuous-time discrete-state systems
Yee Whye Teh
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Vinayak Rao
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Abstract
We propose a simple and novel framework for MCMC inference in continuoustime discrete-state systems with pure jump trajectories. We construct an exact
MCMC sampler for such systems by alternately sampling a random discretization of time given a trajectory of the system, and then a new trajectory given the
discretization. The first step can be performed efficiently using properties of the
Poisson process, while the second step can avail of discrete-time MCMC techniques based on the forward-backward algorithm. We show the advantage of our
approach compared to particle MCMC and a uniformization-based sampler.
1 Introduction
There has been growing interest in the machine learning community to model dynamical systems in
continuous time. Examples include point processes [1], Markov processes [2], structured Markov
processes [3], infinite state Markov processes [4], semi-Markov processes [5] etc. However, a major
impediment towards the more widespread use of these models is the problem of inference. A simple
approach is to discretize time, and then run inference on the resulting approximation. This however
has a number of drawbacks, not least of which is that we lose the advantages that motivated the use
of continuous time in the first place. Time-discretization introduces a bias into our inferences, and
to control this, one has to work at a time resolution that results in a very large number of discrete
time steps. This can be computationally expensive.
Our focus in this paper is on posterior sampling via Markov chain Monte Carlo (MCMC), and there
is a huge literature on such techniques for discrete-time models [6]. Here, we construct an exact
MCMC sampler for pure jump processes in continuous time, using a workhorse of the discrete-time
domain, the forward-filtering backward-sampling algorithm [7, 8], to make efficient updates.
The core of our approach is an auxiliary variable Gibbs sampler that repeats two steps. The first
step runs the forward-backward algorithm on a random discretization of time to sample a new trajectory. The second step then resamples a new time-discretization given this trajectory. A random
discretization allows a relatively coarse grid, while still keeping inferences unbiased. Such a coarse
discretization allows us to apply the forward-backward algorithm to a Markov chain with relatively
few time steps, resulting in computational savings. Even though the marginal distribution of the
random time-discretization can be quite complicated, we show that conditioned on the system trajectory, it is just distributed as a Poisson process.
While the forward-backward algorithm was developed originally for finite state hidden Markov models and linear Gaussian systems, it also forms the core of samplers for more complicated systems
like nonlinear/non-Gaussian [9], infinite state [10], and non-Markovian [11] time series. Our ideas
thus apply to essentially any pure jump process, so long as it makes only finite transitions over finite
intervals. For concreteness, we focus on semi-Markov processes. We compare our sampler with
two other continuous-time MCMC samplers, a particle MCMC sampler [12], and a uniformizationbased sampler [13]. The latter turns out to be a special case of ours, corresponding to a random
time-discretization that is marginally distributed as a homogeneous Poisson process.
2 Semi-Markov processes
A semi-Markov (jump) process (sMJP) is a right-continuous, piecewise-constant stochastic process on the nonnegative real-line taking values in some state space S [14, 15]. For simplicity, we assume S is finite, labelling its elements from 1 to N. We also assume the process is stationary. Then, the sMJP is parametrized by π_0, an (arbitrary) initial distribution over states, as well as an N×N matrix of hazard functions, A_{ss′}(·) ∀ s, s′ ∈ S. For any τ, A_{ss′}(τ) gives the rate of transitioning to state s′, τ time units after entering state s (we allow self-transitions, so s′ can equal s). Let this transition occur after a waiting time τ_{s′}. Then τ_{s′} is distributed according to the density r_{ss′}(·), related to A_{ss′}(·) as shown below (see e.g. [16]):

    r_{ss′}(τ_{s′}) = A_{ss′}(τ_{s′}) exp(−∫_0^{τ_{s′}} A_{ss′}(u) du),   A_{ss′}(τ_{s′}) = r_{ss′}(τ_{s′}) / (1 − ∫_0^{τ_{s′}} r_{ss′}(u) du)   (1)
Sampling an sMJP trajectory proceeds as follows: on entering state s, sample waiting times τ_{s′} ∼ A_{ss′}(·) ∀ s′ ∈ S. The sMJP enters a new state, s_new, corresponding to the smallest of these waiting times. Let this waiting time be τ_hold (so that τ_hold = τ_{s_new} = min_{s′} τ_{s′}). Then, advance the current time by τ_hold, and set the sMJP state to s_new. Repeat this procedure, now with the rate functions A_{s_new s′}(·) ∀ s′ ∈ S.
Define A_s(·) = Σ_{s′∈S} A_{ss′}(·). From the independence of the times τ_{s′}, equation 1 tells us that

    P(τ_hold > τ) = ∏_{s′∈S} P(τ_{s′} > τ) = exp(−∫_0^τ A_s(u) du),   τ_hold ∼ r_s(τ) ≡ A_s(τ) exp(−∫_0^τ A_s(u) du)   (2)

Comparing with equation 1, we see that A_s(·) gives the rate of any transition out of state s. An equivalent characterization of many continuous-time processes is to first sample the waiting time τ_hold, and then draw a new state s′. For the sMJP, the latter probability is proportional to A_{ss′}(τ_hold).
A special sMJP is the Markov jump process (MJP), where the hazard functions are constant (giving exponential waiting times). For an MJP, future behaviour is independent of the current waiting time. By allowing general waiting-time distributions, an sMJP can model memory effects like burstiness or refractoriness in the system dynamics.
We represent an sMJP trajectory on an interval [t_start, t_end] as (S, T), where T = (t_0, · · · , t_{|T|}) is the sequence of jump times (including the endpoints) and S = (s_0, · · · , s_{|S|}) is the corresponding sequence of state values. Here |S| = |T|, and s_{i+1} = s_i implies a self-transition at time t_{i+1} (except at the end time t_{|T|} = t_end, which does not correspond to a jump). The filled circles in figure 1(c) represent (S, T); since the process is right-continuous, s_i gives the state after the jump at t_i.
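For concreteness, here is a minimal sketch of this generative process in Python. The container of per-pair waiting-time samplers (waiting_samplers) is a hypothetical interface we introduce for illustration; any sampler for the densities r_{ss′} of equation 1 would do.

    import numpy as np

    def sample_smjp_direct(waiting_samplers, pi0, t_start, t_end, rng):
        # waiting_samplers[s][s2](rng) -> one draw of tau_{s'} from r_{s s2}
        N = len(pi0)
        s = int(rng.choice(N, p=pi0))
        S, T = [s], [t_start]
        t = t_start
        while True:
            waits = [waiting_samplers[s][s2](rng) for s2 in range(N)]
            s_new = int(np.argmin(waits))     # smallest waiting time wins
            t = t + waits[s_new]              # tau_hold = min_{s'} tau_{s'}
            if t >= t_end:
                break
            S.append(s_new)
            T.append(t)
            s = s_new
        S.append(s)                           # right endpoint; not a jump
        T.append(t_end)
        return S, T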
2.1 Sampling by dependent thinning
We now describe an alternate thinning-based approach to sampling an sMJP trajectory. Our approach will produce candidate event times at a rate higher than the actual event rates in the system. To correct for this, we probabilistically reject (or thin) these events. Define W as the sequence of actual event times T, together with the thinned event times (which we call U; these are the empty circles in figure 1(c)). W = (w_0, · · · , w_{|W|}) forms a random discretization of time (with |W| = |T| + |U|); define V = (v_0, · · · , v_{|W|}) as a sequence of state assignments to the times W. At any w_i, let l_i represent the time since the last sMJP transition (so that l_i = w_i − max_{t∈T, t≤w_i} t), and let L = (l_1, · · · , l_{|W|}). Figures 1(b) and (c) show these quantities, as well as continuous-time processes S(t) and L(t) such that l_i = L(w_i) and s_i = S(w_i). (V, L, W) forms an equivalent representation of (S, T) that includes a redundant set of thinned events U. Note that if the ith event is thinned, v_i = v_{i−1}; however this is not a self-transition. L helps distinguish self-transitions (having associated l's equal to 0) from thinned events. We explain the generative process of (V, L, W) below; a proof of its correctness is included in the supplementary material.
[Figure 1 panels omitted.]
Figure 1: a) Instantaneous hazard rates given a trajectory b) State holding times, L(t) c) sMJP state values S(t) d) Graphical model for the randomized time-discretization e) Resampling the sMJP trajectory. In b) and c), the filled and empty circles represent actual and thinned events respectively.
For each hazard function A_s(τ), define another dominating hazard function B_s(τ), so that B_s(τ) ≥ A_s(τ) ∀ s, τ. Suppose we have instantiated the system trajectory until time w_i, with the sMJP having just entered state v_i ∈ S (so that l_i = 0). We sample the next candidate event time w_{i+1}, with Δw_i = (w_{i+1} − w_i) drawn from the hazard function B_{v_i}(·). A larger rate implies faster events, so that Δw_i will on average be smaller than a waiting time τ_hold drawn from A_{v_i}(·). We correct for this by treating w_{i+1} as an actual event with probability A_{v_i}(Δw_i + l_i) / B_{v_i}(Δw_i + l_i). If this is the case, we sample a new state v_{i+1} with probability proportional to A_{v_i v_{i+1}}(Δw_i + l_i), and set l_{i+1} = 0. On the other hand, if the event is rejected, we set v_{i+1} to v_i, and l_{i+1} = (Δw_i + l_i). We now sample Δw_{i+1} (and thus w_{i+2}), such that (Δw_{i+1} + l_{i+1}) ∼ B_{v_{i+1}}(·). More simply, we sample a new waiting time from B_{v_{i+1}}(·), conditioned on it being greater than l_{i+1}. Again, accept this point with probability A_{v_{i+1}}(Δw_{i+1} + l_{i+1}) / B_{v_{i+1}}(Δw_{i+1} + l_{i+1}), and repeat this process. Proposition 1 confirms that this generative process (summarized by the graphical model in figure 1(d), and algorithm 1) yields a trajectory from the sMJP. Figure 1(d) also depicts observations X of the sMJP trajectory; we elaborate on this later.
Proposition 1. The path (V, L, W) returned by the thinning procedure described above is equivalent to a sample (S, T) from the sMJP (π_0, A).
Algorithm 1 State-dependent thinning for sMJPs
Input: Hazard functions A_{ss′}(·) ∀ s, s′ ∈ S, and an initial distribution over states π_0.
       Dominating hazard functions B_s(τ) ≥ A_s(τ) ∀ τ, s, where A_s(τ) = Σ_{s′} A_{ss′}(τ).
Output: A piecewise constant path (V, L, W) ≡ ((v_i, l_i, w_i)) on the interval [t_start, t_end].
1: Draw v_0 ∼ π_0 and set w_0 = t_start. Set l_0 = 0 and i = 0.
2: while w_i < t_end do
3:   Sample τ_hold ∼ B_{v_i}(·), with τ_hold > l_i. Let Δw_i = τ_hold − l_i, and w_{i+1} = w_i + Δw_i.
4:   with probability A_{v_i}(τ_hold) / B_{v_i}(τ_hold)
5:     Set l_{i+1} = 0, and sample v_{i+1}, with P(v_{i+1} = s′ | v_i) ∝ A_{v_i s′}(τ_hold), s′ ∈ S.
6:   else
7:     Set l_{i+1} = l_i + Δw_i, and v_{i+1} = v_i.
8:   end
9:   Increment i.
10: end while
11: Set w_{|W|} = t_end, v_{|W|} = v_{|W|−1}, l_{|W|} = l_{|W|−1} + w_{|W|} − w_{|W|−1}.
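A minimal sketch of Algorithm 1, assuming callables for the pairwise hazards, the dominating total hazard, and a routine that draws from B_v(·) conditioned on exceeding l (all hypothetical interfaces introduced here for illustration):

    import numpy as np

    def thinning_sampler(A_pair, B_tot, sample_B_trunc, pi0, t_start, t_end, rng):
        # A_pair[s][s2](tau): pairwise hazard A_{s s2}(tau)
        # B_tot[s](tau): dominating hazard, B_tot[s](tau) >= sum_s2 A_pair[s][s2](tau)
        # sample_B_trunc(s, l, rng): draw tau_hold ~ B_tot[s] given tau_hold > l
        N = len(pi0)
        v = [int(rng.choice(N, p=pi0))]                 # step 1: v_0 ~ pi_0
        w = [t_start]
        l = [0.0]
        i = 0
        while w[i] < t_end:                             # assumes t_start < t_end
            tau_hold = sample_B_trunc(v[i], l[i], rng)  # step 3
            dw = tau_hold - l[i]
            w.append(w[i] + dw)
            rates = np.array([A_pair[v[i]][s](tau_hold) for s in range(N)])
            if rng.random() < rates.sum() / B_tot[v[i]](tau_hold):   # steps 4-5
                v.append(int(rng.choice(N, p=rates / rates.sum())))
                l.append(0.0)
            else:                                       # steps 6-7: thinned event
                v.append(v[i])
                l.append(l[i] + dw)
            i += 1
        w[-1] = t_end                                   # step 11: clamp at t_end
        v[-1] = v[-2]
        l[-1] = l[-2] + w[-1] - w[-2]
        return v, l, w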
2.2 Posterior inference via MCMC
We now define an auxiliary variable Gibbs sampler, setting up a Markov chain that converges to the posterior distribution over the thinned representation (V, L, W) given observations X of the sMJP trajectory. The observations can lie in any space X, and for any time-discretization W, let x_i represent all observations in the interval (w_i, w_{i+1}). By construction, the sMJP stays in a single state v_i over this interval; let P(x_i | v_i) be the corresponding likelihood vector. Given a time discretization W ≡ (U ∪ T) and the observations X, we discard the old state labels (V, L), and sample a new path (Ṽ, L̃, W) ≡ (S̃, T̃) using the forward-backward algorithm. We then discard the thinned events U and, given the path (S̃, T̃), resample new thinned events U_new, resulting in a new time discretization W_new ≡ (T̃ ∪ U_new). We describe both operations below.
Resampling the sMJP trajectory given the set of times W:
Given W (and thus all Δw_i), this involves assigning each element w_i ∈ W a label (v_i, l_i) (see figure 1(d)). Note that the system is Markov in the pair (v_i, l_i), so that this step is a straightforward application of the forward-backward algorithm to the graphical model shown in figure 1(d). Observe from this figure that the joint distribution factorizes as:

    P(V, L, W, X) = P(v_0, l_0) ∏_{i=0}^{|W|−1} P(x_i | v_i) P(Δw_i | v_i, l_i) P(v_{i+1}, l_{i+1} | v_i, l_i, Δw_i)   (3)

From equation 2 (with B instead of A), P(Δw_i | v_i, l_i) = B_{v_i}(l_i + Δw_i) exp(−∫_{l_i}^{l_i + Δw_i} B_{v_i}(t) dt). The term P(v_{i+1}, l_{i+1} | v_i, l_i, Δw_i) is the thinning/state-transition probability from steps 4 and 5 of algorithm 1. The forward-filtering stage then moves sequentially through the times in W, successively calculating the probabilities P(v_i, l_i, w_{1:i+1}, x_{1:i}) using the recursion:

    P(v_i, l_i, w_{1:i+1}, x_{1:i}) = P(x_i | v_i) P(w_{i+1} | v_i, l_i) Σ_{v_{i−1}, l_{i−1}} P(v_i, l_i | v_{i−1}, l_{i−1}, Δw_{i−1}) P(v_{i−1}, l_{i−1}, w_{1:i}, x_{1:i−1})

The backward sampling stage then returns a new trajectory (Ṽ, L̃, W) ≡ (S̃, T̃). See figure 1(e).
Observe that l_i can take (i + 1) values (in the set {0, w_i − w_{i−1}, · · · , w_i − w_0}), with the value of l_i affecting P(v_{i+1}, l_{i+1} | v_i, l_i, Δw_i). Thus, the forward-backward algorithm for a general sMJP scales quadratically with |W|. We can however use ideas from discrete-time MCMC to reduce this cost (e.g. [11] use a slice sampler to limit the maximum holding time of a state, and thus limit l_i).
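The sketch below implements only the forward pass of this recursion, representing l_i by the index j of the last real jump (so l_i = W[i] − W[j], which is where the quadratic cost comes from); backward sampling would then draw (v_i, j_i) in reverse from the stored messages. The crude quadrature of the B hazard and the uniform prior over (v_0, l_0 = 0) are our simplifying assumptions.

    import numpy as np

    def forward_filter(W, A_pair, B_tot, log_lik):
        M = len(W) - 1                        # number of intervals between times
        N = len(A_pair)                       # number of sMJP states

        def A_tot(v, t):
            return sum(A_pair[v][s](t) for s in range(N))

        def int_B(v, a, b, K=20):             # crude quadrature of the B hazard
            ts = np.linspace(a, b, K)
            return np.trapz([B_tot[v](t) for t in ts], ts)

        # alpha[i][v, j] = log P(v_i = v, l_i = W[i] - W[j], w_{1:i+1}, x_{1:i})
        alpha = [np.full((N, M + 1), -np.inf) for _ in range(M)]
        alpha[0][:, 0] = -np.log(N)           # uniform prior over v_0; l_0 = 0
        for i in range(M - 1):
            for v in range(N):
                for j in range(i + 1):
                    if np.isinf(alpha[i][v, j]):
                        continue
                    l, l2 = W[i] - W[j], W[i + 1] - W[j]
                    base = (alpha[i][v, j] + log_lik[i][v]
                            + np.log(B_tot[v](l2)) - int_B(v, l, l2))
                    acc = A_tot(v, l2) / B_tot[v](l2)
                    # thinned event: state and last-jump index stay unchanged
                    alpha[i + 1][v, j] = np.logaddexp(
                        alpha[i + 1][v, j], base + np.log(max(1.0 - acc, 1e-300)))
                    # accepted jump to v2: l resets, so the new index is i + 1
                    for v2 in range(N):
                        p = acc * A_pair[v][v2](l2) / A_tot(v, l2)
                        alpha[i + 1][v2, i + 1] = np.logaddexp(
                            alpha[i + 1][v2, i + 1], base + np.log(p))
        return alpha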
Resampling the thinned events given the sMJP trajectory:
Having obtained a new sMJP trajectory (V, L, W), we discard all thinned events U, so that the current state of the sampler is now (S, T). We then resample the thinned events Ũ, recovering a new thinned representation (Ṽ, L̃, W̃), and with it, a new discretization of time. To simplify notation, we define the instantaneous hazard functions A(t) and B(t) (see figure 1(a)):

    A(t) = A_{S(t)}(L(t)),   and   B(t) = B_{S(t)}(L(t))   (4)

These were the event rates relevant at any time t during the generative process. Note that the sMJP trajectory completely determines these quantities. The events W (whether thinned or not) were generated from a rate B(·) process, while the probability that an event w_i was thinned is 1 − A(w_i)/B(w_i). The Poisson thinning theorem [17] then suggests that the thinned events U are distributed as a Poisson process with intensity (B(t) − A(t)). The following proposition (see the supplementary material for a proof) shows that this is indeed the case.
Proposition 2. Conditioned on a trajectory (S, T) of the sMJP, the thinned events U are distributed as a Poisson process with intensity (B(t) − A(t)).
Observe that this is independent of the observations X. We show in section 2.4 how sampling from such a Poisson process is straightforward for appropriately chosen bounding rates B_s.
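A sketch of this resampling step, assuming only that the instantaneous hazards A(t) and B(t) of equation (4) can be evaluated pointwise; on each constant-state segment we thin a homogeneous dominating process whose rate is a crude, grid-based upper bound on B(t) − A(t) (for spiky hazards a tighter bound would be needed).

    import numpy as np

    def resample_thinned(T, A_of_t, B_of_t, rng, K=50):
        U = []
        for a, b in zip(T[:-1], T[1:]):       # constant-state segments of (S, T)
            grid = np.linspace(a, b, K)
            rate_max = max(B_of_t(t) - A_of_t(t) for t in grid)  # grid upper bound
            if rate_max <= 0:
                continue
            t = a
            while True:                        # thin a homogeneous rate_max process
                t += rng.exponential(1.0 / rate_max)
                if t >= b:
                    break
                if rng.random() < (B_of_t(t) - A_of_t(t)) / rate_max:
                    U.append(t)
        return U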
2.3 Related work
An increasingly popular approach to inference in continuous-time systems is particle MCMC (pMCMC) [12]. At a high level, this uses particle filtering to generate a continuous-time trajectory,
which then serves as a proposal for a Metropolis-Hastings (MH) algorithm. Particle filtering however cannot propagate back information from future observations, and pMCMC methods can have
difficulty in situations where strong observations cause the posterior to deviate from the prior.
4
Recently, [13] proposed a sampler for MJPs that is a special case of ours. This was derived via a classical idea called uniformization, and constructed the time discretization W from a homogeneous Poisson process. Our sampler reduces to this when a constant dominating rate B > max_{s,τ} A_s(τ) is used to bound all event rates. However, such a "uniformizing" rate does not always exist (we will discuss two such systems with unbounded rates). Moreover, with a single rate B, the average number of candidate events |W| (and thus the computational cost of the algorithm) scales with the leaving rate of the most unstable state. Since this state is often the one that the system will spend the least amount of time in, such a strategy can be wasteful. Under our sampler, the distribution of W is not a Poisson process. Instead, event rates are coupled via the sMJP state. This allows our sampler to adapt the granularity of time-discretization to that required by the posterior trajectories; moreover, this granularity can vary over the time interval.
There exists other work on continuous-time models based on the idea of a random discretization
of time [18, 1]. Like uniformization, these are all limited to specific continuous-time models with
specific thinning constructions, and are not formulated in as general a manner as we have done.
Moreover, none of these exploit the ability to efficiently resample the time-discretization from a
Poisson process, or a new trajectory using the forward-backward algorithm.
2.4 Experiments
In this section, we evaluate our sampler on a 3-state sMJP with Weibull hazard rates. Here

    r_{ss′}(τ | α_{ss′}, λ_{ss′}) = (α_{ss′}/λ_{ss′}) (τ/λ_{ss′})^(α_{ss′}−1) exp(−(τ/λ_{ss′})^{α_{ss′}}),   A_{ss′}(τ | α_{ss′}, λ_{ss′}) = (α_{ss′}/λ_{ss′}) (τ/λ_{ss′})^(α_{ss′}−1)

where λ_{ss′} is the scale parameter, and the shape parameter α_{ss′} controls the stability of a state s. When α_{ss′} < 1, on entering state s, the system is likely to quickly jump to state s′. By contrast, α_{ss′} > 1 gives a "recovery" period before transitions to s′. Note that for α_{ss′} < 1, the hazard function tends to infinity as τ → 0. Now, choose an Ω > 1. We use the following simple upper bound B_{ss′}(τ):

    B_{ss′}(τ) = Ω A_{ss′}(τ | α_{ss′}, λ_{ss′}) = (α_{ss′}/λ̃_{ss′}) (τ/λ̃_{ss′})^(α_{ss′}−1)   (5)

Here, λ̃ = λ/Ω^(1/α) for any α and λ. Thus, sampling from the dominating hazard function B_{ss′}(·) reduces to straightforward sampling from a Weibull with a smaller scale parameter λ̃_{ss′}. Note from algorithm 1 that with this construction of the dominating rates, each candidate event is rejected with probability 1 − 1/Ω; this can be a guide to choosing Ω. In our experiments, we set Ω equal to 2.
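The corresponding hazard manipulations are a few lines; the helper names below are ours, and the truncated draw (used in step 3 of Algorithm 1) inverts the Weibull survivor function:

    import numpy as np

    def weibull_hazard(tau, alpha, lam):
        # A(tau) = (alpha / lam) * (tau / lam)^(alpha - 1)
        return (alpha / lam) * (tau / lam) ** (alpha - 1.0)

    def dominating_scale(lam, alpha, omega):
        # B(tau) = omega * A(tau) is again a Weibull hazard, with the corrected
        # scale lam_tilde = lam / omega^(1/alpha) of equation (5)
        return lam / omega ** (1.0 / alpha)

    def sample_weibull_greater_than(alpha, lam, l, rng):
        # Draw tau ~ Weibull(alpha, lam) conditioned on tau > l, by inverting
        # the survivor function S(tau) = exp(-(tau / lam)^alpha)
        u = rng.random()
        return lam * ((l / lam) ** alpha - np.log(u)) ** (1.0 / alpha)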
Sampling thinned events on an interval (t_i, t_{i+1}) (where the sMJP is in state s_i) involves sampling from a Poisson process with intensity (B(t) − A(t)) = (Ω − 1)A(t) = (Ω − 1) Σ_{s′} A_{s_i s′}(t − t_i). This is just the superposition of N independent and shifted Poisson processes on (0, t_{i+1} − t_i), the nth having intensity (Ω − 1)A_{s_i n}(·) ≡ Ã_{s_i n}(·). As before, Ã(·) is a Weibull hazard function obtained by correcting the scale parameter λ of A(·) by Ω − 1. A simple way to sample such a Poisson process is by first drawing the number of events from a Poisson distribution with mean ∫_0^{t_{i+1}−t_i} Ã_{s_i n}(u) du, and then drawing that many events i.i.d. from Ã_{s_i n} truncated at (t_{i+1} − t_i). Solving the integral for the Poisson mean is straightforward for the Weibull. Call the resulting Poisson sequence T̃_n, and define T̃ = ∪_{n∈S} T̃_n. Then W_i ≡ T̃ + t_i is the set of resampled thinned events on the interval (t_i, t_{i+1}). We repeat this over each segment (t_i, t_{i+1}) of the sMJP path.
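A sketch of this construction; note that, conditioned on the Poisson count, the points of a Poisson process are i.i.d. with density proportional to the intensity (here the corrected hazard), whose normalized inverse-CDF is available in closed form for the Weibull:

    import numpy as np

    def sample_thinned_weibull(alphas, lams, omega, seg_len, rng):
        events = []
        for alpha, lam in zip(alphas, lams):        # one shifted process per state
            lam_t = lam / (omega - 1.0) ** (1.0 / alpha)   # corrected scale
            mean = (seg_len / lam_t) ** alpha       # integral of the hazard on (0, seg_len)
            m = rng.poisson(mean)
            u = rng.random(m)
            # normalized inverse-CDF of the intensity measure: t = seg_len * u^(1/alpha)
            events.extend(seg_len * u ** (1.0 / alpha))
        return sorted(events)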
In the following experiments, the shape parameters for each Weibull hazard (α_{ss′}) were randomly drawn from the interval [0.6, 3], while the scale parameter was always set to 1. π_0 was set to the discrete uniform distribution. The unbounded hazards associated with α_{ss′} < 1 meant that uniformization is not applicable to this problem, and we only compared our sampler with pMCMC. We implemented both samplers in Matlab. Our MCMC sampler was set up with Ω = 2, so that the dominating hazard rate at any instant equalled twice the true hazard rate (i.e. B_{ss′}(τ) = 2A_{ss′}(τ)), giving a probability of thinning equal to 0.5. For pMCMC, we implemented the particle independent Metropolis-Hastings sampler from [12]. We tried different values for the number of particles; for our problems, we found 10 gave best results.
All MCMC runs consisted of 5000 iterations following a burn-in period of 1000. After any MCMC
run, given a sequence of piecewise constant trajectories, we calculated the empirical distribution of
the time spent in each state as well as the number of state transitions. We then used R-coda [19] to estimate effective sample sizes (ESS) for these quantities. The ESS of the simulation was set to the median ESS of all these statistics.
[Figure 2 and Figure 3 plots omitted: effective samples per second for Thinning and particle MCMC (10 and 20 particles).]
Figure 2: ESS per unit time vs the inverse-temperature of the likelihood, when the trajectories are over an interval of length 20 (left) and 2 (right).
Figure 3: ESS per second for increasing interval lengths. Temperature decreases from the left to right subplots.
Effect of the observations For our first experiment, we distributed 10 observations over an interval of length 20. Each observation favoured a particular, random state over the other two states by a factor of 100, giving random likelihood vectors like (1, 100, 1)^T. We then raised the likelihood vector P(x_i | ·) to an "inverse-temperature" β, so that the effective likelihood at the ith observation was (P(x_i | s_i))^β. As this parameter varied from 0 to 1, the problem moved from sampling from the prior to a situation where the trajectory was observed (almost) perfectly at 10 random times.
The left plot in figure 2 shows the ESS produced per unit time by both samplers as the inverse-temperature increased, averaging results from 10 random parametrizations of the sMJP. We see (as one might expect) that when the effect of the observations is weak, particle MCMC (which uses the prior distribution to make local proposals) outperforms our thinning-based sampler. pMCMC
also has the benefit of being simpler implementation-wise, and is about 2-3 times faster (in terms
of raw computation time) for a Weibull sMJP, than our sampler. As the effect of the likelihood
increases, pMCMC starts to have more and more difficulty tracking the observations. By contrast,
our sampler is fairly insensitive to the effect of the likelihood, eventually outperforming the particle
MCMC sampler. While there exist techniques to generate more data-driven proposals for the particle
MCMC [12, 20], these compromise the appealing simplicity of the original particle MCMC sampler.
Moreover, none of these really have the ability to propagate information back from the future (like
the forward-backward algorithm), rather they make more and more local moves (for instance, by
updating the sMJP trajectory on smaller and smaller subsets of the observation interval).
The right plot in figure 2 shows the ESS per unit time for both samplers, now with the observation
interval set to a smaller length of 2. Here, our sampler comprehensively outperforms pMCMC. There
are two reasons for this. First, more observations per unit time requires rapid switching between
states, a deviation from the prior that particle filtering is unlikely to propose. Additionally, over
short intervals, the quadratic cost of the forward-backward step of our algorithm is less pronounced.
Effect of the observation interval length In the next experiment, we more carefully compare the two samplers as the interval length varies. For three settings of the inverse temperature parameter (0.1, 0.5 and 0.9), we calculated the number of effective samples produced per unit time as the length of the observation interval increased from 2 to 50. Once again, we averaged results from 10 random settings of the sMJP parameters. Figure 3 shows the results for the low, medium and high settings of the inverse temperature. Again, we clearly see the benefit of the forward-backward algorithm, especially in the low temperature and short interval regimes where the posterior deviates from the prior. Of course, the performance of our sampler can be improved further using ideas from the discrete-time domain; these can help ameliorate the effect of the quadratic cost for long intervals.
[Figure 4 plots omitted: ESS per second for Uniformization, Thinning, and particle MCMC.]
Figure 4: Effect of increasing the leaving rate of a state. Temperature decreases from the left to right plots.
3 Markov jump processes
In this section, we look at the Markov jump process (MJP), which we saw has constant hazard functions A_{ss′}. MJPs are also defined to disallow self-transitions, so that A_{ss} = 0 ∀ s ∈ S. If we use constant dominating hazard rates B_s, we see from algorithm 1 that all probabilities at time w_i depend only on the current state s_i, and are independent of the holding time l_i. Thus, we no longer need to represent the holding times L. The forward message at time w_i needs only to represent the probability of v_i taking different values in S; this completely specifies the state of the MJP. As a result, the cost of a forward-backward iteration is now linear in |W|.
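A compact sketch of the resulting linear-time FFBS step for an MJP, simplified by assuming a single uniformizing rate B > max_s A_s as in [13], so the inter-event waiting terms are state-independent and cancel; log_lik is a list of per-interval log-likelihood vectors.

    import numpy as np

    def mjp_ffbs(A, B, log_lik, rng):
        N = A.shape[0]
        A_s = A.sum(axis=1)                       # total leaving rates (A[s, s] = 0)
        P = A / B + np.diag(1.0 - A_s / B)        # thin-or-jump transition matrix
        M = len(log_lik)                          # number of candidate intervals
        alphas = [np.full(N, 1.0 / N)]
        for i in range(M - 1):                    # forward pass: O(M N^2)
            a = (alphas[-1] * np.exp(log_lik[i])) @ P
            alphas.append(a / a.sum())            # normalize for stability
        v = np.empty(M, dtype=int)                # backward sampling
        p = alphas[-1] * np.exp(log_lik[-1])
        v[-1] = rng.choice(N, p=p / p.sum())
        for i in range(M - 2, -1, -1):
            p = alphas[i] * np.exp(log_lik[i]) * P[:, v[i + 1]]
            v[i] = rng.choice(N, p=p / p.sum())
        return v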
In the next experiment, we compare Matlab implementations of our thinning-based sampler and the
particle MCMC sampler with the uniformization-based sampler described in section 2.3. Recall
that the latter samples candidate event times W from a homogeneous Poisson process with a state-independent rate B > max_s A_s. Following [13], we set B = 2 max_s A_s. As in section 2.4, we set Ω = 2 for our sampler, so that B_s = 2A_s ∀s. pMCMC was run with 20 particles.
Observe that for uniformization, the rate B is determined by the leaving rate of the most unstable
state; often this is the state the system spends the least time in. To study this, we applied all three
samplers to a 3-state MJP, two of whose states had leaving rates equal to 1. The leaving rate of the third state was varied from 1 to 20 (call this rate γ). On leaving any state, the probability of transitioning to either of the other two was uniformly distributed between 0 and 1. This way, we constructed 10 random MJPs for each γ. We distributed 5 observation times (again, favouring a random state by a factor of 100) over the interval [0, 10]. Like section 2.4, we looked at the ESS per unit time for 3 settings of the inverse temperature parameter β, now as we varied γ.
Figure 4 shows the results. The pMCMC sampler clearly performs worse than the other two. The
Markov structure of the MJP makes the forward-backward algorithm very natural and efficient, by
contrast, running a particle filter with 20 particles took about twice as long as our sampler. Further,
we see that while both the uniformization and our sampler perform comparably for low values of γ, our sampler starts to outperform uniformization for γ's greater than 2. In fact, for weak observations and large γ's, even particle MCMC outperforms uniformization. As we mentioned earlier, this is because for uniformization, the granularity of time-discretization is determined by the least stable state, resulting in very long Markov chains for large values of γ.
3.1 The M/M/∞ queue
We finally apply our ideas to an infinite state MJP from queuing theory, the M/M/∞ queue (also called an immigration-death process [21]). Here, individuals (customers, messages, jobs etc.) enter a population according to a homogeneous Poisson process with rate λ, independent of the population size. The lifespan of each individual (or the job "service time") is exponentially distributed with rate μ, so that the rate at which a "death" occurs in the population is proportional to the population size. Let S(t) represent the population size (or the number of "busy servers") at time t. Then, under the M/M/∞ queue, the stochastic process S(t) evolves according to a simple birth-death Markov jump process on the space S = {0, 1, 2, · · · }, with rates A_{s,s+1} = λ and A_{s,s−1} = sμ. All other rates are 0. Observe that since the population size of the M/M/∞ queue is unbounded, we cannot upper bound the event rates in the system. Thus, uniformization is not directly applicable to this system. Instead, we have to truncate the maximum value of S(t) to some constant, say c. This is the so-called M/M/c/c queue; now, when all c servers are busy, any incoming jobs are rejected.
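The rates involved are simple enough to state in code; passing a finite truncation level gives the M/M/c/c approximation that uniformization requires (the helper name is ours):

    def mm_inf_rates(lam, mu, c=None):
        # M/M/inf birth-death rates: A[s, s+1] = lam, A[s, s-1] = s * mu.
        # A finite c truncates arrivals at population c (the M/M/c/c queue).
        def birth(s):
            return 0.0 if (c is not None and s >= c) else lam
        def death(s):
            return s * mu
        return birth, death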
In the following, we considered an M/M/∞ queue with λ and μ set to 10 and 1 respectively. For some t_end, the state of the system was observed perfectly at three times 0, t_end/10 and t_end, with values 10, 2 and 15 respectively. Conditioned on these, we sought the posterior distribution over the
system trajectory on the interval [0, t_end]. Since the state of the system at time 0 is perfectly observed to be 10, given any time-discretization, the maximum value of s_i at step i of the Markov chain is (10 + i). Thus, message dimensions are always finite, and we can directly apply the forward-backward algorithm. For noisy observations, we can use a slice sampler [22]. We compared our sampler with uniformization; for this, we approximated the M/M/∞ system with an M/M/50/50 system. We also applied our sampler to this truncated approximation, labelling it as "Thinning (trunc)". For both these samplers, the message dimensions were 50. The large state spaces involved make pMCMC very inefficient, and we did not include it in our results.
[Figure 5 plots omitted: ESS for Uniformization, Dependent thinning, and Thinning (trunc).]
Figure 5: The M/M/∞ queue: a) ESS per unit time b) ESS per unit time scaled by interval length.
Figure 5(a) shows the ESS per unit time for all three samplers as we varied the interval length t_end from 1 to 20. Sampling a trajectory over a long interval will take more time than over a short one, and to more clearly distinguish performance for large values of t_end, we scale each ESS from the left plot with t_end, the length of the interval, in the right subplot of figure 5.
We see our sampler always outperforms uniformization, with the difference particularly significant for short intervals. Interestingly, running our thinning-based sampler on the truncated system offers no significant computational benefit over running it on the full model. As the observation interval becomes longer and longer, the MJP trajectory can make larger and larger excursions (especially over the interval [t_end/10, t_end]). Thus as t_end increases, event rates witnessed in posterior trajectories start to increase. As our sampler adapts to this, the number of thinned events in all three samplers starts to become comparable, causing the uniformization-based sampler to approach the performance of the other two samplers. At the same time, we see that the difference between our truncated and our untruncated sampler starts to widen. Of course, we should remember that over long intervals, truncating the system size to 50 becomes more likely to introduce biases into our inferences.
4 Discussion
We described a general framework for MCMC inference in continuous-time discrete-state systems.
Each MCMC iteration first samples a random discretization of time given the trajectory of the system. Given this, we then resample the sMJP trajectory using the forward-backward algorithm. While
we looked only at semi-Markov and Markov jump processes, it is easy to extend our approach to
piecewise-constant stochastic processes with more complicated dependency structures.
For our sampler, a bottleneck in the rate of mixing is that the new and old trajectories share an intermediate discretization W (see figure 1(e)). Recall that an sMJP trajectory defines an instantaneous
hazard function B(t); our scheme requires the discretization sampled from the old hazard function
be compatible with the new hazard function. Thus, the forward-backward algorithm is unlikely to
return a trajectory associated with a hazard function that differs significantly from the old one. By
contrast, for uniformization, the hazard function is a constant B, independent of the system state.
However, this comes at the cost of a conservatively high discretization of time. An interesting direction for future work is to see how different choices of the dominating hazard function can help trade off these factors. For instance, we proposed using a single Ω, with B_s(·) = ΩA_s(·). It is possible to use a different Ω_s for each state s, or even an Ω_s(τ) that varies with time. Similarly, one can consider additive (rather than multiplicative) constructions of B_s(·).
For general sMJPs, the forward-backward algorithm scales quadratically with |W |, the number of
candidate jump times. Such scaling is characteristic of sMJPs, though we can avail of discrete-time
MCMC techniques to ameliorate this. For sMJPs whose hazard functions are constant beyond a
"window of memory", inference scales quadratically with the memory length, and only linearly with |W|. One can use such approximations to devise efficient MH proposals for sMJP trajectories.
References
[1] Ryan P. Adams, Iain Murray, and David J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[2] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations with fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[3] U. Nodelman, C.R. Shelton, and D. Koller. Continuous time Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 378–387, 2002.
[4] Ardavan Saeedi and Alexandre Bouchard-Côté. Priors over recurrent continuous time processes. In Advances in Neural Information Processing Systems 24 (NIPS), volume 24, 2011.
[5] Matthias Hoffman, Hendrik Kueck, Nando de Freitas, and Arnaud Doucet. New inference strategies for solving Markov decision processes using reversible jump MCMC. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI-09), pages 223–231, Corvallis, Oregon, 2009. AUAI Press.
[6] A. Doucet, N. de Freitas, and N. J. Gordon. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. New York: Springer-Verlag, May 2001.
[7] S. Frühwirth-Schnatter. Data augmentation and dynamic linear models. J. Time Ser. Anal., 15:183–202, 1994.
[8] C. K. Carter and R. Kohn. Markov chain Monte Carlo in conditionally Gaussian state space models. Biometrika, 83:589–601, 1996.
[9] Radford M. Neal, Matthew J. Beal, and Sam T. Roweis. Inferring state sequences for non-linear systems with embedded hidden Markov models. In Advances in Neural Information Processing Systems 16 (NIPS), volume 16, pages 401–408. MIT Press, 2004.
[10] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the International Conference on Machine Learning, volume 25, 2008.
[11] M. Dewar, C. Wiggins, and F. Wood. Inference in hidden Markov models with explicit state duration distributions. IEEE Signal Processing Letters, page To Appear, 2012.
[12] Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society Series B, 72(3):269–342, 2010.
[13] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[14] William Feller. On semi-Markov processes. Proceedings of the National Academy of Sciences of the United States of America, 51(4):653–659, 1964.
[15] D. Sonderman. Comparing semi-Markov processes. Mathematics of Operations Research, 5(1):110–119, 1980.
[16] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2008.
[17] J. F. C. Kingman. Poisson Processes, volume 3 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York, 1993. Oxford Science Publications.
[18] A. Beskos and G. O. Roberts. Exact simulation of diffusions. Annals of Applied Probability, 15(4):2422–2444, November 2005.
[19] Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1):7–11, March 2006.
[20] Andrew Golightly and Darren J. Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 1(6):807–820, December 2011.
[21] S. Asmussen. Applied Probability and Queues. Applications of Mathematics. Springer, 2003.
[22] Stephen G. Walker. Sampling the Dirichlet mixture model with slices. Communications in Statistics – Simulation and Computation, 36:45, 2007.
|
4746 |@word mjp:8 confirms:1 simulation:3 tried:1 r:1 propagate:1 initial:2 series:2 united:1 genetic:1 ours:2 interestingly:1 outperforms:4 favouring:1 freitas:2 current:4 discretization:25 comparing:2 si:11 assigning:1 vere:1 additive:1 shape:2 treating:1 plot:4 update:1 resampling:3 stationary:1 generative:3 v:1 intelligence:3 es:12 ith:2 core:2 short:4 coarse:2 characterization:1 coagulation:1 simpler:1 unbounded:3 constructed:2 become:1 thinned:19 introduce:1 manner:1 indeed:1 rapid:1 growing:1 actual:4 window:1 increasing:2 becomes:2 notation:1 moreover:4 medium:1 spends:1 weibull:6 developed:1 remember:1 ti:16 auai:1 biometrika:1 scaled:1 uk:2 control:2 unit:13 ser:1 appear:1 before:2 service:1 engineering:1 local:2 tends:1 limit:2 switching:1 oxford:3 path:5 might:1 burn:1 twice:2 suggests:1 equalled:1 limited:1 averaged:1 practice:1 procedure:2 empirical:1 asi:2 reject:1 significantly:1 cannot:2 yee:1 equivalent:3 customer:1 eighteenth:1 straightforward:4 truncating:1 duration:1 resolution:1 simplicity:2 recovery:1 snew:3 pure:3 correcting:1 iain:1 stability:1 population:6 variation:1 increment:1 annals:1 construction:4 suppose:1 exact:3 homogeneous:4 us:2 element:2 expensive:1 approximated:1 updating:1 particularly:1 observed:3 enters:1 vine:1 news:1 decrease:2 trade:1 forwardbackward:1 burstiness:1 mentioned:1 feller:1 wilkinson:1 dynamic:2 trunc:3 depend:1 solving:2 segment:1 compromise:1 completely:2 joint:1 mh:2 america:1 instantiated:1 fast:1 describe:2 london:2 monte:5 effective:13 artificial:3 plummer:1 tell:1 avi:3 choosing:1 birth:1 quite:1 whose:2 supplementary:2 dominating:8 larger:3 spend:1 s:2 drawing:2 say:1 ability:2 statistic:3 noisy:1 beal:1 advantage:2 sequence:7 matthias:1 ucl:2 propose:2 took:1 fr:1 causing:1 relevant:1 entered:1 parametrizations:1 mixing:1 adapts:1 roweis:1 academy:1 moved:1 pronounced:1 nicky:1 convergence:1 empty:2 produce:1 adam:1 converges:1 help:3 spent:1 recurrent:1 ac:2 martyn:1 avail:2 andrew:1 job:3 strong:1 auxiliary:2 recovering:1 involves:2 implies:2 implemented:2 come:1 differ:1 direction:1 drawback:1 correct:2 unew:2 filter:1 stochastic:4 nando:1 material:2 behaviour:1 really:1 proposition:4 ryan:1 hold:14 considered:1 propogate:1 matthew:1 major:1 vary:1 sought:1 smallest:1 resample:4 applicable:2 lose:1 label:2 superposition:1 saw:1 correctness:1 hoffman:1 mit:1 clearly:3 gaussian:4 always:4 rather:2 factorizes:1 probabilistically:1 publication:1 l0:2 focus:3 derived:1 modelling:1 likelihood:7 contrast:4 inference:14 dependent:4 biochemical:1 unlikely:2 accept:1 hidden:4 koller:1 raised:1 special:3 fairly:1 mackay:1 marginal:1 equal:5 construct:2 saving:1 having:4 once:1 sampling:17 look:1 icml:1 jones:1 thin:1 future:4 piecewise:4 roman:1 simplify:1 few:1 gordon:1 randomly:1 widen:1 national:1 individual:2 saatci:1 vrao:1 william:1 continuoustime:1 immigration:1 interest:1 huge:1 message:4 introduces:1 mixture:1 chain:8 integral:1 filled:2 old:4 circle:3 increased:2 instance:2 earlier:1 witnessed:1 rao:2 markovian:1 vinayak:1 assignment:1 cost:6 deviation:1 subset:1 uniform:1 too:1 dependency:1 varies:2 density:1 international:3 randomized:1 stay:1 off:1 together:1 quickly:1 w1:3 again:4 augmentation:1 successively:1 choose:1 worse:1 lii:1 inefficient:1 kingman:1 return:2 li:42 busy:2 de:2 summarized:1 includes:1 oregon:1 kate:1 vi:36 performed:1 later:1 queuing:1 multiplicative:1 start:5 complicated:3 bouchard:1 characteristic:1 efficiently:2 correspond:1 yield:1 weak:2 raw:1 bayesian:4 produced:2 comparably:1 marginally:1 
carlo:5 trajectory:38 none:2 rss0:5 holenstein:1 explain:1 pp:1 involved:1 associated:3 proof:2 sampled:1 popular:1 recall:2 carefully:1 thinning:24 back:2 alexandre:1 clarendon:1 originally:1 higher:1 dt:1 asmussen:1 improved:1 done:1 though:2 refractoriness:1 just:3 rejected:3 stage:2 until:1 hand:1 hastings:2 nonlinear:1 reversible:1 widespread:1 defines:1 effect:8 consisted:1 unbiased:1 true:1 andrieu:1 entering:3 arnaud:2 death:3 neal:1 eg:2 conditionally:1 during:1 self:5 coda:2 whye:1 workhorse:1 performs:1 l1:1 temperature:8 interface:1 resamples:1 wise:1 instantaneous:3 novel:1 recently:1 endpoint:1 insensitive:1 exponentially:1 volume:4 extend:1 significant:2 corvallis:1 gibbs:2 enter:1 grid:1 mathematics:2 similarly:1 particle:29 had:1 stable:1 longer:3 v0:3 etc:2 posterior:8 driven:1 discard:3 verlag:1 server:2 outperforming:1 christophe:1 devise:1 greater:2 subplot:1 redundant:1 period:2 signal:1 semi:7 stephen:1 full:1 reduces:2 bvi:6 faster:2 adapt:1 offer:1 long:6 hazard:26 essentially:1 poisson:20 iteration:3 represent:8 beam:1 proposal:4 affecting:1 interval:31 else:1 median:1 leaving:6 walker:1 appropriately:1 ot:1 tend:15 december:1 call:3 granularity:3 intermediate:1 easy:1 subplots:1 independence:1 gave:1 perfectly:3 impediment:1 reduce:1 idea:6 beskos:1 blundell:1 t0:1 whether:1 motivated:1 bottleneck:1 kohn:1 queue:8 returned:1 karen:1 york:2 cause:1 matlab:2 gael:1 amount:1 nonparametric:1 lifespan:1 carter:1 generate:2 specifies:1 outperform:1 exist:2 shifted:1 neuroscience:2 per:21 diagnosis:1 discrete:11 waiting:10 drawn:3 wasteful:1 saeedi:1 diffusion:1 backward:18 concreteness:1 wood:1 run:5 inverse:5 letter:1 uncertainty:3 ameliorate:2 place:1 almost:1 excursion:1 draw:2 decision:1 scaling:1 comparable:1 bound:3 resampled:1 distinguish:2 quadratic:2 nonnegative:1 annual:1 occur:1 infinity:1 ywteh:1 relatively:2 structured:1 ss0:31 according:3 alternate:1 truncate:1 march:1 smaller:5 increasingly:1 sam:1 wi:48 appealing:1 metropolis:2 evolves:1 b:9 untruncated:1 ardavan:1 computationally:1 equation:3 turn:1 discus:1 eventually:1 tractable:1 end:3 serf:1 wii:1 operation:2 apply:4 observe:5 original:1 running:3 include:2 dirichlet:1 graphical:3 instant:1 calculating:1 exploit:1 giving:3 ghahramani:1 especially:2 murray:1 classical:1 society:1 move:2 quantity:3 looked:2 occurs:1 strategy:2 parametrized:1 w0:3 unstable:2 reason:1 length:12 robert:1 holding:4 implementation:2 anal:1 perform:1 teh:4 discretize:1 allowing:1 av:1 observation:23 markov:31 upper:2 finite:5 november:1 truncated:4 situation:2 communication:1 varied:4 wiggins:1 arbitrary:1 uniformization:20 community:1 intensity:5 david:1 pair:1 required:1 quadratically:3 alternately:1 nip:2 beyond:1 proceeds:1 dynamical:1 below:3 mjps:3 regime:1 hendrik:1 including:1 memory:3 max:3 royal:1 event:35 difficulty:2 natural:1 recursion:1 nth:1 scheme:1 golightly:1 coupled:1 deviate:2 prior:6 literature:1 nodelman:1 embedded:1 expect:1 interesting:1 filtering:5 proportional:3 elliott:1 s0:29 share:1 maxt:1 course:2 compatible:1 pmcmc:10 repeat:4 last:1 keeping:1 bias:2 allow:1 guide:1 disallow:1 comprehensively:1 taking:2 distributed:9 slice:3 benefit:3 bs:1 calculated:2 transition:12 dimension:2 van:1 conservatively:1 forward:19 jump:16 doucet:3 mins0:1 sequentially:1 incoming:1 uai:2 xi:6 continuous:17 additionally:1 du:5 as:2 domain:2 did:1 linearly:1 bounding:1 x1:3 schnatter:1 depicts:1 elaborate:1 gatsby:4 favoured:1 inferring:1 explicit:1 daley:1 exponential:1 candidate:6 lie:1 third:1 theorem:1 
transitioning:2 specific:2 exists:1 sequential:1 fragmentation:1 labelling:2 conditioned:4 simply:1 likely:2 tracking:1 springer:3 radford:1 darren:1 determines:1 wnew:1 formulated:1 towards:1 included:1 infinite:4 except:1 determined:2 uniformly:1 sampler:53 averaging:1 called:3 college:2 latter:3 meant:1 evaluate:1 mcmc:30 shelton:1 cowles:1
|
4,140 | 4,747 |
Learning with Recursive Perceptual Representations
Oriol Vinyals
UC Berkeley
Berkeley, CA
Li Deng
Microsoft Research
Redmond, WA
Yangqing Jia
UC Berkeley
Berkeley, CA
Trevor Darrell
UC Berkeley
Berkeley, CA
Abstract
Linear Support Vector Machines (SVMs) have become very popular in vision as
part of state-of-the-art object recognition and other classification tasks but require
high dimensional feature spaces for good performance. Deep learning methods
can find more compact representations but current methods employ multilayer
perceptrons that require solving a difficult, non-convex optimization problem. We
propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of
linear SVMs recursively transforming the original data manifold through a random projection of the weak prediction computed from each layer. Our method
scales as linear SVMs, does not rely on any kernel computations or nonconvex
optimization, and exhibits better generalization ability than kernel-based SVMs.
This is especially true when the number of training samples is smaller than the
dimensionality of data, a common scenario in many real-world applications. The
use of random projections is key to our method, as we show in the experiments
section, in which we observe a consistent improvement over previous (often more complicated) methods on several vision and speech benchmarks.
1 Introduction
In this paper, we focus on the learning of a general-purpose non-linear classifier applied to perceptual
signals such as vision and speech. The Support Vector Machine (SVM) has been a popular method
for multimodal classification tasks since its introduction, and one of its main advantages is the simplicity of training a linear model. Linear SVMs often fail to solve complex problems however, and
with non-linear kernels, SVMs usually suffer from speed and memory issues when faced with very
large-scale data, although techniques such as non-convex optimization [6] or spline approximations
[19] exist for speed-ups. In addition, finding the "oracle" kernel for a specific task remains an open
problem, especially in applications such as vision and speech.
Our aim is to design a classifier that combines the simplicity of the linear Support Vector Machine
(SVM) with the power derived from deep architectures. The new technique we propose follows
the philosophy of "stacked generalization" [23], i.e. the framework of building layer-by-layer architectures, and is motivated by the recent success of a convex stacking architecture which uses a
simplified form of neural network with closed-form, convex learning [10]. Specifically, we propose
a new stacking technique for building a deep architecture, using a linear SVM as the base building
block, and a random projection as its core stacking element.
The proposed model, which we call the Random Recursive SVM (R2 SVM), involves an efficient,
feed-forward convex learning procedure. The key element in our convex learning of each layer is to
randomly project the predictions of the previous layer SVM back to the original feature space. As we
will show in the paper, this could be seen as recursively transforming the original data manifold so
that data from different classes are moved apart, leading to better linear separability in the subsequent
layers. In particular, we show that randomly generating projection parameters, instead of fine-tuning
them using backpropagation, suffices to achieve a significant performance gain. As a result, our
Figure 1: A conceptual example of Random Recursive SVM separating edges from cross-bars. Starting from data manifolds that are not linearly separable, our method transforms the data manifolds
in a stacked way to find a linear separating hyperplane in the high layers, which corresponds to
non-linear separating hyperplanes in the lower layers. Non-linear classification is achieved without
kernelization, using a recursive architecture.
model does not require any complex learning techniques other than training linear SVMs, while
canonical deep architectures usually require carefully designed pre-training and fine-tuning steps,
which often depend on specific applications.
Using linear SVMs as building blocks our model scales in the same way as the linear SVM does,
enabling fast computation during both training and testing time. While linear SVM fails to solve
non-linearly separable problems, the simple non-linearity in our algorithm, introduced with sigmoid
functions, is shown to adapt to a wide range of real-world data with the same learning structure.
From a kernel based perspective, our method could be viewed as a special non-linear SVM, with
the benefit that the non-linear kernel naturally emerges from the stacked structure instead of being defined as in conventional algorithms. This brings additional flexibility to the applications, as
task-dependent kernel designs usually require detailed domain-specific knowledge, and may not
generalize well due to suboptimal choices of non-linearity. Additionally, kernel SVMs usually suffer from speed and memory issues when faced with large-scale data, although techniques such as
non-convex optimization [6] exist for speed-ups.
Our findings suggest that the proposed model, while keeping the simplicity and efficiency of training
a linear SVM, can exploit non-linear dependencies with the proposed deep architecture, as suggested
by the results on two well known vision and speech datasets. In addition, our model performs better
than other non-linear models under small training set sizes (i.e. it exhibits better generalization gap),
which is a desirable property inherited from the linear model used in the architecture presented in
the paper.
2 Previous Work
There has been a trend on object, acoustic and image classification to move the complexity from
the classifier to the feature extraction step. The main focus of many state of the art systems has
been to build rich feature descriptors (e.g. SIFT [18], HOG [7] or MFCC [8]), and use sophisticated
non-linear classifiers, usually based on kernel functions and SVM or mixture models. Thus, the
complexity of the overall system (feature extractor followed by the non-linear classifier) is shared
in the two blocks. Vector Quantization [12], and Sparse Coding [21, 24, 26] have theoretically and
empirically been shown to work well with linear classifiers. In [4], the authors note that the choice
of codebook does not seem to impact performance significantly, and encoding via an inner product
plus a non-linearity can effectively replace sparse coding, making testing significantly simpler and
faster.
A disturbing issue with sparse coding + linear classification is that with a limited codebook size,
linear separability might be an overly strong statement, undermining the use of a single linear classifier. This has been empirically verified: as we increase the codebook size, the performance keeps
improving [4], indicating that such representations may not be able to fully exploit the complexity
of the data [2]. In fact, recent success on PASCAL VOC could partially be attributed to a huge
codebook [25]. While this is theoretically valid, the practical advantage of linear models diminishes quickly, as the computation cost of feature generation, as well as training a high-dimensional
classifier (despite linear), can make it as expensive as classical non-linear classifiers.
Despite this trend to rely on linear classifiers and overcomplete feature representations, sparse coding is still a flat model, and efforts have been made to add flexibility to the features. In particular,
Deep Coding Networks [17] proposed an extension where a higher order Taylor approximation of
the non-linear classification function is used, which shows improvements over coding that uses one
layer. Our approach can be seen as an extension to sparse coding used in a stacked architecture.
Stacking is a general philosophy that promotes generalization in learning complex functions and that
improves classification performance. The method presented in this paper is a new stacking technique
that has close connections to several stacking methods developed in the literature, which are briefly
surveyed in this section. In [23], the concept of stacking was proposed where simple modules of
functions or classifiers are "stacked" on top of each other in order to learn complex functions or
classifiers. Since then, various ways of implementing stacking operations have been developed, and
they can be divided into two general categories. In the first category, stacking is performed in a
layer-by-layer fashion and typically involves no supervised information. This gives rise to multiple
layers in unsupervised feature learning, as exemplified in Deep Belief Networks [14, 13, 9], layered
Convolutional Neural Networks [15], Deep Auto-encoder [14, 9], etc. Applications of such stacking
methods includes object recognition [15, 26, 4], speech recognition [20], etc.
In the second category of techniques, stacking is carried out using supervised information. The modules of the stacking architectures are typically simple classifiers. The new features for the stacked
classifier at a higher level of the hierarchy come from concatenation of the classifier output of lower
modules and the raw input features. Cohen and de Carvalho [5] developed a stacking architecture
where the simple module is a Conditional Random Field. Another successful stacking architecture
reported in [10, 11] uses supervised information for stacking where the basic module is a simplified
form of multilayer perceptron where the output units are linear and the hidden units are sigmoidal
nonlinear. The linearity in the output units permits highly efficient, closed-form estimation (results
of convex optimization) for the output network weights given the hidden units' outputs. Stacked
context has also been used in [3], where a set of classifier scores are stacked to produce a more
reliable detection. Our proposed method will build a stacked architecture where each layer is an
SVM, which has proven to be a very successful classifier for computer vision applications.
3 The Random Recursive SVM
In this section we formally introduce the Random Recursive SVM model, and discuss the motivation and justification behind it. Specifically, we consider a training set that contains N pairs of tuples (d^(i), y^(i)), where d^(i) ∈ R^D is the feature vector, and y^(i) ∈ {1, . . . , C} is the class label corresponding to the i-th sample.
As depicted in Figure 2(b), the model is built by multiple layers of blocks, which we call Random
SVMs, that each learns a linear SVM classifier and transforms the data based on a random projection
of previous layers SVM outputs. The linear SVM classifiers are learned in a one-vs-all fashion.
For convenience, let θ ∈ R^(D×C) be the classification matrix obtained by stacking each parameter vector column-wise, so that o^(i) = θ^T d^(i) is the vector of scores for each class corresponding to the sample d^(i), and ŷ^(i) = arg max_c θ_c^T d^(i) is the prediction for the i-th sample if we want to make final predictions. From this point onward, we drop the index (i) for the i-th sample for notational convenience.
3.1 Recursive Transform of Input Features
Figure 2(b) visualizes one typical layer in the pipeline of our algorithm. Each layer takes the output of the previous layer (starting from x_1 = d for the first layer as our initial input), and feeds it to a standard linear SVM that gives the output o_1. In general, o_1 would not be a perfect prediction, but would be better than a random guess. We then use a random projection matrix W_{2,1} ∈ R^(D×C) whose elements are sampled from N(0, 1) to project the output o_1 into the original feature space,
Figure 2: The pipeline of the proposed Random Recursive SVM model. (a) The model is built with
layers of Random SVM blocks, which are based on simple linear SVMs. Speech and image signals
are provided as input to the first level. (b) For each random SVM layer, we train a linear SVM
using the transformed data manifold by combining the original features and random projections of
previous layers' predictions.
in order to use this noisy prediction to modify the original features. Mathematically, the additively modified feature space after applying the linear SVM to obtain o_1 is:

x_2 = σ(d + β W_{2,1} o_1),

where β is a weight parameter that controls the degree with which we move the original data sample x_1, and σ(·) is the sigmoid function, which introduces non-linearity in a similar way as in the multilayer perceptron models, and prevents the recursive structure from degenerating to a trivial linear model. In addition, such non-linearity, akin to neural networks, has desirable properties in terms of Gaussian complexity and generalization bounds [1].
Intuitively, the random projection aims to push data from different classes towards different directions, so that the resulting features are more likely to be linearly separable. The sigmoid function controls the scale of the resulting features, and at the same time prevents the random projection from being "too confident" on some data points, as the prediction of the lower layer is still imperfect. An important note is that, when the dimension of the feature space D is relatively large, the column vectors of W_l are very likely to be approximately orthogonal, known as the quasi-orthogonality property of high-dimensional spaces [16]. At the same time, the column vectors correspond to the per-class bias applied to the original sample d if the output were close to ideal (i.e. o_l = e_c, where e_c is the one-hot encoding representing class c), so the fact that they are approximately orthogonal means that (with high probability) they are pushing the per-class manifolds apart.
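The quasi-orthogonality property is easy to check numerically. Below is a small illustrative sketch; the dimension D and class count C are arbitrary choices, not values from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 2000, 10                          # illustrative feature dimension and class count
W = rng.standard_normal((D, C))          # random projection, entries from N(0, 1)
W /= np.linalg.norm(W, axis=0)           # unit-normalize each column

gram = W.T @ W                           # pairwise cosines between columns
off_diag = np.abs(gram[~np.eye(C, dtype=bool)])
print("max |cosine| between distinct columns:", off_diag.max())
```

For D in the thousands, the maximum off-diagonal cosine is typically well below 0.1, so the per-class offsets point in nearly orthogonal directions, consistent with the argument above.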
The training of the R2 SVM is then carried out in a purely feed-forward way. Specifically, we train a linear SVM for the l-th layer, and then compute the input of the next layer as the addition of the original feature space and the random projection of previous layers' outputs, which is then passed through a simple sigmoid function:

o_l = θ_l^T x_l
x_{l+1} = σ(d + β W_{l+1} [o_1^T, o_2^T, . . . , o_l^T]^T)

where θ_l are the linear SVM parameters trained with x_l, and W_{l+1} is the concatenation of l random projection matrices [W_{l+1,1}, W_{l+1,2}, . . . , W_{l+1,l}], one for each previous layer, each being a random matrix sampled from N(0, 1).
Following [10], for each layer we use the outputs from all lower modules, instead of only the immediately lower module. A chief difference of our proposed method from previous approaches is that, instead of concatenating predictions with the raw input data to form the new expanded input data, we use the predictions to modify the features in the original space with a non-linear transformation. As will be shown in the next section, experimental results demonstrate that this approach is superior to simple concatenation in terms of classification performance.
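The feed-forward training pass just described is compact enough to sketch directly. The following is a minimal illustration, assuming scikit-learn's LinearSVC as the per-layer one-vs-all linear SVM and a problem with more than two classes (so that decision_function returns an (N, C) score matrix); the function name and hyperparameter defaults are our choices, not values fixed by the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_r2svm(d, y, n_layers=10, beta=0.1, svm_C=1.0, seed=0):
    """d: (N, D) raw features; y: (N,) integer labels from C > 2 classes."""
    rng = np.random.default_rng(seed)
    N, D = d.shape
    C = len(np.unique(y))
    x, svms, projections, outputs = d, [], [], []
    for _ in range(n_layers):
        svm = LinearSVC(C=svm_C).fit(x, y)        # theta_l: one-vs-all linear SVM
        outputs.append(svm.decision_function(x))  # o_l, shape (N, C)
        svms.append(svm)
        # one fresh N(0,1) projection block per previous layer: W_{l+1,j} in R^{D x C}
        W = [rng.standard_normal((D, C)) for _ in outputs]
        projections.append(W)
        shift = sum(o @ Wj.T for Wj, o in zip(W, outputs))  # (N, D) additive shift
        x = sigmoid(d + beta * shift)              # x_{l+1}
    return svms, projections
```

At test time, the stored SVMs and the same projection matrices are replayed in order on the test features, and the argmax of the last layer's scores gives the predicted class.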
3.2 On the Randomness in R2 SVM
The motivation behind our method is that projections of previous predictions help to move apart the
manifolds that belong to each class in a recursive fashion, in order to achieve better linear separability (Figure 1 shows a vision example separating different image patches).
Specifically, consider that we have a two class problem which is non-linearly separable. The following Lemma illustrates the fact that, if we are given an oracle prediction of the labels, it is possible to
add an offset to each class to "pull" the manifolds apart with this new architecture, and to guarantee an improvement on the training set if we assume perfect labels.
Lemma 3.1 Let T be a set of N tuples (d^(i), y^(i)), where d^(i) ∈ R^D is the feature vector, and y^(i) ∈ {1, . . . , C} is the class label corresponding to the i-th sample. Let θ ∈ R^(D×C) be the corresponding linear SVM solution with objective function value f_{T,θ}. Then, there exist w_i ∈ R^D for i ∈ {1, . . . , C} such that the translated set T' defined as (d^(i) + w_{y^(i)}, y^(i)) has a linear SVM solution θ' which achieves a better optimum f_{T',θ'} < f_{T,θ}.
Proof Let θ_i be the i-th column of θ (which corresponds to the one-vs-all classifier for class i). Define w_i = θ_i / ‖θ_i‖_2^2. Then we have

max(0, 1 − θ_{y^(i)}^T (d^(i) + w_{y^(i)})) = max(0, 1 − (θ_{y^(i)}^T d^(i) + 1)) ≤ max(0, 1 − θ_{y^(i)}^T d^(i)),

which leads to f_{T',θ} ≤ f_{T,θ}. Since θ' is defined to be the optimum for the set T', f_{T',θ'} ≤ f_{T',θ}, which concludes the proof.
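The key step of the proof, that translating each sample by w_{y^(i)} raises its correct-class margin by exactly one, can be verified numerically. The sketch below uses arbitrary random data and sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, C = 200, 20, 3
d = rng.standard_normal((N, D))                      # samples d^(i)
y = rng.integers(0, C, size=N)                       # labels y^(i)
theta = rng.standard_normal((D, C))                  # columns theta_c

w = theta / (np.linalg.norm(theta, axis=0) ** 2)     # w_c = theta_c / ||theta_c||_2^2
margin_before = np.sum(d * theta[:, y].T, axis=1)    # theta_y^T d
margin_after = np.sum((d + w[:, y].T) * theta[:, y].T, axis=1)

hinge = lambda m: np.maximum(0.0, 1.0 - m)
assert np.allclose(margin_after, margin_before + 1.0)        # theta_y^T w_y = 1
assert np.all(hinge(margin_after) <= hinge(margin_before))   # no hinge term increases
```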
Lemma 3.1 would work for any monotonically decreasing loss function (in particular, for the hinge loss of SVM), and motivates our search for a transform of the original features to achieve linear separability, under the guidance of SVM predictions. Note that we would achieve perfect classification under the assumption that we have oracle labels, while we only have noisy predictions ŷ^(i) during testing time. Under such noisy predictions, a deterministic choice of w_i, especially linear combinations of the data as in the proof of Lemma 3.1, suffers from over-confidence in the labels and may add little benefit to the learned linear SVMs.
A first choice to avoid degenerate results is to take random weights. This enables us to use label-relevant information in the predictions, while at the same time de-correlating it with the original input d. Surprisingly, as shown in Figure 4(a), randomness achieves a significant performance gain in contrast to the "optimal" direction given by Lemma 3.1 (which degenerates due to imperfect predictions), or alternative stacking strategies such as concatenation as in [10]. We also note that beyond sampling projection matrices from a zero-mean Gaussian distribution, a biased sampling that favors directions near the "optimal" direction may also work, but the degree of bias would be empirically difficult to determine and may be data-dependent. In general, we aim to avoid supervision in the projection parameters, as trying to optimize the weights jointly would defeat the purpose of having a computationally efficient method, and would, perhaps, increase training accuracy at the expense of over-fitting. The risk of over-fitting is also lower in this way, as we do not increase the dimensionality of the input space, and we do not learn the matrices W_l, which means we pass a weak signal from layer to layer. Also, training Random Recursive SVM is carried out in a feed-forward way, where each step involves a convex optimization problem that can be efficiently solved.
3.3 Synthetic examples
To visually show the effectiveness of our approach in learning non-linear SVM classifiers without
kernels, we apply our algorithm to two synthetic examples, neither of which can be linearly separated. The first example contains two classes distributed in a two-moon shaped way, and the second
example contains data distributed as two more complex spirals. Figure 3 visualizes the classification
hyperplane at different stages of our algorithm. The first layer of our approach is identical to the
linear SVM, which is not able to separate the data well. However, when classifiers are recursively
stacked in our approach, the classification hyperplane is able to adapt to the nonlinear characteristics
of the two classes.
4 Experiments
In this section we empirically evaluate our method, and support our claims: (1) for low-dimensional features, linear SVMs suffer from their limited representation power, while R2 SVMs significantly improve performance; (2) for high-dimensional features, and especially when faced with a limited amount of training data, R2 SVMs exhibit better generalization power than conventional kernelized non-linear SVMs; and (3) the random, feed-forward learning scheme is able to achieve state-of-the-art performance, without complex fine-tuning.
Figure 3: Classification hyperplane from different stages of our algorithm: first layer, second layer,
and final layer outputs. (a)-(c) show the two-moon data and (d)-(f) show the spiral data.
Figure 4: Results on CIFAR-10. (a) Accuracy versus number of layers on CIFAR-10 for Random
Recursive SVM with all the training data and 50 codebook size, for a baseline where the output of
a classifier is concatenated with the input feature space, and for a deterministic version of recursive
SVM where the projections are as in the proof of Lemma 3.1. (b) Accuracy versus codebook size
on CIFAR-10 for linear SVM, RBF SVM, and our proposed method.
We describe the experimental results on two well-known classification benchmarks: CIFAR-10 and TIMIT. The CIFAR-10 dataset contains a large amount of training/testing data focusing on object
classification. TIMIT is a speech database that contains two orders of magnitude more training
samples than the other datasets, and the largest output label space.
Recall that our method relies on two parameters: β, which is the factor that controls how much we shift the original feature space, and C, the regularization parameter of the linear SVM trained at each layer. β is set to 1/10 for all the experiments, a value experimentally found to work well for one of the CIFAR-10 configurations. C controls the regularization of each layer, and is an important parameter: setting it too high will yield overfitting as the number of layers is increased. As a result, we learned this parameter via cross validation for each configuration, which is the usual practice of other approaches. Lastly, for each layer, we sample a new random matrix W_l. As a result, even
if the training and testing sets are fixed, randomness still exists in our algorithm. Although one
may expect the performance to fluctuate from run to run, in practice we never observe a standard
deviation larger than 0.25 (and typically less than 0.1) for the classification accuracy, over multiple
runs of each experiment.
CIFAR-10
The CIFAR-10 dataset contains 10 object classes with a fair amount of training examples per class
(5000), with images of small size (32x32 pixels). For this dataset, we follow the standard pipeline
defined in [4]: dense 6x6 local patches with ZCA whitening are extracted with stride 1, and thresholding coding with α = 0.25 is adopted for encoding. The codebook is trained with OMP-1. The features are then average-pooled on a 2 × 2 grid to form the global image representation. We tested
three classifiers: linear SVM, RBF kernel based SVM, and the Random Recursive SVM model as
introduced in Section 3.
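For concreteness, the encoding step of this pipeline can be sketched as follows. This is a toy illustration in which the codebook is random rather than learned with OMP-1 and the ZCA whitening is omitted; the soft-threshold encoder max(0, w^T x − α) follows the formulation studied in [4]:

```python
import numpy as np

rng = np.random.default_rng(2)
n_patches, patch_dim, k = 500, 6 * 6 * 3, 50       # 6x6 RGB patches, 50 codes
patches = rng.standard_normal((n_patches, patch_dim))
codebook = rng.standard_normal((k, patch_dim))     # stand-in for an OMP-1 dictionary
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

alpha = 0.25
codes = np.maximum(0.0, patches @ codebook.T - alpha)  # (n_patches, k) encoding
# Average-pooling these codes over a 2x2 spatial grid per image would then
# form the global representation fed to the classifiers compared below.
```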
Table 1: Results on CIFAR-10, with different codebook sizes (hence feature dimensions).

Method      Tr. Size   Code. Size   Acc.
Linear SVM  All        50           64.7%
RBF SVM     All        50           74.4%
R2 SVM      All        50           69.3%
DCN         All        50           67.2%
Linear SVM  All        1600         79.5%
RBF SVM     All        1600         79.0%
R2 SVM      All        1600         79.7%
DCN         All        1600         78.1%

Table 2: Results on CIFAR-10, with 25 training data per class.

Method      Tr. Size   Code. Size   Acc.
Linear SVM  25/class   50           41.3%
RBF SVM     25/class   50           42.2%
R2 SVM      25/class   50           42.8%
DCN         25/class   50           40.7%
Linear SVM  25/class   1600         44.1%
RBF SVM     25/class   1600         41.6%
R2 SVM      25/class   1600         45.1%
DCN         25/class   1600         42.7%

As has been shown in Figure 4(b), the performance is almost monotonically increasing as we stack more layers in R2 SVM. Also, stacking SVMs by concatenation of output and input feature space does not yield much gain above 1 layer (which is a linear SVM), and neither does a deterministic
version of recursive SVM where a projection matrix as in the proof for Lemma 3.1 is used. For
the R2 SVM, in most cases the performance asymptotically converges within 30 layers. Note that
training each layer involves training a linear SVM, so the computational complexity is simply linear
to the depth of our model. In contrast to this, the difficulty of training deep learning models based on
many hidden layers may be significantly harder, partially due to the lack of supervised information
for its hidden layers.
Figure 4(b) shows the effect that the feature dimensionality (controlled by the codebook size of
OMP-1) has on the performance of the linear and non-linear classifiers, and Table 1 provides representative numerical results. In particular, when the codebook size is low, the assumption that we
can approximate the non-linear function f as a globally linear classifier fails, and in those cases the
R2 SVM and RBF SVM clearly outperform the linear SVM. Moreover, as the codebook size grows,
non-linear classifiers, represented by RBF SVM in our experiments, suffer from the curse of dimensionality partially due to the large dimensionality of the over-complete feature representation. In
fact, as the dimensionality of the over-complete representation becomes too large, RBF SVM starts
performing worse than linear SVM. For linear SVM, increasing the codebook size makes it perform
better with respect to non-linear classifiers, but additional gains can still be consistently obtained by
the Random Recursive SVM method. Also note how our model outperforms DCN, another stacking
architecture proposed in [10].
Similar to the change of codebook sizes, it is interesting to experiment with the number of training
examples per class. In the case where we use fewer training examples per class, little gain is obtained
by classical RBF SVMs, and performance even drops when the feature dimension is too high (Table 2), while our Random Recursive SVM remains competitive and does not overfit more than any
baseline. This again suggests that our proposed method may generalize better than RBF, which is a
desirable property when the number of training examples is small with respect to the dimensionality
of the feature space, which are cases of interest to many computer vision applications.
In general, our method is able to combine the advantages of both linear and nonlinear SVM: it has
higher representation power than linear SVM, providing consistent performance gains, and at the
same time has a better robustness against overfitting. It is also worth pointing out again that R2 SVM
is highly efficient, since each layer is a simple linear SVM that can be carried out by simple matrix
multiplication. On the other hand, non-linear SVMs like RBF SVM may take much longer to run
especially for large-scale data, when special care has to be taken [6].
TIMIT
Finally, we report our experiments using the popular speech database TIMIT. The speech data is
analyzed using a 25-ms Hamming window with a 10-ms fixed frame rate. We represent the speech
using first- to 12th-order Mel frequency cepstral coefficients (MFCCs) and energy, along with their
first and second temporal derivatives. The training set consists of 462 speakers, with a total number
of frames in the training data of size 1.1 million, making classical kernel SVMs virtually impossible
to train. The development set contains 50 speakers, with a total of 120K frames, and is used for
cross validation. Results are reported using the standard 24-speaker core test set consisting of 192
sentences with 7333 phone tokens and 57920 frames.
The data is normalized to have zero mean and unit variance. All experiments used a context window
of 11 frames. This gives a total of 39 × 11 = 429 elements in each feature vector. We used 183 target class labels (i.e., three states for each of the 61 phones), which are typically called "phone states", with a one-hot encoding.

Table 3: Performance comparison on TIMIT.

Method                   Phone state accuracy
Linear SVM               50.1% (2000 codes), 53.5% (8000 codes)
R2 SVM                   53.5% (2000 codes), 55.1% (8000 codes)
DCN, learned per-layer   48.5%
DCN, jointly fine-tuned  54.3%
The pipeline adopted is otherwise unchanged from the previous dataset. However, we did not apply pooling, and instead coded the whole 429 dimensional vector with a dictionary with 2000 and
8000 elements found with OMP-1, with the same parameter α as in the vision tasks. The competitive results with a framework known in vision adapted to speech [22], as shown in Table 3, are
interesting on their own right, as the optimization framework for linear SVM is well understood,
and the dictionary learning and encoding step are almost trivial and scale well with the amounts of
data available in typical speech tasks. On the other hand, our R2 SVM boosts performance quite
significantly, similar to what we observed on other datasets.
In Table 3 we also report recent work on this dataset [10], which uses multi-layer perceptron with
a hidden layer and linear output, and stacks each block on top of each other. In their experiments,
the representation used from the speech signal is not sparse, and instead uses a Restricted Boltzmann Machine, which is more time-consuming to learn. In addition, only when jointly optimizing the
network weights (fine tuning), which requires solving a non-convex problem, the accuracy achieves
state-of-the-art performance of 54.3%. Our method does not include this step, which could be
added as future work; we thus think the fairest comparison of our result is to the per-layer DCN
performance.
In all the experiments above we have observed two advantages of R2 SVM. First, it provides a consistent improvement over linear SVM. Second, it can offer a better generalization ability over nonlinear SVMs, especially when the ratio of dimensionality to the number of training data is large.
These advantages, combined with the fact that R2 SVM is efficient in both training and testing, suggests that it could be adopted as an improvement over the existing classification pipeline in general.
We also note that in the current work we have not employed techniques of fine tuning similar to
the one employed in the architecture of [10]. Fine tuning of the latter architecture has accounted
for between 10% and 20% error reduction, and reduces the need for having large depth in order to
achieve a fixed level of recognition accuracy. Development of fine-tuning is expected to improve
recognition accuracy further, and is in the interest of future research. However, even without fine
tuning, the recognition accuracy is still shown to consistently improve until convergence, showing
the robustness of the proposed method.
5 Conclusions and Future Work
In this paper, we investigated low-level vision and audio representations. We combined the simplicity of linear SVMs with the power derived from deep architectures, and proposed a new stacking technique for building a better classifier, using linear SVM as the base building block and employing a random non-linear projection to add flexibility to the model. Our work is partially motivated by the
recent trend of using coding techniques as feature representation with relatively large dictionaries.
The chief advantage of our method lies in the fact that it learns non-linear classifiers without the
need of kernel design, while keeping the efficiency of linear SVMs. Experimental results on vision
and speech datasets showed that the method provides consistent improvement over linear baselines,
even with no learning of the model parameters. The convexity of our model could lead to better
theoretical analysis of such deep structures in terms of generalization gap, adds interesting opportunities for learning using large computer clusters, and would potentially help understanding the
nature of other deep learning approaches, which is the main interest of future research.
References
[1] P L Bartlett and S Mendelson. Rademacher and gaussian complexities: Risk bounds and
structural results. The Journal of Machine Learning Research, 3:463-482, 2003.
[2] O Boiman, E Shechtman, and M Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[3] L Bourdev, S Maji, T Brox, and J Malik. Detecting people using mutually consistent poselet
activations. In ECCV, 2010.
[4] A Coates and A Ng. The importance of encoding versus training with sparse coding and vector
quantization. In ICML, 2011.
[5] W Cohen and V R de Carvalho. Stacked sequential learning. In IJCAI, 2005.
[6] R Collobert, F Sinz, J Weston, and L Bottou. Trading convexity for scalability. In ICML, 2006.
[7] N Dalal. Histograms of oriented gradients for human detection. In CVPR, 2005.
[8] S Davis and P Mermelstein. Comparison of parametric representations for monosyllabic word
recognition in continuously spoken sentences. Acoustics, Speech and Signal Processing, IEEE
Transactions on, 28(4):357-366, 1980.
[9] L Deng, M L Seltzer, D Yu, A Acero, A Mohamed, and G Hinton. Binary coding of speech
spectrograms using a deep auto-encoder. In Interspeech, 2010.
[10] L Deng and D Yu. Deep convex network: A scalable architecture for deep learning. In Interspeech, 2011.
[11] L Deng, D Yu, and J Platt. Scalable stacking and learning for building deep architectures. In
ICASSP, 2012.
[12] L Fei-Fei and P Perona. A bayesian hierarchical model for learning natural scene categories.
In CVPR, 2005.
[13] G Hinton, L Deng, D Yu, G Dahl, A Mohamed, N Jaitly, A Senior, V Vanhoucke, P Nguyen,
T Sainath, and B Kingsbury. Deep Neural Networks for Acoustic Modeling in Speech Recognition. IEEE Signal Processing Magazine, 28:82-97, 2012.
[14] G Hinton and R Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504, 2006.
[15] K Jarrett, K Kavukcuoglu, M A Ranzato, and Y LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
[16] T Kohonen. Self-Organizing Maps. Springer-Verlag, 2001.
[17] Y Lin, T Zhang, S Zhu, and K Yu. Deep coding network. In NIPS, 2010.
[18] D Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[19] S Maji, AC Berg, and J Malik. Classification using intersection kernel support vector machines
is efficient. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference
on, pages 1-8. IEEE, 2008.
[20] A Mohamed, D Yu, and L Deng. Investigation of full-sequence training of deep belief networks
for speech recognition. In Interspeech, 2010.
[21] B Olshausen and D J Field. Sparse coding with an overcomplete basis set: a strategy employed
by V1? Vision research, 37(23):3311-3325, 1997.
[22] O Vinyals and L Deng. Are Sparse Representations Rich Enough for Acoustic Modeling? In
Interspeech, 2012.
[23] D H Wolpert. Stacked generalization. Neural networks, 5(2):241-259, 1992.
[24] J Yang, K Yu, and Y Gong. Linear spatial pyramid matching using sparse coding for image
classification. In CVPR, 2009.
[25] J Yang, K Yu, and T Huang. Efficient highly over-complete sparse coding using a mixture
model. In ECCV, 2010.
[26] K Yu and T Zhang. Improved Local Coordinate Coding using Local Tangents. In ICML, 2010.
4,141 | 4,748 |
Automatic Feature Induction
for Stagewise Collaborative Filtering
Joonseok Lee^a, Mingxuan Sun^a, Seungyeon Kim^a, Guy Lebanon^{a,b}
^a College of Computing, Georgia Institute of Technology, Atlanta, GA 30332
^b Google Research, Mountain View, CA 94043
{jlee716, msun3, seungyeon.kim}@gatech.edu, [email protected]
Abstract
Recent approaches to collaborative filtering have concentrated on estimating an
algebraic or statistical model, and using the model for predicting missing ratings.
In this paper we observe that different models have relative advantages in different regions of the input space. This motivates our approach of using stagewise
linear combinations of collaborative filtering algorithms, with non-constant combination coefficients based on kernel smoothing. The resulting stagewise model
is computationally scalable and outperforms a wide selection of state-of-the-art
collaborative filtering algorithms.
1 Introduction
Recent approaches to collaborative filtering (CF) have concentrated on estimating an algebraic or
statistical model, and using the model for predicting the missing rating of user u on item i. We
denote CF methods as f(u, i), and the family of potential CF methods as F.
Ensemble methods, which combine multiple models from F into a "meta-model", have been a significant research direction in classification and regression. Linear combinations of K models

F^(K)(x) = Σ_{k=1}^{K} α_k f_k(x)     (1)

where α_1, . . . , α_K ∈ R and f_1, . . . , f_K ∈ F, such as boosting or stagewise linear regression and stagewise logistic regression, enjoy a significant performance boost over the single top-performing model. This is not surprising since (1) includes as a degenerate case each of the models f ∈ F by itself. Stagewise models are greedy incremental models of the form

(α_k, f_k) = arg min_{α_k ∈ R, f_k ∈ F} Risk(F^(k−1) + α_k f_k),   k = 1, . . . , K,     (2)

where the parameters of F^(K) are estimated one by one without modifying previously selected parameters. Stagewise models have two important benefits: (a) a significant resistance to overfitting, and (b) computational scalability to large data and high K.
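With a squared-error risk, each stagewise step in (2) reduces to a one-dimensional least-squares fit of the current residual onto each candidate's predictions. A minimal sketch of one such step (the function name and interface are ours):

```python
import numpy as np

def stagewise_step(residual, candidate_preds):
    """residual: (N,) vector y - F^(k-1); candidate_preds: list of (N,) arrays f(x)."""
    best = None
    for f in candidate_preds:
        denom = f @ f
        if denom == 0.0:
            continue
        alpha = f @ residual / denom                 # argmin_a ||residual - a f||^2
        loss = np.sum((residual - alpha * f) ** 2)
        if best is None or loss < best[0]:
            best = (loss, alpha, f)
    return best  # (loss, alpha_k, chosen candidate's predictions)
```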
It is somewhat surprising that ensemble methods have had relatively little success in the collaborative
filtering literature. Generally speaking, ensemble or combination methods have shown only a minor
improvement over the top-performing CF methods. The cases where ensemble methods did show
an improvement (for example the Netflix prize winner [10] and runner up), relied heavily on manual
feature engineering, manual parameter setting, and other tinkering.
This paper follows up on an experimental discovery: different recommendation systems perform
better than others for some users and items but not for others. In other words, the relative strengths
of two distinct CF models f1 (u, i), f2 (u, i) ? F depend on the user u and the item i whose rating
Figure 1: Test set loss (mean absolute error) of two simple algorithms (user average and item average) on items with different number of ratings.
is being predicted. One example of two such systems appears in Figure 1 that graphs the test-set
loss of two recommendation rules (user average and item average) as a function of the number of
available ratings for the recommended item i. The two recommendation rules outperform each other,
depending on whether the item in question has few or many ratings in the training data. We conclude
from this graph and other comprehensive experiments [14] that algorithms that are inferior in some
circumstances may be superior in other circumstances.
The inescapable conclusion is that the weights α_k in the combination should be functions of u and i rather than constants

F^(K)(u, i) = Σ_{k=1}^{K} α_k(u, i) f_k(u, i)     (3)

where α_k(u, i) ∈ R and f_k ∈ F for k = 1, . . . , K. In this paper we explore the use of such models for collaborative filtering, where the weight functions α_k(u, i) are learned from data. A major part of our
contribution is a feature induction strategy to identify feature functions expressing useful locality
information. Our experimental study shows that the proposed method outperforms a wide variety of
state-of-the-art and traditional methods, and also outperforms other CF ensemble methods.
2 Related Work
Many memory-based CF methods predict the rating of items based on the similarity of the test user
and the training users [21, 3, 6]. Similarity measures include Pearson correlation [21] and Vector
cosine similarity [3, 6]. Other memory-based CF methods include item-based CF [25] and a nonparametric probabilistic model based on ranking preference similarities [28].
Model-based CF includes user and item clustering [3, 29, 32], Bayesian networks [3], dependence
network [5] and probabilistic latent variable models [19, 17, 33]. Slope-one [16] achieved fast
and reasonably accurate prediction. The state-of-the-art methods including the Netflix competition
winner are based on matrix factorization. The factorized matrix can be used to fill out the unobserved
entries of the user-rating matrix in a way similar to latent factor analysis [20, 12, 9, 13, 24, 23, 11].
Some recent work suggested that combining different CF models may improve the prediction accuracy. Specifically, a memory-based method linearly combined with a latent factor method [1, 8]
retains the advantages of both models. Ensembles of maximum margin matrix factorizations were
explored to improve the result of a single MMMF model in [4]. A mixture of experts model is
proposed in [27] to linearly combine the prediction results of more than two models. In many cases,
there is significant manual intervention such as setting the combination weights manually.
Feature-weighted linear stacking [26] is the ensemble method most closely related to our approach.
The primary difference is the manual selection of features in [26] as opposed to automatic induction
of local features in our paper that leads to a significant improvement in prediction quality. Model
combination based on locality has been proposed in other machine learning topics, such as classification [31, 18] or sensitivity estimation [2].
3 Combination of CF Methods with Non-Constant Weights
Recalling the linear combination (3) from Section 1, we define non-constant combination weights α_k(u, i) that are functions of the user and item that are being predicted. We propose the following algebraic form

α_k(u, i) = β_k h_k(u, i),   β_k ∈ R,   h_k ∈ H     (4)

where β_k is a parameter and h_k is a function selected from a family H of candidate feature functions. The combination (3) with non-constant weights (4) enables some CF methods f_k to be emphasized for some user-item combinations through an appropriate selection of the β_k parameters. We assume that H contains the constant function, capturing the constant-weight combination within our model.
Substituting (4) into (3) we get

F^(K)(u, i) = Σ_{k=1}^{K} β_k h_k(u, i) f_k(u, i),   β_k ∈ R,   h_k ∈ H,   f_k ∈ F.     (5)

Note that since h_k and f_k are selected from the sets of feature functions and CF methods respectively, we may have f_j = f_l or h_j = h_l for j ≠ l. This is similar to boosting and other stagewise algorithms where one feature or base learner may be chosen multiple times, effectively updating its associated feature functions and parameters. The total weight function associated with a particular f ∈ F is Σ_{k: f_k = f} β_k h_k(u, i).
A simple way to fit β = (β_1, . . . , β_K) is least squares

β̂ = arg min_{β ∈ C} Σ_{u,i} ( F^(K)(u, i) − R_{u,i} )^2     (6)

where R_{u,i} denotes the rating of user u on item i in the training data and the summation ranges over all ratings in the training set. A variation of (6), where β is constrained such that α_k(u, i) ≥ 0 and Σ_{k=1}^{K} α_k(u, i) = 1, endows F with the following probabilistic interpretation

F(u, i) = E_p { f | u, i },     (7)

where f represents a random draw from F, with probabilities p(f | u, i) proportional to Σ_{k: f_k = f} β_k h_k(u, i). In contrast to standard combination models with fixed weights, (7) forms a conditional expectation, rather than an expectation.
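When the features h_k and CF methods f_k are fixed, F^(K) in (5) is linear in β, so the unconstrained version of (6) is an ordinary least-squares problem. A minimal sketch (shapes and names are ours):

```python
import numpy as np

def fit_beta(R_vals, H_feats, F_preds):
    """R_vals: (N,) ratings; H_feats, F_preds: (N, K) columns h_k(u,i) and f_k(u,i)."""
    X = H_feats * F_preds                        # column k holds h_k(u,i) f_k(u,i)
    beta, *_ = np.linalg.lstsq(X, R_vals, rcond=None)
    return beta
```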
4 Inducing Local Features
In contrast to [26], which manually defined 25 features, we induce the features h_k from data. The features h_k(u, i) should emphasize users u and items i that are likely to lead to variations in the relative strength of the f_1, . . . , f_K. We consider below two issues: (i) defining the set H of candidate features, and (ii) a strategy for selecting features from H to add to the combination F.
4.1 Candidate Feature Families H
We denote the sets of users and items by U and I respectively, and the domain of f ∈ F and h ∈ H as Ω = U × I. The set R ⊆ Ω is the set of user-item pairs present in the training set, and the set of user-item pairs that are being predicted is a subset of Ω \ R.
We consider the following three unimodal functions on Ω, parameterized by a location parameter or mode ω* = (u*, i*) ∈ Ω and a bandwidth h > 0:

K^(1)_{h,(u*,i*)}(u, i) ∝ (1 − d(u*, u)/h) I(d(u*, u) ≤ h),
K^(2)_{h,(u*,i*)}(u, i) ∝ (1 − d(i*, i)/h) I(d(i*, i) ≤ h),
K^(3)_{h,(u*,i*)}(u, i) ∝ (1 − d(u*, u)/h) I(d(u*, u) ≤ h) · (1 − d(i*, i)/h) I(d(i*, i) ≤ h),     (8)

where I(A) = 1 if A holds and 0 otherwise. The first function is unimodal in u, centered around u*, and constant in i. The second function is unimodal in i, centered around i*, and constant in u. The third is unimodal in both u and i, centered around (u*, i*).
There are several possible choices for the distance functions in (8) between users and between items.
For simplicity, we use in our experiments the angular distance
d(x, y) = arccos( ⟨x, y⟩ / (‖x‖ · ‖y‖) )     (9)
where the inner products above are computed based on the user-item rating matrix expressing the
training set (ignoring entries not present in both arguments).
The functions (8) are the discrete analogs of the triangular kernel K_h(x) = h^{-1} (1 − |x − x*|/h) I(|x − x*| ≤ h) used in non-parametric kernel smoothing [30]. Their values decay linearly with the distance from their mode (truncated at zero), and feature a bandwidth parameter h, controlling the rate of decay. As h increases, the support size |{ω ∈ Ω : K(ω) > 0}| increases and max_{ω∈Ω} K(ω) decreases.
The unimodal feature functions (8) capture locality in the Ω space by measuring proximity to a mode, representing a user u*, an item i*, or a user-item pair. We define the family of candidate features H as all possible additive mixtures or max-mixtures of the functions (8), parameterized by a set of multiple modes ω* = {ω*_1, . . . , ω*_r}:

K_{ω*}(u, i) ∝ Σ_{j=1}^{r} K_{ω*_j}(u, i)     (10)

K_{ω*}(u, i) ∝ max_{j=1,...,r} K_{ω*_j}(u, i).     (11)

Using this definition, feature functions h_k(u, i) ∈ H are able to express a wide variety of locality information involving multiple potential modes.
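The following sketch shows how (8)-(11) can be computed in practice: the angular distance (9) over co-rated entries, a triangular kernel around a mode, and additive or max mixtures over r modes, here for the user-unimodal case as one example. The NaN masking for missing ratings and all names are our choices:

```python
import numpy as np

def angular_dist(x, y):
    """Eq. (9) on the co-rated entries of two rating vectors (NaN = missing)."""
    mask = ~np.isnan(x) & ~np.isnan(y)
    xs, ys = x[mask], y[mask]
    cos = xs @ ys / (np.linalg.norm(xs) * np.linalg.norm(ys) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def tri_kernel(dist, h):
    """Triangular kernel (1 - d/h) I(d <= h) with bandwidth h."""
    return np.maximum(0.0, 1.0 - dist / h)

def multi_mode_feature(u_vec, mode_vecs, h, mix="sum"):
    """A user-side feature h(u, i): mixture over modes drawn from training users."""
    vals = np.array([tri_kernel(angular_dist(u_vec, m), h) for m in mode_vecs])
    return vals.sum() if mix == "sum" else vals.max()
```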
We discuss next the strategy for identifying useful features from H and adding them to the model F
in a stagewise manner.
4.2 Feature Induction Strategy
Adapting the stagewise learning approach to the model (5) we have

F^(K)(u, i) = Σ_{k=1}^{K} β_k h_k(u, i) f_k(u, i),     (12)

(β_k, h_k, f_k) = arg min_{β_k ∈ R, h_k ∈ H, f_k ∈ F} Σ_{(u,i) ∈ R} ( F^(k−1)(u, i) + β_k h_k(u, i) f_k(u, i) − R_{u,i} )^2.
It is a well-known fact that stagewise algorithms sometimes outperform non-greedy algorithms due
to resistance to overfitting (see [22], for example). This explains the good generalization ability of
boosting and stage-wise linear regression.
From a computational standpoint, (12) scales nicely with K and with the training set size. The one-dimensional quadratic optimization with respect to β is solved in closed form, but the optimization over F and H has to be done by brute force or by some approximate method such as sampling. The computational complexity of each iteration is thus O(|H| · |F| · |R|), assuming no approximations are performed.
Since we consider relatively small families F of CF methods, the optimization over F does not pose
a substantial problem. The optimization over H is more problematic since H is potentially infinite,
or otherwise very large. We address this difficulty by restricting H to a finite collection of additive
or max-mixtures kernels with r modes, randomly sampled from the users or items present in the
training data. Our experiments conclude that it is possible to find useful features from a surprisingly
small number of randomly-chosen samples.
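Putting (12) together with this sampling strategy, one stagewise run can be sketched as below: at every iteration a small pool of candidate features is drawn from H, every (h, f) pair is scored with the closed-form β, and the best triple is appended to the model. The pool size of 200 and the cap of 100 iterations follow the experimental setup described later; everything else (names, interface) is illustrative:

```python
import numpy as np

def fit_stagewise(R_vals, f_preds, sample_features, K=100, pool=200):
    """R_vals: (N,) training ratings; f_preds: dict name -> (N,) CF predictions;
    sample_features(pool): returns `pool` candidate feature vectors h(u,i), each (N,)."""
    F_pred = np.zeros_like(R_vals)
    model = []
    for _ in range(K):
        residual = R_vals - F_pred
        best = None
        for h in sample_features(pool):
            for name, f in f_preds.items():
                g = h * f                          # beta multiplies h(u,i) f(u,i)
                denom = g @ g
                if denom == 0.0:
                    continue
                beta = g @ residual / denom        # closed-form 1-D least squares
                loss = np.sum((residual - beta * g) ** 2)
                if best is None or loss < best[0]:
                    best = (loss, beta, h, name)
        loss, beta, h, name = best
        model.append((beta, h, name))
        F_pred = F_pred + beta * h * f_preds[name]
    return model
```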
5 Experiments
We describe below the experimental setting, followed by the experimental results and conclusions.
5.1 Experimental Design
We used a recommendation algorithm toolkit PREA [15] for candidate algorithms, including three
simple baselines (Constant model, User Average, and Item Average) and five matrix-factorization
methods (Regularized SVD, NMF [13], PMF [24], Bayesian PMF [23], and Non-Linear PMF [12]),
and Slope-one [16]. This list includes traditional baselines as well as state-of-the-art CF methods
that were proposed recently in the research literature. We evaluate the performance using the Root
Mean Squared Error (RMSE), measured on the test set.
Table 1 lists 5 experimental settings. SINGLE runs each CF algorithm individually, and chooses the
one with the best average performance. CONST combines all candidate algorithms with constant
weights as in (1). FWLS combines all candidate algorithms with non-constant weights as in (3)
[26]. For CONST and FWLS, the weights are estimated from data by solving a least-squares problem. STAGE combines CF algorithms in a stage-wise manner. FEAT applies the feature induction
techniques discussed in Section 4.
To evaluate whether the automatic feature induction in FEAT works better or worse than manually
constructed features, we used in FWLS and STAGE manual features similar to the ones in [26]
(excluding features requiring temporal data). Examples include number of movies rated per user,
number of users rating each movie, standard deviation of the users' ratings, and standard deviation of the item's ratings.
The feature induction in FEAT used a feature space H with additive multi-mode smoothing kernels
as described in Section 4 (for simplicity we avoided kernels unimodal in both u and i). The family
H included 200 randomly sampled features (a new sample was taken for each of the iterations in the
stagewise algorithms). The r in (11) was set to 5% of user or item count, and bandwidth h values of
0.05 (an extreme case where most features have value either 0 or 1) and 0.8 (each user or item has
moderate similarity values). The stagewise algorithm continues until either five consecutive trials
fail to improve the RMSE on validation set, or the iteration number reaches 100, which occur only in
a few cases. We used similar L2 regularization for all methods (both stagewise and non-stagewise),
where the regularization parameter was selected among 5 different values based on a validation set.
We experimented with the two standard MovieLens datasets: 100K and 1M, and with the Netflix
dataset. In the Netflix dataset experiments, we sub-sampled the data since (a) running state-of-the-art candidate algorithms on the full Netflix data takes too long; for example, Bayesian
PMF was reported to take 188 hours [23], and (b) it enables us to run extensive experiments measuring the performance of the CF algorithms as a function of the number of users, number of
items, voting sparsity, and facilitates cross-validation and statistical tests. More specifically, we
sub-sampled from the most active M users and the most often rated N items to obtain pre-specified
data density levels |R|/|?|. As shown in Table 2, we varied either the user or item count in the
set {1000, 1500, 2000, 2500, 3000}, holding the other variable fixed at 1000 and the density at 1%,
which is comparable density of the original Netflix dataset. We also conducted an experiment where
the data density varied in the set {1%, 1.5%, 2%, 2.5%} with fixed user and item count of 1000 each.
We set aside a randomly chosen 20% as the test set, and used the remaining 80% both for training the individual recommenders and for learning the ensemble model. It is possible, and perhaps better motivated, to use two distinct training sets for the CF models and the ensemble. However, in our case we obtained high performance even when using the same training data in both stages.
Method   C   W   S   I   Explanation
SINGLE   -   -   -   -   Best-performing single CF algorithm
CONST    O   -   -   -   Mixture of CF without features
FWLS     O   O   -   -   Mixture of CF with manually-designed features
STAGE    O   O   O   -   Stagewise mixture with manual features
FEAT     O   O   O   O   Stagewise mixture with induced features
Table 1: Experimental settings. (C: combination of multiple algorithms, W: weights varying with features, S: stage-wise algorithm, I: induced features)
Dataset                                  Netflix                                         MovieLens
User Count     1000    2000    3000    1000    1000    1000    1000    1000     943    6039
Item Count     1000    1000    1000    2000    3000    1000    1000    1000    1682    3883
Density        1.0%    1.0%    1.0%    1.0%    1.0%    1.5%    2.0%    2.5%    6.3%    4.3%
Single CF
 Constant    1.2188  1.2013  1.2072  1.1964  1.1888  1.2188  1.2235  1.2113  1.2408  1.2590
 UserAvg     1.0566  1.0513  1.0375  1.0359  1.0174  1.0566  1.0318  1.0252  1.0408  1.0352
 ItemAvg     1.1260  1.0611  1.0445  1.1221  1.1444  1.1260  1.1029  1.0900  1.0183  0.9789
 Slope1      1.4490  1.4012  1.3321  1.4049  1.3196  1.4490  1.3505  1.0725  0.9371  0.9017
 RegSVD      1.0623  1.0155  1.0083  1.0354  1.0289  1.0343  1.0154  1.0020  0.9098  0.8671
 NMF         1.0784  1.0205  1.0069  1.0423  1.0298  1.0406  1.0151  1.0091  0.9601  0.9268
 PMF         1.6180  1.4824  1.4081  1.4953  1.4804  1.4903  1.3594  1.1818  0.9328  0.9623
 BPMF        1.3973  1.2951  1.2949  1.2566  1.2102  1.3160  1.2021  1.1514  0.9629  0.9000
 NLPMF       1.0561  1.0507  1.0382  1.0361  1.0471  1.0436  1.0382  1.0523  0.9560  0.9415
Combined
 SINGLE      1.0561  1.0155  1.0069  1.0354  1.0174  1.0343  1.0151  1.0020  0.9098  0.8671
 CONST       1.0429  1.0072  0.9963  1.0198  1.0102  1.0255  0.9968  0.9824  0.9073  0.8660
 FWLS        1.0288  1.0050  0.9946  1.0089  1.0016  1.0179  0.9935  0.9802  0.9010  0.8649
 STAGE       1.0036  0.9784  0.9668  0.9967  0.9821  0.9935  0.9846  0.9769  0.8961  0.8623
 FEAT        0.9862  0.9607  0.9607  0.9740  0.9717  0.9703  0.9589  0.9492  0.8949  0.8569
p-Value      0.0028  0.0001  0.0003  0.0008  0.0014  0.0002  0.0019  0.0013  0.0014  0.0023
Table 2: Test error in RMSE (lower values are better) for the single CF algorithms used as candidates and for the combined models. Data where M or N is 1500 or 2500 are omitted for lack of space; those settings are shown in Figure 2. The best-performing method in each group is indicated in italics. The last row gives the p-value for the statistical test of the hypothesis that FEAT outperforms FWLS.
[Figure 2 graphic: three panels plotting RMSE (y-axis, roughly 0.94 to 1.10) against user count (left), item count (middle), and density (right), with curves for SINGLE, CONST, FWLS, STAGE, and FEAT.]
Figure 2: Performance trend with varied user count (left), item count (middle), and density (right) on the Netflix dataset.
For stagewise methods, the 80% train set was divided into a 60% training set and a 20% validation set, the latter used to determine when to stop the stagewise addition process. The non-stagewise methods used the entire 80% for training. For both stagewise and non-stagewise methods, 10% of the training set was used to select the regularization parameter. The results were averaged over 10 random data samples.
6 Results and Discussion
6.1 Performance Analysis and Example
Table 2 displays the RMSE of each combination method, as well as of the individual algorithms. Examining it, we observe the following partial order with respect to prediction accuracy: FEAT ≻ STAGE ≻ FWLS ≻ CONST ≻ SINGLE.
• FWLS ≻ CONST ≻ SINGLE: Combining CF algorithms (even only with constant weights) produces better predictions than the best single CF method, and using non-constant weights improves performance further. This result is consistent with what has been reported in the literature [7, 26].
• STAGE ≻ FWLS: Figure 2 indicates that stagewise combinations, where features are chosen with replacement, are more accurate. Selection with replacement allows a feature to be selected more than once, correcting a previously inaccurate parameter setting.
• FEAT ≻ STAGE: Using induced features improves prediction accuracy beyond stagewise optimization with manually designed features.
Overall, our experiments indicate that the combination with non-constant weights and feature induction (FEAT) outperforms three baselines: the best single method, the standard combination with constant weights, and the FWLS method using manually constructed features [26].
[Figure 3 graphic: six panels of average weight values (y-axis, roughly -0.4 to 1) for algorithms 1, 2, and 8, plotted over 1000 items (top row) and 1000 users (bottom row), each panel sorted by one algorithm's weights.]
Figure 3: Average weight values of each item (top) and user (bottom), sorted from high to low weight of the selected algorithm. Note that the sorting order is similar between algorithm 1 (User Average) and algorithm 2 (Item Average). In contrast, algorithm 8 (NLPMF) has the opposite order, and is weighted more highly in different parts of the data than algorithms 1 and 2.
We tested the hypothesis RMSE_FEAT < RMSE_FWLS with a paired t-test. Based on the p-values (see the last row in Table 2), we can reject the null hypothesis at the 99% significance level. We conclude that our proposed combination outperforms the state of the art, as well as several previously proposed combination methods.
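For reference, the hypothesis test can be reproduced along the following lines; the RMSE arrays here are illustrative placeholders, not the paper's per-split values.

```python
import numpy as np
from scipy import stats

# Illustrative per-split test RMSEs over 10 random data samples (made-up numbers).
rmse_feat = np.array([0.986, 0.984, 0.988, 0.985, 0.987, 0.983, 0.989, 0.986, 0.985, 0.987])
rmse_fwls = np.array([1.029, 1.027, 1.031, 1.028, 1.030, 1.026, 1.032, 1.029, 1.028, 1.030])

t_stat, p_two_sided = stats.ttest_rel(rmse_feat, rmse_fwls)
# One-sided p-value for the alternative RMSE_FEAT < RMSE_FWLS.
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(t_stat, p_one_sided)
```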
To see how feature induction works in detail, we illustrate an example for the case where the user count and item count both equal 1000. Figure 3 shows the average weight distribution that each user or item receives under three CF methods: user average, item average, and NLPMF. We focus on these three methods since they are frequently selected by the stagewise algorithm. The x-axis variables in the three panels are sorted in order of decreasing weight of the selected algorithm, so in each panel one curve is monotonically decaying: the weights of the CF method used for sorting. An interesting observation is that algorithm 1 (User Average) and algorithm 2 (Item Average) have similar sorting-order patterns in Figure 3 (right column). In other words, these two algorithms are similar in nature, and are relatively stronger or weaker in similar regions of Ω. Algorithm 8 (NLPMF), on the other hand, has a very different relative strength pattern.
6.2 Trend Analysis
Figure 2 graphs the RMSE of the different combination methods as a function of the user count,
item count, and density. We make the following observations.
• As expected, prediction accuracy for all combination methods and for the top single method improves with the user count, item count, and density.
• The performance gap between the best single algorithm and the combinations tends to decrease with larger user and item counts. This is a manifestation of the law of diminishing returns, and of the fact that the size of a suitable family H capturing locality information grows with the user and item count. The stagewise procedure thus becomes computationally more challenging, and less accurate, since in our experiments we sampled the same number of compositions from H rather than increasing it for larger data.
• All combination methods and the single best CF method improve as the density increases. The improvement is most pronounced for the single best algorithm and for the FEAT method, indicating that FEAT scales up its performance aggressively with increasing density levels.
• Comparing the left and middle panels of Figure 2 shows that having more users is more informative than having more items. In other words, for equal total dataset size M × N, performance tends to be better when M > N (left panel of Figure 2) than when M < N (middle panel of Figure 2).
6.3 Scalability
Our proposed stagewise algorithm is very efficient compared to other feature selection algorithms such as step-wise or subset selection. Nevertheless, the large number of possible features may cause computational issues. In our experiments, we sampled from the space of candidate features a small subset to be considered for addition (a different random subset in each iteration of the stagewise algorithm). In the limit K → ∞, such a sampling scheme would recover the optimal ensemble, as each feature would be selected for consideration infinitely often. Our experiments show that this scheme also works well in practice, yielding significant improvements over the state of the art even for a relatively small sample of feature candidates such as 200. Viewed from another perspective, this implies that randomly selecting such a small subset of features each iteration suffices to find useful features. In fact, the features induced in this manner were found to be more useful than the manually crafted features in the FWLS algorithm [26]. A sketch of one such iteration follows.
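The following is a minimal version of one stagewise iteration under this sampling scheme, assuming the candidate features have been materialized as columns of a matrix; the greedy selection and the closed-form one-dimensional fit mirror the description above, while the names and the tiny stabilizing term are assumptions.

```python
import numpy as np

def stagewise_step(residual, candidate_pool, n_sample, rng):
    """One stagewise least-squares step: sample a small set of candidate feature
    columns, pick the one that most reduces the current squared residual, and
    fit its coefficient in closed form. `candidate_pool` is an (n, P) matrix of
    all candidate columns; sampling n_sample of them per step is the scheme
    described in the text."""
    idx = rng.choice(candidate_pool.shape[1], size=n_sample, replace=False)
    best, best_err, best_coef = None, np.inf, 0.0
    for j in idx:
        x = candidate_pool[:, j]
        coef = x @ residual / (x @ x + 1e-12)   # closed-form 1-D least squares
        err = np.sum((residual - coef * x) ** 2)
        if err < best_err:
            best, best_err, best_coef = j, err, coef
    return best, best_coef
```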
7 Summary
We started from the observation that the relative performance of different candidate recommendation systems f(u, i) depends on u and i, for example on the activity level of user u and the popularity of item i. This motivated the development of combinations of recommendation systems with non-constant weights that emphasize different candidates based on their relative strengths in feature space. In contrast to the FWLS method, which relies on the manual construction of features, we developed a feature induction algorithm that works in conjunction with stagewise least squares. We formulate a family of feature functions based on the discrete analog of triangular kernel smoothing. This family captures a wide variety of local information, and is thus able to model the relative strengths of the different CF methods and how they change across Ω.
The combination with induced features outperformed each of the base candidates, as well as other combination methods from the literature, including the recently proposed FWLS method that uses manually constructed feature functions. Since our candidates included many recently proposed state-of-the-art recommendation systems, our conclusions are significant for the engineering community as well as for recommendation system researchers.
References
[1] R. Bell, Y. Koren, and C. Volinsky. Modeling relationships at multiple scales to improve accuracy of large recommender systems. In Proc. of the ACM SIGKDD, 2007.
[2] P. Bennett. Neighborhood-based local sensitivity. In Proc. of the European Conference on Machine Learning, 2007.
[3] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Uncertainty in Artificial Intelligence, 1998.
[4] D. DeCoste. Collaborative prediction using ensembles of maximum margin matrix factorizations. In Proc. of the International Conference on Machine Learning, 2006.
[5] D. Heckerman, D. Maxwell Chickering, C. Meek, R. Rounthwaite, and C. Kadie. Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 2000.
[6] J. L. Herlocker, J. A. Konstan, A. Borchers, and J. Riedl. An algorithmic framework for performing collaborative filtering. In Proc. of ACM SIGIR Conference, 1999.
[7] M. Jahrer, A. Töscher, and R. Legenstein. Combining predictions for accurate recommender systems. In Proc. of the ACM SIGKDD, 2010.
[8] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proc. of the ACM SIGKDD, 2008.
[9] Y. Koren. Factor in the neighbors: Scalable and accurate collaborative filtering. ACM Transactions on Knowledge Discovery from Data, 4(1):1–24, 2010.
[10] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[11] B. Lakshminarayanan, G. Bouchard, and C. Archambeau. Robust Bayesian matrix factorisation. In Proc. of the International Conference on Artificial Intelligence and Statistics, 2011.
[12] N. D. Lawrence and R. Urtasun. Non-linear matrix factorization with Gaussian processes. In Proc. of the International Conference on Machine Learning, 2009.
[13] D. Lee and H. Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems, 2001.
[14] J. Lee, M. Sun, and G. Lebanon. A comparative study of collaborative filtering algorithms. ArXiv Report 1205.3193, 2012.
[15] J. Lee, M. Sun, and G. Lebanon. PREA: Personalized recommendation algorithms toolkit. Journal of Machine Learning Research, 13:2699–2703, 2012.
[16] D. Lemire and A. Maclachlan. Slope one predictors for online rating-based collaborative filtering. Society for Industrial Mathematics, 5:471–480, 2005.
[17] B. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, 2004.
[18] C. J. Merz. Dynamical selection of learning algorithms. Lecture Notes in Statistics, pages 281–290, 1996.
[19] D. M. Pennock, E. Horvitz, S. Lawrence, and C. L. Giles. Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach. In Uncertainty in Artificial Intelligence, 2000.
[20] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proc. of the International Conference on Machine Learning, 2005.
[21] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: an open architecture for collaborative filtering of netnews. In Proc. of the Conference on CSCW, 1994.
[22] L. Reyzin and R. E. Schapire. How boosting the margin can also boost classifier complexity. In Proc. of the International Conference on Machine Learning, 2006.
[23] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proc. of the International Conference on Machine Learning, 2008.
[24] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 2008.
[25] B. Sarwar, G. Karypis, J. Konstan, and J. Reidl. Item-based collaborative filtering recommendation algorithms. In Proc. of the International Conference on World Wide Web, 2001.
[26] J. Sill, G. Takacs, L. Mackey, and D. Lin. Feature-weighted linear stacking. ArXiv Report arXiv:0911.0460, 2009.
[27] X. Su, R. Greiner, T. M. Khoshgoftaar, and X. Zhu. Hybrid collaborative filtering algorithms using a mixture of experts. In Proc. of the IEEE/WIC/ACM International Conference on Web Intelligence, 2007.
[28] M. Sun, G. Lebanon, and P. Kidwell. Estimating probabilities in recommendation systems. In Proc. of the International Conference on Artificial Intelligence and Statistics, 2011.
[29] L. H. Ungar and D. P. Foster. Clustering methods for collaborative filtering. In AAAI Workshop on Recommendation Systems, 1998.
[30] M. P. Wand and M. C. Jones. Kernel Smoothing. Chapman and Hall/CRC, 1995.
[31] K. Woods, W. P. Kegelmeyer Jr., and K. Bowyer. Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):405–410, 1997.
[32] G. R. Xue, C. Lin, Q. Yang, W. S. Xi, H. J. Zeng, Y. Yu, and Z. Chen. Scalable collaborative filtering using cluster-based smoothing. In Proc. of ACM SIGIR Conference, 2005.
[33] K. Yu, S. Zhu, J. Lafferty, and Y. Gong. Fast nonparametric matrix factorization for large-scale collaborative filtering. In Proc. of ACM SIGIR Conference, 2009.
From Deformations to Parts:
Motion-based Segmentation of 3D Objects
Soumya Ghosh1 , Erik B. Sudderth1 , Matthew Loper2 , and Michael J. Black2
1
Department of Computer Science, Brown University, {sghosh,sudderth}@cs.brown.edu
2
Perceiving Systems Department, Max Planck Institute for Intelligent Systems,
{mloper,black}@tuebingen.mpg.de
Abstract
We develop a method for discovering the parts of an articulated object from
aligned meshes of the object in various three-dimensional poses. We adapt the distance dependent Chinese restaurant process (ddCRP) to allow nonparametric discovery of a potentially unbounded number of parts, while simultaneously guaranteeing a spatially connected segmentation. To allow analysis of datasets in which
object instances have varying 3D shapes, we model part variability across poses
via affine transformations. By placing a matrix normal-inverse-Wishart prior on
these affine transformations, we develop a ddCRP Gibbs sampler which tractably
marginalizes over transformation uncertainty. Analyzing a dataset of humans captured in dozens of poses, we infer parts which provide quantitatively better deformation predictions than conventional clustering methods.
1 Introduction
Mesh segmentation methods decompose a three-dimensional (3D) mesh, or a collection of aligned
meshes, into their constituent parts. This well-studied problem has numerous applications in computational graphics and vision, including texture mapping, skeleton extraction, morphing, and mesh
registration and simplification. We focus in particular on the problem of segmenting an articulated
object, given aligned 3D meshes capturing various object poses. The meshes we consider are complete surfaces described by a set of triangular faces, and we seek a segmentation into spatially coherent parts whose spatial transformations capture object articulations. Applied to various poses of
human bodies as in Figure 1, our approach identifies regions of the mesh that deform together, and
thus provides information which could inform applications such as the design of protective clothing.
Mesh segmentation has been most widely studied as a static clustering problem, where a single
mesh is segmented into ?semantic? parts using low-level geometric cues such as distance and curvature [1, 2]. While supervised training data can sometimes lead to improved results [3], there are
many applications where such data is unavailable, and the proper way to partition a single mesh is
inherently ambiguous. By searching for parts which deform consistently across many meshes, we
create a better-posed problem whose solution is directly useful for modeling objects in motion.
Several issues must be addressed to effectively segment collections of articulated meshes. First, the
number of parts comprising an articulated object is unknown a priori, and must be inferred from
the observed deformations. Second, mesh faces exhibit strong spatial correlations, and the inferred
parts must be contiguous. This spatial connectivity is needed to discover parts which correspond
with physical object structure, and required by target applications such as skeleton extraction. Finally, our primary goal is to understand the structure of human bodies, and humans vary widely in
size and shape. People move and deform in different ways depending on age, fitness, body fat, etc.
A segmentation of the human body should take into account this range of variability in the popula1
Figure 1: Human body segmentation. Left: Reference poses for two female bodies, and those bodies captured
in five other poses. Right: A manual segmentation used to align these meshes [6], and the segmentation inferred
by our ddCRP model from 56 poses. The ddCRP segmentation discovers parts whose motion is nearly rigid,
and includes small parts such as elbows and knees absent from the manual segmentation.
To our knowledge, no previous methods for segmenting meshes combine information about
deformation from multiple bodies to address this corpus segmentation problem.
In this paper, we develop a statistical model which addresses all of these issues. We adapt the
distance dependent Chinese restaurant process (ddCRP) [4] to model spatial dependencies among
mesh triangles, and enforce spatial contiguity of the inferred parts [5]. Unlike most previous mesh
segmentation methods, our Bayesian nonparametric approach allows data-driven inference of an appropriate number of parts, and uses an affine transformation-based likelihood to accommodate object
instances of varying shape. After developing our model in Section 2, Section 3 develops a Gibbs
sampler which efficiently marginalizes the latent affine transformations defining part deformation.
We conclude in Section 4 with results examining meshes of humans and other articulated objects,
where we introduce a metric for quantitative evaluation of deformation-based segmentations.
2 A Part-Based Model for Mesh Deformation
Consider a collection of J meshes, each with N triangles. For some input mesh j, we let y_jn ∈ R^3 denote the 3D location of the center of triangular face n, and Y_j = [y_j1, ..., y_jN] ∈ R^{3×N} the full mesh configuration. Each mesh j has an associated N-triangle reference mesh, indexed by b_j. We let x_bn ∈ R^4 denote the location of triangle n in reference mesh b, expressed in homogeneous coordinates (x_bn(4) = 1). A full reference mesh is denoted X_b = [x_b1, ..., x_bN]. In our later experiments, Y_j encodes the 3D mesh for a person in pose j, and X_{b_j} is the reference pose for the same individual.
We estimate aligned correspondences between the triangular faces of the input pose meshes Yj , and
the reference meshes Xb , using a recently developed method [6]. This approach robustly handles
3D data capturing varying shapes and poses, and outputs meshes which have equal numbers of faces
in one-to-one alignment. Our segmentation model does not depend on the details of this alignment
method, and could be applied to data produced by other correspondence algorithms.
2.1 Nonparametric Spatial Priors for Mesh Partitions
The recently proposed distance dependent Chinese restaurant process (ddCRP) [4], a generalization
of the CRP underlying Dirichlet process mixture models [7], has a number of attractive properties
which make it particularly well suited for modeling segmentations of articulated objects. By placing
prior probability mass on partitions with arbitrary numbers of parts, it allows data-driven inference
of the true number of mostly-rigid parts underlying the observed data. In addition, by choosing an
appropriate distance function we can encourage spatially adjacent triangles to lie in the same part,
and guarantee that all inferred parts are spatially contiguous [5].
The Chinese restaurant process (CRP) is a distribution on all possible partitions of a set of objects (in
our case, mesh triangles). The generative process can be described via a restaurant with an infinite
number of tables (in our case, parts). Customers (triangles) i enter the restaurant in sequence and
select a table z_i to join. They pick an occupied table with probability proportional to the number of customers already sitting there, or a new table with probability proportional to a scaling parameter α.
Figure 2: Left: A reference mesh in which links (yellow arrows) currently define three parts (connected
components). Right: Each part undergoes a distinct affine transformation, generated as in Equation (2).
The final seating arrangement gives a partition of the data, where each occupied table corresponds
to a part in the final segmentation.
Although described sequentially, the CRP induces an exchangeable distribution on partitions, for
which the segmentation probability is invariant to the order in which triangle allocations are sampled.
This is inappropriate for mesh data, in which nearby triangles are far more likely to lie in the same
part. The ddCRP alters the CRP by modeling customer links not to tables, but to other customers.
The link cm for customer m is sampled according to the distribution
f (dmn ) m 6= n,
p (cm = n | D, f, ?) ?
(1)
?
m = n.
Here, d_mn is an externally specified distance between data points m and n, and α determines the probability that a customer links to itself rather than to another customer. The monotonically decreasing decay function f(d) mediates how the distance between two data points affects their probability of connecting to each other. The overall link structure specifies a partition: two customers are clustered together if and only if one can reach the other by traversing the link edges.
We define the distance between two triangles as the minimum number of hops between adjacent faces required to reach one triangle from the other. A "window" decay function of width 1, f(d) = 1[d ≤ 1], then restricts triangles to link only to immediately adjacent faces. Note that this does not limit the size of parts, since all pairs of faces are potentially reachable via a sequence of adjacent links. However, it does guarantee that only spatially contiguous parts have non-zero probability under the prior. This constraint is preserved by our MCMC inference algorithm; a sketch of recovering the partition from the links follows.
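The deterministic mapping from links to parts is just connected components of the undirected link graph; a minimal union-find implementation, with a hypothetical helper name, is:

```python
def parts_from_links(links):
    """Recover the partition z(c) induced by ddCRP customer links: faces m and n
    share a part iff one can reach the other along link edges (treated as
    undirected). `links[m] = n` means customer m links to customer n."""
    parent = list(range(len(links)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for m, n in enumerate(links):
        rm, rn = find(m), find(n)
        if rm != rn:
            parent[rm] = rn
    return [find(i) for i in range(len(links))]  # part label per face
```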
2.2 Modeling Part Deformation via Affine Transformations
Articulated object deformation is naturally described via the spatial transformations of its constituent
parts. We expect the triangular faces within a part to deform according to a coherent part-specific
transformation, up to independent face-specific noise. The near-rigid motions of interest are reasonably modeled as affine transformations, a family of co-linearity preserving linear transformations.
We concisely denote the transformation from a reference triangle to an observed triangle via a matrix A ∈ R^{3×4}. The fourth column of A encodes translation of the corresponding reference triangle via homogeneous coordinates x_bn, and the other entries encode rotation, scaling, and shearing.
Previous approaches have treated such transformations as parameters to be estimated during inference [8, 9]. Here, we instead define a prior distribution over affine transformations. Our construction
allows transformations to be analytically marginalized when learning our part-based segmentation,
but retains the flexibility to later estimate transformations if desired. Explictly modeling transformation uncertainty makes our MCMC inference more robust and rapidly mixing [7], and also allows
data-driven determination of an appropriate number of parts.
The matrix of numbers encoding an affine transformation is naturally modeled via multivariate Gaussian distributions. We place a conjugate, matrix normal-inverse-Wishart [10, 11] prior on the affine
transformation A and residual noise covariance matrix ?:
\Sigma \sim IW(n_0, S_0), \qquad A \mid \Sigma \sim MN(M, \Sigma, K) \quad (2)
Here, n_0 ∈ R and S_0 ∈ R^{3×3} control the variance and mean of the Wishart prior on Σ^{-1}. The mean affine transformation is M ∈ R^{3×4}, and K ∈ R^{4×4} and Σ determine the variance of the prior on A. Applied to mesh data, these parameters have physical interpretations and can be estimated from the data collection process. While such priors are common in Bayesian regression models, our application to the modeling of geometric affine transformations appears novel.
Allocating a different affine transformation for the motion of each part in each pose (Figure 2), the overall generative model can be summarized as follows:
1. For each triangle n, sample an associated link c_n ∼ ddCRP(α, f, D). The part assignments z are a deterministic function of the sampled links c = [c_1, ..., c_N].
2. For each pose j of each part k, sample an affine transformation A_jk and residual noise covariance Σ_jk from the matrix normal-inverse-Wishart prior of Equation (2).
3. Given these pose-specific affine transformations and the assignments of mesh faces to parts, independently sample the observed location of each pose triangle relative to its corresponding reference triangle, y_jn ∼ N(A_{j z_n} x_{b_j n}, Σ_{j z_n}).
Note that Σ_jk governs the degree of non-rigid deformation of part k in pose j. It also indirectly influences the number of inferred parts: a large S_0 makes large Σ_jk more probable, which allows more non-rigid deformation and permits models which utilize fewer parts. The overall model is
p(Y, c, A, \Sigma \mid X, b, D, \alpha, f, \lambda) = p(c \mid D, f, \alpha) \prod_{j=1}^{J} \left[ \prod_{k=1}^{K(c)} p(A_{jk}, \Sigma_{jk} \mid \lambda) \right] \left[ \prod_{n=1}^{N} N(y_{jn} \mid A_{j z_n} x_{b_j n}, \Sigma_{j z_n}) \right] \quad (3)
where Y = {Y_1, ..., Y_J}, X = {X_1, ..., X_B}, b = [b_1, ..., b_J], the ddCRP links c define assignments z to K(c) parts, and λ = {n_0, S_0, M, K} are likelihood hyperparameters. There is a single reference mesh X_b for each object instance b, and Y_j captures a single deformed pose of X_{b_j}.
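A minimal forward sampler for steps 2-3 might look as follows, assuming SciPy's inverse-Wishart and matrix normal distributions and treating K as a column precision (so the matrix normal column covariance is K^{-1}); the interface is hypothetical.

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

def sample_pose(X, z, M, K, n0, S0, rng):
    """Forward-sample one pose: per part, draw (A, Sigma) from the matrix
    normal-inverse-Wishart prior of eq. (2), then draw each face center.
    X is 4 x N homogeneous reference coordinates; z assigns faces to parts."""
    N = X.shape[1]
    Y = np.zeros((3, N))
    K_inv = np.linalg.inv(K)  # MN(M, Sigma, K) has column covariance K^{-1}
    for k in np.unique(z):
        Sigma = invwishart.rvs(df=n0, scale=S0, random_state=rng)
        A = matrix_normal.rvs(mean=M, rowcov=Sigma, colcov=K_inv, random_state=rng)
        for n in np.where(z == k)[0]:
            Y[:, n] = rng.multivariate_normal(A @ X[:, n], Sigma)
    return Y
```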
2.3 Previous Work
Previous work has also sought to segment a mesh into parts based on observed articulations [8, 12,
13, 14]. The two-stage procedure of Rosman et al. [13] first minimizes a variational functional
regularized to favor piecewise constant transformations, and then clusters the transformations into
parts. Several other segmentation procedures [12, 14] lack coherent probabilistic models, and thus
have difficulty quantifying uncertainty and determining appropriate segmentation resolutions.
Anguelov et al. [8] define a global probabilistic model, and use the EM algorithm to jointly estimate
parts and their transformations. They explicitly model spatial dependencies among mesh faces, but
their Markov random field cannot ensure that parts are spatially connected; a separate connected
components process is required. Heuristics are used to determine an appropriate number of parts.
Ambitious recent work has considered a model for joint mesh alignment and segmentation [9]. However, this approach suffers from many of the issues noted above: the number of parts must be specified a priori, parts may not be contiguous, and their EM inference appears prone to local optima.
3 Inference
We seek the constituent parts of an articulated model, given observed data (X, Y, and b). These parts
are characterized by the posterior distribution of the customer links c. We approximate this posterior
using a collapsed Gibbs sampler, which iteratively draws cn from the conditional distribution
p(c_n \mid c_{-n}, X, Y, b, D, f, \alpha, \lambda) \propto p(c_n \mid D, f, \alpha)\, p(Y \mid z(c), X, b, \lambda). \quad (4)
Here, z(c) is the clustering into parts defined by the customer links c. The ddCRP prior is given by
Equation (1), while the likelihood term in the above equation further factorizes as
p(Y \mid z(c), X, b, \lambda) = \prod_{k=1}^{K(c)} \prod_{j=1}^{J} p(Y_j^k \mid X_{b_j}^k, \lambda) \quad (5)
where Y_j^k ∈ R^{3×N_k} is the set of triangular faces in part k of pose j, and X_{b_j}^k are the corresponding reference faces. Exploiting the conjugacy of the normal likelihood to the prior over affine transformations in Equation (2), we marginalize the part-specific latent variables A_jk and Σ_jk to compute
the marginal likelihood in closed form (see the supplement for a derivation):
p(Y_j^k \mid X_{b_j}^k, \lambda) = \frac{|K|^{3/2}\, |S_0|^{n_0/2}\, \Gamma_3\!\big(\tfrac{N_k + n_0}{2}\big)}{\pi^{3 N_k / 2}\, |S_{xx}|^{3/2}\, |S_0 + S_{y|x}|^{(N_k + n_0)/2}\, \Gamma_3\!\big(\tfrac{n_0}{2}\big)}, \quad (6)
S_{xx} = X_{b_j}^k (X_{b_j}^k)^T + K, \qquad S_{yx} = Y_j^k (X_{b_j}^k)^T + M K, \quad (7)
S_{y|x} = Y_j^k (Y_j^k)^T + M K M^T - S_{yx} (S_{xx})^{-1} S_{yx}^T. \quad (8)
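In log space, equations (6)-(8) translate directly into a few matrix operations; the sketch below is a literal transcription under the stated shapes, with Γ₃ evaluated via SciPy's multivariate log-gamma, and the function name is an assumption.

```python
import numpy as np
from scipy.special import multigammaln

def log_marginal_likelihood(Y, X, M, K, n0, S0):
    """Log of eq. (6): collapsed likelihood of part faces Y (3 x Nk) given
    reference faces X (4 x Nk, homogeneous), with the affine map A and noise
    covariance Sigma marginalized under the matrix normal-inverse-Wishart prior."""
    Nk = Y.shape[1]
    Sxx = X @ X.T + K                      # eq. (7)
    Syx = Y @ X.T + M @ K                  # eq. (7)
    Sy_x = Y @ Y.T + M @ K @ M.T - Syx @ np.linalg.solve(Sxx, Syx.T)  # eq. (8)
    return (1.5 * np.linalg.slogdet(K)[1]
            + 0.5 * n0 * np.linalg.slogdet(S0)[1]
            + multigammaln((Nk + n0) / 2.0, 3)
            - 1.5 * Nk * np.log(np.pi)
            - 1.5 * np.linalg.slogdet(Sxx)[1]
            - 0.5 * (Nk + n0) * np.linalg.slogdet(S0 + Sy_x)[1]
            - multigammaln(n0 / 2.0, 3))
```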
Instead of explicitly sampling from Equation (4), a more efficient sampler [4] can be derived by observing that different realizations of the link c_n only make a small change to the partition structure. First, note that removing a link c_n generates a partition z(c_{-n}) which is either identical to the old partition z(c), or contains one extra part created by splitting some existing part. Sampling new realizations of c_n gives rise to new partitions z(c_{-n} ∪ c_n^{(new)}), which may either be identical to z(c_{-n}) or contain one fewer part, due to a merge of two existing parts. We thus sample c_n from the following distribution, which only tracks those parts which change with different realizations of c_n:
p(c_n \mid c_{-n}, X, Y, b, D, f, \alpha, \lambda) \propto \begin{cases} p(c_n \mid D, f, \alpha)\, \Lambda(Y, X, b, z(c), \lambda) & \text{if } c_n \text{ links } k_1 \text{ and } k_2; \\ p(c_n \mid D, \alpha) & \text{otherwise,} \end{cases} \quad (9)
\Lambda(Y, X, b, z(c), \lambda) = \frac{\prod_{j=1}^{J} p(Y_j^{k_1 \cup k_2} \mid X_{b_j}^{k_1 \cup k_2}, \lambda)}{\prod_{j=1}^{J} p(Y_j^{k_1} \mid X_{b_j}^{k_1}, \lambda)\; \prod_{j=1}^{J} p(Y_j^{k_2} \mid X_{b_j}^{k_2}, \lambda)}.
Here, k_1 and k_2 are parts in z(c_{-n}). Note that if the mesh segmentation c is the only quantity of interest, the analytically marginalized affine transformations A_jk need not be directly estimated. However, for some applications the transformations are of direct interest. Given a sampled segmentation, the part-specific parameters for pose j have the following posterior [10]:
p(A_{jk}, \Sigma_{jk} \mid Y_j^k, X^k, \lambda) \propto MN(A_{jk} \mid S_{yx} S_{xx}^{-1}, \Sigma_{jk}, S_{xx})\, IW(\Sigma_{jk} \mid N_k + n_0, S_{y|x} + S_0) \quad (10)
Marginalizing the noise covariance matrix, the distribution over transformations is then
p(A_{jk} \mid Y_j^k, X^k, \lambda) = \int MN(A_{jk} \mid S_{yx} S_{xx}^{-1}, \Sigma_{jk}, S_{xx})\, IW(\Sigma_{jk} \mid N_k + n_0, S_{y|x} + S_0)\, d\Sigma_{jk} = MT(A_{jk} \mid N_k + n_0, S_{yx} S_{xx}^{-1}, S_{xx}, S_{y|x} + S_0) \quad (11)
where MT(·) is a matrix-t distribution [11] with mean S_{yx} S_{xx}^{-1} and N_k + n_0 degrees of freedom.
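Putting the pieces together, one collapsed Gibbs update for a single link c_n can be sketched as below. For clarity it re-scores the full partition for every candidate rather than tracking only the merged parts as in equation (9), so it is a correct but inefficient stand-in; all names are hypothetical.

```python
import numpy as np

def resample_link(n, links, neighbors, alpha, log_lik_of_partition, rng):
    """Collapsed Gibbs update for one customer link c_n: score the self-link and
    each adjacent face under prior x marginal likelihood, then sample. `links`
    is an integer array, `neighbors[n]` lists faces adjacent to n, and
    `log_lik_of_partition(links)` wraps eqs. (5)-(6)."""
    candidates = [n] + list(neighbors[n])          # window decay: adjacent faces only
    log_scores = []
    for c in candidates:
        prior = np.log(alpha) if c == n else 0.0   # f(d) = 1 for adjacent faces
        trial = links.copy()
        trial[n] = c
        log_scores.append(prior + log_lik_of_partition(trial))
    log_scores = np.array(log_scores)
    p = np.exp(log_scores - np.logaddexp.reduce(log_scores))
    links[n] = candidates[rng.choice(len(candidates), p=p)]
    return links
```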
4 Experimental Results
We now experimentally validate, both qualitatively and quantitatively, our mesh-ddcrp model. Because "ground truth" parts are unavailable for the real body pose datasets of primary interest, we
propose an alternative evaluation metric based on the prediction of held-out object poses, and show
that the mesh-ddcrp performs favorably against competing approaches.
We primarily focus on a collection of 56 training meshes, acquired and aligned [6] from 3D scans
of two female subjects in 27 and 29 poses. For quantitative tests, we employ 12 meshes of each of
six different female subjects [15] (Figure 4). For each subject, a mesh in a canonical pose is chosen
as the reference mesh (Figure 1). These meshes contain about 20,000 faces.
4.1 Hyperparameter Specification and MCMC Learning
The hyperparameters that regularize our mesh-ddcrp prior have intuitive interpretations, and can be
specified based on properties of the mesh data under consideration. As described in Section 2.1,
the ddCRP distances D and f are set to guarantee spatially connected parts. The self-connection parameter is set to a small value, α = 10^{-8}, to encourage the creation of larger parts.
The matrix normal-inverse-Wishart prior on affine transformations A_jk, and residual noise covariances Σ_jk, has hyperparameters λ = {n_0, S_0, M, K}. The mean affine transformation M is set to
the identity transformation, because on average we expect mesh faces to undergo small deformations. For the noise covariance prior, we set the degrees of freedom n_0 = 5, a value which makes the prior variance nearly as large as possible while ensuring that the mean remains finite. The expected part variance S_0 captures the degree of non-rigidity which we expect parts to demonstrate, as well as noise from the mesh alignment process. The correspondence error in our human meshes is approximately 0.01 m; allowing for some part non-rigidity, we set σ = 0.015 m and S_0 = σ^2 · I_{3×3}. K is a precision matrix set to K = σ^2 · diag(1, 1, 1, 0.1). The Kronecker product of K^{-1} and S_0 governs the covariance of the distribution on A. Our settings make this nearly the identity for most components, but the translation components of A have variance which is an order of magnitude larger, so that the expected scale of the translation parameters matches that of the mesh coordinates.
In our experiments, we ran the mesh-ddcrp sampler for 200 iterations from each of five random
initializations, and selected the most probable posterior sample. The computational cost of a Gibbs
iteration scales linearly with the number of meshes; our unoptimized Matlab implementation required around 10 hours to analyze 56 human meshes.
4.2 Baseline Segmentation Methods
We compare the mesh-ddcrp model to three competing methods. The first is a modified agglomerative clustering technique [16] which enforces spatial contiguity of the faces within each part. At
initialization, each face is deemed to be its own part. Adjacent parts on the mesh are then merged
based on the squared error in describing their motion by affine transformations. Only adjacent parts
are considered in these merge steps, so that parts remain spatially connected.
Our second baseline is based on a publicly available implementation of spectral clustering methods [17], a popular approach which has been previously used for mesh segmentation [18]. We compare to an affinity matrix specifically designed to cluster faces with similar motions [19]. The affinity between two mesh faces u, v is defined as C_{uv} = \exp\{-(\sigma_{uv} + m_{uv}) / S^2\}, where m_{uv} = \frac{1}{J} \sum_j \delta_{uvj}, \delta_{uvj} is the Euclidean distance between u and v in pose j, \sigma_{uv} = \sqrt{\frac{1}{J} \sum_j (\delta_{uvj} - m_{uv})^2} is the corresponding standard deviation, and S = \frac{1}{M} \sum_{u,v} (\sigma_{uv} + m_{uv}) over all M pairs of faces u, v.
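Under this reading of the affinity, a direct computation is sketched below; it materializes all pairwise distances, so it is only practical for small meshes, and for simplicity it averages S over all ordered pairs.

```python
import numpy as np

def motion_affinity(centers):
    """Affinity of [19] from per-pose face centers: `centers` is (J, N, 3).
    Returns C with C[u, v] = exp(-(sigma_uv + m_uv) / S**2), where m_uv and
    sigma_uv are the mean and standard deviation over poses of the Euclidean
    distance between faces u and v, and S averages (sigma_uv + m_uv)."""
    d = np.linalg.norm(centers[:, :, None, :] - centers[:, None, :, :], axis=-1)  # (J, N, N)
    m = d.mean(axis=0)
    sigma = d.std(axis=0)
    S = (sigma + m).mean()
    return np.exp(-(sigma + m) / S**2)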
u,v ?uv +
For the agglomerative and spectral clustering approaches, the number of parts must be externally
specified; we experimented with K = 5, 10, 15, 20, 25, 30 parts. We also consider a Bayesian
nonparametric baseline which replaces the ddCRP prior over mesh partitions with a standard CRP
prior. The resulting mesh-crp model may estimate the number of parts, but doesn?t model mesh
structure or enforce part contiguity. The expected number of parts under the CRP prior is roughly
? log N ; we set ? = 2 so that the expected number of mesh-crp parts is similar to the number of
parts discovered by the mesh-ddcrp. To exploit bilateral symmetry, for all methods we only segment
the right half of each mesh. The resulting segmentation is then reflected onto the left half.
4.3 Part Discovery and Motion Prediction
We first consider the synthetic Tosca dataset [20], and separately analyze the Centaur (six poses)
and Horse (eight poses) meshes. These meshes contain about 31,000 and 38,000 triangular faces,
respectively. Figure 3 displays the segmentations of the Tosca meshes inferred by mesh-ddcrp. The
inferred parts largely correspond to groups of mesh faces which undergo similar transformations.
Figure 4 displays the results produced by the ddCRP, as well as our baseline methods, on the human
mesh data. Qualitatively, the segmentations produced by mesh-ddcrp correspond to our intuitions
about the body. Note that in addition to capturing the head and limbs, the segmentation successfully
segregates distinctly moving small regions such as knees, elbows, shoulders, biceps, and triceps. In
all, the mesh-ddcrp detects 20 distinctly moving parts for one half of the body.
We now introduce a quantitative measure of segmentation quality: segmentations are evaluated by
their ability to explain the articulations of test meshes with novel shapes and poses. Given a collection of T test meshes Yt with corresponding reference meshes Xbt , and a candidate segmentation
into K parts, we compute
E = \frac{1}{T} \sum_{t=1}^{T} \sum_{k=1}^{K} \| Y_t^k - \hat{A}_{tk} X_{b_t}^k \|^2. \quad (12)
Figure 3: Segmentations produced by mesh-ddcrp on synthetic Tosca meshes [20]. The first mesh in each row
displays the chosen reference mesh. For illustration, we have only segmented the right half of each mesh.
Here, \hat{A}_{tk} is the least-squares estimate of the single affine transformation responsible for mapping X_{b_t}^k to Y_t^k. Note that Equation (12) is trivially zero for a degenerate solution wherein each mesh face is assigned to its own part. However, segmentations of similar resolution may safely be compared using Equation (12), with lower errors corresponding to better segmentations.
On our test set of human meshes, the mesh-ddcrp model produces an error of E = 1.39 meters, which corresponds to sub-millimeter accuracy when normalized by the number of faces. Figure 4 compares the errors achieved by the different methods. According to a Wilcoxon signed rank test (5% significance level), mesh-ddcrp is significantly better than all other methods, including for settings of K which allocate 50% more parts to the competing approaches. A sketch of the evaluation metric follows.
Next, we demonstrate the benefits of sharing information among differently shaped bodies. We
selected an illustrative articulated pose for each of the two training subjects in addition to their
respective reference poses (Figure 4). The chosen poses either exhibit upper or lower body deformations, but not both. The meshes were then segmented both independently for the two subjects
and jointly sharing information across subjects. Figure 5 demonstrates that the independent segmentations exhibit both undersegmented (legs in the first set) and oversegmented (head in the second)
parts. However, sharing information among subjects results in parts which correspond well with
physical human bodies. Note that with only two articulated poses, we are able to generate meaningful segmentations in about an hour of computation. This data-limited scenario also demonstrates the benefits of the ddCRP prior: as shown in Figure 5, the parts extracted by mesh-crp are "patchy", spatially disconnected, and physically implausible.
5 Discussion
Adapting the ddCRP to collections of 3D meshes, we have developed an effective approach for discovering an unknown number of parts underlying articulated object motion. Unlike previous methods, our model guarantees that parts are spatially connected, and uses transformations to model instances with potentially varying body shapes. Via a novel application of matrix normal-inverse-Wishart priors, our sampler analytically marginalizes transformations for improved efficiency. While
we have modeled part motion via affine transformations, future work should explore more accurate
Lie algebra characterizations of deformation manifolds [21].
Experiments with dozens of real human body poses provide strong quantitative evidence that our approach produces state-of-the-art segmentations with many potential applications. We are currently
exploring methods for using multiple samples from the ddCRP posterior to characterize part uncertainty, and scaling our Monte Carlo learning algorithms to datasets containing thousands of meshes.
Acknowledgments This work was supported in part by the Office of Naval Research under contract W911QY-10-C-0172. We thank Eric Rachlin, Alex Weiss, and David Hirshberg for acquiring
and aligning the human meshes, and Aggeliki Tsoli for her helpful comments.
[Figure 4 graphic: top rows show segmentations; the bottom-left panel plots test error in meters (roughly 1 to 4.5, y-axis) against the number of parts (5 to 30, x-axis) for mesh-ddcrp, spectral clustering (Spect15/20/25), agglomerative clustering (Agglom15/20/25), and mesh-crp.]
Figure 4: Top two rows (left to right): Segmentations produced by spectral and agglomerative clustering with
15, 20, and 25 clusters respectively, followed by the mesh-crp and mesh-ddcrp segmentations. Bottom row: Test
set results. We display mesh-ddcrp segmentations for several test meshes, and quantitatively compare methods.
[Figure 5 graphic: columns labeled Ref. pose, Illust. pose, ind. mesh-crp, ind. mesh-ddcrp, mesh-crp, mesh-ddcrp.]
Figure 5: Impact of sharing information across bodies with varying shapes. The two rows correspond to the training subjects. Each row displays the reference pose, an illustrative articulated pose, the mesh-crp and mesh-ddcrp segmentations produced by independently segmenting the pair of poses of each individual, and the mesh-crp and mesh-ddcrp segmentations produced by jointly segmenting the chosen poses from both subjects.
References
[1] M. Attene, S. Katz, M. Mortara, G. Patane, M. Spagnuolo, and A. Tal. Mesh segmentation - a comparative study. In SMI, 2006.
[2] Xiaobai Chen, Aleksey Golovinskiy, and Thomas Funkhouser. A benchmark for 3D mesh segmentation. ACM Transactions on Graphics (Proc. SIGGRAPH), 28(3):73:1–73:12, 2009.
[3] Evangelos Kalogerakis, Aaron Hertzmann, and Karan Singh. Learning 3D mesh segmentation and labeling. ACM Transactions on Graphics, 29(4):102:1–102:12, July 2010.
[4] David M. Blei and Peter I. Frazier. Distance dependent Chinese restaurant processes. J. Mach. Learn. Res., 12:2461–2488, November 2011.
[5] S. Ghosh, A. B. Ungureanu, E. B. Sudderth, and D. Blei. Spatial distance dependent Chinese restaurant processes for image segmentation. In NIPS, pages 1476–1484, 2011.
[6] D. Hirshberg, M. Loper, E. Rachlin, and M. J. Black. Coregistration: Simultaneous alignment and modeling of articulated 3D shape. In ECCV, pages 242–255, 2012.
[7] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. JCGS, 9(2):249–265, 2000.
[8] D. Anguelov, D. Koller, H. Pang, P. Srinivasan, and S. Thrun. Recovering articulated object models from 3D range data. In UAI, pages 18–26, 2004.
[9] J. Franco and E. Boyer. Learning temporally consistent rigidities. In IEEE CVPR, pages 1241–1248, 2011.
[10] E. B. Fox. Bayesian Nonparametric Learning of Complex Dynamical Phenomena. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 2009.
[11] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, October 2000.
[12] Tong-Yee Lee, Yu-Shuen Wang, and Tai-Guang Chen. Segmenting a deforming mesh into near-rigid components. The Visual Computer, 22(9):729–739, September 2006.
[13] Guy Rosman, Michael M. Bronstein, Alexander M. Bronstein, Alon Wolf, and Ron Kimmel. Group-valued regularization framework for motion segmentation of dynamic non-rigid shapes. In SSVM'11, pages 725–736, 2012.
[14] Stefanie Wuhrer and Alan Brunton. Segmenting animated objects into near-rigid components. The Visual Computer, 26:147–155, 2010.
[15] N. Hasler, C. Stoll, M. Sunkel, B. Rosenhahn, and H.-P. Seidel. A statistical model of human pose and body shape. In Computer Graphics Forum (Proc. Eurographics 2009), volume 2, pages 337–346, March 2009.
[16] R. N. Shepard. Multidimensional scaling, tree-fitting, and clustering. Science, 210:390–398, October 1980.
[17] Wen-Yen Chen, Yangqiu Song, Hongjie Bai, Chih-Jen Lin, and Edward Y. Chang. Parallel spectral clustering in distributed systems. IEEE PAMI, 33(3):568–586, 2011.
[18] Rong Liu and Hao Zhang. Segmentation of 3D meshes through spectral clustering. In Pacific Conference on Computer Graphics and Applications, pages 298–305, 2004.
[19] Edilson de Aguiar, Christian Theobalt, Sebastian Thrun, and Hans-Peter Seidel. Automatic conversion of mesh animations into skeleton-based animations. Computer Graphics Forum, 27(2):389–397, 2008.
[20] Alexander Bronstein, Michael Bronstein, and Ron Kimmel. Calculus of nonrigid surfaces for geometry and texture manipulation. IEEE Trans. on Visualization and Computer Graphics, 13:902–913, 2007.
[21] Oren Freifeld and Michael J. Black. Lie bodies: A manifold representation of 3D human shape. In European Conf. on Computer Vision (ECCV), Part I, LNCS 7572, pages 1–14. Springer-Verlag, October 2012.
|
4,143 | 475 |
Recurrent Networks and NARMA Modeling

Jerome Connor
Les E. Atlas
FT-10
Interactive Systems Design Laboratory
Dept. of Electrical Engineering
University of Washington
Seattle, Washington 98195

Douglas R. Martin
B-317
Dept. of Statistics
University of Washington
Seattle, Washington 98195
Abstract
There exist large classes of time series, such as those with nonlinear moving
average components, that are not well modeled by feedforward networks
or linear models, but can be modeled by recurrent networks. We show that
recurrent neural networks are a type of nonlinear autoregressive-moving
average (N ARMA) model. Practical ability will be shown in the results of
a competition sponsored by the Puget Sound Power and Light Company,
where the recurrent networks gave the best performance on electric load
forecasting.
1 Introduction
This paper will concentrate on identifying types of time series for which a recurrent
network provides a significantly better model, and corresponding prediction, than
a feedforward network. Our main interest is in discrete time series that are parsimoniously modeled by a simple recurrent network, but for which a feedforward
neural network is highly non-parsimonious by virtue of requiring an infinite amount
of past observations as input to achieve the same accuracy in prediction.
Our approach is to consider predictive neural networks as stochastic models. Section
2 will be devoted to a brief summary of time series theory that will be used to
illustrate the differences between feedforward and recurrent networks. Section 3
will investigate some of the problems associated with nonlinear moving average and
state space models of time series. In particular, neural networks will be analyzed as
nonlinear extensions of traditional linear models. From the preceding sections, it will
become apparent that the recurrent network will have advantages over feedforward
neural networks in much the same way that ARMA models have over autoregressive
models for some types of time series.
Finally, in Section 4, the results of a competition in electric load forecasting sponsored by the Puget Sound Power and Light Company will be discussed. In this competition, a recurrent network model gave superior results to feedforward networks
and various types of linear models. The advantages of a state space model for
multivariate time series will be shown on the Puget Power time series.
2 Traditional Approaches to Time Series Analysis
The statistical approach to forecasting involves the construction of stochastic models to predict the value of an observation $x_t$ using previous observations. This is
often accomplished using linear stochastic difference equation models, with random
inputs.
A very general class of linear models used for forecasting purposes is the class of
ARMA(p,q) models
$$x_t = \sum_{i=1}^{p} \phi_i x_{t-i} + \sum_{i=1}^{q} \theta_i e_{t-i} + e_t,$$
where $e_t$ denotes random noise, independent of past $x_s$. The conditional mean
(minimum mean square error) predictor $\hat{x}_t$ of $x_t$ can be expressed in the recurrent
form
$$\hat{x}_t = \sum_{i=1}^{p} \phi_i x_{t-i} + \sum_{i=1}^{q} \theta_i \hat{e}_{t-i},$$
where $e_k$ is approximated by
$$\hat{e}_k = x_k - \hat{x}_k, \qquad k = t-1, \ldots, t-q.$$
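To make the recursion concrete, the following sketch (Python with NumPy; the ARMA(1,1) coefficients below are illustrative assumptions) computes one-step predictions by feeding past residuals back in place of the unobserved noise terms:

```python
import numpy as np

def arma_one_step(x, phi, theta):
    """Recursive one-step ARMA(p,q) predictions with estimated residuals."""
    p, q = len(phi), len(theta)
    x_hat = np.zeros_like(x)                  # predictions x_hat_t
    e_hat = np.zeros_like(x)                  # residuals e_hat_k = x_k - x_hat_k
    for t in range(max(p, q), len(x)):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p))
        ma = sum(theta[i] * e_hat[t - 1 - i] for i in range(q))
        x_hat[t] = ar + ma
        e_hat[t] = x[t] - x_hat[t]
    return x_hat, e_hat

# Example: simulate an ARMA(1,1) process and predict it one step ahead.
rng = np.random.default_rng(0)
e = rng.normal(size=500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + e[t] + 0.4 * e[t - 1]
x_hat, _ = arma_one_step(x, phi=[0.7], theta=[0.4])
```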
The key properties of interest for an ARMA(p,q) model are stationarity and invertibility. If the process $x_t$ is stationary, its statistical properties are independent of
time. Any stationary ARMA(p,q) process can be written as a moving average
$$x_t = \sum_{k=1}^{\infty} h_k e_{t-k} + e_t.$$
An invertible process can be equivalently expressed in terms of previous observations
or residuals. For a process to be invertible, all the poles of the z-transform must
lie inside the unit circle of the z-plane. An invertible ARMA(p,q) process can be
written as an infinite autoregression
$$x_t = \sum_{k=1}^{\infty} \phi_k x_{t-k} + e_t.$$
As an example of how the inverse process occurs, let $e_t$ be solved for in terms of $x_t$
and then substitute previous $e_t$'s into the original process. This can be illustrated
with an MA(1) process
$$x_t = e_t + \theta e_{t-1},$$
$$e_{t-i} = x_{t-i} - \theta e_{t-i-1},$$
$$x_t = e_t + \theta(x_{t-1} - \theta e_{t-2}),$$
$$x_t = e_t + \sum_i (-1)^{i-1} \theta^i x_{t-i}.$$
Looking at this example, it can be seen that an MA(1) process with $|\theta| \geq 1$ will
depend significantly on observations in the distant past. However, if $|\theta| < 1$, then
the effect of the distant past is negligible.
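This decay is easy to verify numerically; in the sketch below (the $\theta$ values are arbitrary), the implied autoregressive weights shrink geometrically for $|\theta| < 1$ and grow without bound otherwise:

```python
import numpy as np

def ma1_ar_weights(theta, n_lags=10):
    # Weights on x_{t-i} in the inverted MA(1): (-1)^(i-1) * theta^i
    return np.array([(-1) ** (i - 1) * theta ** i for i in range(1, n_lags + 1)])

print(np.abs(ma1_ar_weights(0.5)))   # decays: the distant past is negligible
print(np.abs(ma1_ar_weights(1.1)))   # grows: the distant past dominates
```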
In the nonlinear case, it will be shown that it is not always possible to go back
and forth between descriptions in terms of observables (e.g. $x_i$) and descriptions
in terms of unobservables (e.g. $e_i$), even when $e_t = 0$. For a review of time series
prediction in greater depth see the works of Box [1] or Harvey [2].
3 Nonlinear ARMA Models
Many types of nonlinear models have been proposed in the literature. Here we focus
on feedforward and recurrent neural networks and how they relate to nonlinear
ARMA models.
3.1 Nonlinear Autoregressive Models
The simplest generalization to the nonlinear case would be the nonlinear autoregressive (NAR) model
$$x_t = h(x_{t-1}, x_{t-2}, \ldots, x_{t-p}) + e_t,$$
where $h(\cdot)$ is an unknown smooth function, with the assumption that the best (i.e., minimum mean square error) prediction of $x_t$ given $x_{t-1}, \ldots, x_{t-p}$ is its conditional mean
$$\hat{x}_t = E(x_t \mid x_{t-1}, \ldots, x_{t-p}) = h(x_{t-1}, \ldots, x_{t-p}).$$
Feedforward networks were first proposed as an NAR model for time series prediction by Lapedes and Farber [3]. A feedforward network is a nonlinear approximation
to $h$ given by
$$\hat{x}_t = \hat{h}(x_{t-1}, \ldots, x_{t-p}) = \sum_{i=1}^{I} w_i f\Big(\sum_{j=1}^{p} w_{ij} x_{t-j}\Big).$$
The weight matrix $W$ is lower diagonal and allows no feedback. Thus the feedforward network is a nonlinear mapping from previous observations onto predictions
of future observations. The function $f(x)$ is a smooth bounded monotonic function,
typically a sigmoid.
The parameters $w_i$ and $w_{ij}$ are estimated from a training sample $x_1, \ldots, x_N$, thereby
obtaining an estimate $\hat{h}$ of $h$. Estimates are obtained by minimizing the sum
of the squared residuals $\sum_{t=1}^{N} (x_t - \hat{x}_t)^2$ by the gradient descent procedure known as
"backpropagation" [4].
3.2 NARMA or NMA
A simple nonlinear generalization of ARMA models is
$$x_t = h(x_{t-1}, x_{t-2}, \ldots, x_{t-p}, e_{t-1}, \ldots, e_{t-q}) + e_t.$$
It is natural to predict
$$\hat{x}_t = h(x_{t-1}, x_{t-2}, \ldots, x_{t-p}, \hat{e}_{t-1}, \ldots, \hat{e}_{t-q}).$$
If the model $h(x_{t-1}, x_{t-2}, \ldots, x_{t-p}, e_{t-1}, \ldots, e_{t-q})$ is chosen, then a recurrent network
can approximate it as
$$\hat{x}_t = \hat{h}(x_{t-1}, \ldots, x_{t-p}) = \sum_{i=1}^{I} w_i f\Big(\sum_{j=1}^{p} w_{ij} x_{t-j} + \sum_{j=1}^{q} w'_{ij}(x_{t-j} - \hat{x}_{t-j})\Big).$$
This model is a special case of the fully interconnected recurrent network
$$\hat{x}_t = \sum_{i=1}^{I} w_i f\Big(\sum_{j=1}^{n} w_{ij} x_{t-j}\Big),$$
where the $w_{ij}$ are coefficients of a full matrix.
Nonlinear autoregressive models and nonlinear moving average models are not always equivalent for nondeterministic processes, as they are in the linear case. If the probability of the next observation depends on the previous state of the process, a
representation built on $e_t$ may not be complete unless some information on the previous state is added [8]. The problem is that if $e_t, \ldots, e_{t-m}$ are known, there is still
not enough information to determine which state the series is in at $t - m$. Given
the lack of knowledge of the initial state, it is impossible to predict future states,
and without the state information, the best predictions cannot be made.
If the moving average representation cannot be made with $e_t$ alone, it still may be
possible to express a model in terms of past $e_t$ and state information.
It has been shown that for a large class of nondeterministic Markov processes, a
model of this form can be constructed [8]. This link is important, because a recurrent
network is this type of model. For further details on using recurrent networks for
NARMA modeling see Connor et al. [9].
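A sketch of such a recurrent NARMA predictor, with past prediction errors fed back as additional inputs, is given below (illustrative only; this is not the Williams-Zipser-trained network used in Section 4, and the tanh nonlinearity and weight shapes are assumptions):

```python
import numpy as np

def narma_predict(x, W_x, W_e, w, p, q):
    """Recurrent NARMA prediction: hidden units see p lagged observations
    and q lagged prediction errors (x_{t-j} - x_hat_{t-j})."""
    x_hat = np.zeros_like(x)
    for t in range(max(p, q), len(x)):
        x_lags = x[t - p:t][::-1]
        e_lags = (x[t - q:t] - x_hat[t - q:t])[::-1]   # fed-back residuals
        h = np.tanh(W_x @ x_lags + W_e @ e_lags)
        x_hat[t] = w @ h
    return x_hat
```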
4 Competition on Load Forecasting Data
A fully interconnected recurrent network trained with the Williams and Zipser algorithm [10] was part of a competition to predict the loads of the Puget Sound Power
and Light Company from November 11, 1990 to March 31, 1991. The object was
to predict the profile of demand for electric power, known as the load, for each day
on the previous working day. Because the forecast is made on Friday morning, the
Monday prediction is the most difficult. Actual loads and temperatures of the past
are available as well as forecasted temperatures for the day of the prediction.
Neural networks are not parsimonious and many parameters need to be determined.
Seasonality limits the amount of useful data for the load forecasting problem. For
example, the load profile in August is not useful for predicting the load profile in
January. This limited amount of data severely constrains the number of parameters
a model can accurately determine. We avoided seasonality, while increasing the size
of the training set by including data from the last four winters. In total, 26,976
vectors were available when data from August 1 to March 31 for 1986 to 1990 were
included. The larger training set enables neural network models be trained with
less danger of overfitting the data. If the network can accurately model load growth
over the years, then the network will have the added advantage of being exposed
to a larger temperature spectrum on which to base future predictions. The larger
temperature spectrum is hypothetically useful for predicting phenomenon such as
cold snaps which can result in larger loads than normal. It should be noted that
neural networks have been applied to this problem in the past [6].
Initially five recurrent models were constructed, one for each day of the week, with
Wednesday, Thursday and Friday in a single network. Each network has temperature and load values from a week previous at that hour, the forecasted temperature
of the hour to be predicted, and the hour, year, and week of the forecast. The week of
the forecast was included to allow the network to model the seasonality of the data.
Some models have added load and temperature from earlier in the week, depending
on the availability of the data. The networks themselves consisted of three to four
neurons in the hidden layer. This predictor is of the form
$$l_t(k) = e_t(k-7) + f\big(l_t(k-7),\, e_t(k-7),\, \hat{T}_t(k),\, T_s(k-1),\, t,\, d,\, y\big),$$
where $f(\cdot)$ is a nonlinear function, $l_t(k)$ is the load at time $t$ and day $k$, $e_t$ is the
noise, $T$ is the temperature, $\hat{T}$ is the forecasted temperature, $d$ is the day of the
week, and $y$ is the year of the data.
After comparing its performance to that of the competition winner, the linear model
in Fig. 1, the poor performance could be attributed to the choice of model rather
than to a problem with recurrent networks. It should be mentioned that the linear
model took as one of its inputs the square of the last available load. This is a
parsimonious way of modeling nonlinearities. A second recurrent predictor was
then built with the same input and output configuration as the linear model, save
the square of the previous load term, which the net's nonlinearities can handle. This
net, denoted as the Recurrent Network, had a different recurrent model for each
hour of the day; this yielded the best predictions. This predictor is of the form
$$l_t(k) = e_t(k) + f_t\big(l_t(k-1),\, e_t(k-1),\, \hat{T}_t(k),\, T_s(k-1),\, d,\, y\big).$$
All of the models in the figure use the last available load, the forecasted temperature
at the hour to be predicted, the maximum forecasted temperature of the day to be
predicted, the previous midnight temperatures, and the hour and year of the prediction. A second recurrent network was also trained with the last available load
at that hour; this enabled $e_{t-1}$ to be modeled. The availability of $e_{t-1}$ turned out
to be the difference between making superior and average predictions. It should be
noted that the use of $e_{t-1}$ did not improve the results of linear models.
The three most important error measures are the weekly morning, afternoon, and
total loads, listed in Table 1. The A.M. peak is the mean average
percent error (MAPE) of the summed predictions from 7 A.M. to 9 A.M., the P.M.
peak is the MAPE of the summed predictions from 5 P.M. to 7 P.M., and the total
is the MAPE of the summed predictions over the entire day.

Table 1: Mean Square Error
Recurrent: .0275  .0355  .0218  .0311

Results of the recurrent network and other predictors for the total power
for the day prediction are shown in Fig. 1. The performance on the A.M. and P.M. peaks was similar [9].
The failure of the daily recurrent network to accurately predict is a product of trying
to model too complex a problem. When the complexity of the problem was reduced
to that of predicting a single hour of the day, results improved significantly [7].
The superior performance of the recurrent network over the feedforward network
is time-series dependent. A feedforward and a recurrent network with the same
input representation were trained to predict the 5 P.M. load on the previous work
day. The feedforward network succeeded in modeling the training set with a mean
square error of .0153 compared to the recurrent network's .0179. However, when
tested on several winters outside the training set, the results varied. For the 1990-91
winter, the recurrent network did better, with a
mean square error of .0311 compared to the feedforward network's .0331. For the
other winters, in the years before the training set, the results were quite different:
the feedforward network won in all cases. The differences in prediction performance
can be explained by the inability of the feedforward network to model load growth
in the future. The loads experienced in the 1990-91 winter were outside the range of
the entire training set. The earlier winters' ranges of loads were not as far from the
training set, and the feedforward network modeled them well.
The effect of the nonlinear nature of neural networks was apparent in the error
residuals of the training and test sets. Figs. 2 and 3 are plots of the residuals
against the predicted load for the training and test sets, respectively. In Fig. 2,
the mean and variance of the residuals are roughly constant as a function of the
predicted load; this is indicative of a good fit to the data. However, in Fig. 3,
the errors tend to be positive for larger loads and negative for lesser loads. This
is a product of the squashing effect of the sigmoidal nonlinearities. The squashing
effect becomes acute during the prediction of the peak loads of the winter. These
peak loads are caused when a cold spell occurs and the power demand reaches record
levels. This is the only measure on which the performance of the recurrent networks
is surpassed: human experts outperformed the recurrent network for predictions
during cold spells. The recurrent network did outperform all other statistical models
on this measure.
[Figure 1: Competition performance on total power. Bars compare the recurrent network, a feedforward network, the best linear model, and the original recurrent network.]
[Figure 2: Prediction vs. residual on the training set.]
[Figure 3: Prediction vs. residual on the testing set.]
5 Conclusion
Recurrent networks are the nonlinear neural network analog of linear ARMA models. As such, they are well-suited for time series that possess moving average components, are state dependent, or have trends. Recurrent neural networks can give
superior results for load forecasting, but as with linear models, the choice of model
is critical to good prediction performance.
6 Acknowledgements
We would like to thank Milan Casey Brace of the Puget Power Corporation, Dr. Seho
Oh, Dr. Mohammed EI-Sharkawi, Dr. Robert Marks, and Dr. Mark Damborg for
helpful discussions. We would also like to thank the National Science Foundation
for partially supporting this work.
References
[1] G. Box, Time series analysis: forecasting and control, Holden-Day, 1976.
[2] A. C. Harvey, The econometric analysis of time series, MIT Press, 1990.
[3] A. Lapedes and R. Farber, "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling", Technical Report, LA-UR87-2662,
Los Alamos National Laboratory, Los Alamos, New Mexico, 1987.
[4] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing, vol. 1, D.E.
Rumelhart and J.L. McClelland, eds. Cambridge: M.I.T. Press, 1986, pp. 318-362.
[5] M.C. Brace, A Comparison of the Forecasting Accuracy of Neural Networks
with Other Established Techniques, Proc. of the 1st Int. Forum on Applications
of Neural Networks to Power Systems, Seattle, July 23-26, 1991.
[6] L. Atlas, J. Connor, et al., "Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications",
Advances in Neural Information Processing Systems 2, pp. 622-629, ed. D.
Touretzky, 1989.
[7] S. Oh et al., Electric Load Forecasting Using an Adaptively Trained Layered
Perceptron, Proc. of the 1st Int. Forum on Applications of Neural Networks to
Power Systems, Seattle, July 23-26, 1991.
[8] M. Rosenblatt, Markov Processes. Structure and Asymptotic Behavior,
Springer-Verlag, 1971, 160-182.
[9] J. Connor, L. E. Atlas, and R. D. Martin, "Recurrent Neural Networks and
Time Series Prediction", to be submitted to IEEE Trans. on Neural Networks,
1992.
[10] R. Williams and D. Zipser. A Learning Algorithm for Continually Running
Fully Recurrent Neural Networks, Neural Computation, 1, 1989, 270-280.
4,144 | 4,750 |
Learning optimal spike-based representations
Ralph Bourdoukan*
Group for Neural Theory
École Normale Supérieure
Paris, France
[email protected]

David G.T. Barrett*
Group for Neural Theory
École Normale Supérieure
Paris, France
[email protected]

Christian K. Machens
Champalimaud Neuroscience Programme
Champalimaud Centre for the Unknown
Lisbon, Portugal
[email protected]

Sophie Denève
Group for Neural Theory
École Normale Supérieure
Paris, France
[email protected]
Abstract
How can neural networks learn to represent information optimally? We answer
this question by deriving spiking dynamics and learning dynamics directly from
a measure of network performance. We find that a network of integrate-and-fire
neurons undergoing Hebbian plasticity can learn an optimal spike-based representation for a linear decoder. The learning rule acts to minimise the membrane
potential magnitude, which can be interpreted as a representation error after learning. In this way, learning reduces the representation error and drives the network
into a robust, balanced regime. The network becomes balanced because small representation errors correspond to small membrane potentials, which in turn results
from a balance of excitation and inhibition. The representation is robust because
neurons become self-correcting, only spiking if the representation error exceeds a
threshold. Altogether, these results suggest that several observed features of cortical dynamics, such as excitatory-inhibitory balance, integrate-and-fire dynamics
and Hebbian plasticity, are signatures of a robust, optimal spike-based code.
A central question in neuroscience is to understand how populations of neurons represent information and how they learn to do so. Usually, learning and information representation are treated as two
different functions. From the outset, this separation seems like a good idea, as it reduces the problem into two smaller, more manageable chunks. Our approach, however, is to study these together.
This allows us to treat learning and information representation as two sides of a single mechanism,
operating at two different timescales.
Experimental work has given us several clues about the regime in which real networks operate in
the brain. Some of the most prominent observations are: (a) high trial-to-trial variability, where a neuron responds differently to repeated, identical inputs [1, 2]; (b) asynchronous firing at the network
level, where spike trains of different neurons are at most very weakly correlated [3, 4, 5]; (c) tight balance
of excitation and inhibition, where every excitatory input is met by an inhibitory input of equal or greater
size [6, 7, 8]; and (d) spike-timing-dependent plasticity (STDP), where the strength of synapses changes as
a function of presynaptic and postsynaptic spike times [9].
Previously, it has been shown that observations (a)-(c) can be understood as signatures of an optimal,
spike-based code [10, 11]. The essential idea is to derive spiking dynamics from the assumption that
neurons only fire if their spike improves information representation. Information in a network may
* Authors contributed equally
originate from several possible sources: external sensory input, external neural network input, or
alternatively, it may originate within the network itself as a memory, or as a computation. Whatever
the source, this initial assumption leads directly to the conclusion that a network of integrate-and-fire
neurons can optimally represent a signal while exhibiting properties (a)-(c).
A major problem with this framework is that network connectivity must be completely specified a
priori, and requires the tuning of $N^2$ parameters, where $N$ is the number of neurons in the network.
Although this is feasible mathematically, it is unclear how a real network could tune itself into this
optimal regime. In this work, we solve this problem using a simple synaptic learning rule. The key
insight is that the plasticity rule can be derived from the same basic principle as the spiking rule in
the earlier work, namely, that any change should improve information representation.
Surprisingly, this can be achieved with a local, Hebbian learning rule, where synaptic plasticity
is proportional to the product of presynaptic firing rates with post-synaptic membrane potentials.
Spiking and synaptic plasticity then work hand in hand towards the same goal: the spiking of a
neuron decreases the representation error on a fast time scale, thereby giving rise to the actual
population representation; synaptic plasticity decreases the representation error on a slower time
scale, thereby improving or maintaining the population representation. For a large set of initial
connectivities and spiking dynamics, neural networks are driven into a balanced regime, where
excitation and inhibition cancel each other and where spike trains are asynchronous and irregular.
Furthermore, the learning rule that we derive reproduces the main features of STDP (property (d)
above). In this way, a network can learn to represent information optimally, with synaptic, neural
and network dynamics consistent with those observed experimentally.
1 Derivation of the learning rule for a single neuron
We begin by deriving a learning rule for a single neuron with an autapse (a self-connection) (Fig.
1A). Our approach is to derive synaptic dynamics for the autapse and spiking dynamics for the
neuron such that the neuron learns to optimally represent a time-varying input signal. We will derive
a learning rule for networks of neurons later, after we have developed the fundamental concepts for
the single neuron case.
Our first step is to derive optimal spiking dynamics for the neuron, so that we have a target for our
learning rule. We do this by making two simple assumptions [11]. First, we assume that the neuron
can provide an estimate or read-out $\hat{x}(t)$ of a time-dependent signal $x(t)$ by filtering its spike train
$o(t)$ as follows:
$$\dot{\hat{x}}(t) = -\hat{x}(t) + \Gamma o(t), \qquad (1)$$
where $\Gamma$ is a fixed read-out weight, which we will refer to as the neuron's "output kernel", and the
spike train can be written as $o(t) = \sum_i \delta(t - t_i)$, where $\{t_i\}$ are the spike times. Next, we assume
that the neuron only produces a spike if that spike improves the read-out, where we measure the
read-out performance through a simple squared-error loss function:
$$L(t) = \big(x(t) - \hat{x}(t)\big)^2. \qquad (2)$$
With these two assumptions, we can now derive optimal spiking dynamics. First, we observe that if
the neuron produces an additional spike at time $t$, the read-out increases by $\Gamma$, and the loss function
becomes $L(t \mid \text{spike}) = (x(t) - (\hat{x}(t) + \Gamma))^2$. This allows us to restate our spiking rule as follows:
the neuron should only produce a spike if $L(t \mid \text{no spike}) > L(t \mid \text{spike})$, or $(x(t) - \hat{x}(t))^2 > (x(t) - (\hat{x}(t) + \Gamma))^2$. Now, expanding both sides of this inequality, defining $V(t) \equiv \Gamma(x(t) - \hat{x}(t))$ and
defining $T \equiv \Gamma^2/2$, we find that the neuron should only spike if:
$$V(t) > T. \qquad (3)$$
We interpret $V(t)$ to be the membrane potential of the neuron, and we interpret $T$ as the spike
threshold. This interpretation allows us to understand the membrane potential functionally: the
voltage is proportional to a prediction error, the difference between the read-out $\hat{x}(t)$ and the actual
signal $x(t)$. A spike is an error reduction mechanism: the neuron only spikes if the error exceeds
the spike threshold. This is a greedy minimisation, in that the neuron fires a spike whenever that
action decreases $L(t)$ without considering the future impact of that spike. Importantly, the neuron
does not require direct access to the loss function $L(t)$.
To determine the membrane potential dynamics, we take the derivative of the voltage, which gives
us $\dot{V} = \Gamma(\dot{x} - \dot{\hat{x}})$. (Here, and in the following, we will drop the time index for notational brevity.)
Now, using Eqn. (1) we obtain $\dot{V} = \Gamma\dot{x} - \Gamma(-\hat{x} + \Gamma o) = -\Gamma(x - \hat{x}) + \Gamma(\dot{x} + x) - \Gamma^2 o$, so that:
$$\dot{V} = -V + \Gamma c - \Gamma^2 o, \qquad (4)$$
where $c = \dot{x} + x$ is the neural input. This corresponds exactly to the dynamics of a leaky integrate-and-fire neuron with an inhibitory autapse$^1$ of strength $\Gamma^2$, and a feedforward connection strength $\Gamma$.
The dynamics and connectivity guarantee that a neuron spikes at just the right times to optimise the
loss function (Fig. 1B). In addition, it is especially robust to noise of different forms, because of
its error-correcting nature. If $x$ is constant in time, the voltage will rise up to the threshold $T$, at
which point a spike is fired, adding a delta function to the spike train $o$ at time $t$, thereby producing
a read-out $\hat{x}$ that is closer to $x$ and causing an instantaneous drop in the voltage through the autapse,
by an amount $\Gamma^2 = 2T$, effectively resetting the voltage to $V = -T$.
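Eqn. (4) can be simulated directly. The sketch below (forward-Euler integration; the step size, read-out weight, and constant target signal are assumptions) reproduces this error-correcting behaviour:

```python
import numpy as np

def simulate_neuron(x_signal, gamma=0.1, dt=1e-3):
    """Integrate-and-fire neuron of Eqn. (4): Vdot = -V + gamma*c - gamma^2*o."""
    T = gamma ** 2 / 2                        # spike threshold, Eqn. (3)
    V, x_hat = 0.0, 0.0
    x_hat_trace = np.zeros(len(x_signal))
    for t in range(1, len(x_signal)):
        c = (x_signal[t] - x_signal[t - 1]) / dt + x_signal[t]   # c = xdot + x
        V += dt * (-V + gamma * c)
        x_hat += dt * (-x_hat)                # leaky read-out, Eqn. (1)
        if V > T:                             # greedy spike rule
            V -= gamma ** 2                   # autaptic reset by gamma^2 = 2T
            x_hat += gamma                    # each spike increments the read-out
        x_hat_trace[t] = x_hat
    return x_hat_trace

x_hat = simulate_neuron(np.ones(5000))        # tracks the constant signal x = 1
```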
We now have a target for learning: we know the connection strength that a neuron must have at the
end of learning if it is to represent information optimally, for a linear read-out. We can use this target
to derive synaptic dynamics that can learn an optimal representation from experience. Specifically,
we consider an integrate-and-fire neuron with some arbitrary autapse strength $\omega$. The dynamics of
this neuron are given by
$$\dot{V} = -V + \Gamma c - \omega o. \qquad (5)$$
This neuron will not produce the correct spike train for representing $x$ through a linear read-out
(Eqn. (1)) unless $\omega = \Gamma^2$.
Our goal is to derive a dynamical equation for the synapse $\omega$ so that the spike train becomes optimal.
We do this by quantifying the loss that we are incurring by using the suboptimal strength, and then
deriving a learning rule that minimises this loss with respect to $\omega$. The loss function underlying
the spiking dynamics determined by Eqn. (5) can be found by reversing the previous membrane
potential analysis. First, we integrate the differential equation for $V$, assuming that $\omega$ changes on
time scales much slower than the membrane potential. We obtain the following (formal) solution:
$$V = \Gamma x - \omega \bar{o}, \qquad (6)$$
where $\bar{o}$ is determined by $\dot{\bar{o}} = -\bar{o} + o$. The solution to this latter equation is $\bar{o} = h * o$, a convolution
of the spike train with the exponential kernel $h(\tau) = \Theta(\tau)\exp(-\tau)$. As such, it is analogous to the
instantaneous firing rate of the neuron.
Now, using Eqn. (6), and rewriting the read-out as $\hat{x} = \Gamma\bar{o}$, we obtain the loss incurred by the
sub-optimal neuron,
$$L = (x - \hat{x})^2 = \frac{1}{\Gamma^2}\Big[ V^2 + 2(\omega - \Gamma^2)V\bar{o} + (\omega - \Gamma^2)^2 \bar{o}^2 \Big]. \qquad (7)$$
We observe that the last two terms of Eqn. (7) will vanish whenever $\omega = \Gamma^2$, i.e., when the optimal
reset has been found. We can therefore simplify the problem by defining an alternative loss function,
$$L_V = \frac{1}{2} V^2, \qquad (8)$$
which has the same minimum as the original loss ($V = 0$ or $x = \hat{x}$, compare Eqn. (2)), but yields a
simpler learning algorithm. We can now calculate how changes to $\omega$ affect $L_V$:
$$\frac{\partial L_V}{\partial \omega} = V \frac{\partial V}{\partial \omega} = -V\bar{o} - V\omega \frac{\partial \bar{o}}{\partial \omega}. \qquad (9)$$
We can ignore the last term in this equation (as we will show below). Finally, using simple gradient
descent, we obtain a simple Hebbian-like synaptic plasticity rule:
$$\tau \dot{\omega} = -\frac{\partial L_V}{\partial \omega} = V\bar{o}, \qquad (10)$$
where $\tau$ is the learning time constant.
$^1$ This contribution of the autapse can also be interpreted as the reset of an integrate-and-fire neuron. Later,
when we generalise to networks of neurons, we shall employ this interpretation.
This synaptic learning rule is capable of learning the synaptic weight $\omega$ that minimises the difference
between $x$ and $\hat{x}$ (Fig. 1B). During learning, the synaptic weight changes in proportion to the postsynaptic voltage $V$ and the pre-synaptic firing rate $\bar{o}$ (Fig. 1C). As such, this is a Hebbian learning
rule. Of course, in this single neuron case, the pre-synaptic neuron and post-synaptic neuron are the
same neuron. The synaptic weight gradually approaches its optimal value $\Gamma^2$. However, it never
completely stabilises, because learning never stops as long as neurons are spiking. Instead, the
synapse oscillates closely about the optimal value (Fig. 1D).
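The plasticity rule of Eqn. (10) adds a single line to such a simulation. In the sketch below (same assumptions as before, with an assumed learning time constant), the reset starts at a deliberately wrong value and should drift towards $\Gamma^2 = 2T$:

```python
import numpy as np

def learn_reset(x_signal, gamma=0.1, omega0=0.002, dt=1e-3, tau=20.0):
    """Hebbian autapse learning, Eqn. (10): tau * d(omega)/dt = V * o_bar."""
    T = gamma ** 2 / 2
    V, o_bar, omega = 0.0, 0.0, omega0
    for t in range(1, len(x_signal)):
        c = (x_signal[t] - x_signal[t - 1]) / dt + x_signal[t]
        V += dt * (-V + gamma * c)
        o_bar += dt * (-o_bar)                # filtered spike train (firing rate)
        if V > T:
            V -= omega                        # reset by the current, suboptimal omega
            o_bar += 1.0
        omega += (dt / tau) * V * o_bar       # Hebbian update, Eqn. (10)
    return omega                              # expected to approach gamma**2 = 2T

print(learn_reset(np.ones(200_000)), 0.1 ** 2)
```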
This is also a "greedy" learning rule, similar to the spiking rule, in that it seeks to minimise the error
at each instant in time, without regard for the future impact of those changes. To demonstrate that the
second term in Eqn. (9) can be neglected, we note that the equations for $V$, $\bar{o}$, and $\omega$ define a system
of coupled differential equations that can be solved analytically by integrating between spikes. This
results in a simple recurrence relation for changes in $\omega$ from the $i$th to the $(i+1)$th spike,
$$\omega_{i+1} = \omega_i + \frac{\omega_i(\omega_i - 2T)}{\tau(T - \Gamma c - \omega_i)}. \qquad (11)$$
This iterative equation has a single stable fixed point at $\omega = 2T = \Gamma^2$, proving that the neuron's
autaptic weight or reset will approach the optimal solution.
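Iterating this map shows the fixed point directly (a sketch; the constants are arbitrary, chosen only so that the denominator never vanishes along the trajectory):

```python
T, gamma_c, tau = 0.005, 0.0049, 100.0   # assumed values of T, Gamma*c and tau
omega = 0.001                            # deliberately suboptimal starting reset
for _ in range(1000):
    omega += omega * (omega - 2 * T) / (tau * (T - gamma_c - omega))
print(omega, 2 * T)                      # omega converges to the fixed point 2T
```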
2 Learning in a homogeneous network
We now generalise our learning rule derivation to a network of N identical, homogeneously connected neurons. This generalisation is reasonably straightforward because many characteristics of
the single neuron case are shared by a network of identical neurons. We will return to the more
general case of heterogeneously connected neurons in the next section.
We begin by deriving optimal spiking dynamics, as in the single neuron case. This provides a target
for learning, which we can then use to derive synaptic dynamics. As before, we want our network
to produce spikes that optimally represent a variable x for a linear read-out. We assume that the
read-out $\hat{x}$ is provided by summing and filtering the spike trains of all the neurons in the network:
$$\dot{\hat{x}} = -\hat{x} + \Gamma o, \qquad (12)$$
where the row vector $\Gamma = (\Gamma, \ldots, \Gamma)$ contains the read-out weights$^2$ of the neurons and the column
vector $o = (o_1, \ldots, o_N)$ their spike trains. Here, we have used identical read-out weights for each
neuron, because this indirectly leads to homogeneous connectivity, as we will demonstrate.
Next, we assume that a neuron only spikes if that spike reduces a loss-function. This spiking rule is
similar to the single neuron spiking rule except that this time there is some ambiguity about which
neuron should spike to represent a signal. Indeed, there are many different spike patterns that provide
exactly the same estimate $\hat{x}$. For example, one neuron could fire regularly at a high rate (exactly like
our previous single neuron example) while all others are silent. To avoid this firing rate ambiguity,
we use a modified loss function that selects, amongst all equivalent solutions, those with the smallest
neural firing rates. We do this by adding a "metabolic cost" term to our loss function, so that high
firing rates are penalised:
$$L = (x - \hat{x})^2 + \mu \lVert \bar{o} \rVert^2, \qquad (13)$$
where $\mu$ is a small positive constant that controls the cost-accuracy trade-off, akin to a regularisation
parameter.
Each neuron in the optimal network will seek to reduce this loss function by firing a spike. Specifically, the $i$th neuron will spike whenever $L(\text{no spike in } i) > L(\text{spike in } i)$. This leads to the following spiking rule for the $i$th neuron:
$$V_i > T_i, \qquad (14)$$
where $V_i \equiv \Gamma(x - \hat{x}) - \mu \bar{o}_i$ and $T_i \equiv \Gamma^2/2 + \mu/2$. We can naturally interpret $V_i$ as the membrane
potential of the $i$th neuron and $T_i$ as the spiking threshold of that neuron. As before, we can now
derive membrane potential dynamics:
$$\dot{V} = -V + \Gamma^T c - (\Gamma^T \Gamma + \mu I) o, \qquad (15)$$
$^2$ The read-out weights must scale as $\Gamma \sim 1/N$ so that firing rates are not unrealistically small in large
networks. We can see this by calculating the average firing rate $\sum_{i=1}^{N} \bar{o}_i/N \approx x/(\Gamma N) \sim O(N/N) \sim O(1)$.
where $I$ is the identity matrix and $\Gamma^T \Gamma + \mu I$ is the network connectivity. We can interpret the self-connection terms $\{\Gamma^2 + \mu\}$ as voltage resets that decrease the voltage of any neuron that spikes. This
optimal network is equivalent to a network of identical integrate-and-fire neurons with homogeneous
inhibitory connectivity.
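A direct simulation of Eqn. (15) for a small homogeneous network is sketched below (forward-Euler integration with at most one spike per time step, a common discretisation of the greedy rule; all parameter values are assumptions):

```python
import numpy as np

def simulate_homogeneous(x_signal, N=20, gamma_total=0.1, mu=1e-6, dt=1e-3):
    """Network dynamics of Eqn. (15) with the optimal connectivity."""
    Gamma = np.full(N, gamma_total / N)              # identical read-out weights ~ 1/N
    Omega = np.outer(Gamma, Gamma) + mu * np.eye(N)  # Gamma^T Gamma + mu I
    T = Gamma ** 2 / 2 + mu / 2                      # per-neuron thresholds
    V = np.zeros(N)
    spikes = []
    for t in range(1, len(x_signal)):
        c = (x_signal[t] - x_signal[t - 1]) / dt + x_signal[t]
        V += dt * (-V + Gamma * c)
        if np.any(V > T):
            i = int(np.argmax(V - T))                # most suprathreshold neuron fires,
            V -= Omega[:, i]                         # resetting itself and inhibiting
            spikes.append((t, i))                    # all the others
    return spikes
```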
The network has some interesting dynamical properties. The voltages of all the neurons are largely
synchronous, all increasing to the spiking threshold at about the same time$^3$ (Fig. 1F). Nonetheless,
neural spiking is asynchronous. The first neuron to spike will reset itself by $\Gamma^2 + \mu$, and it will
inhibit all the other neurons in the network by $\Gamma^2$. This mechanism prevents neurons from spiking synchronously.
$^3$ The first neuron to spike will be random if there is some membrane potential noise.
Figure 1: Learning in a single neuron and a homogeneous network. (A) A single neuron represents
an input signal $x$ by producing an output $\hat{x}$. (B) During learning, the single neuron output $\hat{x}$ (solid red
line, top panel) converges towards the input $x$ (blue). Similarly, for a homogeneous network the output $\hat{x}$ (dashed red line, top panel) converges towards $x$. Connectivity also converges towards optimal
connectivity in both the single neuron case (solid black line, middle panel) and the homogeneous network case (dashed black line, middle panel), as quantified by $D = \max_{i,j} (\Omega_{ij} - \Omega^{\text{opt}}_{ij})^2 / (\Omega^{\text{opt}}_{ij})^2$
at each point in time. Consequently, the membrane potential reset (bottom panel) converges towards
the optimal reset (green line, bottom panel). Spikes are indicated by blue vertical marks, and are
produced when the membrane potential reaches threshold (bottom panel). Here, we have rescaled
time, as indicated, for clarity. (C) Our learning rule dictates that the autapse $\omega$ in our single neuron
(bottom panel) changes in proportion to the membrane potential (top panel) and the firing rate (middle panel). (D) At the end of learning, the reset $\omega$ fluctuates weakly about the optimal value. (E) For
a homogeneous network, neurons spike regularly at the start of learning, as shown in this raster plot.
Membrane potentials of different neurons are weakly correlated. (F) At the end of learning, spiking
is very irregular and membrane potentials become more synchronous.
The population as a whole acts similarly to the single neuron in our previous
example. Each neuron fires regularly, even if a different neuron fires in every integration cycle.
The design of this optimal network requires the tuning of $N(N-1)$ synaptic parameters. How can
an arbitrary network of integrate-and-fire neurons learn this optimum? As before, we address this
question by using the optimal network as a target for learning. We start with an arbitrarily connected
network of integrate-and-fire neurons:
$$\dot{V} = -V + \Gamma^T c - \Omega o, \qquad (16)$$
where $\Omega$ is a matrix of connectivity weights, which includes the resets of the individual neurons.
Assuming that learning occurs on a slow time scale, we can rewrite this equation as
$$V = \Gamma^T x - \Omega \bar{o}. \qquad (17)$$
Now, repeating the arguments from the single neuron derivation, we modify the loss function to
obtain an online learning rule. Specifically, we set $L_V = \lVert V \rVert^2/2$, and calculate the gradient:
$$\frac{\partial L_V}{\partial \Omega_{ij}} = \sum_k V_k \frac{\partial V_k}{\partial \Omega_{ij}} = -\sum_k V_k \delta_{ki} \bar{o}_j - \sum_{kl} V_k \Omega_{kl} \frac{\partial \bar{o}_l}{\partial \Omega_{ij}}. \qquad (18)$$
We can simplify this equation considerably by observing that the contribution of the second summation is largely averaged out under a wide variety of realistic conditions$^4$. Therefore, it can be
neglected, and we obtain the following local learning rule:
$$\tau \dot{\Omega}_{ij} = -\frac{\partial L_V}{\partial \Omega_{ij}} = V_i \bar{o}_j. \qquad (19)$$
This is a Hebbian plasticity rule, whereby connectivity changes in proportion to the presynaptic
firing rate $\bar{o}_j$ and post-synaptic membrane potential $V_i$. We assume that the neural thresholds are set
to a constant $T$ and that the neural resets are set to their optimal values $-T$. In the previous section
we demonstrated that these resets can be obtained by a Hebbian plasticity rule (Eqn. (10)).
This learning rule minimises the difference between the read-out and the signal, by approaching
the optimal recurrent connection strengths for the network (Fig. 1B). As in the single neuron case,
learning does not stop, so the connection strengths fluctuate close to their optimal value. During
learning, network activity becomes progressively more asynchronous as it progresses towards optimal connectivity (Fig. 1E, F).
3 Learning in the general case
Now that we have developed the fundamental concepts underlying our learning rule, we can derive
a learning rule for the more general case of a network of $N$ arbitrarily connected leaky integrate-and-fire neurons. Our goal is to understand how such networks can learn to optimally represent a
$J$-dimensional signal $x = (x_1, \ldots, x_J)$, using the read-out equation $\dot{\hat{x}} = -\hat{x} + \Gamma o$.
We consider a network with the following membrane potential dynamics:
$$\dot{V} = -V + \Gamma^T c - \Omega o, \qquad (20)$$
where $c$ is a $J$-dimensional input. We assume that this input is related to the signal according to
$c = \dot{x} + x$. This assumption can be relaxed by treating the input as the control for an arbitrary
linear dynamical system, in which case the signal represented by the network is the output of such a
computation [11]. However, this further generalisation is beyond the scope of this work.
As before, we need to identify the optimal recurrent connectivity so that we have a target for learning.
Most generally, the optimal recurrent connectivity is $\Omega^{\text{opt}} \equiv \Gamma^T \Gamma + \mu I$. The output kernels of the
individual neurons, $\Gamma_i$, are given by the rows of $\Gamma$, and their spiking thresholds by $T_i \equiv \lVert \Gamma_i \rVert^2/2 + \mu/2$.
$^4$ From the definition of the membrane potential we can see that $V_k \sim O(1/N)$ because $\Gamma \sim 1/N$. Therefore, the size of the first term in Eqn. (18) is $\sum_k V_k \delta_{ki} \bar{o}_j = V_i \bar{o}_j \sim O(1/N)$. Therefore, the second term can
be ignored if $\sum_{kl} V_k \Omega_{kl} \, \partial \bar{o}_l/\partial \Omega_{ij} \ll O(1/N)$. This happens if $\Omega_{kl} \ll O(1/N^2)$, as at the start of learning.
It also happens towards the end of learning if the terms $\{\Omega_{kl} \, \partial \bar{o}_l/\partial \Omega_{ij}\}$ are weakly correlated with zero mean,
or if the membrane potentials $\{V_i\}$ are weakly correlated with zero mean.
With these connections and thresholds, we find that a network of integrate-and-fire neurons
will produce spike trains in such a way that the loss function $L = \lVert x - \hat{x} \rVert^2 + \mu \lVert \bar{o} \rVert^2$ is minimised,
where the read-out is given by $\hat{x} = \Gamma \bar{o}$. We can show this by prescribing a greedy$^5$ spike rule:
a spike is fired by neuron $i$ whenever $L(\text{no spike in } i) > L(\text{spike in } i)$ [11]. The resulting spike
generation rule is
$$V_i > T_i, \qquad (21)$$
where $V_i \equiv \Gamma_i^T(x - \hat{x}) - \mu \bar{o}_i$ is interpreted as the membrane potential.
$^5$ Despite being greedy, this spiking rule can generate firing rates that are practically identical to the optimal
solutions: we checked this numerically in a large ensemble of networks with randomly chosen kernels.
Figure 2: Learning in a heterogeneous network. (A) A network of neurons represents an input
signal $x$ by producing an output $\hat{x}$. (B) During learning, the loss $L$ decreases (top panel). The difference between the connection strengths and the optimal strengths also decreases (middle panel), as
quantified by the mean difference (solid line), given by $D = \lVert \Omega - \Omega^{\text{opt}} \rVert^2 / \lVert \Omega^{\text{opt}} \rVert^2$, and the maximum difference (dashed line), given by $\max_{i,j} (\Omega_{ij} - \Omega^{\text{opt}}_{ij})^2 / \lVert \Omega^{\text{opt}} \rVert^2$. The mean population firing
rate (solid line, bottom panel) also converges towards the optimal firing rate (dashed line, bottom
panel). (C, E) Before learning, a raster plot of population spiking shows that neurons produce bursts
of spikes (upper panel). The network output $\hat{x}$ (red line, middle panel) fails to represent $x$ (blue
line, middle panel). The excitatory input (red, bottom left panel) and inhibitory input (green, bottom
left panel) to a randomly selected neuron is not tightly balanced. Furthermore, a histogram of interspike intervals shows that spiking activity is not Poisson, as indicated by the red line that represents
a best-fit exponential distribution. (D, F) At the end of learning, spiking activity is irregular and
Poisson-like, excitatory and inhibitory input is tightly balanced, and $\hat{x}$ matches $x$.
How can we learn this optimal connection matrix? As before, we can derive a learning rule by
minimising the cost function $L_V = \lVert V \rVert^2/2$. This leads to a Hebbian learning rule with the same
form as before:
$$\tau \dot{\Omega}_{ij} = V_i \bar{o}_j. \qquad (22)$$
Again, we assume that the neural resets are given by $-T_i$. Furthermore, in order for this learning rule
to work, we must assume that the network input explores all possible directions in the $J$-dimensional
input space (since the kernels $\Gamma_i$ can point in any of these directions). The learning performance
does not critically depend on how the input variable space is sampled as long as the exploration
is extensive. In our simulations, we randomly sample the input $c$ from a Gaussian white noise
distribution at every time step for the entire duration of the learning.
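Putting the pieces together, the full learning loop of Eqns. (20)-(22) can be sketched as follows (white-noise input sampling as described in the text; the sizes, rates, and the pinning of the diagonal resets to their optimal values are assumptions consistent with the text, not the authors' exact code):

```python
import numpy as np

def learn_network(N=20, J=2, steps=200_000, dt=1e-3, tau=50.0, mu=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    Gamma = rng.normal(scale=0.1 / N, size=(J, N))     # fixed random output kernels
    resets = np.sum(Gamma ** 2, axis=0) + mu           # optimal resets 2*T_i
    T = resets / 2                                     # thresholds ||Gamma_i||^2/2 + mu/2
    Omega = 1e-4 * np.eye(N)                           # weak initial connectivity
    V = np.zeros(N)
    o_bar = np.zeros(N)
    for _ in range(steps):
        c = rng.normal(size=J)                         # white-noise input exploration
        V += dt * (-V + Gamma.T @ c)
        o_bar += dt * (-o_bar)
        if np.any(V > T):
            i = int(np.argmax(V - T))                  # greedy spike rule, Eqn. (21)
            V -= Omega[:, i]
            o_bar[i] += 1.0
        Omega += (dt / tau) * np.outer(V, o_bar)       # Hebbian rule, Eqn. (22)
        np.fill_diagonal(Omega, resets)                # resets held at -T_i (optimal)
    return Omega                                       # approaches Gamma.T@Gamma + mu*I
```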
We find that this learning rule decreases the loss function $L$, thereby approaching optimal network
connectivity and producing optimal firing rates for our linear decoder (Fig. 2B). In this example, we
have chosen connectivity that is initially much too weak at the start of learning. Consequently, the
initial network behaviour is similar to a collection of unconnected single neurons that ignore each
other. Spike trains are not Poisson-like, firing rates are excessively large, excitatory and inhibitory
input is unbalanced, and the decoded variable $\hat{x}$ is highly unreliable (Fig. 2C, E). As a result of
learning, the network becomes tightly balanced and the spike trains become asynchronous, irregular
and Poisson-like with much lower rates (Fig. 2D, F). However, despite this apparent variability, the
population representation is extremely precise, limited only by the metabolic cost and the discrete
nature of a spike. This learnt representation is far more precise than a rate code with independent
Poisson spike trains [11]. In particular, shuffling the spike trains in response to identical inputs
drastically degrades this precision.
4 Conclusions and Discussion
In population coding, large trial-to-trial spike train variability is usually interpreted as noise [2]. We
show here that a deterministic network of leaky integrate-and-fire neurons with a simple Hebbian
plasticity rule can self-organise into a regime where information is represented far more precisely
than in noisy rate codes, while appearing to have noisy Poisson-like spiking dynamics.
Our learning rule (Eqn. (22)) has the basic properties of STDP. Specifically, a presynaptic spike
occurring immediately before a post-synaptic spike will potentiate a synapse, because membrane
potentials are positive immediately before a postsynaptic spike. Furthermore, a presynaptic spike
occurring immediately after a post-synaptic spike will depress a synapse, because membrane potentials are always negative immediately after a postsynaptic spike. This is similar in spirit to the
STDP rule proposed in [12], but different to classical STDP, which depends on post-synaptic spike
times [9].
This learning rule can also be understood as a mechanism for generating a tight balance between
excitatory and inhibitory input. We can see this by observing that membrane potentials after learning
can be interpreted as representation errors (projected onto the read-out kernels). Therefore, learning
acts to minimise the magnitude of membrane potentials. Excitatory and inhibitory input must be
balanced if membrane potentials are small, so we can equate balance with optimal information
representation.
Previous work has shown that the balanced regime produces (quasi-)chaotic network dynamics,
thereby accounting for much observed cortical spike train variability [13, 14, 4]. Moreover, the
STDP rule has been known to produce a balanced regime [16, 17]. Additionally, recent theoretical
studies have suggested that the balanced regime plays an integral role in network computation [15,
13]. In this work, we have connected these mechanisms and functions, to conclude that learning this
balance is equivalent to the development of an optimal spike-based population code, and that this
learning can be achieved using a simple Hebbian learning rule.
Acknowledgements
We are grateful for generous funding from the Emmy-Noether grant of the Deutsche Forschungsgemeinschaft (CKM) and the Chaire d'excellence of the Agence Nationale de la Recherche (CKM,
DB), as well as a James McDonnell Foundation Award (SD) and EU grants BACS FP6-IST-027140,
BIND MECT-CT-20095-024831, and ERC FP7-PREDSPIKE (SD).
References
[1] Tolhurst D, Movshon J, Dean A (1982) The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res 23: 775-785.
[2] Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci 18(10): 3870-3896.
[3] Zohary E, Newsome WT (1994) Correlated neuronal discharge rate and its implication for psychophysical performance. Nature 370: 140-143.
[4] Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD (2010) The asynchronous state in cortical circuits. Science 327: 587-590.
[5] Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327: 584-587.
[6] Okun M, Lampl I (2008) Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat Neurosci 11: 535-537.
[7] Shu Y, Hasenstaub A, McCormick DA (2003) Turning on and off recurrent balanced cortical activity. Nature 423: 288-293.
[8] Gentet LJ, Avermann M, Matyas F, Staiger JF, Petersen CCH (2010) Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice. Neuron 65: 422-435.
[9] Caporale N, Dan Y (2008) Spike-timing-dependent plasticity: a Hebbian learning rule. Annu Rev Neurosci 31: 25-46.
[10] Boerlin M, Deneve S (2011) Spike-based population coding and working memory. PLoS Comput Biol 7: e1001080.
[11] Boerlin M, Machens CK, Deneve S (2012) Predictive coding of dynamic variables in balanced spiking networks. Under review.
[12] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci 13(3): 344-352.
[13] van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10(6): 1321-1371.
[14] Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory neurons. J Comput Neurosci 8: 183-208.
[15] Vogels TP, Rajan K, Abbott LF (2005) Neural network dynamics. Annu Rev Neurosci 28: 357-376.
[16] Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W (2011) Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334(6062): 1569-1573.
[17] Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci 3(9): 919-926.
Bayesian Hierarchical Reinforcement Learning
Soumya Ray
Department of EECS
Case Western Reserve University
Cleveland, OH 44106
[email protected]
Feng Cao
Department of EECS
Case Western Reserve University
Cleveland, OH 44106
[email protected]
Abstract
We describe an approach to incorporating Bayesian priors in the MAXQ framework
for hierarchical reinforcement learning (HRL). We define priors on the primitive
environment model and on task pseudo-rewards. Since models for composite tasks
can be complex, we use a mixed model-based/model-free learning approach to
find an optimal hierarchical policy. We show empirically that (i) our approach
results in improved convergence over non-Bayesian baselines, (ii) using both task
hierarchies and Bayesian priors is better than either alone, (iii) taking advantage
of the task hierarchy reduces the computational cost of Bayesian reinforcement
learning and (iv) in this framework, task pseudo-rewards can be learned instead of
being manually specified, leading to hierarchically optimal rather than recursively
optimal policies.
1 Introduction
Reinforcement learning (RL) is a well-known framework that formalizes decision making in unknown, uncertain environments. RL agents learn policies that map environment states to available
actions while optimizing some measure of long-term utility. While various algorithms have been developed for RL [1], and applied successfully to a variety of tasks [2], the standard RL setting suffers
from at least two drawbacks. First, it is difficult to scale standard RL approaches to large state spaces
with many factors (the well-known "curse of dimensionality"). Second, vanilla RL approaches do
not incorporate prior knowledge about the environment and good policies.
Hierarchical reinforcement learning (HRL) [3] attempts to address the scaling problem by simplifying the overall decision making problem in different ways. For example, one approach introduces
macro-operators for sequences of primitive actions. Planning at the level of these operators may
result in simpler policies [4]. Another idea is to decompose the task's overall value function, for
example by defining task hierarchies [5] or partial programs with choice points [6]. The structure
of the decomposition provides several benefits: first, for the "higher level" subtasks, policies are
defined by calling "lower level" subtasks (which may themselves be quite complex); as a result
policies for higher level subtasks may be expressed compactly. Second, a task hierarchy or partial
program can impose constraints on the space of policies by encoding knowledge about the structure
of good policies and thereby reduce the search space. Third, learning within subtasks allows state
abstraction, that is, some state variables can be ignored because they do not affect the policy within
that subtask. This also simplifies the learning problem.
While HRL attempts to address the scalability issue, it does not take into account probabilistic prior
knowledge the agent may have about the task. For example, the agent may have some idea about
where high/low utility states may be located and what their utilities may be, or some idea about the
approximate shape of the value function or policy. Bayesian reinforcement learning addresses this
issue by incorporating priors on models [7], value functions [8, 9] or policies [10]. Specifying good
priors leads to many benefits, including good initial policies, directed exploration towards regions
of uncertainty, and faster convergence to the optimal policy.
In this paper, we propose an approach that incorporates Bayesian priors in hierarchical reinforcement
learning. We use the MAXQ framework [5], which decomposes the overall task into subtasks so that
value functions of the individual subtasks can be combined to recover the value function of the
overall task. We extend this framework by incorporating priors on the primitive environment model
and on task pseudo-rewards. In order to avoid building models for composite tasks (which can
be very complex), we adopt a mixed model-based/model-free learning approach. We empirically
evaluate our algorithm to understand the effect of the priors in addition to the task hierarchy. Our
experiments indicate that: (i) taking advantage of probabilistic prior knowledge can lead to faster
convergence, even for HRL, (ii) task hierarchies and Bayesian priors can be complementary sources
of information, and using both sources is better than either alone, (iii) taking advantage of the task
hierarchy can reduce the computational cost of Bayesian RL, which generally tends to be very
high, and (iv) task pseudo-rewards can be learned instead of being manually specified, leading to
automatic learning of hierarchically optimal rather than recursively optimal policies. In this way
Bayesian RL and HRL are synergistic: Bayesian RL improves convergence of HRL and can learn
hierarchy parameters, while HRL can reduce the significant computational cost of Bayesian RL.
Our work assumes the probabilistic priors to be given in advance and focuses on learning with
them. Other work has addressed the issue of obtaining these priors. For example, one source of
prior information is multi-task reinforcement learning [11, 12], where an agent uses the solutions of
previous RL tasks to build priors over models or policies for future tasks. We also assume the task
hierarchy is given. Other work has explored learning MAXQ hierarchies in different settings [13].
2 Background and Related Work
In the MAXQ framework, each composite subtask $T_i$ defines a semi-Markov decision process with
parameters $\langle S_i, X_i, C_i, G_i \rangle$. $S_i$ defines the set of "non-terminal" states for $T_i$, where $T_i$ may be
called by its parent. $G_i$ defines a set of "goal" states for $T_i$. The actions available within $T_i$ are
described by the set of "child tasks" $C_i$. Finally, $X_i$ denotes the set of "relevant state variables" for
$T_i$. Often, we unify the non-$S_i$ states and $G_i$ into a single "termination" predicate, $P_i$. An $(s, a, s')$
triple where $P_i(s)$ is false, $P_i(s')$ is true, $a \in C_i$, and the transition probability $P(s'|s, a) > 0$,
is called an exit of the subtask $T_i$. A pseudo-reward function $\tilde{R}(s, a)$ can be defined over exits to
express preferences over the possible exits of a subtask.
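For concreteness, here is a minimal sketch of the subtask tuple as a data structure. All names are illustrative and not taken from the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Subtask:
    name: str
    relevant_vars: Set[str]             # X_i: state variables this task's policy may use
    children: List["Subtask"]           # C_i: child tasks (empty list => primitive action)
    terminated: Callable[[dict], bool]  # P_i: termination predicate (non-S_i states and G_i)
    pseudo_reward: Callable[[dict], float] = lambda s: 0.0  # R~ over exit states

    def is_primitive(self) -> bool:
        return len(self.children) == 0
```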
A hierarchical policy $\pi$ for the overall task is an assignment of a local policy to each SMDP $T_i$.
A hierarchically optimal policy is a hierarchical policy that has the maximum expected reward. A
hierarchical policy is said to be recursively optimal if the local policy for each subtask is optimal
given that all its subtask policies are optimal. Given a task graph, model-free [5] or model-based [14]
given that all its subtask policies are optimal. Given a task graph, model-free [5] or model-based [14]
methods can be used to learn value functions for each task-subtask pair. In the model-free method,
a policy is produced by maintaining a value and a completion function for each subtask. For a task
i, the value V (a, s) denotes the expected value of calling child task a in state s. This is (recursively)
estimated as the expected reward obtained while executing a. The completion function C(i, s, a)
denotes the expected reward obtained while completing i after having called a in s. The central idea
behind MAXQ is that the value of i, V (i, s), can be (recursively) decomposed in terms of V (a, s)
and C(i, s, a). The model-based RMAXQ [14] algorithm extends RMAX [15] to MAXQ by learning
models for all primitive and composite tasks. Value iteration is used with these models to learn a
policy for each subtask. An optimistic exploration strategy is used together with a parameter m that
determines how often a transition or reward needs to be seen to be usable in the planning step.
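For concreteness, the decomposition referred to above can be written, following [5], as
$$V(i, s) = \max_{a \in C_i}\big[\,V(a, s) + C(i, s, a)\,\big],$$
where for a primitive action $a$, $V(a, s)$ is its expected immediate reward; unrolling the recursion expresses the value of the Root task as a sum of completion terms plus a single primitive reward.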
In the MAXQ framework, pseudo-rewards must be manually specified to learn hierarchically optimal
policies. Recent work has attempted to directly learn hierarchically optimal policies for ALisp
partial programs, which generalize MAXQ task hierarchies [6, 16], using a model-free approach. Here,
along with task value and completion functions, an "external" Q function $Q_E$ is maintained for each
subtask. This function stores the reward obtained after the parent of a subtask exits. A problem here
is that this hurts state abstraction, since $Q_E$ is no longer "local" to a subtask. In later work [16],
this is addressed by recursively representing $Q_E$ in terms of task value and completion functions,
linked by conditional probabilities of parent exits given child exits. The conditional probabilities
and recursive decomposition are used to compute $Q_E$ as needed to select actions.
2
Bayesian reinforcement learning methods incorporate probabilistic prior knowledge on models [7],
value functions [8, 9], policies [10] or combinations [17]. One Bayesian model-based RL algorithm
proceeds as follows. At each step, a distribution over model parameters is maintained, and a model
is sampled from this distribution (Thompson sampling [18, 19]). This model is then
solved and actions are taken according to the policy obtained. This yields observations that are used
to update the parameters of the current distribution to create a posterior distribution over models.
This procedure is then iterated to convergence. Variations of this idea have been investigated; for
example, some work converts the distribution over models to an empirical distribution over Q-functions, and produces policies by sampling from this distribution instead [7].
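A minimal sketch of this loop follows. The `env`, `posterior`, and `solve` interfaces are assumed for illustration and do not refer to a specific library.

```python
def bayesian_model_based_rl(env, posterior, solve, n_episodes, horizon):
    """Thompson-sampling loop: sample a model, solve it, act, update the posterior."""
    for _ in range(n_episodes):
        model = posterior.sample()      # draw one MDP from the current posterior
        policy = solve(model)           # e.g., value iteration on the sampled MDP
        s = env.reset()
        for _ in range(horizon):
            a = policy(s)
            s_next, r, done = env.step(a)
            posterior.update(s, a, r, s_next)   # Bayes update of model parameters
            s = s_next
            if done:
                break
    return posterior
```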
Relatively little work exists that attempts to incorporate probabilistic priors into HRL. We have
found one preliminary attempt [20] that builds on the RMAX + MAXQ [14] method. This approach
adds priors to each subtask model and performs (separate) Bayesian model-based learning for each
subtask.¹ In our approach, we do not construct models for subtasks, which can be very complex
in general. Instead, we only maintain distributions over primitive actions, and use a mixed model-based/model-free learning algorithm that is naturally integrated with the standard MAXQ learning
algorithm. Further, we show how to learn pseudo-rewards for MAXQ in the Bayesian framework.
3 Bayesian MAXQ Algorithm
In this section, we describe our approach to incorporating probabilistic priors into MAXQ. We use
priors over primitive models and pseudo-rewards. As we explain below, pseudo-rewards are value
functions; thus our approach uses priors both on models and value functions. While such an integration may not be needed for standard Bayesian RL, it appears naturally in our setting.
We first describe our approach to incorporating priors on environment models alone (assuming
pseudo-rewards are fixed). We do this following the Bayesian model-based RL framework. At
each step we have a distribution over environment models (initially the prior). The algorithm has
two main subroutines: the main BAYESIANMAXQ routine (Algorithm 1) and an auxiliary RECOMPUTEVALUE routine (Algorithm 2). In this description, the value V and completion C functions
are assumed to be global. At the start of each episode, the BAYESIANMAXQ routine is called with
the Root task and the initial state for the current episode. The MAXQ execution protocol is then
followed, where each task chooses an action based on its current value function (initially random).
When a primitive action is reached and executed, it updates the posterior over model parameters
(Line 3) and its own value estimate (which is just the reward function for primitive actions). When
a task exits and returns to its parent, the parent subsequently updates its completion function based
on the current estimates of the value of the exit state (Lines 14 and 15). Note that in MAXQ, the
value function of a composite task can be (recursively) computed using the completion functions of
subtasks and the rewards obtained by executing primitive actions, so we do not need to separately
store or update the value functions (except for the primitive actions where the value function is the
reward). Finally, each primitive action maintains a count of how many times it has been executed
and each composite task maintains a count of how many child actions have been taken.
When k (an algorithm parameter) steps have been executed in a composite task, BAYESIANMAXQ
calls RECOMPUTEVALUE to re-estimate the value and completion functions (the check on k is
shown in RECOMPUTEVALUE, Line 2). When activated, this function recursively re-estimates the
value/completion functions for all subtasks of the current task. At the level of a primitive action,
this simply involves resampling the reward and transition parameters from the current posterior
over models. For a composite task, we use the MAXQ - Q algorithm (Table 4 in [5]). We run this
algorithm for Sim episodes, starting with the current subtask as the root, with the current pseudoreward estimates (we explain below how these are obtained). This algorithm recursively updates the
completion function of the task graph below the current task. Note that in this step, the subtasks
with primitive actions use model-based updates. That is, when a primitive action is ?executed? in
such tasks, the currently sampled transition function (part of ? in Line 5) is used to find the next
state, and then the associated reward is used to update the completion function. This is similar to
Lines 12, 14 and 15 in BAYESIAN MAXQ, except that it uses the sampled model ? instead of the
¹ While we believe this description is accurate, unfortunately, due to language issues and some missing
technical and experimental details in the cited article, we have been unable to replicate this work.
Algorithm 1 BAYESIANMAXQ
Input: Task i, State s, Update Interval k, Simulation Episodes Sim
Output: Next state s', steps taken N, cumulative reward CR
1: if i is primitive then
2:    Execute i, observe r, s'
3:    Update current posterior parameters ψ using (s, i, r, s')
4:    Update current value estimate: V(i, s) ← (1 − α) · V(i, s) + α · r
5:    Count(i) ← Count(i) + 1
6:    return (s', 1, r)
7: else
8:    N ← 0, CR ← 0, taskStack ← Stack()   {i is composite}
9:    while i is not terminated do
10:      RECOMPUTEVALUE(i, k, Sim)
11:      a ← ε-greedy action from V(i, s)
12:      ⟨s', N_a, cr⟩ ← BAYESIANMAXQ(a, s)
13:      taskStack.push(⟨a, s', N_a, cr⟩)
14:      a*_{s'} ← argmax_{a'} [ C̃(i, s', a') + V(a', s') ]
15:      C(i, s, a) ← (1 − α) · C(i, s, a) + α · γ^{N_a} [ C(i, s', a*_{s'}) + V(a*_{s'}, s') ]
16:      C̃(i, s, a) ← (1 − α) · C̃(i, s, a) + α · γ^{N_a} [ R̃(i, s') + C̃(i, s', a*_{s'}) + V(a*_{s'}, s') ]
17:      s ← s', CR ← CR + γ^N · cr, N ← N + N_a, Count(i) ← Count(i) + 1
18:   end while
19:   UPDATEPSEUDOREWARD(taskStack, R̃(i, s'))
20:   return (s', N, CR)
21: end if
Algorithm 2 RECOMPUTEVALUE
Input: Task i, Update Interval k, Simulation Episodes Sim
Output: Recomputed value and completion functions for the task graph below and including i
1: if Count(i) < k then
2:    return
3: end if
4: if i is primitive then
5:    Sample new transition and reward parameters θ from current posterior ψ
6: else
7:    for all child tasks a of i do
8:       RECOMPUTEVALUE(a, k, Sim)
9:    end for
10:   for Sim episodes do
11:      s ← random non-terminal state of i
12:      Run MAXQ-Q(i, s, θ, R̃)
13:   end for
14: end if
15: Count(i) ← 0
real environment. After RECOMPUTEVALUE terminates, a new set of value/completion functions
are available for BAYESIANMAXQ to use to select actions.
Next we discuss task pseudo-rewards (PRs). A PR is a value associated with a subtask exit that
defines how "good" that exit is for that subtask. The ideal PR for an exit is the expected reward under
the hierarchically optimal policy after exiting the subtask, until the global task (Root) ends; thus the
PR is a value function. This PR would enable the subtask to choose the "right" exit in the context
of what the rest of the task hierarchy is doing. In standard MAXQ, these have to be set manually.
This is problematic because it presupposes (quite detailed) knowledge of the hierarchically optimal
policy. Further, setting the wrong PRs can result in non-convergence or highly suboptimal policies.
Sometimes this problem is sidestepped simply by setting all PRs to zero, resulting in recursively
optimal policies. However, it is easy to construct examples where a recursively optimal policy
Algorithm 3 UPDATEPSEUDOREWARD
Input: taskStack, Parent's pseudo reward R̃_p
1: tempCR ← R̃_p, N'_a ← 0, cr' ← 0
2: while taskStack is not empty do
3:    ⟨a, s, N_a, cr⟩ ← taskStack.pop()
4:    tempCR ← γ^{N'_a} · tempCR + cr'
5:    Update pseudo-reward posterior ψ for R̃(a, s) using (a, s, tempCR)
6:    Resample R̃(a, s) from ψ
7:    N'_a ← N_a, cr' ← cr
8: end while
is arbitrarily worse than the hierarchically optimal policy. For all these reasons, PRs are major
"nuisance parameters" in the MAXQ framework.
What makes learning PRs tricky is that they are not only value functions, but also parameters of MAXQ. That is, setting different PRs essentially results in a new learning problem. For
this reason, simply trying to learn PRs in a standard temporal difference (TD) way fails (as we show
in our experiments). Fortunately, Bayesian RL allows us to address both these issues. First, we
can treat value functions as probabilistic unknown parameters. Second, and more importantly, a key
idea in Bayesian RL is the "lifting" of exploration to the space of task parameters. That is, instead
of exploring through action selection, Bayesian RL can perform exploration by sampling task parameters. Thus treating a PR as an unknown Bayesian parameter also leads to exploration over the
value of this parameter, until an optimal value is found. In this way, hierarchically optimal policies
can be learned from scratch, a major advantage over the standard MAXQ setting.
To learn PRs, we again maintain a distribution over all such parameters, ψ, initially a prior. For
simplicity, we only focus on tasks with multiple exits, since otherwise a PR has no effect on the
policy (though the value function changes). When a composite task executes, we keep track of each
child task's execution in a stack. When the parent itself exits, we obtain a new observation of the
PR of each child by computing the discounted cumulative reward received after it exited, added to
the current estimate of the parent's PR (Algorithm 3). This observation is used to update the current
posterior over the child's PR. Since this is a value function estimate, the estimates are noisy early
in the learning process. Following prior work [8], we use a window containing the most recent
observations. When a new observation arrives, the oldest observation is removed, the new one is
added and a new posterior estimate is computed. After updating the posterior, it is sampled to obtain
a new PR estimate for the associated exit. This estimate is used where needed (in Algorithms 1 and
2) until the next posterior update. Combined with the model-based priors above, we hypothesize
that this procedure, iterated till convergence, will produce a hierarchically optimal policy.
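A minimal sketch of this windowed update, assuming a Normal-Gamma prior on the PR (Gaussian-Gamma PR priors are used in the experiments of Sect. 4); the hyperparameter names and the window size are our assumptions:

```python
from collections import deque
import numpy as np

class PseudoRewardPosterior:
    def __init__(self, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0, window=50):
        self.prior = (mu0, kappa0, alpha0, beta0)
        self.obs = deque(maxlen=window)  # oldest observation is dropped automatically

    def update(self, pr_observation):
        self.obs.append(pr_observation)

    def sample(self, rng=None):
        rng = rng or np.random.default_rng()
        mu0, k0, a0, b0 = self.prior
        x = np.asarray(self.obs, dtype=float)
        n = len(x)
        xbar = x.mean() if n > 0 else mu0
        # Conjugate Normal-Gamma posterior computed on the current window
        kn, an = k0 + n, a0 + 0.5 * n
        mun = (k0 * mu0 + n * xbar) / kn
        bn = b0 + 0.5 * ((x - xbar) ** 2).sum() + k0 * n * (xbar - mu0) ** 2 / (2.0 * kn)
        tau = rng.gamma(an, 1.0 / bn)                    # precision ~ Gamma(shape=an, rate=bn)
        return rng.normal(mun, 1.0 / np.sqrt(kn * tau))  # sampled PR estimate
```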
4 Empirical Evaluation
In this section, we evaluate our approach and test four hypotheses: First, does incorporating model-based priors help speed up the convergence of MAXQ to the optimal policy? Second, does the
task hierarchy still matter if very good priors are available for primitive actions? Third, how does
Bayesian MAXQ compare to standard (flat) Bayesian RL? Does Bayesian RL perform better (in
terms of computational time) if a task hierarchy is available? Finally, can our approach effectively
learn PRs and policies that are hierarchically optimal?
We first focus on evaluating the first three hypotheses using domains where a zero PR results in
hierarchical optimality. To evaluate these hypotheses, we use two domains: the fickle version of
Taxi-World [5] (625 states) and Resource-collection [13] (8265 states).² In Taxi-World, the
agent controls a taxi in a grid-world that has to pick up a passenger from a source location and drop
them off at their destination. The state variables consist of the location of the taxi and the source
and destination of the passenger. The actions available to the agent consist of navigation actions and
actions to pickup and putdown the passenger. The agent gets a reward of +20 upon completing the
task, a constant -1 reward for every action and a -10 penalty for an erroneous action. Further, each
² Task hierarchies for all domains are available in the supplementary material.
[Figure 1: four panels of learning curves; y-axis: average cumulative reward per episode; x-axis: episodes.]
Figure 1: Performance on Taxi-World (top row) and Resource-collection (bottom). The x-axis
shows episodes. The prefix "B-" denotes Bayesian, "Uninformed/Good" denotes the prior, and "MB"
denotes model-based. Left column: Bayesian methods; right: non-Bayesian methods, with Bayesian
MAXQ for reference.
navigation action has a 15% chance of moving in each direction orthogonal to the intended move. In
the Resource-collection domain, the agent collects resources (gold and wood) from a grid world
map. Here the state variables consist of the location of the agent, what the agent is carrying, whether
a goldmine or forest is adjacent to its current location and whether a desired gold or wood quota has
been met. The actions available to the agent are to move to a specific location, chop gold or harvest
wood, and to deposit the item it is carrying (if any). For each navigation action, the agent has a 30%
chance of moving to a random location. In our experiments, the map contains two goldmines and
two forests, each containing two units of gold and two units of wood, and the gold and wood quota
is set to three each. The agent gets a +50 reward when it meets the gold/wood quota, a constant -1
reward for every action and an additional -1 for erroneous actions (such as trying to deposit when
it is not carrying anything).
For the Bayesian methods, we use Dirichlet priors for the transition function parameters and Normal-Gamma priors for the reward function parameters. We use two priors: an uninformed prior, set
to approximate a uniform distribution, and a "good" prior, where a previously computed model
posterior is used as the prior. The prior distributions we use are conjugate to the likelihood, so
we can compute the posterior distributions in closed form. In general, this is not necessary; more
complex priors could be used as long as we can sample from the posterior distribution.
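A minimal sketch of these conjugate updates and of the posterior sampling used in Line 5 of Algorithm 2 follows. The data layout is illustrative; for brevity, the reward draw is replaced by its posterior mean, with a full Normal-Gamma draw (as in the PR sketch above) being the fully Bayesian alternative.

```python
import numpy as np

class PrimitiveModelPosterior:
    def __init__(self, n_states, n_actions, dir_alpha=1.0):
        # Dirichlet posterior over P(s'|s,a): prior pseudo-counts plus observed counts
        self.counts = np.full((n_states, n_actions, n_states), dir_alpha, dtype=float)
        # Sufficient statistics for the reward posterior per (s, a)
        self.r_n = np.zeros((n_states, n_actions))
        self.r_sum = np.zeros((n_states, n_actions))

    def update(self, s, a, r, s_next):
        self.counts[s, a, s_next] += 1.0
        self.r_n[s, a] += 1.0
        self.r_sum[s, a] += r

    def sample(self, rng=None):
        rng = rng or np.random.default_rng()
        S, A, _ = self.counts.shape
        P = np.empty_like(self.counts)
        for s in range(S):
            for a in range(A):
                P[s, a] = rng.dirichlet(self.counts[s, a])  # one transition row per (s, a)
        R = self.r_sum / np.maximum(self.r_n, 1.0)          # posterior-mean reward
        return P, R
```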
The methods we evaluate are: (i) Flat Q, the standard Q-learning algorithm, (ii) MAXQ-0, the standard Q-learning algorithm for MAXQ with no PR, (iii) Bayesian model-based Q-learning with an
uninformed prior and (iv) a "good" prior, (v) Bayesian MAXQ (our proposed approach) with an uninformed prior and (vi) a "good" prior, and (vii) RMAXQ [14]. In our implementation, the Bayesian
model-based Q-learning uses the same code as the Bayesian MAXQ algorithm, with a "trivial" hierarchy consisting of the Root task with only the primitive actions as children. For the Bayesian
methods, the update frequency k was set to 50 for Taxi-World and 25 for Resource-collection.
Sim was set to 200 for Bayesian MAXQ for Taxi-World and 1000 for Bayesian model-based Q, and
to 1000 for both for Resource-collection. For RMAXQ, the threshold sample size m was set to
5 following prior work [14]. The value iteration was terminated either after 300 loops or when the
successive difference between iterations was less than 0.001. The theoretical version of RMAXQ
requires updating and re-solving the model every step. In practice, for the larger problems, this is too
time-consuming, so we re-solve the models every 10 steps. This is similar to the update frequency
k for Bayesian MAXQ. The results are shown in Figure 1 (episodes on x-axis).
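In code, the stopping rule just quoted looks like this (a sketch; `bellman_backup` is an assumed function that performs one sweep over the sampled model):

```python
import numpy as np

def value_iteration(V, bellman_backup, max_sweeps=300, tol=1e-3):
    """Stops after 300 loops or when the successive difference drops below 0.001."""
    for _ in range(max_sweeps):
        V_new = bellman_backup(V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```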
From these results, comparing the Bayesian versions of MAXQ to standard MAXQ, we observe that
for Taxi-World, the Bayesian version converges faster to the optimal policy even with the uninformed prior, while for Resource-collection, the convergence rates are similar. When a good prior
is available, convergence is very fast (almost immediate) in both domains. Thus, the availability
of model priors can help speed up convergence in many cases for HRL. We further observe that
R MAXQ converges more slowly than MAXQ or Bayesian MAXQ, though it is much better than Flat
Q. This is different from prior work [14]. This may be because our domains are more stochastic than
the Taxi-world on which prior results [14] were obtained. We conjecture that, as the environment
becomes more stochastic, errors in primitive model estimates may propagate into subtask models
and hurt the performance of this algorithm. In their analysis [14], the authors noted that the error in
the transition function for a composite task is a function of the total number of terminal states in the
subtask. The error is also compounded as we move up the task hierarchy. This could be countered by
increasing m, the sample size used to estimate model parameters. This would improve the accuracy
of the primitive model, but would further hurt the convergence rate of the algorithm.
Next, we compare the Bayesian MAXQ approach to "flat" Bayesian model-based Q-learning. We
note that in Taxi-World, with uninformed priors, though the "flat" method initially does worse,
it soon catches up to standard MAXQ and then to Bayesian MAXQ. This is probably because in
this domain, the primitive models are relatively easy to acquire, and the task hierarchy provides no
additional leverage. For Resource-collection, however, even with a good prior, "flat" Bayesian
model-based Q does not converge. The difference is that in this case, the task hierarchy encodes
extra information that cannot be deduced just from the models. In particular, the task hierarchy
tells the agent that good policies consist of gold/wood collection moves followed by deposit moves.
Since the reward structure in this domain is very sparse, it is difficult to deduce this even if very
good models are available. Taken together, these results show that task hierarchies and model priors
can be complementary: in general, Bayesian MAXQ outperforms both flat Bayesian RL and MAXQ
(in speed of convergence, since here MAXQ can learn the hierarchically optimal policy).
Table 1: Time for 500 episodes, Taxi-World.

| Method | Time (s) |
|---|---|
| Bayesian MaxQ, Uninformed Prior | 205 |
| Bayesian Model-based Q, Uninformed Prior | 4684 |
| Bayesian MaxQ, Good Prior | 96 |
| Bayesian Model-based Q, Good Prior | 3089 |
| Bayesian Model-based Q, Good Prior & Comparable Simulations | 4006 |
| RMAXQ | 229 |
| MAXQ | 2.06 |
| Flat Q | 1.77 |

Next, we compare the time taken by the different approaches in our experiments in Taxi-World (Table 1). As expected, the Bayesian RL approaches are significantly slower than the non-Bayesian approaches. Further, among non-Bayesian approaches, the hierarchical approaches (MAXQ and RMAXQ) are slower than the non-hierarchical flat Q. Out of the Bayesian methods, however, the Bayesian MAXQ approaches are significantly faster than the flat Bayesian model-based approaches. This is because for the flat case, during the simulation in RECOMPUTEVALUE, a much larger task needs to be
solved, while the Bayesian MAXQ approach is able to take into account the structure of the hierarchy
to only simulate subtasks as needed, which ends up being much more efficient. However, we note
that we allowed the flat Bayesian model-based approach 1000 episodes of simulation as opposed to
200 for Bayesian MAXQ. Clearly this increases the time taken for the flat cases. But at the same
time, this is necessary: the ?Comparable Simulations? row (and curve in Figure 1 top left) shows
that, if the simulations are reduced to 250 episodes for this approach, the resulting values are no
longer reliable and the performance of the Bayesian flat approach drops sharply. Notice that while
Flat Q runs faster than MAXQ (because of the additional ?bookkeeping? overhead due to the task
hierarchy), Bayesian MAXQ runs much faster than Bayesian model-based Q. Thus, taking advantage
of the hierarchical task decomposition helps reduce the computational cost of Bayesian RL.
Finally, we evaluate how well our approach estimates PRs. Here we use two domains: a Modified-Taxi-World and a Hallway domain [5, 21] (4320 states). In Modified-Taxi-World, we allow
dropoffs at any one of the four locations and do not provide a reward for task termination. Thus
the Navigate subtask needs a PR (corresponding to the correct dropoff location) to learn a good
policy. The Hallway domain consists of a maze with a large-scale structure of hallways and intersections. The agent has stochastic movement actions. For these experiments, we use uninformed
priors on the environment model. The PR Gaussian-Gamma priors are set to prefer each exit from
a subtask equally.

[Figure 2: four panels of learning curves; y-axis: average cumulative reward per episode; x-axis: episodes.]
Figure 2: Performance on Modified-Taxi-World (top row) and Hallway (bottom). "B-": Bayesian,
"PR": Pseudo Reward. Left: Bayesian methods; right: non-Bayesian methods, with Bayesian MAXQ
as reference. The x-axis is episodes. The bottom right figure has the same legend as the top right.

The baselines we use are: (i) Bayesian MAXQ and MAXQ with fixed zero PR, (ii)
Bayesian MAXQ and MAXQ with fixed manually set PR, (iii) flat Q, (iv) ALISPQ [6] and (v) MAXQ
with a non-Bayesian PR update. This last method tracks the PR just as our approach does; however, instead
of a Bayesian update, it updates the PR using a temporal difference update, treating it as a simple
value function. The results are shown in Figure 2 (episodes on x-axis).
From these results, we first observe that the methods with zero PR always do worse than those with
a "proper" PR, indicating that in these cases the recursively optimal policy is not the hierarchically
optimal policy. When a PR is manually set, in both domains, MAXQ converges to better policies. We
observe that in each case, the Bayesian MAXQ approach is able to learn a policy that is as good, starting with no pseudo-rewards; further, its convergence rates are often better. Further, we observe that
the simple TD update strategy (MAXQ Non-Bayes PR in Figure 2) fails in both cases: in Modified-Taxi-World, it is able to learn a policy that is approximately as good as a recursively optimal policy,
but in the Hallway domain, it fails to converge completely, indicating that this strategy cannot generally learn PRs. Finally, we observe that the tripartite Q-decomposition of ALISPQ is also able to
correctly learn hierarchically optimal policies; however, it converges slowly compared to Bayesian
MAXQ or MAXQ with manual PRs. This is especially visible in the Hallway domain, where there are
not many opportunities for state abstraction. We believe this is likely because it is estimating entire
Q-functions rather than just the PRs. In a sense, it is doing more work than is needed to capture
the hierarchically optimal policy, because an exact Q-function may not be needed to capture the
preference for the best exit; rather, a value that assigns it a sufficiently high reward compared to the
other exits would suffice. Taken together, these results indicate that incorporating Bayesian priors
into MAXQ can successfully learn PRs from scratch and produce hierarchically optimal policies.
5 Conclusion
In this paper, we have proposed an approach to incorporating probabilistic priors on environment
models and task pseudo-rewards into HRL by extending the MAXQ framework. Our experiments
indicate that several synergies exist between HRL and Bayesian RL, and combining them is fruitful.
In future work, we plan to investigate approximate model and value representations, as well as
multi-task RL to learn the priors.
References
[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[2] Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
[3] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341-379, 2003.
[4] Martin Stolle and Doina Precup. Learning Options in Reinforcement Learning, volume 2371/2002 of Lecture Notes in Computer Science, pages 212-223. Springer, 2002.
[5] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227-303, 2000.
[6] D. Andre and S. Russell. State abstraction for programmable reinforcement learning agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI), 2002.
[7] R. Dearden, N. Friedman, and D. Andre. Model based Bayesian exploration. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann, 1999.
[8] R. Dearden, N. Friedman, and S. Russell. Bayesian Q-learning. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 1998.
[9] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: the Gaussian process approach to temporal difference learning. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[10] Mohammad Ghavamzadeh and Yaakov Engel. Bayesian policy gradient algorithms. In Advances in Neural Information Processing Systems 19. MIT Press, 2007.
[11] Alessandro Lazaric and Mohammad Ghavamzadeh. Bayesian multi-task reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[12] Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the 24th International Conference on Machine Learning, pages 1015-1022, New York, NY, USA, 2007. ACM.
[13] N. Mehta, S. Ray, P. Tadepalli, and T. Dietterich. Automatic discovery and transfer of MAXQ hierarchies. In Andrew McCallum and Sam Roweis, editors, Proceedings of the 25th International Conference on Machine Learning, pages 648-655. Omnipress, 2008.
[14] Nicholas K. Jong and Peter Stone. Hierarchical model-based reinforcement learning: R-MAX + MAXQ. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[15] Ronen I. Brafman and Moshe Tennenholtz. R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 2001.
[16] B. Marthi, S. Russell, and D. Andre. A compact, hierarchically optimal Q-function decomposition. In 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[17] M. Ghavamzadeh and Y. Engel. Bayesian actor-critic algorithms. In Zoubin Ghahramani, editor, Proceedings of the 24th Annual International Conference on Machine Learning, pages 297-304. Omnipress, 2007.
[18] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285-294, 1933.
[19] M. J. A. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[20] Zhaohui Dai, Xin Chen, Weihua Cao, and Min Wu. Model-based learning with Bayesian and MAXQ value function decomposition for hierarchical task. In Proceedings of the 8th World Congress on Intelligent Control and Automation, 2010.
[21] Ronald Edward Parr. Hierarchical Control and Learning for Markov Decision Processes. PhD thesis, University of California, Berkeley, 1998.
Risk-Aversion in Multi-armed Bandits
Amir Sani
Alessandro Lazaric
Rémi Munos
INRIA Lille - Nord Europe, Team SequeL
{amir.sani,alessandro.lazaric,remi.munos}@inria.fr
Abstract
Stochastic multi-armed bandits solve the exploration-exploitation dilemma and
ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this
paper, we introduce a novel setting based on the principle of risk-aversion, where
the objective is to compete against the arm with the best risk-return trade-off. This
setting proves to be more difficult than the standard multi-arm bandit setting, due
in part to an exploration risk which introduces a regret associated to the variability
of an algorithm. Using variance as a measure of risk, we define two algorithms,
investigate their theoretical guarantees, and report preliminary empirical results.
1 Introduction
The multi-armed bandit [13] elegantly formalizes the problem of on-line learning with partial feedback, which encompasses a large number of real-world applications, such as clinical trials, online
advertisements, adaptive routing, and cognitive radio. In the stochastic multi-armed bandit model,
a learner chooses among several arms (e.g., different treatments), each characterized by an independent reward distribution (e.g., the treatment effectiveness). At each point in time, the learner selects
one arm and receives a noisy reward observation from that arm (e.g., the effect of the treatment on
one patient). Given a finite number of n rounds (e.g., patients involved in the clinical trial), the
learner faces a dilemma between repeatedly exploring all arms and collecting reward information
versus exploiting current reward estimates by selecting the arm with the highest estimated reward.
Roughly speaking, the learning objective is to solve this exploration-exploitation dilemma and accumulate as much reward as possible over n rounds. Multi-arm bandit literature typically focuses
on the problem of finding a learning algorithm capable of maximizing the expected cumulative reward (i.e., the reward collected over n rounds averaged over all possible observation realizations),
thus implying that the best arm returns the highest expected reward. Nonetheless, in many practical
problems, maximizing the expected reward is not always the most desirable objective. For instance,
in clinical trials, the treatment which works best on average might also have considerable variability, resulting in adverse side effects for some patients. In this case, a treatment which is less effective
on average but consistently effective on different patients may be preferable to an effective but risky
treatment. More generally, some applications require an effective trade-off between risk and reward.
There is no agreed-upon definition of risk. A variety of behaviours can produce an uncertainty that
is unfavourable for a specific application and is therefore referred to as a risk. For example, an
algorithm which is consistent over multiple runs may not satisfy the desire for a solution with low
variability in every single realization of the algorithm. Two foundational risk-modeling paradigms
are Expected Utility theory [12] and the historically popular and accessible Mean-Variance paradigm
[10]. A large part of decision-making theory focuses on defining and managing risk (see e.g., [9]
for an introduction to risk from an expected utility theory perspective).
Risk has mostly been studied in on-line learning within the so-called expert advice setting (i.e.,
adversarial full-information on-line learning). In particular, [8] showed that in general, although
it is possible to achieve a small regret w.r.t. the expert with the best average performance, it is
not possible to compete against the expert which best trades off between average return and risk.
On the other hand, it is possible to define no-regret algorithms for simplified measures of risk-return. [16] studied the case of pure risk minimization (notably variance minimization) in an on-line
setting where at each step the learner is given a covariance matrix and must choose a weight vector
that minimizes the variance. The regret is then computed over the horizon and compared to the fixed
weights minimizing the variance in hindsight. In the multi-arm bandit domain, the most interesting
results are by [5] and [14]. [5] introduced an analysis of the expected regret and its distribution,
revealing that an anytime version of UCB [6] and UCB-V might have large regret with some non-negligible probability. This analysis is further extended by [14], who derived negative results showing
that no anytime algorithm can achieve a regret with both small expected value and exponential
tails. Although these results represent an important step towards the analysis of risk within bandit
algorithms, they are limited to the case where an algorithm's cumulative reward is compared to the
reward obtained by pulling the arm with the highest expectation.
In this paper, we focus on the problem of competing against the arm with the best risk-return trade-off. In particular, we refer to the popular mean-variance model introduced by [10]. In Sect. 2 we
introduce notation and define the mean-variance bandit problem. In Sect. 3 and 4 we introduce two
algorithms and study their theoretical properties. In Sect. 5 we report a set of numerical simulations
aiming at validating the theoretical results. Finally, in Sect. 7 we conclude with a discussion on
possible extensions. The proofs and additional experiments are reported in the extended version [15].
2 Mean-Variance Multi-arm Bandit
In this section we introduce the notation and define the mean-variance multi-arm bandit problem. We consider the standard multi-arm bandit setting with K arms, each characterized by a distribution ν_i bounded in the interval [0, 1]. Each distribution has a mean μ_i and a variance σ_i². The bandit problem is defined over a finite horizon of n rounds. We denote by X_{i,s} ∼ ν_i the s-th random sample drawn from the distribution of arm i. All arms and samples are independent. In the multi-arm bandit protocol, at each round t, an algorithm selects one arm I_t and observes sample X_{I_t,T_{I_t,t}}, where T_{i,t} is the number of samples observed from arm i up to time t (i.e., T_{i,t} = Σ_{s=1}^{t} 1{I_s = i}). While in the standard bandit literature the objective is to select the arm leading to the highest reward in expectation (the arm with the largest expected value μ_i), here we focus on the problem of finding the arm which effectively trades off between its expected reward (i.e., the return) and its variability (i.e., the risk). Although a large number of models for the risk-return trade-off have been proposed, here we focus on the most historically popular and simple model: the mean-variance model proposed by [10], where the return of an arm is measured by the expected reward and its risk by its variance.
Definition 1. The mean-variance of an arm i with mean μ_i, variance σ_i² and coefficient of absolute risk tolerance ρ is defined as² MV_i = σ_i² − ρμ_i.
Thus the optimal arm is the arm with the smallest mean-variance, that is, i* = arg min_i MV_i. We notice that we can obtain two extreme settings depending on the value of the risk tolerance ρ. As ρ → ∞, the mean-variance of arm i tends to the opposite of its expected value μ_i and the problem reduces to the standard expected reward maximization traditionally considered in multi-arm bandit problems. With ρ = 0, the mean-variance reduces to σ_i² and the objective becomes variance minimization. Given {X_{i,s}}_{s=1}^{t} i.i.d. samples from the distribution ν_i, we define the empirical mean-variance of an arm i with t samples as MV̂_{i,t} = σ̂²_{i,t} − ρμ̂_{i,t}, where

$$ \hat\mu_{i,t} = \frac{1}{t}\sum_{s=1}^{t} X_{i,s}, \qquad \hat\sigma^2_{i,t} = \frac{1}{t}\sum_{s=1}^{t} \big(X_{i,s} - \hat\mu_{i,t}\big)^2. \qquad (1) $$
We now consider a learning algorithm A and its corresponding performance over n rounds. Similarly to a single arm i, we define its empirical mean-variance as

$$ \widehat{\mathrm{MV}}_n(\mathcal{A}) = \hat\sigma_n^2(\mathcal{A}) - \rho\,\hat\mu_n(\mathcal{A}), \qquad (2) $$

where

$$ \hat\mu_n(\mathcal{A}) = \frac{1}{n}\sum_{t=1}^{n} Z_t, \qquad \hat\sigma_n^2(\mathcal{A}) = \frac{1}{n}\sum_{t=1}^{n} \big(Z_t - \hat\mu_n(\mathcal{A})\big)^2, \qquad (3) $$
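For concreteness, the estimator of Eq. 1, and, applied to the collected rewards Z_1, ..., Z_n, of Eqs. 2-3, can be computed as in the following Python sketch (the function name is our own, illustrative choice):

```python
import numpy as np

def empirical_mean_variance(samples, rho):
    """Empirical mean-variance of Eq. 1: sigma_hat^2 - rho * mu_hat.
    Applied to the sequence of rewards collected by an algorithm, it
    gives the quantity of Eqs. 2-3."""
    x = np.asarray(samples, dtype=float)
    mu_hat = x.mean()
    var_hat = ((x - mu_hat) ** 2).mean()  # 1/t normalization, as in Eq. 1
    return var_hat - rho * mu_hat
```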
¹ The analysis is for the pseudo-regret but it can be extended to the true regret (see Remark 2 at p. 23 of [5]).
² The coefficient of risk tolerance is the inverse of the more popular coefficient of risk aversion A = 1/ρ.
with Z_t = X_{I_t,T_{I_t,t}}, that is, the reward collected by the algorithm at time t. This leads to a natural definition of the (random) regret at each single run of the algorithm as the difference in the mean-variance performance of the algorithm compared to the best arm.
Definition 2. The regret for a learning algorithm A over n rounds is defined as

$$ \mathcal{R}_n(\mathcal{A}) = \widehat{\mathrm{MV}}_n(\mathcal{A}) - \widehat{\mathrm{MV}}_{i^*,n}. \qquad (4) $$

Given this definition, the objective is to design an algorithm whose regret decreases as the number of rounds increases (in high probability or in expectation).
We notice that the previous definition actually depends on unobserved samples. In fact, MV̂_{i*,n} is computed on n samples of i* which are not actually observed when running A. This matches the definition of true regret in standard bandits (see e.g., [5]). Thus, in order to clarify the main components characterizing the regret, we introduce additional notation. Let

$$ Y_{i,t} = \begin{cases} X_{i^*,t} & \text{if } i = i^* \\ X_{i^*,t'} \text{ with } t' = T_{i^*,n} + \sum_{j<i,\, j\neq i^*} T_{j,n} + t & \text{otherwise} \end{cases} $$

be a renaming of the samples from the optimal arm, such that while the algorithm was pulling arm i for the t-th time, Y_{i,t} is the unobserved sample from i*. The corresponding mean and variance are

$$ \tilde\mu_{i,T_{i,n}} = \frac{1}{T_{i,n}} \sum_{t=1}^{T_{i,n}} Y_{i,t}, \qquad \tilde\sigma^2_{i,T_{i,n}} = \frac{1}{T_{i,n}} \sum_{t=1}^{T_{i,n}} \big(Y_{i,t} - \tilde\mu_{i,T_{i,n}}\big)^2. \qquad (5) $$
Given these additional definitions, we can rewrite the regret as (see App. A.1 in [15])

$$ \mathcal{R}_n(\mathcal{A}) = \frac{1}{n} \sum_{i \neq i^*} T_{i,n} \Big[ \big(\hat\sigma^2_{i,T_{i,n}} - \rho\,\hat\mu_{i,T_{i,n}}\big) - \big(\tilde\sigma^2_{i,T_{i,n}} - \rho\,\tilde\mu_{i,T_{i,n}}\big) \Big] + \frac{1}{n} \sum_{i=1}^{K} T_{i,n} \big(\hat\mu_{i,T_{i,n}} - \hat\mu_n(\mathcal{A})\big)^2 - \frac{1}{n} \sum_{i=1}^{K} T_{i,n} \big(\tilde\mu_{i,T_{i,n}} - \tilde\mu_{i^*,n}\big)^2. \qquad (6) $$
Since the last term is always negative and small³, our analysis focuses on the first two terms, which reveal two interesting characteristics of A. First, an algorithm A suffers a regret whenever it chooses a suboptimal arm i ≠ i*, and this regret corresponds to the difference in the empirical mean-variance of i w.r.t. the optimal arm i*. Such a definition has a strong similarity to the standard definition of regret, where i* is the arm with highest expected value and the regret depends on the number of times suboptimal arms are pulled and their respective gaps w.r.t. the optimal arm i*. In contrast to the standard formulation of regret, A also suffers an additional regret from the variance σ̂²_n(A), which depends on the variability of the pulls T_{i,n} over different arms. Recalling the definition of the mean μ̂_n(A) as the weighted mean of the empirical means μ̂_{i,T_{i,n}} with weights T_{i,n}/n (see Eq. 3), we notice that this second term is a weighted variance of the means and illustrates the exploration risk of the algorithm. In fact, if an algorithm simply selects and pulls a single arm from the beginning, it would not suffer any exploration risk (secondary regret), since μ̂_n(A) would coincide with μ̂_{i,T_{i,n}} for the chosen arm and all other components would have zero weight. On the other hand, an algorithm accumulates exploration risk through this second term as the mean μ̂_n(A) deviates from any specific arm; the exploration risk peaks when μ̂_n(A) is furthest from all the arm means.
The previous definition of regret can be further elaborated to obtain the upper bound (see App. A.1)

$$ \mathcal{R}_n(\mathcal{A}) \le \frac{1}{n} \sum_{i \neq i^*} T_{i,n}\, \widehat\Delta_i + \frac{2}{n^2} \sum_{i=1}^{K} \sum_{j \neq i} T_{i,n} T_{j,n}\, \widehat\Gamma^2_{i,j}, \qquad (7) $$

where Δ̂_i = (σ̂²_{i,T_{i,n}} − σ̃²_{i,T_{i,n}}) − ρ(μ̂_{i,T_{i,n}} − μ̃_{i,T_{i,n}}) and Γ̂²_{i,j} = (μ̂_{i,T_{i,n}} − μ̂_{j,T_{j,n}})². Unlike the definition in Eq. 6, this upper bound explicitly illustrates the relationship between the regret and the number of pulls T_{i,n}, suggesting that a bound on the pulls is sufficient to bound the regret.
Finally, we can also introduce a definition of the pseudo-regret.
³ More precisely, it can be shown that this term decreases with rate O(K log(1/δ)/n) with probability 1 − δ.
Input: confidence δ
for t = 1, ..., n do
  for i = 1, ..., K do
    Compute B_{i,T_{i,t−1}} = MV̂_{i,T_{i,t−1}} − (5 + ρ) √( log(1/δ) / (2 T_{i,t−1}) )
  end for
  Return I_t = arg min_{i=1,...,K} B_{i,T_{i,t−1}}
  Update T_{I_t,t} = T_{I_t,t−1} + 1
  Observe X_{I_t,T_{I_t,t}} ∼ ν_{I_t}
  Update MV̂_{I_t,T_{I_t,t}}
end for
Figure 1: Pseudo-code of the MV-LCB algorithm.
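A minimal Python sketch of MV-LCB follows. The sampling interface pull(i) and the initialization with one pull per arm are our own assumptions for the sake of a runnable example; the pseudo-code above leaves them implicit:

```python
import numpy as np

def mv_lcb(pull, K, n, rho, delta):
    """pull(i) draws one sample from arm i. Returns the per-arm sample lists."""
    samples = [[pull(i)] for i in range(K)]   # assumed: one initial pull per arm
    for t in range(K, n):
        idx = []
        for i in range(K):
            x = np.asarray(samples[i])
            mu = x.mean()
            mv = ((x - mu) ** 2).mean() - rho * mu   # empirical MV, Eq. 1
            # lower-confidence index of Eq. 10
            idx.append(mv - (5 + rho) * np.sqrt(np.log(1 / delta) / (2 * len(x))))
        i_t = int(np.argmin(idx))
        samples[i_t].append(pull(i_t))
    return samples
```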
Definition 3. The pseudo-regret for a learning algorithm A over n rounds is defined as

$$ \widetilde{\mathcal{R}}_n(\mathcal{A}) = \frac{1}{n} \sum_{i \neq i^*} T_{i,n} \Delta_i + \frac{2}{n^2} \sum_{i=1}^{K} \sum_{j \neq i} T_{i,n} T_{j,n} \Gamma^2_{i,j}, \qquad (8) $$

where Δ_i = MV_i − MV_{i*} and Γ_{i,j} = μ_i − μ_j.
In the following, we denote the two components of the pseudo-regret as

$$ \widetilde{\mathcal{R}}^{\Delta}_n(\mathcal{A}) = \frac{1}{n} \sum_{i \neq i^*} T_{i,n} \Delta_i, \qquad \widetilde{\mathcal{R}}^{\Gamma}_n(\mathcal{A}) = \frac{2}{n^2} \sum_{i=1}^{K} \sum_{j \neq i} T_{i,n} T_{j,n} \Gamma^2_{i,j}, \qquad (9) $$

where R̃^Δ_n(A) constitutes the standard regret derived from the traditional formulation of the multi-arm bandit problem and R̃^Γ_n(A) denotes the exploration risk. This regret can be shown to be close to the true regret up to small terms with high probability.
Lemma 1. Given Definitions 2 and 3,

$$ \mathcal{R}_n(\mathcal{A}) \le \widetilde{\mathcal{R}}_n(\mathcal{A}) + (5+\rho)\sqrt{\frac{2K\log(6nK/\delta)}{n}} + 4\sqrt{2}\,\frac{K\log(6nK/\delta)}{n}, $$

with probability at least 1 − δ.
The previous lemma shows that any (high-probability) bound on the pseudo-regret immediately translates into a bound on the true regret. Thus, we report most of the theoretical analysis in terms of R̃_n(A). Nonetheless, it is interesting to notice the major difference between the true and pseudo-regret when compared to the standard bandit problem. In fact, it is possible to show in the risk-averse case that the pseudo-regret is not an unbiased estimator of the true regret, i.e., E[R_n] ≠ E[R̃_n]. Thus, to bound the expectation of R_n we build on the high-probability result from Lemma 1.
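Concretely, once the pull counts T_{i,n} and the true arm parameters are known, the pseudo-regret of Eq. 8 is straightforward to evaluate, e.g. (all names are illustrative):

```python
import numpy as np

def pseudo_regret(T, mv, mu):
    """T[i]: pulls of arm i after n rounds; mv[i], mu[i]: true MV_i and mu_i."""
    T, mv, mu = map(np.asarray, (T, mv, mu))
    n, i_star = T.sum(), np.argmin(mv)
    delta = mv - mv[i_star]                    # Delta_i (zero for i = i*)
    r_delta = (T * delta).sum() / n            # standard-regret component of Eq. 9
    gamma2 = (mu[:, None] - mu[None, :]) ** 2  # Gamma_{i,j}^2, diagonal is zero
    r_gamma = 2.0 * (np.outer(T, T) * gamma2).sum() / n**2  # exploration risk
    return r_delta + r_gamma
```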
3 The Mean-Variance Lower Confidence Bound Algorithm
In this section we introduce a risk-averse bandit algorithm whose objective is to identify the arm which best trades off risk and return. The algorithm is a natural extension of UCB1 [6], and we report a theoretical analysis of its mean-variance performance.
3.1 The Algorithm
We propose an index-based bandit algorithm which estimates the mean-variance of each arm and selects the optimal arm according to optimistic confidence bounds on the current estimates. A sketch of the algorithm is reported in Figure 1. For each arm, the algorithm keeps track of the empirical mean-variance MV̂_{i,s} computed from s samples. We can build high-probability confidence bounds on the empirical mean-variance through an application of the Chernoff-Hoeffding inequality (see e.g., [1] for the bound on the variance) to the terms μ̂ and σ̂².
Lemma 2. Let {X_{i,s}} be i.i.d. random variables bounded in [0, 1] from the distribution ν_i with mean μ_i and variance σ_i², and let the empirical mean μ̂_{i,s} and variance σ̂²_{i,s} be computed as in Equation 1. Then

$$ \mathbb{P}\left[ \exists\, i = 1,\ldots,K,\; s = 1,\ldots,n : \big|\widehat{\mathrm{MV}}_{i,s} - \mathrm{MV}_i\big| \ge (5+\rho)\sqrt{\frac{\log 1/\delta}{2s}} \right] \le 6nK\delta. $$
The algorithm in Figure 1 implements the principle of optimism in the face of uncertainty used in many multi-arm bandit algorithms. On the basis of the previous confidence bounds, we define a lower-confidence bound on the mean-variance of arm i when it has been pulled s times as

$$ B_{i,s} = \widehat{\mathrm{MV}}_{i,s} - (5+\rho)\sqrt{\frac{\log 1/\delta}{2s}}, \qquad (10) $$

where δ is an input parameter of the algorithm. Given the index of each arm at each round t, the algorithm simply selects the arm with the smallest mean-variance index, i.e., I_t = arg min_i B_{i,T_{i,t−1}}. We refer to this algorithm as the mean-variance lower-confidence bound (MV-LCB) algorithm.
Remark 1. We notice that MV-LCB reduces to UCB1 for ρ → ∞. This is coherent with the fact that for ρ → ∞ the mean-variance problem reduces to expected reward maximization, for which UCB1 is known to be nearly optimal. On the other hand, for ρ = 0 (variance minimization), the algorithm plays according to a lower-confidence bound on the variances.
Remark 2. The MV-LCB algorithm has a parameter δ defining the confidence level of the bounds employed in (10). In Thm. 1 we show how to optimize the parameter when the horizon n is known in advance. On the other hand, if n is not known, it is possible to design an anytime version of MV-LCB by defining a non-decreasing exploration sequence (ε_t)_t instead of the term log 1/δ.
3.2 Theoretical Analysis
In this section we report the analysis of the regret R_n(A) of MV-LCB (Fig. 1). As highlighted in Eq. 7, it is enough to analyze the number of pulls of each arm to recover a bound on the regret. The proofs (reported in [15]) are mostly based on arguments similar to the proof of UCB. We derive the following regret bound in high probability and in expectation.
Theorem 1. Let the optimal arm i* be unique and b = 2(5 + ρ). The MV-LCB algorithm achieves a pseudo-regret bounded as

$$ \widetilde{\mathcal{R}}_n(\mathcal{A}) \le \frac{b^2 \log(1/\delta)}{n} \sum_{i \neq i^*} \frac{1}{\Delta_i} + \frac{2 b^2 \log(1/\delta)}{n} \Bigg[ \sum_{i \neq i^*} \frac{\Gamma^2_{i^*,i}}{\Delta_i^2} + 4 \sum_{i \neq i^*} \sum_{j \neq i,\, j \neq i^*} \frac{\Gamma^2_{i,j}}{\Delta_i^2 \Delta_j^2} \Bigg] + \frac{5K}{n}, $$

with probability at least 1 − 6nKδ. Similarly, if MV-LCB is run with δ = 1/n², then

$$ \mathbb{E}\big[\widetilde{\mathcal{R}}_n(\mathcal{A})\big] \le \frac{2 b^2 \log n}{n} \sum_{i \neq i^*} \frac{1}{\Delta_i} + \frac{4 b^2 \log n}{n} \Bigg[ \sum_{i \neq i^*} \frac{\Gamma^2_{i^*,i}}{\Delta_i^2} + 4 \sum_{i \neq i^*} \sum_{j \neq i,\, j \neq i^*} \frac{\Gamma^2_{i,j}}{\Delta_i^2 \Delta_j^2} \Bigg] + (17 + 6\rho)\,\frac{K}{n}. $$
Remark 1 (the bound). Let Δ_min = min_{i≠i*} Δ_i and Γ_max = max_{i,j} |Γ_{i,j}|; then a rough simplification of the previous bound leads to

$$ \mathbb{E}\big[\widetilde{\mathcal{R}}_n(\mathcal{A})\big] \le O\Bigg( \frac{K \log n}{\Delta_{\min}\, n} + \frac{K^2\, \Gamma^2_{\max} \log^2 n}{\Delta^4_{\min}\, n} \Bigg). $$
First we notice that the regret decreases as O(log² n/n), implying that MV-LCB is a consistent algorithm. As already highlighted in Def. 2, the regret is mainly composed of two terms. The first term is due to the difference in the mean-variance of the best arm and the arms pulled by the algorithm, while the second term denotes the additional variance introduced by the exploration risk of pulling arms with different means. In particular, this additional term depends on the squared differences of the arm means Γ²_{i,j}. Thus, if all the arms have the same mean, this term would be zero.
Remark 2 (worst-case analysis). We can further study the result of Thm. 1 by considering the worst-case performance of MV-LCB, that is, the performance when the distributions of the arms are chosen so as to maximize the regret. In order to illustrate our argument we consider the simple case of K = 2 arms, ρ = 0 (variance minimization), μ₁ ≠ μ₂, and σ₁² = σ₂² = 0 (deterministic arms).⁴ In this case we have a variance gap Δ = 0 and Γ² > 0. According to the definition of MV-LCB, the index B_{i,s} simply reduces to B_{i,s} = −(5+ρ)√(log(1/δ)/(2s)), thus forcing the algorithm to pull both arms uniformly (i.e., T_{1,n} = T_{2,n} = n/2 up to rounding effects). Since the arms have the same variance, there is no direct regret in pulling either one or the other. Nonetheless, the algorithm has an additional variance due to the difference in the samples drawn from distributions with different means. In this case, the algorithm suffers a constant (true) regret

$$ \mathcal{R}_n(\text{MV-LCB}) = 0 + \frac{T_{1,n}\, T_{2,n}}{n^2}\, \Gamma^2 = \frac{1}{4}\, \Gamma^2, $$

independent of the number of rounds n. This argument can be generalized to multiple arms and ρ ≠ 0, since it is always possible to design an environment (i.e., a set of distributions) such that Δ_min = 0 and Γ_max ≠ 0.⁵ This result is not surprising. In fact, two arms with the same mean-variance are likely to produce similar observations, thus leading MV-LCB to pull the two arms repeatedly over time, since the algorithm is designed to try to discriminate between similar arms. Although this behavior does not suffer from any regret in pulling the "suboptimal" arm (the two arms are equivalent), it does introduce an additional variance, due to the difference in the means of the arms (Γ ≠ 0), which finally leads to a regret the algorithm is not "aware" of. This argument suggests that, for any n, it is always possible to design an environment for which MV-LCB has a constant regret. This is particularly interesting since it reveals a huge gap between the mean-variance and the standard expected-regret minimization problem and will be further investigated in the numerical simulations in Sect. 5. In fact, UCB is known to have a worst-case regret of Θ(1/√n) [3], while in the worst case MV-LCB suffers a constant regret. In the next section we introduce a simple algorithm able to deal with this problem and achieve a vanishing worst-case regret.
4 The Exploration-Exploitation Algorithm
The ExpExp algorithm divides the time horizon n into two distinct phases of length τ and n − τ respectively. During the first phase all the arms are explored uniformly, thus collecting τ/K samples each.⁶ Once the exploration phase is over, the mean-variance MV̂_{i,τ/K} of each arm is computed, and the arm with the smallest estimated mean-variance is repeatedly pulled until the end.
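A sketch of ExpExp in Python follows; the round-robin exploration order and the clamping of τ to [K, n] are our own implementation choices, not prescribed by the paper:

```python
import numpy as np

def exp_exp(pull, K, n, rho):
    """Uniform exploration for tau rounds, then commit to the arm with the
    smallest estimated mean-variance (Thm. 2 suggests tau = K(n/14)**(2/3))."""
    tau = min(n, max(K, int(K * (n / 14.0) ** (2.0 / 3.0))))
    samples = [[] for _ in range(K)]
    for t in range(tau):
        samples[t % K].append(pull(t % K))   # round-robin exploration
    mv = []
    for x in map(np.asarray, samples):
        mu = x.mean()
        mv.append(((x - mu) ** 2).mean() - rho * mu)
    best = int(np.argmin(mv))
    tail = [pull(best) for _ in range(n - tau)]  # exploitation phase
    return best, samples, tail
```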
MV-LCB is specifically designed to minimize the probability of pulling the wrong arms, so whenever there are two equivalent arms (i.e., arms with the same mean-variance), the algorithm tends to pull them the same number of times, at the cost of potentially introducing an additional variance which might result in a constant regret. On the other hand, ExpExp stops exploring the arms after τ rounds and then elicits one arm as the best and keeps pulling it for the remaining n − τ rounds. Intuitively, the parameter τ should be tuned so as to meet different requirements. The first part of the regret (i.e., the regret coming from pulling the suboptimal arms) suggests that the exploration phase τ should be long enough for the algorithm to select an empirically best arm î*_τ equivalent to the actual optimal arm i* with high probability, and at the same time as short as possible to reduce the number of times the suboptimal arms are explored. On the other hand, the second part of the regret (i.e., the variance of pulling arms with different means) is minimized by taking τ as small as possible (e.g., τ = 0 would guarantee a zero regret). The following theorem illustrates the optimal trade-off between these contrasting needs.
Theorem 2. Let ExpExp be run with τ = K(n/14)^{2/3}. Then for any choice of distributions {ν_i}, the expected regret is E[R̃_n(A)] ≤ 2K/n^{1/3}.
Remark 1 (the bound). We first notice that this bound suggests that ExpExp performs worse than MV-LCB on easy problems. In fact, Thm. 1 demonstrates that MV-LCB has a regret decreasing as O(K log n/n) whenever the gaps Δ are not small compared to n, while in the remarks of Thm. 1 we highlighted the fact that for any value of n, it is always possible to design an environment which leads MV-LCB to suffer a constant regret. On the other hand, the previous bound for ExpExp is distribution independent and indicates that the regret is still a decreasing function of n even in the worst case.
⁴ Note that in this case (i.e., Δ = 0), Thm. 1 does not hold, since the optimal arm is not unique.
⁵ Notice that this is always possible for a large majority of distributions with independent mean and variance.
⁶ In the definition and in the following analysis we ignore rounding effects.
[Figure 2: Regret of MV-LCB and ExpExp in different scenarios. Left panel, "MV-LCB Regret Terms vs. n": the mean regret and its two components Regret^Δ and Regret^Γ plotted against n × 10³. Right panel, "Worst Case Regret vs. n": the worst-case mean regret of MV-LCB and ExpExp plotted against n × 10³.]
This opens the question of whether it is possible to design an algorithm which works as well as MV-LCB on easy problems and as robustly as ExpExp on difficult problems.
Remark 2 (exploration phase). The previous result can be improved by changing the exploration strategy used in the first τ rounds. Instead of a pure uniform exploration of all the arms, we could adopt a best-arm identification algorithm such as Successive Rejects or UCB-E, which maximizes the probability of returning the best arm given a fixed budget of τ rounds (see e.g., [4]).
5 Numerical Simulations
In this section we report numerical simulations aimed at validating the main theoretical findings reported in the previous sections. In the following graphs we study the true regret R_n(A) averaged over 500 runs. We first consider the variance minimization problem (ρ = 0) with K = 2 Gaussian arms set to μ₁ = 1.0, μ₂ = 0.5, σ₁² = 0.05, and σ₂² = 0.25, and run MV-LCB.⁷ In Figure 2 we report the true regret R_n (as in the original definition in Eq. 4) and its two components R̂^Δ_n and R̂^Γ_n (these two values are defined as in Eq. 9 with Δ̂ and Γ̂ replacing Δ and Γ). As expected (see e.g., Thm. 1), the regret is characterized by the regret realized from pulling suboptimal arms and arms with different means (exploration risk) and tends to zero as n increases. Indeed, if we considered two distributions with equal means (μ₁ = μ₂), the average regret would coincide with R̂^Δ_n. Furthermore, as shown in Thm. 1, the two regret terms decrease with the same rate O(log n/n).
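For example, this experiment can be reproduced in outline with the mv_lcb and empirical_mean_variance sketches given earlier; since the renamed samples Y_{i,t} of the best arm are not observable, fresh draws from arm i* are used below as a proxy for MV̂_{i*,n} in Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(0)
mus, sig2 = [1.0, 0.5], [0.05, 0.25]        # arm 0 is optimal for rho = 0
pull = lambda i: rng.normal(mus[i], np.sqrt(sig2[i]))

n, regrets = 10_000, []
for _ in range(500):
    samples = mv_lcb(pull, K=2, n=n, rho=0.0, delta=1.0 / n**2)
    rewards = np.concatenate([np.asarray(s) for s in samples])
    mv_alg = empirical_mean_variance(rewards, rho=0.0)            # Eq. 2
    mv_star = empirical_mean_variance([pull(0) for _ in range(n)], rho=0.0)
    regrets.append(mv_alg - mv_star)                              # proxy for Eq. 4
print(np.mean(regrets))
```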
A detailed analysis of the impact of Δ and Γ on the performance of MV-LCB is reported in App. D in [15]. Here we only compare the worst-case performance of MV-LCB to ExpExp (see Figure 2). In order to have a fair comparison, for any value of n and for each of the two algorithms, we select the pair Δ_w, Γ_w which corresponds to the largest regret (we search in a grid of values with μ₁ = 1.5, μ₂ ∈ [0.4, 1.5], σ₁² ∈ [0.0, 0.25], and σ₂² = 0.25, so that Δ ∈ [0.0, 0.25] and Γ ∈ [0.0, 1.1]). As discussed in Sect. 4, while the worst-case regret of ExpExp keeps decreasing over n, it is always possible to find a problem for which the regret of MV-LCB stabilizes to a constant. For numerical results with multiple values of ρ and 15 arms, see App. D in [15].
6 Discussion
In this paper we evaluate the risk of an algorithm in terms of the variability of the sequences of samples that it actually generates. Although this notion might resemble other analyses of bandit algorithms (see e.g., the high-probability analysis in [5]), it captures different features of the learning algorithm. Whenever a bandit algorithm is run over n rounds, its behavior, combined with the arms' distributions, generates a probability distribution over sequences of n rewards. While the quality of such a sequence is usually defined by its cumulative sum (or average), here we say that a sequence of rewards is good if it displays a good trade-off between its (empirical) mean and variance. The variance of the sequence does not coincide with the variance of the algorithm over multiple runs. Let us consider a simple case with two arms that deterministically generate 0s and 1s respectively, and two different algorithms. Algorithm A1 pulls the arms in a fixed sequence at each run (e.g., arm 1, arm 2, arm 1, arm 2, and so on), so that each arm is always pulled n/2 times. Algorithm A2 chooses one arm uniformly at random at the beginning of the run and repeatedly pulls this arm for n rounds. Algorithm A1 generates sequences such as 010101... which have high variability within each run, incurs a high regret (e.g., if ρ = 0), but has no variance over multiple runs because it always generates the same sequence. On the other hand, A2 has no variability in each run, since it generates sequences with only 0s or only 1s, suffers no regret in the case of variance minimization, but has high variance over multiple runs since the two completely different sequences are generated with equal probability. This simple example shows that an algorithm with small standard regret (e.g., A1) might generate at each run sequences with high variability, while an algorithm with small mean-variance regret (e.g., A2) might have a high variance over multiple runs.
⁷ Notice that although in the paper we assumed the distributions to be bounded in [0, 1], all the results can be extended to sub-Gaussian distributions.
7 Conclusions
The majority of multi-armed bandit literature focuses on the problem of minimizing the regret w.r.t. the arm with the highest return in expectation. In this paper, we introduced a novel multi-armed bandit setting where the objective is to perform as well as the arm with the best risk-return trade-off. In particular, we relied on the mean-variance model introduced in [10] to measure the performance of the arms and define the regret of a learning algorithm. We showed that defining the risk of a learning algorithm as the variability (i.e., empirical variance) of the sequence of rewards generated at each run leads to an interesting effect on the regret, where an additional algorithm-variance term appears. We proposed two novel algorithms to solve the mean-variance bandit problem and reported their corresponding theoretical analysis. To the best of our knowledge this is the first work introducing risk-aversion in the multi-armed bandit setting, and it opens a series of interesting questions.
Lower bound. As discussed in the remarks of Thm. 1 and Thm. 2, MV-LCB has a regret of order O(√(K/n)) on easy problems and O(1) on difficult problems, while ExpExp achieves the same regret O(K/n^{1/3}) over all problems. The primary open question is whether O(K/n^{1/3}) is actually the best possible achievable rate (in the worst case) for this problem. This question is of particular interest since the standard reward expectation maximization problem has a known lower bound of Ω(√(1/n)), and a minimax rate of Θ(1/n^{1/3}) for the mean-variance problem would imply that the risk-averse bandit problem is intrinsically more difficult than standard bandit problems.
Different measures of return-risk. Considering alternative notions of risk is a natural extension of the previous setting. In fact, over the years the mean-variance model has often been criticized. From the point of view of expected utility theory, the mean-variance model is only justified under a Gaussianity assumption on the arm distributions. It also violates the monotonicity condition due to the different orders of the mean and variance, and it is not a coherent measure of risk [2]. Furthermore, the variance is a symmetric measure of risk, while it is often the case that only one-sided deviations from the mean are undesirable (e.g., in finance only losses w.r.t. the expected return are considered as a risk, while any positive deviation is not considered as a real risk). Popular replacements for the mean-variance are the α-value-at-risk (i.e., the quantile) or the Conditional Value at Risk (otherwise known as average value at risk, tail value at risk, expected shortfall, or lower tail risk) and other coherent measures of risk [2]. While the estimation of the α-value-at-risk might be challenging,⁸ concentration inequalities exist for the CVaR [7]. Another issue in moving from the variance to other measures of risk is whether single-period or multi-period risk evaluation should be used. While the single-period risk of an arm is simply the risk of its distribution, in a multi-period evaluation we consider the risk of the sum of rewards obtained by repeatedly pulling the same arm over n rounds. Unlike the variance, for which the variance of a sum of n i.i.d. samples is simply n times their variance, for other measures of risk (e.g., α-value-at-risk) this is not necessarily the case. As a result, an arm with the smallest single-period risk might not be the optimal choice over a horizon of n rounds. Therefore, the performance of an algorithm should be compared to the smallest risk that can be achieved by any sequence of arms over n rounds, thus requiring a new definition of regret.
Simple regret. Finally, an interesting related problem is the simple regret setting, where the learner is allowed to explore over n rounds and only suffers a regret defined on the solution returned at the end. It is known that it is possible to design algorithms able to effectively estimate the means of the arms and finally return the best arm with high probability. In the risk-return setting, the objective would be to return the arm with the best risk-return trade-off.
Acknowledgments. This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council and FEDER through the "contrat de projets État-Région 2007-2013", the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 270327, and the PASCAL2 European Network of Excellence.
⁸ While the cumulative distribution of a random variable can be reliably estimated (see e.g., [11]), estimating the quantile might be more difficult.
References
[1] András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411:2712-2728, June 2010.
[2] P. Artzner, F. Delbaen, J.M. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, (June 1996):1-24, 1999.
[3] Jean-Yves Audibert and Sébastien Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2785-2836, 2010.
[4] Jean-Yves Audibert, Sébastien Bubeck, and Rémi Munos. Best arm identification in multi-armed bandits. In Proceedings of the Twenty-third Conference on Learning Theory (COLT'10), 2010.
[5] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science, 410:1876-1902, 2009.
[6] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235-256, 2002.
[7] David B. Brown. Large deviations bounds for estimating conditional value-at-risk. Operations Research Letters, 35:722-730, 2007.
[8] Eyal Even-Dar, Michael Kearns, and Jennifer Wortman. Risk-sensitive online learning. In Proceedings of the 17th International Conference on Algorithmic Learning Theory (ALT'06), pages 199-213, 2006.
[9] Christian Gollier. The Economics of Risk and Time. The MIT Press, 2001.
[10] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77-91, 1952.
[11] Pascal Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, 18(3):1269-1283, 1990.
[12] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, 1947.
[13] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the AMS, 58:527-535, 1952.
[14] Antoine Salomon and Jean-Yves Audibert. Deviations of stochastic bandit regret. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT'11), pages 159-173, 2011.
[15] Amir Sani, Alessandro Lazaric, and Rémi Munos. Risk-aversion in multi-arm bandit. Technical Report hal-00750298, INRIA, 2012.
[16] Manfred K. Warmuth and Dima Kuzmin. Online variance minimization. In Proceedings of the 19th Annual Conference on Learning Theory (COLT'06), pages 514-528, 2006.
Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
Ofer Meshi†  Tommi Jaakkola‡  Amir Globerson†
[email protected]  [email protected]  [email protected]
Abstract
Finding maximum a posteriori (MAP) assignments in graphical models is an important task in many applications. Since the problem is generally hard, linear programming (LP) relaxations are often used. Solving these relaxations efficiently
is thus an important practical problem. In recent years, several authors have proposed message passing updates corresponding to coordinate descent in the dual
LP. However, these are generally not guaranteed to converge to a global optimum.
One approach to remedy this is to smooth the LP, and perform coordinate descent
on the smoothed dual. However, little is known about the convergence rate of this
procedure. Here we perform a thorough rate analysis of such schemes and derive
primal and dual convergence rates. We also provide a simple dual to primal mapping that yields feasible primal solutions with a guaranteed rate of convergence.
Empirical evaluation supports our theoretical claims and shows that the method is
highly competitive with state of the art approaches that yield global optima.
1 Introduction
Many applications involve simultaneous prediction of multiple variables. For example, we may seek
to label pixels in an image, infer amino acid residues in protein design, or find the semantic role of
words in a sentence. These problems can be cast as maximizing a function over a set of labels (or
minimizing an energy function). The function typically decomposes into a sum of local functions
over overlapping subsets of variables.
Such maximization problems are nevertheless typically hard. Even for simple decompositions (e.g.,
subsets correspond to pairs of variables), maximizing over the set of labels is often provably NPhard. One approach would be to reduce the problem to a tractable one, e.g., by constraining the
model to a low tree-width graph. However, empirically, using more complex interactions together
with approximate inference methods is often advantageous. One popular family of approximate
methods is the linear programming (LP) relaxation approach. Although these LPs are generally
tractable, general purpose LP solvers typically do not exploit the problem structure [28]. Therefore
a great deal of effort has gone into designing solvers that are specifically tailored to typical MAP-LP relaxations. These include, for example, cut based algorithms [2], accelerated gradient methods
[8], and augmented Lagrangian methods [10, 12]. One class of particularly simple algorithms,
which we will focus on here, are coordinate minimization based approaches. Examples include
max-sum-diffusion [25], MPLP [5] and TRW-S [9]. These work by first taking the dual of the LP
and then optimizing the dual in a block coordinate fashion [21]. In many cases, the coordinate
block operations can be performed in closed form, resulting in updates quite similar to the max-product message passing algorithm. By coordinate minimization we mean that at each step a set
of coordinates is chosen, all other coordinates are fixed, and the chosen coordinates are set to their
† School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
‡ CSAIL, MIT, Cambridge, MA
optimal value given the rest. This is different from a coordinate descent strategy where instead a
gradient step is performed on the chosen coordinates (rather than full optimization).
A main caveat of the coordinate minimization approach is that it will not necessarily find the global
optimum of the LP (although in practice it often does). This is a direct result of the LP objective
not being strictly convex. Several authors have proposed to smooth the LP with entropy terms
and employ variants of coordinate minimization [7, 26]. However, the convergence rates of these
methods have not been analyzed. Moreover, since the algorithms work in the dual, there is no
simple procedure to map the result back into primal feasible variables. We seek to address all these
shortcomings: we present a convergence rate analysis of dual coordinate minimization algorithms,
provide a simple scheme for generating primal variables from dual ones, and analyze the resulting
primal convergence rates.
Convergence rates for coordinate minimization are not common in the literature. While asymptotic
convergence is relatively well understood [22], finite rates have been harder to obtain. Recent work
[17] provides rates for rather limited settings which do not hold in our case. On the other hand,
for coordinate descent methods, some rates have been recently obtained for greedy and stochastic
update schemes [16, 20]. These do not apply directly to the full coordinate minimization case which
we study. A related analysis of MAP-LP using smoothing appeared in [3]. However, their approach
is specific to LDPC codes, and does not apply to general MAP problems as we analyze here.
2 MAP and LP Relaxations
Consider a set of n discrete variables x₁, ..., x_n, and a set C of subsets of these variables (i.e., c ∈ C is a subset of {1, ..., n}). We consider maximization problems over functions that decompose according to these subsets. In particular, each subset c is associated with a local function or factor θ_c(x_c), and we also include factors θ_i(x_i) for each individual variable.¹ The MAP problem is to find an assignment x = (x₁, ..., x_n) to all the variables which maximizes the sum of these factors:

$$ \mathrm{MAP}(\theta) = \max_x \sum_{c \in C} \theta_c(x_c) + \sum_{i=1}^{n} \theta_i(x_i) \qquad (1) $$
Linear programming relaxations are a popular approach to approximating combinatorial optimization problems [6, 23, 25]. For example, we obtain a relaxation of the discrete optimization problem given in Eq. (1) by replacing it with the following linear program:²

$$ P^{MAP}: \quad \max_{\mu \in M_L} P(\mu) = \max_{\mu \in M_L} \sum_c \sum_{x_c} \theta_c(x_c)\,\mu_c(x_c) + \sum_i \sum_{x_i} \theta_i(x_i)\,\mu_i(x_i) = \max_{\mu \in M_L} \mu \cdot \theta \qquad (2) $$
consistency constraints on the marginals {?i (xi ), ?xi } and {?c (xc ), ?xc }. Specifically,
P
?c (xc ) = ?i (xi ) ?c, i ? c, xi
ML = ? ? 0 : Pxc\i
(3)
?i
xi ?i (xi ) = 1
If the maximizer of P M AP has only integral values (i.e., 0 or 1) it can be used to find the MAP
assignment (e.g., by taking the xi that maximizes ?i (xi )). However, in the general case the solution
may be fractional [24] and the maximum of P M AP is an upper bound on MAP(?).
2.1 Smoothing the LP
As mentioned earlier, several authors have considered a smoothed version of the LP in Eq. (2). As we shall see, this offers several advantages over solving the LP directly. Given a smoothing parameter τ > 0, we consider the following smoothed primal problem:

$$ P^{MAP}_\tau: \quad \max_{\mu \in M_L} P_\tau(\mu) = \max_{\mu \in M_L} \mu \cdot \theta + \frac{1}{\tau} \sum_c H(\mu_c) + \frac{1}{\tau} \sum_i H(\mu_i) \qquad (4) $$

¹ Although singleton factors are not needed for generality, we keep them for notational convenience.
² We use μ and θ to denote vectors consisting of all μ and θ values respectively.
where H(μ_c) and H(μ_i) are local entropy terms. Note that as τ → ∞ we obtain the original primal problem. In fact, a stronger result can be shown, namely that the optimal value of P^{MAP} is O(1/τ) close to the optimal value of P^{MAP}_τ. This justifies using the smoothed objective P_τ as a proxy to P in Eq. (2). We express this in the following lemma (which appears in similar forms in [7, 15]).
Lemma 2.1. Denote by μ* the optimum of problem P^{MAP} in Eq. (2) and by μ̂*_τ the optimum of problem P^{MAP}_τ in Eq. (4). Then:

$$ \hat\mu^*_\tau \cdot \theta \;\le\; \mu^* \cdot \theta \;\le\; \hat\mu^*_\tau \cdot \theta + \frac{H_{\max}}{\tau}, \qquad (5) $$

where H_max = Σ_c log|X_c| + Σ_i log|X_i|. In other words, the smoothed optimum is an O(1/τ)-optimal solution of the original non-smoothed problem.
We shall be particularly interested in the dual of P^{MAP}_τ since it facilitates simple coordinate minimization updates. Our dual variables will be denoted by δ_{ci}(x_i), which can be interpreted as messages from subset c to node i about the value of variable x_i. The dual variables are therefore indexed by (c, i, x_i) and written as δ_{ci}(x_i). The dual objective can be shown to be:
$$ F(\delta) = \sum_c \frac{1}{\tau} \log \sum_{x_c} \exp\Big( \tau\, \theta_c(x_c) - \tau \sum_{i : i \in c} \delta_{ci}(x_i) \Big) + \sum_i \frac{1}{\tau} \log \sum_{x_i} \exp\Big( \tau\, \theta_i(x_i) + \tau \sum_{c : i \in c} \delta_{ci}(x_i) \Big) \qquad (6) $$

The dual problem is an unconstrained smooth minimization problem:

$$ D^{MAP}_\tau: \quad \min_\delta F(\delta) \qquad (7) $$

Convex duality implies that the optima of D^{MAP}_τ and P^{MAP}_τ coincide.
Finally, we shall be interested in transformations between dual variables δ and primal variables μ (see Section 5). The following are the transformations obtained from the Lagrangian derivation (i.e., they can be used to switch from optimal dual variables to optimal primal variables):

$$ \mu_c(x_c; \delta) \propto \exp\Big( \tau\, \theta_c(x_c) - \tau \sum_{i : i \in c} \delta_{ci}(x_i) \Big), \qquad \mu_i(x_i; \delta) \propto \exp\Big( \tau\, \theta_i(x_i) + \tau \sum_{c : i \in c} \delta_{ci}(x_i) \Big) \qquad (8) $$

We denote the vector of all such marginals by μ(δ). For the dual variables δ that minimize F(δ) it holds that μ(δ) is feasible (i.e., μ(δ) ∈ M_L). However, we will also consider μ(δ) for non-optimal δ, and show how to obtain primal feasible approximations from μ(δ). These will be helpful in obtaining primal convergence rates.
It is easy to see that (∇F(δᵗ))_{c,i,x_i} = μ_i(x_i; δᵗ) − μ_c(x_i; δᵗ), where (with some abuse of notation) we denote μ_c(x_i) = Σ_{x_{c∖i}} μ_c(x_{c∖i}, x_i). The elements of the gradient thus correspond to inconsistency between the marginals μ(δᵗ) (i.e., the degree to which they violate the constraints in Eq. (3)). We shall make repeated use of this fact to link primal and dual variables.
3 Coordinate Minimization Algorithms
In this section we propose several coordinate minimization procedures for solving D^{MAP}_τ (Eq. (7)). We first set some notation to define block coordinate minimization algorithms. Denote the objective we want to minimize by F(δ), where δ corresponds to a set of N variables. Now define S = {S₁, ..., S_M} as a set of subsets, where each subset S_i ⊆ {1, ..., N} describes a coordinate block. We will assume that S_i ∩ S_j = ∅ for all i, j and that ∪_i S_i = {1, ..., N}.
Block coordinate minimization algorithms work as follows: at each iteration, first set δ^{t+1} = δᵗ. Next choose a block S_i and set:

$$ \delta^{t+1}_{S_i} = \arg\min_{\delta_{S_i}} F_i(\delta_{S_i}; \delta^t) \qquad (9) $$

where we use F_i(δ_{S_i}; δᵗ) to denote the function F restricted to the variables δ_{S_i} and where all other variables are set to their value in δᵗ. In other words, at each iteration we fully optimize only over the variables δ_{S_i} while fixing all other variables. We assume that the minimization step in Eq. (9) can be solved in closed form, which is indeed the case for the updates we consider.
Regarding the choice of an update schedule, several options are available:
• Cyclic: Decide on a fixed order (e.g., S₁, ..., S_M) and cycle through it.
• Stochastic: Draw an index i uniformly at random³ at each iteration and use the block S_i.
• Greedy: Denote by ∇_{S_i}F(δᵗ) the gradient ∇F(δᵗ) evaluated at the coordinates S_i only. The greedy scheme is to choose the S_i that maximizes ‖∇_{S_i}F(δᵗ)‖_∞. In other words, choose the set of coordinates that corresponds to the maximum gradient of the function F. Intuitively this corresponds to choosing the block that promises the maximal (local) decrease in objective. Note that to find the best coordinate we presumably must process all sets S_i to find the best one. We will show later that this can be done rather efficiently in our case.
In our analysis, we shall focus on the Stochastic and Greedy cases, and analyze their rate of convergence. The cyclic case is typically hard to analyze, with results only under multiple conditions which do not hold here (e.g., see [17]).
Another consideration when designing coordinate minimization algorithms is the choice of block size. One possible choice is all variables δ_{ci}(·) (for a specific pair ci). This is the block chosen in the max-sum-diffusion (MSD) algorithm (see [25] and [26] for non-smooth and smooth MSD). A larger block that also facilitates closed-form updates is the set of variables δ_{·i}(·), namely all messages into a variable i from factors c such that i ∈ c. We call this a star update. The update is used in [13] for the non-smoothed dual (but the possibility of applying it to the smoothed version is mentioned).
For simplicity, we focus here only on the star update, but the derivation is similar for other choices. To derive the star update around variable i, one needs to fix all variables except δ_{·i}(·) and then set the latter to minimize F(δ). Since F(δ) is differentiable this is pretty straightforward. The update turns out to be:⁴

$$ \delta^{t+1}_{ci}(x_i) = \delta^{t}_{ci}(x_i) + \frac{1}{\tau} \log \mu^{t}_c(x_i) - \frac{1}{N_i+1} \cdot \frac{1}{\tau} \log\Big( \mu^{t}_i(x_i) \prod_{c' : i \in c'} \mu^{t}_{c'}(x_i) \Big) \qquad (10) $$

where N_i = |{c : i ∈ c}|. It is interesting to consider the improvement in F(δ) as a result of the star update. It can be shown to be exactly:

$$ F(\delta^t) - F(\delta^{t+1}) = -\frac{1}{\tau} \log \Bigg( \sum_{x_i} \Big( \mu^{t}_i(x_i) \prod_{c : i \in c} \mu^{t}_c(x_i) \Big)^{\frac{1}{N_i+1}} \Bigg)^{N_i+1} $$

The RHS is known as Matusita's divergence measure [11], and is a generalization of the Bhattacharyya divergence to several distributions. Thus the improvement can be easily computed before actually applying the update and is directly related to how consistent the N_i + 1 distributions μᵗ_c(x_i), μᵗ_i(x_i) are. Recall that at the optimum they all agree, as μ ∈ M_L, and thus the expected improvement is zero.
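To make the update concrete, here is a minimal Python sketch for a pairwise model (each factor is an edge (i, j)). The data layout, dicts keyed by edge and by (edge, endpoint) pairs, and all function names are our own choices for illustration, not notation from the paper; the beliefs are computed from the current δ as in Eq. (8):

```python
import numpy as np

EPS = 1e-300  # guards log(0) from numerical underflow

def factor_belief(c, theta_c, delta, tau):
    """mu_c(x_c; delta) of Eq. 8 for edge c = (i, j); theta_c has shape (|X_i|, |X_j|)."""
    i, j = c
    lg = tau * (theta_c - delta[(c, i)][:, None] - delta[(c, j)][None, :])
    b = np.exp(lg - lg.max())
    return b / b.sum()

def var_belief(i, theta_i, edges, delta, tau):
    """mu_i(x_i; delta) of Eq. 8."""
    lg = tau * (theta_i + sum(delta[(c, i)] for c in edges if i in c))
    b = np.exp(lg - lg.max())
    return b / b.sum()

def star_update(i, unary, pairwise, edges, delta, tau):
    """Closed-form star update of Eq. 10 around variable i (in-place on delta)."""
    nbr = [c for c in edges if i in c]
    mu_i = var_belief(i, unary[i], edges, delta, tau)
    mu_c = {c: factor_belief(c, pairwise[c], delta, tau) for c in nbr}
    # marginalize each edge belief onto variable i: mu_c(x_i)
    mu_c_i = {c: (b.sum(axis=1) if c[0] == i else b.sum(axis=0))
              for c, b in mu_c.items()}
    geo = np.log(mu_i + EPS) + sum(np.log(m + EPS) for m in mu_c_i.values())
    for c in nbr:
        delta[(c, i)] = (delta[(c, i)]
                         + np.log(mu_c_i[c] + EPS) / tau
                         - geo / (tau * (len(nbr) + 1)))
```

All beliefs are computed from δᵗ before any message is modified, matching the closed-form nature of the update.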
4 Dual Convergence Rate Analysis
We begin with the convergence rates of the dual F using greedy and random schemes described in
Section 3. In Section 5 we subsequently show how to obtain a primal feasible solution and how
the dual rates give rise to primal rates. Our analysis builds on the fact that we can lower bound the
improvement at each step, as a function of some norm of the block gradient.
4.1 Greedy block minimization
Theorem 4.1. Define B₁ to be a constant such that ‖δᵗ − δ*‖₁ ≤ B₁ for all t. If coordinate minimization of each block S_i satisfies

$$ F(\delta^t) - F(\delta^{t+1}) \ge \frac{1}{k} \big\|\nabla_{S_i} F(\delta^t)\big\|_\infty^2 \qquad (11) $$

for all t, then for any ε > 0, after T = kB₁²/ε iterations of the greedy algorithm, F(δᵀ) − F(δ*) ≤ ε.
³ Non-uniform schedules are also possible. We consider the uniform one for simplicity.
⁴ The update is presented here in additive form; there is an equivalent absolute form [21].
Proof. Using Hölder's inequality we obtain the bound:

$$ F(\delta^t) - F(\delta^*) \le \nabla F(\delta^t)^\top (\delta^t - \delta^*) \le \big\|\nabla F(\delta^t)\big\|_\infty \cdot \big\|\delta^t - \delta^*\big\|_1 \qquad (12) $$

implying ‖∇F(δᵗ)‖_∞ ≥ (1/B₁)(F(δᵗ) − F(δ*)). Now, using the condition on the improvement and the greedy nature of the update, we obtain a bound on the improvement:

$$ F(\delta^t) - F(\delta^{t+1}) \ge \frac{1}{k}\big\|\nabla_{S_i} F(\delta^t)\big\|_\infty^2 = \frac{1}{k}\big\|\nabla F(\delta^t)\big\|_\infty^2 \ge \frac{1}{kB_1^2}\big(F(\delta^t) - F(\delta^*)\big)^2 \ge \frac{1}{kB_1^2}\big(F(\delta^t) - F(\delta^*)\big)\big(F(\delta^{t+1}) - F(\delta^*)\big) \qquad (13) $$

Hence,

$$ \frac{\big(F(\delta^t) - F(\delta^*)\big) - \big(F(\delta^{t+1}) - F(\delta^*)\big)}{\big(F(\delta^t) - F(\delta^*)\big)\big(F(\delta^{t+1}) - F(\delta^*)\big)} \ge \frac{1}{kB_1^2} \;\;\Longrightarrow\;\; \frac{1}{F(\delta^{t+1}) - F(\delta^*)} - \frac{1}{F(\delta^t) - F(\delta^*)} \ge \frac{1}{kB_1^2} \qquad (14) $$

Summing over t we obtain:

$$ \frac{1}{F(\delta^T) - F(\delta^*)} \ge \frac{1}{F(\delta^T) - F(\delta^*)} - \frac{1}{F(\delta^0) - F(\delta^*)} \ge \frac{T}{kB_1^2} $$

and the desired result follows.
4.2 Stochastic block minimization
Theorem 4.2. Define B₂ to be a constant such that ‖δᵗ − δ*‖₂ ≤ B₂ for all t. If coordinate minimization of each block S_i satisfies

$$ F(\delta^t) - F(\delta^{t+1}) \ge \frac{1}{k} \big\|\nabla_{S_i} F(\delta^t)\big\|_2^2 \qquad (15) $$

for all t, then for any ε > 0, after T = k|S|B₂²/ε iterations of the stochastic algorithm we have E[F(δᵀ)] − F(δ*) ≤ ε.⁵
The proof is similar to Nesterov's analysis (see Theorem 1 in [16]). The proof in [16] relies on the improvement condition in Eq. (15) and not on the precise nature of the update. Note that since the cost of the update is roughly linear in the size of the block, this bound does not tell us which block size is better (the cost of an update times the number of blocks is roughly constant).
4.3 Analysis of D^{MAP}_τ block minimization
We can now obtain rates for our coordinate minimization scheme for optimizing D^{MAP}_τ by finding the k to be used in the conditions of Eq. (15) and Eq. (11). The result for the star update is given below.
Proposition 4.3. The star update for x_i satisfies the conditions in Eqs. 15 and 11 with k = 4τN_i.
This can be shown using Equation 2.4 in [14], which states that if F_i(δ_{S_i}; δ) (see Eq. (9)) has Lipschitz constant L_i then Eq. (15) is satisfied with k = 2L_i. We can then use the fact that the Lipschitz constant of a star block is at most 2τN_i (this can be calculated as in [18]) to obtain the result.⁶ To complete the analysis, it turns out that B₁ and B₂ can be bounded via a function of θ by bounding ‖δ‖₁ (see supplementary, Lemma 1.2). We proceed to discuss the implications of these bounds.
4.4 Comparing the different schemes
The results we derived have several implications. First, we see that both stochastic and greedy schemes achieve a rate of O(τ/ε). This matches the known rates for regular (non-accelerated) gradient descent on functions with Lipschitz continuous gradient (e.g., see [14]), although in practice coordinate minimization is often much faster.
⁵ Expectation is taken with respect to the randomization of blocks.
⁶ We also provide a direct proof in the supplementary, Section 2.
The main difference between the greedy and stochastic rates is that the factor |S| (the number of
blocks) does not appear in the greedy rate, and does appear in the stochastic one. This can have a
considerable effect since |S| is either the number of variables n (in the star update) or the number
of factors |C| (in MPLP). Both can be significant (e.g., |C| is the number of edges in a pairwise
MRF model). The greedy algorithm does pay a price for this advantage, since it has to find the
optimal block to update at each iteration. However, for the problem we study here this can be
done much more efficiently using a priority queue. To see this, consider the star update. A change
in the variables δ_{·i}(·) will only affect the blocks that correspond to variables j that share some c such that i ∈ c. In many cases this set is small (e.g., low-degree pairwise MRFs), and thus we will only
have to change the priority queue a small number of times, and this cost would be negligible when
using a Fibonacci heap for example.7 Indeed, our empirical results show that the greedy algorithm
consistently outperforms the stochastic one (see Section 6).
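One way to realize this schedule cheaply is a binary heap with lazy deletion (a sketch of our own; the paper mentions a Fibonacci heap, but an ordinary heapq gives similar amortized behavior for this access pattern):

```python
import heapq

class GreedyScheduler:
    """Tracks per-block scores (e.g., block gradient norms) and returns the
    current maximizer; stale heap entries are discarded lazily on access."""
    def __init__(self, scores):            # scores: dict block -> score
        self.score = dict(scores)
        self.heap = [(-s, b) for b, s in self.score.items()]
        heapq.heapify(self.heap)

    def update(self, block, new_score):    # call only for the few blocks a star update affects
        self.score[block] = new_score
        heapq.heappush(self.heap, (-new_score, block))

    def best(self):                        # O(log n) amortized
        while True:
            s, b = self.heap[0]
            if -s == self.score[b]:        # entry is current
                return b
            heapq.heappop(self.heap)       # stale entry: drop and retry
```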
5 Primal Convergence
Thus far we have considered only dual variables. However, it is often important to recover the primal variables. We therefore focus on extracting primal feasible solutions from the current δ, and characterize the degree of primal optimality and the associated rates. The primal variables μ(δ) (see Eq. (8)) need not be feasible in the sense that the consistency constraints in Eq. (3) are not necessarily satisfied. This is true also for other approaches to recovering primal variables from the dual, such as averaging subgradients when using subgradient descent (see, e.g., [21]).
We propose a simple two-step algorithm for transforming any dual variables δ into primal feasible variables μ̃(δ) ∈ M_L. The resulting μ̃(δ) will also be shown to converge to the optimal primal solution in Section 5.1. The procedure is described in Algorithm 1 below.
Algorithm 1 Mapping to feasible primal solution

Step 1: Make marginals consistent.
  For all i do:  μ̄_i(x_i) = [ 1 / (1 + Σ_{c:i∈c} 1/|X_{c∖i}|) ] ( μ_i(x_i) + Σ_{c:i∈c} (1/|X_{c∖i}|) μ_c(x_i) )
  For all c do:  μ̄_c(x_c) = μ_c(x_c) − Σ_{i:i∈c} (1/|X_{c∖i}|) ( μ_c(x_i) − μ̄_i(x_i) )

Step 2: Make marginals non-negative.
  λ = 0
  for c ∈ C, x_c do
    if μ̄_c(x_c) < 0 then
      λ = max( λ, −μ̄_c(x_c) / ( −μ̄_c(x_c) + 1/|X_c| ) )
    else if μ̄_c(x_c) > 1 then
      λ = max( λ, ( μ̄_c(x_c) − 1 ) / ( μ̄_c(x_c) − 1/|X_c| ) )
    end if
  end for
  for ℓ = 1, ..., n; c ∈ C do
    μ̃_ℓ(x_ℓ) = (1 − λ) μ̄_ℓ(x_ℓ) + λ / |X_ℓ|
    μ̃_c(x_c) = (1 − λ) μ̄_c(x_c) + λ / |X_c|
  end for
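A sketch of Algorithm 1 for the pairwise case follows. The dict-based layout and helper names are our own illustrative assumptions; mu_i and mu_c hold the (possibly inconsistent) marginals μ(δ) of Eq. (8):

```python
import numpy as np

def edge_marginal(tab, c, i):
    """mu_c(x_i): marginalize the table of edge c = (i, j) onto endpoint i."""
    return tab.sum(axis=1) if c[0] == i else tab.sum(axis=0)

def other_card(tab, c, i):
    """|X_{c \\ i}|: number of labels of the other endpoint of edge c."""
    return tab.shape[1] if c[0] == i else tab.shape[0]

def map_to_feasible(mu_i, mu_c, edges):
    # Step 1: Euclidean projection onto the consistency constraints.
    bar_i, bar_c = {}, {}
    for i, m in mu_i.items():
        nbr = [c for c in edges if i in c]
        denom = 1.0 + sum(1.0 / other_card(mu_c[c], c, i) for c in nbr)
        s = m + sum(edge_marginal(mu_c[c], c, i) / other_card(mu_c[c], c, i)
                    for c in nbr)
        bar_i[i] = s / denom
    for c in edges:
        b = mu_c[c].astype(float).copy()
        for axis, i in enumerate(c):
            corr = (edge_marginal(mu_c[c], c, i) - bar_i[i]) / other_card(mu_c[c], c, i)
            b -= np.expand_dims(corr, axis=1 - axis)  # broadcast over x_{c\i}
        bar_c[c] = b
    # Step 2: smallest lambda with 0 <= (1 - lam) * bar + lam * uniform <= 1.
    lam = 0.0
    for c in edges:
        u = 1.0 / bar_c[c].size
        neg = bar_c[c][bar_c[c] < 0]
        if neg.size:
            lam = max(lam, float(np.max(-neg / (-neg + u))))
        big = bar_c[c][bar_c[c] > 1]
        if big.size:
            lam = max(lam, float(np.max((big - 1) / (big - u))))
    tilde_i = {i: (1 - lam) * m + lam / m.size for i, m in bar_i.items()}
    tilde_c = {c: (1 - lam) * m + lam / m.size for c, m in bar_c.items()}
    return tilde_i, tilde_c
```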
Importantly, all steps consist of cheap elementary local calculations, in contrast to other methods previously proposed for this task (compare to [18, 27]). The first step performs a Euclidean projection of μ(δ) onto consistent marginals μ̄. Specifically, it solves:

  min_{μ̄} (1/2) ‖μ(δ) − μ̄‖²   s.t.   μ̄_c(x_i) = μ̄_i(x_i) for all c, i ∈ c, x_i,   and   Σ_{x_i} μ̄_i(x_i) = 1 for all i
Note that we did not include non-negativity constraints above, so the projection might result in a negative μ̄. In the second step we "pull" μ̄ back into the feasible regime by taking a convex combination
⁷ This was also used in the residual belief propagation approach [4], which however is less theoretically justified than what we propose here.
with the uniform distribution u (see [3] for a related approach). In particular, this step solves the simple problem of finding the smallest λ ∈ [0, 1] such that 0 ≤ μ̃ ≤ 1 (where μ̃ = (1 − λ)μ̄ + λu). Since this step interpolates between two distributions that satisfy the consistency and normalization constraints, μ̃ will be in the local polytope ML.
5.1 Primal convergence rate
Now that we have a procedure for obtaining a primal solution, we analyze the corresponding convergence rate. First, we show that if we have δ for which ‖∇F(δ)‖∞ ≤ ε, then μ̃(δ) (after Algorithm 1) is an O(ε) primal optimal solution.

Theorem 5.1. Denote by P*_τ the optimum of the smoothed primal PMAP_τ. For any set of dual variables δ, and any ε ∈ R(τ) (see supp. for the definition of R(τ)), it holds that if ‖∇F(δ)‖∞ ≤ ε then P*_τ − P_τ(μ̃(δ)) ≤ C₀ε. The constant C₀ depends only on the parameters θ and is independent of τ.
The proof is given in the supplementary file (Section 1). The key idea is to break F(δ) − P_τ(μ̃(δ)) into components, and show that each component is upper bounded by O(ε). The range R(τ) consists of ε ≤ O(1/τ) and ε ≥ O(e^{−τ}). As we show in the supplementary, this range is large enough to guarantee any desired accuracy in the non-smoothed primal. We can now translate dual rates into primal rates. This can be done via the following well-known lemma:
primal rates. This can be done via the following well known lemma:
Lemma 5.2. Any convex function F with Lipschitz continuous gradient and Lipschitz constant L
satisfies k?F (?)k22 ? 2L (F (?) ? F (? ? )).
These results together with the fact that k?F (?)k22 ? k?F (?)k2? , and the Lipschitz constant of
F (?) is O(? ), lead to the following theorem.
Theorem 5.3. Given any algorithm for optimizing DMAP_τ and ε ∈ R(τ), if the algorithm is guaranteed to achieve F(δ^t) − F(δ*) ≤ ε after O(g(ε)) iterations, then it is guaranteed to be primal optimal, i.e., P*_τ − P_τ(μ̃(δ^t)) ≤ ε, after O(g(ε²/τ)) iterations.⁸

The theorem lets us directly translate dual convergence rates into primal ones. Note that it applies to any algorithm for DMAP_τ (not only coordinate minimization), and the only property of the algorithm used in the proof is F(δ^t) ≤ F(0) for all t. Put in the context of our previous results, any algorithm that achieves F(δ^t) − F(δ*) ≤ ε in t = O(τ/ε) iterations is guaranteed to achieve P*_τ − P_τ(μ̃(δ^{t′})) ≤ ε in t′ = O(τ²/ε²) iterations.
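To see where the ε²/τ argument comes from, the two facts above can be chained (a restatement of the argument, with L = O(τ) the Lipschitz constant):

```latex
\|\nabla F(\delta^t)\|_\infty^2 \;\le\; \|\nabla F(\delta^t)\|_2^2
\;\le\; 2L\bigl(F(\delta^t) - F(\delta^*)\bigr),
```

so a dual gap of ε²/(2L) = O(ε²/τ) already forces ‖∇F(δ^t)‖∞ ≤ ε, at which point Theorem 5.1 converts the small gradient into an O(ε) primal gap.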
6 Experiments
In this section we evaluate coordinate minimization algorithms on a MAP problem, and compare
them to state-of-the-art baselines. Specifically, we compare the running time of greedy coordinate
minimization, stochastic coordinate minimization, full gradient descent, and FISTA, an accelerated gradient method [1] (details on the gradient-based algorithms are provided in the supplementary, Section 3).
Gradient descent is known to converge in O(1/ε) iterations, while FISTA converges in O(1/√ε) iterations [1]. We compare the performance of the algorithms on protein side-chain
prediction problems from the dataset of Yanover et al. [28]. These problems involve finding the 3D
configuration of rotamers given the backbone structure of a protein. The problems are modeled by
singleton and pairwise factors and can be posed as finding a MAP assignment for the given model.
Figure 1(a) shows the objective value for each algorithm over time. We first notice that the greedy
algorithm converges faster than the stochastic one. This is in agreement with our theoretical analysis.
Second, we observe that the coordinate minimization algorithms are competitive with the accelerated gradient method FISTA and are much faster than the gradient method. Third, as Theorem 5.3
predicts, primal convergence is slower than dual convergence (notice the logarithmic timescale).
Finally, we can see that better convergence of the dual objective corresponds to better convergence
of the primal objective, in both fractional and integral domains. In our experiments the quality of
the decoded integral solution (dashed lines) significantly exceeds that of the fractional solution. Although sometimes a fractional solution can be useful in itself, this suggests that if only an integral
solution is sought then it could be enough to decode directly from the dual variables.
8
We omit constants not depending on ? and .
7
[Figure 1(a): dual objective as a function of runtime (secs, log scale) for Greedy, Stochastic, FISTA, and Gradient.]
[Figure 1(b): runtime ratios t_alg/t_greedy: Greedy 1; Stochastic 8.6 ± 0.6; FISTA 814.2 ± 38.1; Gradient 13849.8 ± 6086.5.]
Figure 1: Comparison of coordinate minimization, gradient descent, and the accelerated gradient
algorithms on protein side-chain prediction task. Figure (a) shows a typical run of the algorithms.
For each algorithm the dual objective of Eq. (6) is plotted as a function of execution time. The value
(Eq. (4)) of the feasible primal solution of Algorithm 1 is also shown (lower solid line), as well as
the objective (Eq. (1)) of the best decoded integer solution (dashed line; those are decoded directly
from the dual variables δ). Table (b) shows the ratio of runtime of each algorithm w.r.t. the greedy
algorithm. The mean ratio over the proteins in the dataset is shown followed by standard error.
The table in Figure 1(b) shows overall statistics for the proteins in the dataset. Here we run each algorithm until the duality gap drops below a fixed desired precision (ε = 0.1) and compare the total runtime. The table presents the ratio of runtime of each algorithm w.r.t. the greedy algorithm (t_alg/t_greedy). These results are consistent with the example in Figure 1(a).
7 Discussion
We presented the first convergence rate analysis of dual coordinate minimization algorithms on
MAP-LP relaxations. We also showed how such dual iterates can be turned into primal feasible
iterates and analyzed the rate with which these primal iterates converge to the primal optimum. The
primal mapping is of considerable practical value, as it allows us to monitor the distance between the
upper (dual) and lower (primal) bounds on the optimum and use this as a stopping criterion. Note
that this cannot be done without a primal feasible solution.9
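A sketch of this stopping rule follows; the solver callbacks are assumed placeholders, not an interface from the paper.

```python
def run_until_gap(delta, step, dual_value, primal_value, to_feasible, eps=0.1):
    # Iterate coordinate updates until the duality gap certifies eps-accuracy:
    # F(delta) upper-bounds the optimum, P(mu_tilde(delta)) lower-bounds it.
    # (In practice the gap need only be checked periodically.)
    while True:
        delta = step(delta)          # one block coordinate update
        mu = to_feasible(delta)      # Algorithm 1 mapping
        if dual_value(delta) - primal_value(mu) <= eps:
            return delta, mu
```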
The overall rates we obtain are of the order O(τ/ε) for the DMAP_τ problem. If one requires an ε-accurate solution for PMAP, then τ needs to be set to O(1/ε) (see Eq. (5)) and the overall rate is O(1/ε²) in the dual. As noted in [8, 18], a faster rate of O(1/ε) may be obtained using accelerated methods such as Nesterov's [15] or FISTA [1]. However, these also have an extra factor of N which does not appear in the greedy rate. This could partially explain the excellent performance of the greedy scheme when compared to FISTA (see Section 6).
Our analysis also highlights the advantage of using greedy block choice for MAP problems. The
advantage comes from the fact that the choice of block to update is quite efficient since its cost is of
the order of the other computations required by the algorithm. This can be viewed as a theoretical
reinforcement of selective scheduling algorithms such as Residual Belief Propagation [4].
Many interesting questions still remain to be answered. How should one choose between different
block updates (e.g., MSD vs star)? What are lower bounds on rates? Can we use acceleration as in
[15] to obtain better rates? What is the effect of adaptive smoothing (see [19]) on rates? We plan to
address these in future work.
Acknowledgments: This work was supported by BSF grant 2008303. Ofer Meshi is a recipient of the Google
Europe Fellowship in Machine Learning, and this research is supported in part by this Google Fellowship.
⁹ An alternative commonly used progress criterion is to decode an integral solution from the dual variables, and see if its value is close to the dual upper bound. However, this will only work if PMAP has an integral solution and we have managed to decode it.
References
[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2(1):183–202, Mar. 2009.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In Proc. IEEE Conf. Comput. Vision Pattern Recog., 1999.
[3] D. Burshtein. Iterative approximate linear programming decoding of LDPC codes with linear complexity. IEEE Transactions on Information Theory, 55(11):4835–4859, 2009.
[4] G. Elidan, I. McGraw, and D. Koller. Residual belief propagation: informed scheduling for asynchronous message passing. In UAI, 2006.
[5] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS 20. MIT Press, 2008.
[6] M. Guignard and S. Kim. Lagrangean decomposition: A model yielding stronger Lagrangean bounds. Mathematical Programming, 39(2):215–228, 1987.
[7] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. IEEE Transactions on Information Theory, 56(12):6294–6316, 2010.
[8] V. Jojic, S. Gould, and D. Koller. Fast and smooth: Accelerated dual decomposition for MAP inference. In Proceedings of International Conference on Machine Learning (ICML), 2010.
[9] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10):1568–1583, 2006.
[10] A. L. Martins, M. A. T. Figueiredo, P. M. Q. Aguiar, N. A. Smith, and E. P. Xing. An augmented lagrangian approach to constrained map inference. In ICML, pages 169–176, 2011.
[11] K. Matusita. On the notion of affinity of several distributions and some of its applications. Annals of the Institute of Statistical Mathematics, 19:181–192, 1967. 10.1007/BF02911675.
[12] O. Meshi and A. Globerson. An alternating direction method for dual map lp relaxation. In ECML PKDD, pages 470–483. Springer-Verlag, 2011.
[13] O. Meshi, D. Sontag, T. Jaakkola, and A. Globerson. Learning efficiently with approximate inference via dual losses. In ICML, pages 783–790, New York, NY, USA, 2010. ACM.
[14] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Kluwer Academic Publishers, 2004.
[15] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127–152, May 2005.
[16] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. Core discussion papers, Université catholique de Louvain, 2010.
[17] A. Saha and A. Tewari. On the finite time convergence of cyclic coordinate descent methods, 2010. preprint arXiv:1005.2146.
[18] B. Savchynskyy, S. Schmidt, J. Kappes, and C. Schnörr. A study of Nesterov's scheme for lagrangian decomposition and map labeling. CVPR, 2011.
[19] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI, 2012.
[20] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1-regularized loss minimization. J. Mach. Learn. Res., 12:1865–1892, July 2011.
[21] D. Sontag, A. Globerson, and T. Jaakkola. Introduction to dual decomposition for inference. In Optimization for Machine Learning, pages 219–254. MIT Press, 2011.
[22] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109(3):475–494, 2001.
[23] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Transactions on Information Theory, 51(11):3697–3717, 2005.
[24] M. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Now Publishers Inc., Hanover, MA, USA, 2008.
[25] T. Werner. A linear programming approach to max-sum problem: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1165–1179, 2007.
[26] T. Werner. Revisiting the decomposition approach to inference in exponential families and graphical models. Technical Report CTU-CMP-2009-06, Czech Technical University, 2009.
[27] T. Werner. How to compute primal solution from dual one in MAP inference in MRF? In Control Systems and Computers (special issue on Optimal Labeling Problems in Structural Pattern Recognition), 2011.
[28] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation – an empirical study. Journal of Machine Learning Research, 7:1887–1907, 2006.
Approximating Equilibria in Sequential Auctions with
Incomplete Information and Multi-Unit Demand
Jiacui Li
Department of Applied Math/Economics
Brown University
Providence, RI 02912
jiacui [email protected]
Amy Greenwald and Eric Sodomka
Department of Computer Science
Brown University
Providence, RI 02912
{amy,sodomka}@cs.brown.edu
Abstract
In many large economic markets, goods are sold through sequential auctions.
Examples include eBay, online ad auctions, wireless spectrum auctions, and the
Dutch flower auctions. In this paper, we combine methods from game theory and
decision theory to search for approximate equilibria in sequential auction domains,
in which bidders do not know their opponents' values for goods, bidders only partially observe the actions of their opponents, and bidders demand multiple goods.
We restrict attention to two-phased strategies: first predict (i.e., learn); second,
optimize. We use best-reply dynamics [4] for prediction (i.e., to predict other bidders' strategies), and then assuming fixed other-bidder strategies, we estimate and solve the ensuing Markov decision processes (MDPs) [18] for optimization. We
exploit auction properties to represent the MDP in a more compact state space,
and we use Monte Carlo simulation to make estimating the MDP tractable. We
show how equilibria found using our search procedure compare to known equilibria for simpler auction domains, and we approximate an equilibrium for a more
complex auction domain where analytical solutions are unknown.
1 Introduction
Decision-making entities, whether they are businesses, governments, or individuals, usually interact
in game-theoretic environments, in which the final outcome is intimately tied to the actions taken
by others in the environment. Auctions are examples of such game-theoretic environments with
significant economic relevance. Internet advertising, of which a significant portion of transactions
take place through online auctions, has had spending increase 24 percent from 2010 to 2011, globally
becoming an $85 billion industry [16]. The FCC has conducted auctions for wireless spectrum since
1994, reaching sales of over $60 billion.1 Perishable commodities such as flowers are often sold via
auction; the Dutch flower auctions had about $5.4 billion in sales in 2011.2
A game-theoretic equilibrium, in which each bidder best responds to the strategies of its opponents,
can be used as a means of prescribing and predicting auction outcomes. Finding equilibria in auctions is potentially valuable to bidders, as they can use the resulting strategies as prescriptions that
guide their decisions, and to auction designers, as they can use the resulting strategies as predictions
for bidder behavior. While a rich literature exists on computing equilibria for relatively simple auction games [11], auction theory offers few analytical solutions for real-world auctions. Even existing
computational methods for approximating equilibria quickly become intractable as the number of
bidders and goods, and the complexity of preferences and decisions, increase.
¹ See http://wireless.fcc.gov/auctions/default.htm?job=auctions_all.
² See http://www.floraholland.com/en/.
In this paper, we combine methods from game theory and decision theory to approximate equilibria
in sequential auction domains, in which bidders do not know their opponents' values for goods, bidders partially observe the actions of their opponents, and bidders demand multiple goods. Our
method of searching for equilibria is motivated by the desire to reach strategies that real-world
bidders might actually use. To this end, we consider strategies that consist of two parts: a prediction
(i.e., learning) phase and an optimization phase. We use best-reply dynamics [4] for prediction (i.e.,
to predict other bidders' strategies), and then assuming fixed other-bidder strategies, we estimate and solve a Markov decision process (MDP) [18] for optimization. We exploit auction properties
to represent the MDPs in a more compact state space, and we use Monte Carlo simulation to make
estimating the MDPs tractable.
2 Sequential Auctions
We focus on sequential sealed-bid auctions, with a single good being sold at each of K rounds. The
number of bidders n and the order in which goods are sold are assumed to be common knowledge.
During auction round k, each bidder i submits a private bid b_i^k ∈ B_i to the auctioneer. We let b^k = ⟨b_1^k, . . . , b_n^k⟩ denote the vector of bids submitted by all bidders at round k. The bidder who submits the highest bid wins and is assigned a cost based on a commonly known payment rule. At the end of round k, the auctioneer sends a private (or public) signal o_i^k ∈ O_i to each bidder i, which is a tuple specifying information about the auction outcome for round k, such as the winning bid, the bids of all agents, the winner identities, whether or not a particular agent won the good, or any combination thereof. Bidders only observe opponents' bids if those bids are announced by the auctioneer. Regardless, we assume that bidder i is told at least which set of goods she won in the kth round, w_i^k ∈ {∅, {k}}, and how much she paid, c_i^k ∈ R. We let φ(o^k | b^k) ∈ [0, 1] denote the probability that the auctioneer sends the bidders signals o^k = ⟨o_1^k, . . . , o_n^k⟩ given b^k, and we let φ(o_i^k | b^k) express the probability that player i receives signal o_i^k, given b^k.
An auction history at round k consists of past bids plus all information communicated by the auctioneer through round k − 1. Let h_i^k = ⟨(b_i^1, o_i^1), . . . , (b_i^{k−1}, o_i^{k−1})⟩ be a possible auction history at round k as observed by bidder i. Let H_i be the set of all possible auction histories for bidder i.
Each bidder i is endowed with a privately known type θ_i ∈ Θ_i, drawn from a commonly known distribution F, that determines bidder i's valuations for various bundles of goods. A (behavioral) strategy σ_i : Θ × H_i → ΔB_i for bidder i specifies a distribution over bids for each possible type and auction history. The set Σ_i contains all possible strategies.
At the end of the K auction rounds, bidder i's utility is based on the bundle of goods she won and the amount she paid for those goods. Let X ⊆ {1, . . . , K} be a possible bundle of goods, and let v(X; θ_i) denote a bidder's valuation for bundle X when its type is θ_i. No assumptions are made about the structure of this value function. A bidder's utility for type θ_i and history h_i^K after K auction rounds is simply that bidder's value for the bundle of goods it won minus its cost: u_i(θ_i, h_i^K) = v(∪_{k=1}^K w_i^k; θ_i) − Σ_{k=1}^K c_i^k.
Given a sequential auction Γ (defined by all of the above), bidder i's objective is to choose a strategy that maximizes its expected utility. But this quantity depends on the actions of other bidders. A strategy profile ~σ = (σ_1, . . . , σ_N) = (σ_i, σ_{−i}) defines a strategy for each bidder. (Throughout the paper, subscript i refers to a bidder i while −i refers to all bidders except i.) Let U_i(~σ) = E_{θ_i, h_i^K | ~σ}[u_i(θ_i, h_i^K)] denote bidder i's expected utility given strategy profile ~σ.

Definition 1 (ε-Bayes-Nash Equilibrium (ε-BNE)). Given a sequential auction Γ, a strategy profile ~σ ∈ Σ is an ε-Bayes-Nash equilibrium if U_i(~σ) + ε ≥ U_i(σ′_i, σ_{−i}) for all i ∈ {1, . . . , n} and all σ′_i ∈ Σ_i.
In an ε-Bayes-Nash equilibrium, each bidder has to come within an additive factor (ε) of best responding to its opponent strategies. A Bayes-Nash equilibrium is an ε-Bayes-Nash equilibrium where ε = 0. In this paper, we explore techniques for finding ε-BNE in sequential auctions. We also explain how to experimentally estimate the so-called ε-factor of a strategy profile:

Definition 2 (ε-Factor). Given a sequential auction Γ, the ε-factor of strategy profile ~σ for bidder i is ε_i(~σ) = max_{σ′_i} U_i(σ′_i, σ_{−i}) − U_i(σ_i, σ_{−i}). In words, the ε-factor measures bidder i's loss in expected utility for not playing his part of ~σ when other bidders are playing their parts.
3 Theoretical Results
As the number of rounds, bidders, possible types, or possible actions in a sequential auction increases, it quickly becomes intractable to find equilibria using existing computational methods. Such
real-world intractability is one reason bidders often do not attempt to solve for equilibria, but rather
optimize with respect to predictions about opponent behavior. Building on past work [2, 8], our first
contribution is to fully represent the decision problem for a single bidder i in a sequential auction Γ as a Markov decision process (MDP).

Definition 3 (Full-history MDP). A full-history MDP M_i(Γ, θ_i, T) represents the sequential auction Γ from bidder i's perspective, assuming i's type is θ_i, with states S = H_i, actions A = B_i, rewards R(s) = {u_i(θ_i, h_i^K) if s = h_i^K is a history of length K; 0 otherwise}, and transition function T.
If bidder types are correlated, bidder i's type informs its beliefs about opponents' types and thus opponents' predicted behavior. For notational and computational simplicity, we assume that bidder types are drawn independently, in which case there is one transition function T regardless of bidder i's type. We also assume that bidders are symmetric, meaning their types are all drawn from the same distribution. When bidders are symmetric, we can restrict our attention to symmetric equilibria, where a single set of full-history MDPs, one per type, is solved on behalf of all bidders.

Definition 4 (MDP Assessment). An MDP assessment (π, T) for a sequential auction Γ is a set of policies {π^{θ_i} | θ_i ∈ Θ_i}, one for each full-history MDP M_i(Γ, θ_i, T).
We now explain where the transition function T comes from. At a high level, we define (symmetric) induced transition probabilities Induced(π) to be the transition probabilities that result from agent i using Bayesian updating to infer something about its opponents' private information, and then reasoning about its opponents' subsequent actions, assuming they all follow policy π. The following example provides some intuition for this process.
Example 1. Consider a first-price sequential auction with two rounds, two bidders, two possible types ("H" and "L") drawn independently from a uniform prior (i.e., p(H) = 0.5 and p(L) = 0.5), and two possible actions ("high" and "low"). Suppose Bidder 2 is playing the following simple strategy: if type H, bid "high" with probability .9 and "low" with probability .1; if type L, bid "high" with probability .1 and "low" with probability .9.

At round k = 1, from the perspective of Bidder 1, the only uncertainty that exists is about Bidder 2's type. Bidder 1's beliefs about Bidder 2's type are based solely on the type prior, resulting in beliefs that Bidder 2 will bid "high" and "low" each with equal probability. Suppose Bidder 1 bids "low" and loses to Bidder 2, who the auctioneer reports as having bid "high". At round k = 2, Bidder 1 must update its posterior beliefs about Bidder 2 after observing the given outcome. This is done using Bayes' rule to find that Bidder 2 is of type "H" with probability 0.9. Based on its policy, in the subsequent round, the probability Bidder 2 bids "high" is 0.9(0.9) + 0.1(0.1) = 0.82, and the probability it bids "low" is 0.9(0.1) + 0.1(0.9) = 0.18. Given this bid distribution for Bidder 2, Bidder 1 can compute her probability of transitioning to various future states for each possible bid.
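The arithmetic in this example can be verified with a few lines of Python (a sketch of the update only; the dictionaries are just a convenient encoding):

```python
# Numbers taken from Example 1.
prior = {"H": 0.5, "L": 0.5}                      # type prior
bid_given_type = {"H": {"high": 0.9, "low": 0.1}, # Bidder 2's fixed strategy
                  "L": {"high": 0.1, "low": 0.9}}

observed = "high"  # Bidder 2's announced round-1 bid

# Bayes' rule: P(type | observed bid)
joint = {t: prior[t] * bid_given_type[t][observed] for t in prior}
z = sum(joint.values())
posterior = {t: p / z for t, p in joint.items()}   # {"H": 0.9, "L": 0.1}

# Predictive distribution over Bidder 2's round-2 bid
next_bid = {
    b: sum(posterior[t] * bid_given_type[t][b] for t in posterior)
    for b in ("high", "low")
}                                                   # {"high": 0.82, "low": 0.18}
```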
More formally, denoting s_i^k and a_i^k as agent i's state and action at auction round k, respectively, define Pr(s_i^{k+1} | s_i^k, a_i^k) to be the probability of reaching state s_i^{k+1} given that action a_i^k was taken in state s_i^k. By twice applying the law of total probability and then noting conditional independencies,

Pr(s_i^{k+1} | s_i^k, a_i^k)
  = Σ_{a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) Pr(a_{−i}^k | s_i^k, a_i^k)
  = Σ_{θ_{−i}} Σ_{s_{−i}^k, a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k, s_{−i}^k, θ_{−i}) Pr(a_{−i}^k | s_i^k, a_i^k, s_{−i}^k, θ_{−i}) Pr(s_{−i}^k, θ_{−i} | s_i^k, a_i^k)
  = Σ_{θ_{−i}} Σ_{s_{−i}^k, a_{−i}^k} Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) · Pr(a_{−i}^k | s_{−i}^k, θ_{−i}) · Pr(s_{−i}^k, θ_{−i} | s_i^k, a_i^k)   (1)
The first term in Equation 1 is defined by the auction rules and depends only on the actions taken at round k: Pr(s_i^{k+1} | s_i^k, a_i^k, a_{−i}^k) = φ(o_i^k | a^k). The second term is a joint distribution over opponents' actions given opponents' private information. Each agent's action at round k is conditionally independent given that agent's state at round k: Pr(a_{−i}^k | s_{−i}^k, θ_{−i}) = Π_{j≠i} Pr(a_j^k | s_j^k, θ_j) = Π_{j≠i} π^{θ_j}(a_j^k | s_j^k). The third term is the joint distribution over opponents' private information, given agent i's observations. This term can be computed using Bayesian updating. We compute induced transition probabilities Induced(π)(s_i^k, a_i^k, s_i^{k+1}) using Equation 1.
i
Definition 5 (?-Stable MDP Assessment). An MDP assessment (?, T ) for a sequential auction ? is
called ?-stable if d(T, Induced(?)) < ?, for some symmetric distance function d.
When ? = 0, the induced transition probabilities exactly equal the transition probabilities from the
MDP assessment (?, T ), meaning that if all agents follow (?, T ), the transition function T is correct.
Define U_i(π, T) ≡ E_{θ_i, h_i^K | π, T}[u_i(θ_i, h_i^K)] to be the expected utility for following an MDP assessment's policy π when the transition function is T. (We abbreviate U_i by U because of symmetry.)

Definition 6 (λ-Optimal MDP Assessment). An MDP assessment (π, T) for a sequential auction Γ is called λ-optimal if for all policies π′, U(π, T) + λ ≥ U(π′, T).

If each agent is playing a 0-optimal (i.e., optimal) 0-stable (i.e., stable) MDP assessment for the sequential auction Γ, each agent is best responding to its beliefs, and each agent's beliefs are correct. It follows that any optimal stable MDP assessment for the sequential auction Γ corresponds to a symmetric Bayes-Nash equilibrium for Γ. Corollary 2 (below) generalizes this observation to approximate equilibria.³
Suppose we have a black box that tells us the difference in perceived versus actual expected utility for optimizing with respect to the wrong beliefs: i.e., the wrong transition function. More precisely, if we were to give the black box two transition functions T and T′ that differ by at most Δ (i.e., d(T, T′) < Δ), the black box would return max_π |U(π, T) − U(π, T′)| ≤ D(Δ).

Theorem 1. Given such a black box, if (π, T) is a λ-optimal Δ-stable MDP assessment for the sequential auction Γ, then π is a symmetric ε-Bayes-Nash equilibrium for Γ, where ε = 2D(Δ) + λ.

Proof. Let T̂ = Induced(π), and let π* be such that (π*, T̂) is an optimal MDP assessment.

  U(π, T̂) ≥ U(π, T) − D(Δ)   (2)
          ≥ U(π*, T) − (λ + D(Δ))   (3)
          ≥ U(π*, T̂) − (λ + 2D(Δ))   (4)

Lines 2 and 4 hold because (π, T) is Δ-stable. Line 3 holds because (π, T) is λ-optimal.
Corollary 2. If (π, T) is a λ-optimal Δ-stable MDP assessment for the sequential auction Γ, then π is a symmetric ε-Bayes-Nash equilibrium for Γ, where ε = 2ΔK + λ.

In particular, when the distance between other-agent bid predictions and the actual other-agent bids induced by the actual other-agent policies is less than Δ, optimizing agents play a 2ΔK-BNE.

This corollary follows from the simulation lemma in Kakade et al. [9], which provides us with a black box.⁴ In particular, if MDP assessment (π, T) is Δ-stable, then |U(π, T) − U(π, Induced(π))| ≤ ΔK, where d(T, T′) = Σ_{s_i^{k+1}} |T(s_i^k, a_i^k, s_i^{k+1}) − T′(s_i^k, a_i^k, s_i^{k+1})| and K is the MDP's horizon.
Wellman et al. [24] show that, for simultaneous one-shot auctions, optimizing with respect to predictions about other-agent bids is an ε-Bayes-Nash equilibrium, where ε depends on the distance between other-agent bid predictions and the actual other-agent bids induced by the actual other-agent strategies. Corollary 2 is an extension of that result to sequential auctions.
4 Searching for an ε-BNE
We now know that an optimal, stable MDP assessment is a BNE, and moreover, a near-optimal,
near-stable MDP assessment is nearly a BNE. Hence, we propose to search for approximate BNE
by searching the space of MDP assessments for any that are nearly optimal and nearly stable.
³ Note that this result also generalizes to non-symmetric equilibria: we would calculate a vector of induced transition probabilities (one per bidder), given a vector of MDP assessments (one per bidder), instead of assuming that each bidder abides by the same assessment. Similarly, stability would need to be defined in terms of a vector of MDP assessments. We present our theoretical results in terms of symmetric equilibria for notational simplicity, and because we search for symmetric equilibria in Section 5.
⁴ Slightly adjusted since there is error only in the transition probabilities, not in the rewards.
Our search uses an iterative two-step learning process. We first find a set of optimal policies π with respect to some transition function T (i.e., π = Solve_MDP(T)) using dynamic programming, as described by Bellman's equations [1]. We then update the transition function T to reflect what would happen if all agents followed the new policies π (i.e., T′ = Induced(π)). More precisely,

1. Initiate the search from an arbitrary MDP assessment (π^0, T^0)
2. Initialize t = 1 and ε̂ = ∞
3. While the iteration cap has not been reached and ε̂ remains above the desired tolerance:
   (a) PREDICT: T^t = Induced(π^{t−1})
   (b) OPTIMIZE: for all types θ_i, π^t = Solve_MDP(θ_i, T^t)
   (c) Calculate ε̂(π^t) (defined below)
   (d) Increment t
4. Return the final MDP assessment (π^t, T^t) and ε̂
This learning process is not guaranteed to converge, so upon termination, it could return an optimal, Δ-stable MDP assessment for some very large Δ. However, it has been shown to be successful experimentally in simultaneous auction games [24] and other large games of imperfect information [7].
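In outline, the search loop is only a few lines of code; the following Python skeleton uses assumed callback signatures standing in for the components above:

```python
def search_for_bne(initial_policy, induced, solve_mdp, epsilon_hat,
                   max_iters=100, tol=1e-4):
    # Predict-optimize loop: estimate induced transitions under the current
    # policy, then best-respond to them; stop when the estimated
    # epsilon-factor is negligible. Callback signatures are assumptions.
    policy, eps = initial_policy, float("inf")
    for _ in range(max_iters):
        T = induced(policy)        # PREDICT (e.g., Monte Carlo estimation)
        policy = solve_mdp(T)      # OPTIMIZE (dynamic programming, per type)
        eps = epsilon_hat(policy)  # estimated epsilon-factor of the profile
        if eps <= tol:
            break
    return policy, eps
```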
Monte Carlo Simulations. Recall how we define induced transition functions (Equation 1). In practice, the Bayesian updating involved in this calculation is intractable. Instead, we employ Monte Carlo simulations. First, we further simplify Equation 1 using the law of total probability and noting conditional independencies (Equation 5). Second, we exploit some special structure of sequential auctions: if nothing but the winning price at each round is revealed, then conditional on reaching state s_i^k, the posterior distribution over highest opponent bids is sufficient for computing the probability of that round's outcome (Equation 6).⁵ Third, we simulate N auction trajectories for the given policy π and multiple draws from the agent's type distribution, and count the number of times each highest opponent bid occurs at each state (Equation 7):

Induced(π)(s_i^k, a_i^k, s_i^{k+1}) = Pr(s_i^{k+1} | s_i^k, a_i^k, max a_{−i}^k) Pr(max a_{−i}^k | s_i^k, a_i^k)   (5)
  = Pr(s_i^{k+1} | s_i^k, a_i^k, max a_{−i}^k) Pr(max a_{−i}^k | s_i^k)   (6)
Induced_N(π)(s_i^k, a_i^k, s_i^{k+1}) = φ(o_i^k | max(a_{−i}^k), a_i^k) · #(max(a_{−i}^k), s_i^k) / #(s_i^k)   (7)
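A Python sketch of the counting estimator in Equation 7 follows; `simulate_auction` is an assumed helper that plays one auction under the given policy and yields, for one designated agent, the (state, highest opponent bid) pairs it observed.

```python
from collections import defaultdict

def estimate_induced(policy, simulate_auction, n_samples):
    state_counts = defaultdict(int)   # #(s_i^k)
    pair_counts = defaultdict(int)    # #(max a_{-i}^k, s_i^k)
    for _ in range(n_samples):
        for state, max_opp_bid in simulate_auction(policy):
            state_counts[state] += 1
            pair_counts[(state, max_opp_bid)] += 1
    # Conditional frequencies Pr(max opponent bid | state), Equation 7's
    # count ratio; combining with the auction rules phi gives Induced_N.
    return {(s, b): c / state_counts[s] for (s, b), c in pair_counts.items()}
```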
Solving the MDP. As previously stated, we solve the MDPs exactly using dynamic programming,
but we can only do so because we exploit the structure of auctions to reduce the number of states
in each MDP. Recall that we assume symmetry: i.e., all bidders' types are drawn from the same distribution. Under this assumption, when the auctioneer announces that a Bidder j has won an
auction for the first time, this provides the same information as if a different Bidder k won an auction
for the first time. We thus collapse these two outcomes into the same state. This can greatly decrease
the MDP state space, particularly if the number of players n is larger than the number of auctions
K, as is often the case in competitive markets. In fact, by handling this symmetry, the MDP state
space is the same for any number of players n ≥ K.⁶ Second, we exploit the property of losing bid
symmetry: if a bidder i loses with a bid of b or a bid of b′, its beliefs about its opponents' bids are unchanged, and thus it receives the same reward for placing the same bid at either resulting state.
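As an illustration of the winner-identity reduction, a state can be canonicalized by relabeling opponents in order of first win; the following is a sketch under the symmetry assumption above, not code from the paper:

```python
def canonical_winners(winner_sequence, me):
    # Relabel opponents by order of first appearance, so e.g. the winner
    # sequences (3, 1, 3) and (2, 5, 2) map to the same canonical state.
    relabel, canon = {me: "me"}, []
    for w in winner_sequence:
        if w not in relabel:
            relabel[w] = "opp%d" % len(relabel)  # first-seen opponent index
        canon.append(relabel[w])
    return tuple(canon)
```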
⁵ A distribution over the next round's highest opponent bid is only sufficient without the possibility of ties. If ties can occur, a distribution over the number of opponents placing that highest bid is also needed. In our experiments, we do not maintain such a distribution; if there is a tie, the agent in question wins with probability 0.5 (i.e., we assume it tied with only one opponent).
⁶ Even when n < K, the state space can still be significantly reduced, since instead of n different possible winner identities in the kth round, there are only min(n, k + 1). In the extreme case of n = 2, there is no winner identity symmetry to exploit, since n = k + 1 even in the first round.
ε-factor Approximation. Define U_i(~π) = E_{θ_i, h_i^K | ~π}[u_i(θ_i, h_i^K)] to be bidder i's expected utility when each agent plays its part in the vector of MDP assessment policies ~π. Following Definition 2, the ε-factor measures bidder i's loss in expected utility for not playing his part of ~π when other bidders are playing their parts: ε_i(~π) = max_{π′_i} U_i(π′_i, π_{−i}) − U_i(π_i, π_{−i}). In fact, since we are only interested in finding symmetric equilibria, where ~π = (π, . . . , π), we calculate ε(π) = max_{π′} U(π′, ~π_{−i}) − U(π, ~π_{−i}).

The first term in this definition is the expected utility of the best response, π*, to ~π_{−i}. This quantity typically cannot be computed exactly, so instead, we compute a near-best response π̂*_N = Solve_MDP(Induced_N(π)), which is optimal with respect to Induced_N(π) ≈ Induced(π), and then measure the gain in expected utility of deviating from π to π̂*_N.

Further, we approximate expected utility through Monte Carlo simulation. Specifically, we compute Û_L(~π) = (1/L) Σ_{l=1}^L u(θ^l, h^l) by sampling ~θ and simulating the profile L times, and then averaging bidder i's resulting utilities. Thus, we approximate ε(π) by ε̂(π) ≡ Û_L(π̂*_N, ~π_{−i}) − Û_L(π, ~π_{−i}).

The approximation error in ε̂(π) comes from both imprecision in Induced_N(π), which depends on the sample size N, and imprecision in the expected utility calculation, which depends on the sample size L. The latter is O(1/√L) by the central limit theorem, and can be made arbitrarily small. (In our experiments, we plot the confidence bounds of this error to make sure it is indeed small.) The former arises because π̂*_N is not truly optimal with respect to Induced(π), and goes to zero as N goes to infinity by standard reinforcement learning results [20]. In practice we make sure that N is large enough so that this error is negligible.
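A Monte Carlo sketch of the ε̂ estimate follows; `simulate_utility(own, others)` is an assumed helper that returns one sampled utility for the deviating agent.

```python
import statistics

def epsilon_hat(policy, best_response, simulate_utility, num_sims):
    # Utility gain from deviating to a (near-)best response while all
    # opponents keep playing `policy`; num_sims should be at least 2.
    u_dev = [simulate_utility(best_response, policy) for _ in range(num_sims)]
    u_eq = [simulate_utility(policy, policy) for _ in range(num_sims)]
    eps = statistics.mean(u_dev) - statistics.mean(u_eq)
    # Standard error shrinks as O(1/sqrt(L)), per the central limit theorem.
    se = (statistics.stdev(u_dev) ** 2 / num_sims
          + statistics.stdev(u_eq) ** 2 / num_sims) ** 0.5
    return eps, se
```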
5 Experimental Results
This section presents the results of running our iterative learning method on three auction models studied in the economics literature: Katzman [10], Weber [23], and Menezes and Monteiro
[14]. These models are all two-round, second-price, sequential auctions,⁷ with continuous valuation spaces; they differ only in their specific choice of valuations. The authors analytically derive
a symmetric pure strategy equilibrium for each model, which we attempt to re-discover using our
iterative method. After discretizing the valuation space, our method is sufficiently general to apply
immediately in all three settings.
Although these particular sequential auctions are all second price, our method applies to sequential
auctions with other rules as well. We picked this format because of the abundance of corresponding
theoretical results and the simplicity of exposition in two-round auctions. It is a dominant strategy to
bid truthfully in a one-shot second-price auction [22]; hence, when comparing policies in two-round
second-price auctions it suffices to compare first-round policies only.
Static Experiments. We first run one iteration of our learning procedure to check whether the derived equilibria are strict. In other words, we check whether Solve_MDP(Induced_N(π^E)) = π^E, where π^E is a (discretized) derived equilibrium strategy. For each of the three models, Figures 1(a)-1(c) compare first-round bidding functions of the former (blue) with the latter (green).
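The static check is a one-step fixed-point test; a minimal sketch (callback names are assumptions):

```python
def is_strict_equilibrium(pi_e, induced_n, solve_mdp, distance, tol=1e-8):
    # One predict-optimize step: a strict equilibrium should be returned
    # (essentially) unchanged as the unique best response to itself.
    best_response = solve_mdp(induced_n(pi_e))
    return distance(best_response, pi_e) <= tol
```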
Our results indicate that the equilibria derived by Weber and Katzman are indeed strict, while that
by Menezes and Monteiro (MM) is not, since there exists a set of best-responses to the equilibrium
strategy, not a unique best-response. We confirm analytically that the set of bids output by our
learning procedure are best-responses to the theoretical equilibrium, with the upper bound being the
known theoretical equilibrium strategy and the lower bound being the black dotted line.8 To our
knowledge, this instability was previously unknown.
Dynamic Experiments. Since MM's theoretical equilibrium is not strict, we apply our iterative learning procedure to search for more stable approximate equilibria. Our procedure converges within a small number of iterations to an ε-BNE with a small ε-factor, and the convergence is robust across different initializations. We chose initial strategies π^0 parametrized by p ∈ R₊ that bid x^p when the marginal value of winning an additional good is x. By varying the exponent p, we initialize the learning procedure with bidding strategies whose level of aggressiveness varies.
⁷ Weber's model can be extended to any number of rounds, but assumes unit, not multi-unit, demand.
⁸ These analytical derivations are included in supplemental material.
[Figure 1: first-round bid as a function of valuation (both on [0, 1]) for the three models; panels: Weber (4 agents), Katzman (2 agents), and Menezes (3 agents).]
Figure 1: Comparison of first-round bidding functions of theoretical equilibrium strategies (green) and that of
the best response from one step of the iterative learning procedure initialized with those equilibrium strategies
(blue). (a) Weber. (b) Katzman. (c) MM.
[Figure 2: panels (a) d(π^t, π^{t+1}) and (b) d(π^t_i, π^t_j) over 10 iterations for initializations p = 0.5, 1.0, 2.0; (c) first-round bids after 20 iterations; (d) estimated ε-factor over iterations for p = 1.0, with final 99% confidence interval [−2e−05, 7e−05].]
Figure 2: Convergence properties of the learning procedure in the two-round MM model with 3 agents. (a),(b) evaluate convergence through the L1 distance of first-round bidding functions; (c) compares the learned best response (blue) with different learning procedure initializations (green). (d) plots the evolution of the estimated ε-factor for the learning dynamics with one specific initialization; plots for other initializations look very similar. The bracketed values in the legend give the 99% confidence bound for the ε-factor in the final iteration, which is estimated using more sample points (N = L = 10⁹) than previous iterations (N = L = 10⁶).
Our iterative learning procedure is not guaranteed to converge. Nonetheless, in this experiment,
our procedure not only converges with different initialization parameters p (Figure 2(a)), but also
converges to the same solution regardless of initial conditions (Figure 2(b)). The distance measure
d(π, π′) between two strategies π, π′ in these figures is defined as the L1 distance of their respective first-round bidding functions. Furthermore, the more economically meaningful measure ε(π), estimated by ε̂(π), converges quickly to a negligible factor smaller than 1 × 10⁻⁴, which is less than 0.01% of the expected bidder profit (Figure 2(d)).
All existing theoretical work on Bayesian sequential auctions with multi-unit demand is confined
to two-round cases due to the increased complexity of additional rounds, but our method removes
this constraint. We extend the two-round MM model into a three-round auction model,9 and apply
our learning procedure. It requires more iterations for our algorithm to converge in this setup, but it
again converges to a rather stable ε-BNE regardless of initial conditions. The final ε-factor is smaller than 0.5% of expected bidder profit (Figure 3(d)). Although d(π, π′) no longer fully summarizes strategy differences, it still strongly indicates that the learning procedure converges to very similar strategies regardless of initial conditions (Figure 3(b)).
6 Related Work
On the theoretical side, Weber [23] derived equilibrium strategies for a basic model in which n
bidders compete in k first or second price auctions, but bidders are assumed to have unit demand.
Février [6] and Yao [25] studied a model where n bidders have multi-unit demand, but there are only two auctions and a bidder's per-good valuation is the same across the two goods. Liu [13]
and Paes Leme et al. [17] studied models of n bidders with multi-unit demand where bidders have
⁹ This model is described in supplemental material.
[Figure 3: the same set of panels as in Figure 2, for the three-round MM model with 3 agents; (a) d(π^t, π^{t+1}) and (b) d(π^t_i, π^t_j) over 30 iterations for p = 0.5, 1.0, 2.0; (c) first-round bids after 30 iterations; (d) estimated ε-factor, with final 99% confidence interval [−4e−05, 0.003].]
Figure 3: The same set of graphs as in Figure 2, for the three-round MM model with 3 agents.
complete information about opponents' valuations and perfect information about opponents' past bids. Syrgkanis and Tardos [21] extended this work to the case of incomplete information with unit demand.
On the computational side, Rabinovich et al. [19] generalized fictitious play to finite-action incomplete information games and applied their technique to simultaneous second-price auctions with
utilities expressible as linear functions over a one-dimensional type space. Cai and Wurman [3] take
a heuristic approach to finding equilibria for sequential auctions with incomplete information; opponent valuations are sampled to create complete information games, which are solved with dynamic
programming and a general game solver, and then aggregated into mixed behavior strategies to form
a policy for the original incomplete information game. Fatima et al. [5] find equilibrium bidding
strategies in sequential auctions with incomplete information under various rules of information
revelation after each round. Additional methods of computing equilibria have been developed for
sequential games outside the context of auctions: Ganzfried and Sandholm [7] study the problem of
computing approximate equilibria in the context of poker, and Mostafa and Lesser [15] describe an
anytime algorithm for approximating equilibria in general incomplete information games.
From a decision-theoretic perspective, the bidding problem for sequential auctions was previously
formulated as an MDP in related domains. In Boutilier et al. [2], an MDP is created where distinct
goods are for sold consecutively, complementarities exist across goods, and the bidder is budgetconstrained. A similar formulation was studied in Greenwald and Boyan [8], but without budget
constraints. There, purchasing costs were models as negative rewards, significantly reducing the
size of the MDP?s state space. Lee et al. [12] represent multi-round games as iterated semi-netform games, and then use reinforcement learning techniques to find K-level reasoning strategies for
those games. Their experiments are for two-player games with perfect information about opponent
actions, but their approach is not conceptually limited to such models.
7 Conclusion
We presented a two-step procedure (predict and optimize) for finding approximate equilibria in a class of complex sequential auctions in which bidders have incomplete information about opponents' types, imperfect information about opponents' bids, and demand multiple goods. Our procedure is
applicable under numerous pricing rules, allocation rules, and information-revelation policies. We
evaluated our method on models with analytically derived equilibria and on an auction domain in
which analytical solutions were heretofore unknown. Our method both showed that the known equilibrium for one model was not strict and guided our own analytical derivation of the
non-strict set of equilibria. For a more complex auction with no known analytical solutions, our
method converged to an approximate equilibria with an -factor less than 10?4 , and did so robustly
with respect to initialization of the learning procedure. While we achieved fast convergence in
the MM model, such convergence is not guaranteed. The fact that our procedure converged to
nearly identical approximate equilibria even from different initializations is promising, and further
exploring convergence properties in this domain is a direction for future work.
Acknowledgements This research was supported by U.S. National Science Foundation Grants
CCF-0905139 and IIS-1217761. The authors (and hence, the paper) benefited from lengthy discussions with Michael Wellman, Michael Littman, and Victor Naroditskiy. Chris Amato also provided
useful insights, and James Tavares contributed to the code development.
References
[1] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[2] C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In International Joint Conference on Artificial Intelligence, volume 16, pages 527–534. Lawrence Erlbaum Associates LTD, 1999.
[3] G. Cai and P. R. Wurman. Monte Carlo approximation in incomplete information, sequential auction games. Decision Support Systems, 39(2):153–168, Apr. 2005.
[4] A. Cournot. Recherches sur les Principes Mathématiques de la Théorie des Richesses. Hachette, 1838.
[5] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Sequential Auctions in Uncertain Information Settings. Agent-Mediated Electronic Commerce and Trading Agent Design and Analysis, pages 16–29, 2009.
[6] P. Février. He who must not be named. Review of Economic Design, 8(1):99–1, Aug. 2003.
[7] S. Ganzfried and T. Sandholm. Computing Equilibria in Multiplayer Stochastic Games of Imperfect Information. International Joint Conference on Artificial Intelligence, pages 140–146, 2009.
[8] A. Greenwald and J. Boyan. Bidding under uncertainty: Theory and experiments. In Twentieth Conference on Uncertainty in Artificial Intelligence, pages 209–216, Banff, 2004.
[9] S. M. Kakade, M. J. Kearns, and J. Langford. Exploration in metric state spaces. In Proceedings of the 20th International Conference on Machine Learning (ICML), 2003.
[10] B. Katzman. A Two Stage Sequential Auction with Multi-Unit Demands. Journal of Economic Theory, 86(1):77–99, May 1999.
[11] P. Klemperer. Auctions: theory and practice. Princeton University Press, 2004.
[12] R. Lee, S. Backhaus, J. Bono, W. Dc, D. H. Wolpert, R. Bent, and B. Tracey. Modeling Humans as Reinforcement Learners: How to Predict Human Behavior in Multi-Stage Games. In NIPS 2011, 2011.
[13] Q. Liu. Equilibrium of a sequence of auctions when bidders demand multiple items. Economics Letters, 112(2):192–194, 2011.
[14] F. M. Menezes and P. K. Monteiro. Synergies and Price Trends in Sequential Auctions. Review of Economic Design, 8:85–98, 2003.
[15] H. Mostafa and V. Lesser. Approximately Solving Sequential Games With Incomplete Information. In Proceedings of the AAMAS08 Workshop on Multi-Agent Sequential Decision Making in Uncertain Multi-Agent Domains, pages 92–106, 2008.
[16] Nielsen Company. Nielsen's quarterly global adview pulse report, 2011.
[17] R. Paes Leme, V. Syrgkanis, and E. Tardos. Sequential Auctions and Externalities. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 869–886, 2012.
[18] M. Puterman. Markov decision processes: discrete stochastic dynamic programming. Wiley, 1994.
[19] Z. Rabinovich, V. Naroditskiy, E. H. Gerding, and N. R. Jennings. Computing pure Bayesian Nash equilibria in games with finite actions and continuous types. Technical report, University of Southampton, 2011.
[20] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 9 of Adaptive computation and machine learning. MIT Press, 1998.
[21] V. Syrgkanis and E. Tardos. Bayesian sequential auctions. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 929–944. ACM, 2012.
[22] W. Vickrey. Counterspeculation, Auctions, and Competitive Sealed Tenders. Journal of Finance, 16(1):8–37, 1961.
[23] R. J. Weber. Multiple-Object Auctions. In R. Engelbrecht-Wiggans, R. M. Stark, and M. Shubik, editors, Competitive Bidding, Auctions, and Procurement, pages 165–191. New York University Press, 1983.
[24] M. Wellman, E. Sodomka, and A. Greenwald. Self-confirming price prediction strategies for simultaneous one-shot auctions. In The Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[25] Z. Yao. Sequential First-Price Auctions with Multi-Unit Demand. Technical report, Discussion paper, UCLA, 2007.
9
|
4755 |@word private:5 economically:1 fatima:2 termination:1 simulation:6 heretofore:1 pulse:1 paid:2 profit:2 minus:1 shot:3 initial:4 liu:2 contains:1 denoting:1 past:3 existing:3 com:1 comparing:1 si:13 must:2 additive:1 subsequent:2 happen:1 confirming:1 remove:1 plot:3 update:2 intelligence:4 item:1 provides:3 math:1 preference:1 banff:1 simpler:1 become:1 symposium:1 consists:1 combine:2 behavioral:1 expected:13 indeed:2 market:2 behavior:5 multi:10 discretized:1 bellman:2 globally:1 company:1 gov:1 actual:5 solver:1 becomes:1 provided:1 estimating:2 moreover:1 discover:1 maximizes:1 what:1 developed:1 supplemental:2 finding:5 nj:1 commodity:1 ti:1 finance:1 fcc:2 exactly:3 tie:3 oki:5 wrong:2 revelation:2 sale:2 unit:10 grant:1 negligible:2 limit:1 sutton:1 ak:12 subscript:1 solely:1 becoming:1 approximately:1 might:1 plus:1 twice:1 black:6 studied:4 initialization:7 chose:1 specifying:1 cournot:1 abides:1 collapse:1 limited:1 bi:3 phased:1 unique:1 commerce:2 practice:3 communicated:1 procedure:17 significantly:2 word:2 confidence:2 refers:2 submits:2 cannot:1 context:2 applying:1 instability:1 optimize:3 www:1 go:2 economics:3 attention:2 regardless:5 independently:2 announces:1 syrgkanis:3 simplicity:3 immediately:1 pure:2 amy:2 rule:7 insight:1 his:2 stability:1 searching:3 increment:1 tardos:3 suppose:3 play:3 programming:5 losing:1 us:1 complementarity:2 associate:1 trend:1 particularly:1 updating:3 skj:1 observed:1 solved:2 calculate:3 decrease:1 highest:5 valuable:1 intuition:1 environment:3 nash:10 complexity:2 bkn:1 ui:15 reward:4 littman:1 dynamic:9 solving:2 upon:1 eric:1 learner:1 bidding:9 htm:1 joint:4 various:3 derivation:2 distinct:1 fast:1 describe:1 monte:6 artificial:4 tell:1 outcome:6 outside:1 whose:1 heuristic:1 larger:1 solve:8 otherwise:1 final:3 online:2 sequence:1 analytical:6 cai:2 propose:1 ment:1 billion:3 convergence:6 perfect:2 converges:6 object:1 derive:1 informs:1 measured:1 b0:1 aug:1 job:1 c:1 predicted:1 come:3 indicate:1 trading:1 differ:2 direction:1 guided:1 correct:2 consecutively:1 stochastic:2 exploration:1 aggressiveness:1 human:2 public:1 material:2 government:1 suffices:1 adjusted:1 extension:1 pl:1 exploring:1 hold:2 mm:7 sufficiently:1 equilibrium:54 lawrence:1 predict:5 mostafa:2 nents:1 perceived:1 applicable:1 create:1 mit:1 reaching:3 rather:2 varying:1 barto:1 corollary:4 derived:5 focus:1 amato:1 she:4 notational:2 check:2 indicates:1 hk:7 greatly:1 jiacui:2 i0:6 prescribing:1 typically:1 her:1 icml03:1 expressible:1 interested:1 monteiro:3 exponent:1 development:1 special:1 initialize:2 marginal:1 equal:2 having:1 sampling:1 identical:1 represents:1 placing:2 look:1 nearly:4 paes:2 future:2 others:1 report:4 simplify:1 few:1 employ:1 national:1 individual:1 deviating:1 phase:2 maintain:1 attempt:2 possibility:1 truly:1 extreme:1 wellman:3 tj:1 bki:1 bundle:5 tuple:1 respective:1 o1i:1 incomplete:10 initialized:1 re:1 theoretical:9 uncertain:2 increased:1 industry:1 modeling:1 rabinovich:2 cost:3 southampton:1 uniform:1 successful:1 conducted:1 erlbaum:1 providence:2 varies:1 international:3 siam:1 akj:1 cki:1 told:1 lee:2 michael:2 quickly:3 yao:2 again:1 reflect:1 central:1 choose:1 return:3 stark:1 li:2 wooldridge:1 de:1 bidder:90 bracketed:1 ad:1 depends:5 picked:1 observing:1 portion:1 competitive:3 bayes:10 contribution:1 oi:1 tity:1 who:3 conceptually:1 bayesian:6 iterated:1 wurman:2 carlo:6 advertising:1 trajectory:1 j6:2 history:10 submitted:1 converged:2 explain:2 simultaneous:4 reach:1 lengthy:1 definition:8 
evaluates:1 nonetheless:1 involved:1 james:1 thereof:1 engelbrecht:1 proof:1 mi:2 static:1 gain:1 sampled:1 recall:2 knowledge:2 anytime:1 nielsen:2 actually:1 ok:3 follow:2 hbk1:1 xxx:2 response:7 formulation:1 done:1 though:1 box:5 strongly:1 furthermore:1 evaluated:1 stage:2 reply:2 langford:1 receives:2 ganzfried:2 assessment:23 defines:1 aj:1 pricing:1 mdp:43 building:1 brown:4 ccf:1 former:2 alumnus:1 assigned:1 hence:3 imprecision:2 symmetric:13 analytically:3 evolution:1 recherches:1 vickrey:1 puterman:1 conditionally:1 round:47 game:23 during:1 self:1 aki:13 won:6 generalized:1 theoretic:4 complete:2 l1:2 auction:91 percent:1 reasoning:2 spending:1 meaning:2 weber:7 common:1 winner:3 hki:1 volume:2 extend:1 he:1 significant:2 ai:8 ebay:1 sealed:2 similarly:1 mathematics:1 had:2 stable:15 longer:1 something:1 dominant:1 posterior:2 own:1 perspective:3 optimizing:3 discretizing:1 arbitrarily:1 victor:1 additional:3 converge:3 aggregated:1 signal:3 semi:1 ii:1 multiple:6 full:4 infer:1 technical:2 calculation:2 offer:1 naroditskiy:2 prescription:1 bent:1 prediction:9 basic:1 tavares:1 metric:1 externality:1 dutch:2 iteration:13 represent:4 confined:1 achieved:1 sends:2 sure:2 strict:5 induced:17 quan:1 legend:1 near:3 noting:2 revealed:1 enough:1 bid:45 restrict:2 economic:5 imperfect:3 reduce:1 lesser:2 whether:4 motivated:1 utility:15 ltd:1 york:1 action:16 boutilier:2 useful:1 leme:2 jennings:2 amount:1 reduced:1 http:2 specifies:1 exist:1 dotted:1 designer:1 estimated:2 per:4 blue:3 discrete:2 express:1 independency:2 drawn:5 graph:1 run:1 compete:1 letter:1 uncertainty:4 auctioneer:7 named:1 place:1 throughout:1 electronic:2 draw:1 decision:13 summarizes:1 announced:1 bound:4 hi:6 internet:1 followed:1 guaranteed:3 annual:1 occur:1 precisely:2 infinity:1 constraint:2 ri:2 ucla:1 simulate:1 min:1 relatively:1 format:1 department:2 combination:1 multiplayer:1 across:3 slightly:1 smaller:2 intimately:1 sandholm:2 wi:1 kakade:2 making:2 hl:1 pr:17 taken:3 equation:8 resource:1 payment:1 previously:3 count:1 needed:1 know:3 initiate:1 tractable:2 end:3 generalizes:2 endowed:1 opponent:26 apply:3 observe:3 quarterly:1 simulating:1 robustly:1 original:1 responding:1 running:1 include:1 exploit:6 epsilon:2 approximating:3 unchanged:1 objective:1 question:1 quantity:1 occurs:1 strategy:39 responds:1 poker:1 behalf:1 win:2 kth:2 distance:6 entity:1 ensuing:1 parametrized:1 chris:1 valuation:13 reason:1 assuming:5 length:1 code:1 sur:1 potentially:1 theorie:1 stated:1 negative:1 design:3 ski:15 policy:14 unknown:3 contributed:1 twenty:1 upper:1 observation:2 markov:4 sold:5 finite:2 extended:2 dc:1 arbitrary:1 bk:6 learned:1 nip:1 able:1 flower:3 usually:1 below:2 max:10 green:3 belief:8 business:1 boyan:2 predicting:1 abbreviate:1 wik:1 mdps:4 numerous:1 created:1 mediated:1 sodomka:3 literature:2 prior:2 acknowledgement:1 review:2 ptimize:1 law:2 loss:2 fully:2 multiagent:1 mixed:1 allocation:2 fictitious:1 versus:1 foundation:1 agent:30 purchasing:1 sufficient:2 xp:1 editor:1 intractability:1 playing:6 supported:1 wireless:3 guide:1 side:2 default:1 world:3 transition:17 rich:1 author:2 commonly:2 made:2 reinforcement:4 adaptive:1 transaction:1 sj:1 approximate:12 compact:2 synergy:1 confirm:1 global:1 uai:1 assumed:2 bne:9 spectrum:2 truthfully:1 search:7 iterative:6 continuous:2 sk:14 promising:1 learn:1 robust:1 correlated:1 symmetry:5 interact:1 complex:3 domain:8 did:1 pk:1 apr:1 privately:1 profile:5 nothing:1 benefited:1 en:1 wiley:1 winning:3 tied:2 third:3 abundance:1 
procurement:1 counterspeculation:1 theorem:2 transitioning:1 specific:2 redict:1 exists:3 intractable:3 consist:1 workshop:1 sequential:42 ci:1 budget:1 demand:13 horizon:1 wolpert:1 simply:1 explore:1 twentieth:1 desire:1 partially:2 tracey:1 applies:1 corresponds:1 loses:2 determines:1 b1i:1 acm:3 conditional:3 identity:3 formulated:1 greenwald:4 exposition:1 tender:1 price:11 experimentally:2 included:1 except:1 reducing:1 averaging:1 lemma:1 kearns:1 called:3 total:2 experimental:1 la:2 player:4 meaningful:1 formally:1 principe:1 support:1 latter:2 arises:1 goldszmidt:1 relevance:1 princeton:3 handling:1
|
4,149 | 4,756 |
Symbolic Dynamic Programming for Continuous State and Observation POMDPs

Zahra Zamani (ANU & NICTA, Canberra, Australia) -- [email protected]
Scott Sanner (NICTA & ANU, Canberra, Australia) -- [email protected]
Pascal Poupart (U. of Waterloo, Waterloo, Canada) -- [email protected]
Kristian Kersting (Fraunhofer IAIS & U. of Bonn, Bonn, Germany) -- [email protected]
Abstract
Point-based value iteration (PBVI) methods have proven extremely effective for
finding (approximately) optimal dynamic programming solutions to partially-observable Markov decision processes (POMDPs) when a set of initial belief
states is known. However, no PBVI work has provided exact point-based backups for both continuous state and observation spaces, which we tackle in this
paper. Our key insight is that while there may be an infinite number of observations, there are only a finite number of continuous observation partitionings that
are relevant for optimal decision-making when a finite, fixed set of reachable belief states is considered. To this end, we make two important contributions: (1) we
show how previous exact symbolic dynamic programming solutions for continuous state MDPs can be generalized to continuous state POMDPs with discrete observations, and (2) we show how recently developed symbolic integration methods
allow this solution to be extended to PBVI for continuous state and observation
POMDPs with potentially correlated, multivariate continuous observation spaces.
1 Introduction
Partially-observable Markov decision processes (POMDPs) are a powerful modeling formalism for
real-world sequential decision-making problems [3]. In recent years, point-based value iteration
methods (PBVI) [5, 10, 11, 7] have proved extremely successful at scaling (approximately) optimal
POMDP solutions to large state spaces when a set of initial belief states is known.
While PBVI has been extended to both continuous state and continuous observation spaces, no prior
work has tackled both jointly without sampling. [6] provides exact point-based backups for continuous state and discrete observation problems (with approximate sample-based extensions to continuous actions and observations), while [2] provides exact point-based backups (PBBs) for discrete
state and continuous observation problems (where multivariate observations must be conditionally
independent). While restricted to discrete states, [2] provides an important insight that we exploit in
this work: only a finite number of partitionings of the observation space are required to distinguish
between the optimal conditional policy over a finite set of belief states.
We propose two major contributions: First, we extend symbolic dynamic programming for continuous state MDPs [9] to POMDPs with discrete observations, arbitrary continuous reward and
transitions with discrete noise (i.e., a finite mixture of deterministic transitions). Second, we extend
this symbolic dynamic programming algorithm to PBVI and the case of continuous observations
1
(while restricting transition dynamics to be piecewise linear with discrete noise, rewards to be piecewise constant, and observation probabilities and beliefs to be uniform) by building on [2] to derive
relevant observation partitions for potentially correlated, multivariate continuous observations.
2 Hybrid POMDP Model
A hybrid (discrete and continuous) partially observable MDP (H-POMDP) is a tuple <S, A, O, T, R, Z, γ, h>. States S are given by the vector (d_s, x_s) = (d_s1, ..., d_sn, x_s1, ..., x_sm) where each d_si ∈ {0, 1} (1 ≤ i ≤ n) is boolean and each x_sj ∈ R (1 ≤ j ≤ m) is continuous. We assume a finite, discrete action space A = {a_1, ..., a_r}. Observations O are given by the vector (d_o, x_o) = (d_o1, ..., d_op, x_o1, ..., x_oq) where each d_oi ∈ {0, 1} (1 ≤ i ≤ p) is boolean and each x_oj ∈ R (1 ≤ j ≤ q) is continuous.
Three functions are required for modeling H-POMDPs: (1) T : S × A × S → [0, 1], a Markovian transition model defined as the probability of the next state given the action and previous state; (2) R : S × A → R, a reward function which returns the immediate reward of taking an action in some state; and (3) an observation function Z : S × A × O → [0, 1], which gives the probability of an observation given the outcome of a state after executing an action. A discount factor γ, 0 ≤ γ ≤ 1, is used to discount rewards t time steps into the future by γ^t.
We use a dynamic Bayes net (DBN; see Footnote 1) to compactly represent the transition model T over the factored state variables and we use a two-layer Bayes net to represent the observation model Z:

T : p(x'_s, d'_s | x_s, d_s, a) = ∏_{i=1}^{n} p(d'_si | x_s, d_s, a) · ∏_{j=1}^{m} p(x'_sj | x_s, d_s, d'_s, a)     (1)

Z : p(x_o, d_o | x'_s, d'_s, a) = ∏_{i=1}^{p} p(d_oi | x'_s, d'_s, a) · ∏_{j=1}^{q} p(x_oj | x'_s, d'_s, a)     (2)
Probabilities over discrete variables p(d'_si | x_s, d_s, a) and p(d_oi | x'_s, d'_s, a) may condition on both discrete variables and (nonlinear) inequalities of continuous variables; this is further restricted to linear inequalities in the case of continuous observations. Transitions over continuous variables p(x'_sj | x_s, d_s, d'_s, a) must be deterministic (but arbitrary nonlinear) piecewise functions; in the case of continuous observations they are further restricted to be piecewise linear; this permits discrete noise in the continuous transitions since they may condition on stochastically sampled discrete next-state variables d'_s. Observation probabilities over continuous variables p(x_oj | x'_s, d'_s, a) only occur in the case of continuous observations and are required to be piecewise constant (a mixture of uniform distributions); the same restriction holds for belief state representations. The reward R(d, x, a) may be an arbitrary (nonlinear) piecewise function in the case of deterministic observations and a piecewise constant function in the case of continuous observations. We now provide concrete examples.
Example (Power Plant) [1] The steam generation system of a power plant evaporates feed-water
under restricted pressure and temperature conditions to turn a steam turbine. A reward is obtained
when electricity is generated from the turbine and the steam pressure and temperature are within safe
ranges. Mixing water and steam makes the respective pressure and temperature observations p_o ∈ R and t_o ∈ R on the underlying state p_s ∈ R and t_s ∈ R highly uncertain. Actions A = {open, close}
control temperature and pressure by means of a pressure valve.
We initially present two H-POMDP variants labeled 1D-Power Plant using a single temperature state variable t_s. The transition and reward are common to both: temperature increments (decrements) with a closed (opened) valve, a large negative reward is given for a closed valve with t_s
exceeding critical threshold 15, and positive reward is given for a safe, electricity-producing state:

p(t'_s | t_s, a) = δ( t'_s − { (a = open)  : t_s − 5
                               (a = close) : t_s + 7 } )

R(t_s, a) = { (a = open)                : −1
              (a = close) ∧ (t_s > 15)  : −1000        (3)
              (a = close) ∧ ¬(t_s > 15) : 100 }
Next we introduce the Discrete Obs. 1D-Power Plant variant where we define an observation space with a single discrete binary variable o ∈ O = {high, low}:
Footnote 1: We disallow general synchronic arcs for simplicity of exposition but note their inclusion only places restrictions on the variable elimination ordering used during the dynamic programming backup operation.
Figure 1: (left) Example conditional plan β^h for discrete observations; (right) example α-function for β^h over state b ∈ {0, 1}, x ∈ R in decision diagram form: the true (1) branch is solid, the false (0) branch is dashed.
p(o = high | t'_s, a = open)  = { (t'_s ≤ 15) : 0.9
                                  (t'_s > 15) : 0.1 }
                                                                (4)
p(o = high | t'_s, a = close) = { (t'_s ≤ 15) : 0.7
                                  (t'_s > 15) : 0.3 }
Finally we introduce the Cont. Obs. 1D-Power Plant variant where we define an observation space
with a single continuous variable t_o uniformly distributed on an interval of 10 units centered at t'_s:

p(t_o | t'_s, a = open) = U(t_o; t'_s − 5, t'_s + 5) = { (t_o > t'_s − 5) ∧ (t_o < t'_s + 5) : 0.1
                                                         (t_o ≤ t'_s − 5) ∨ (t_o ≥ t'_s + 5) : 0 }     (5)
While simple, we note no prior method could perform exact point-based backups for either problem.
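To make the model concrete, the following is a minimal Python sketch of the Cont. Obs. 1D-Power Plant dynamics of Eqs (3) and (5). The function names are our own illustration; the authors' actual implementation is XADD-based (Java), so this is only a numeric stand-in.

```python
# Minimal sketch of the Cont. Obs. 1D-Power Plant model (Eqs 3 and 5).
# Names are illustrative only; the paper's implementation is XADD-based.

def transition(t_s: float, a: str) -> float:
    """Deterministic temperature transition: delta(t_s' - [open: t_s-5, close: t_s+7])."""
    return t_s - 5.0 if a == "open" else t_s + 7.0

def reward(t_s: float, a: str) -> float:
    """Piecewise reward of Eq (3)."""
    if a == "open":
        return -1.0
    return -1000.0 if t_s > 15.0 else 100.0

def obs_density(t_o: float, t_s_next: float) -> float:
    """U(t_o; t_s'-5, t_s'+5): uniform density over a 10-unit window (Eq 5)."""
    return 0.1 if (t_s_next - 5.0 < t_o < t_s_next + 5.0) else 0.0

if __name__ == "__main__":
    t = 12.0
    t_next = transition(t, "close")   # 19.0: now past the critical threshold
    print(reward(t_next, "close"))    # -1000.0 (reward evaluated at the resulting state)
    print(obs_density(18.0, t_next))  # 0.1, since 18 lies in (14, 24)
```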
3 Value Iteration for Hybrid POMDPs
In an H-POMDP, the agent does not directly observe the states and thus must maintain a belief state
b(x_s, d_s) = p(x_s, d_s). For a given belief state b = b(x_s, d_s), a POMDP policy π can be represented by a tree corresponding to a conditional plan β. An h-step conditional plan β^h can be defined recursively in terms of (h−1)-step conditional plans as shown in Fig. 1 (left). Our goal is to find a
policy ? that maximizes the value function, defined as the sum of expected discounted rewards over
horizon h starting from initial belief state b:
V^h_π(b) = E_π[ Σ_{t=0}^{h} γ^t · r_t | b_0 = b ]     (6)

where r_t is the reward obtained at time t and b_0 is the belief state at t = 0. For finite h and belief state b, the optimal policy π is given by an h-step conditional plan β^h. For h = ∞, the optimal discounted (γ < 1) value can be approximated arbitrarily closely by a sufficiently large h [3].
Even when the state is continuous (but the actions and observations are discrete), the optimal
POMDP value function for finite horizon h is a piecewise linear and convex function of the belief state b [6], hence V^h is given by a maximum over a finite set of "α-functions" α^h_i:

V^h(b) = max_{α^h_i ∈ Γ^h} <α^h_i, b> = max_{α^h_i ∈ Γ^h} ∫_{x_s} Σ_{d_s} α^h_i(x_s, d_s) · b(x_s, d_s) dx_s     (7)
Later on when we tackle continuous state and observations, we note that we will dynamically derive
an optimal, finite partitioning of the observation space for a given belief state and hence reduce the
continuous observation problem back to a discrete observation problem at every horizon.
The Γ^h in this optimal h-stage-to-go value function can be computed via Monahan's dynamic programming approach to value iteration (VI) [4]. Initializing α^0_1 = 0, Γ^0 = {α^0_1}, and assuming discrete observations o ∈ O^h, Γ^h is obtained from Γ^{h-1} as follows (see Footnote 2):

g^h_{a,o,j}(x_s, d_s) = ∫_{x'_s} Σ_{d'_s} p(o | x'_s, d'_s, a) p(x'_s, d'_s | x_s, d_s, a) α^{h-1}_j(x'_s, d'_s) dx'_s ;   ∀ α^{h-1}_j ∈ Γ^{h-1}     (8)

Γ^h_a = R(x_s, d_s, a) ⊕ γ · ⊞_{o ∈ O} { g^h_{a,o,j}(x_s, d_s) }_j     (9)

Γ^h = ∪_a Γ^h_a     (10)

Footnote 2: The ⊞ of sets is defined as ⊞_{j ∈ {1,...,n}} S_j = S_1 ⊕ ... ⊕ S_n, where the pairwise cross-sum P ⊕ Q = {p + q | p ∈ P, q ∈ Q}.
Algorithm 1: PBVI(H-POMDP, H, B = {b_i}) --> <V^h>

 1  begin
 2    V^0 := 0, h := 0, Γ^0_PBVI := {α^0_1}
 3    while h < H do
 4      h := h + 1, Γ^h := ∅, Γ^h_PBVI := ∅
 5      foreach b_i ∈ B do
 6        foreach a ∈ A do
 7          Γ^h_a := ∅
 8          if (continuous observations: q > 0) then
 9            // Derive relevant observation partitions O^h_i for belief b_i
10            <O^h_i, p(O^h_i | x'_s, d'_s, a)> := GenRelObs(Γ^{h-1}_PBVI, a, b_i)
11          else
12            // Discrete observations and model already known
13            O^h_i := {d_o};  p(O^h_i | x'_s, d'_s, a) := see Eq (2)
14          foreach o ∈ O^h_i do
15            foreach α^{h-1}_j ∈ Γ^{h-1}_PBVI do
16              α^{h-1}_j := Prime(α^{h-1}_j)   // ∀d_i: d_i -> d'_i and ∀x_i: x_i -> x'_i
17              g^h_{a,o,j} := see Eq (8)
18          Γ^h_a := see Eq (9)
19          Γ^h := Γ^h ∪ Γ^h_a
20        end
21      end
22      // Retain only α-functions optimal at each belief point
23      foreach b_i ∈ B do
24        α^h_{b_i} := arg max_{α_j ∈ Γ^h} α_j · b_i
25        Γ^h_PBVI := Γ^h_PBVI ∪ {α^h_{b_i}}
26      end
27      // Terminate if early convergence
28      if Γ^h_PBVI = Γ^{h-1}_PBVI then
29        break
30    return Γ_PBVI
31  end
Point-based value iteration (PBVI) [5, 11] computes the value function only for a set of belief states {b_i} where b_i := p(x_s, d_s). The idea is straightforward and the main modification needed to Monahan's VI approach in Algorithm 1 is the loop from lines 23-25 where only α-functions optimal at some belief state are retained for subsequent iterations. In the case of continuous observation variables (q > 0), we will need to derive a relevant set of observations on line 10, a key contribution of this work as described in Section 4.3. Otherwise if the observations are only discrete (q = 0), then a finite set of observations is already known, along with the observation function as given in Eq (2).
We remark that Algorithm 1 is a generic framework that can be used for both PBVI and other variants of approximate VI. If used for PBVI, an efficient direct backup computation of the optimal α-function for belief state b_i should be used in line 17 that is linear in the number of observations [5, 11] and which obviates the need for lines 23-25. However, for an alternate version of approximate value iteration that will often produce more accurate values for belief states other than those in B, one may instead retain the full cross-sum backup of line 17, but omit lines 23-25; this yields an approximate VI approach (using discretized observations relevant only to a chosen set of belief states B if continuous observations are present) that is not restricted to alpha-functions only optimal at B, hence allowing greater flexibility in approximating the value function over all belief states.
Whereas PBVI is optimal if all reachable belief states within horizon H are enumerated in B, in
the H-POMDP setting, the generation of continuous observations will most often lead to an infinite
number of reachable belief states, even with finite horizon; this makes it quite difficult to provide
optimality guarantees in the general case of PBVI for continuous observation settings. Nonetheless, PBVI has been quite successful in practice without exhaustive enumeration of all reachable
beliefs [5, 10, 11, 7], which motivates our use of PBVI in this work.
4 Symbolic Dynamic Programming
Symbolic Dynamic Programming
In this section we take a symbolic dynamic programming (SDP) approach to implementing VI and
PBVI as defined in the last section. To do this, we need only show that all required operations can
be computed efficiently and in closed-form, which we do next, building on SDP for MDPs [9].
4.1 Case Representation and Extended ADDs
The previous Power Plant examples represented all functions in case form, generally defined as
f = { φ_1 : f_1
      ...
      φ_k : f_k }
and this is the form we use to represent all functions in an H-POMDP. The φ_i are disjoint logical formulae defined over x_s, d_s and/or x_o, d_o with logical (∧, ∨, ¬) combinations of boolean variables and inequalities (≥, >, ≤, <) over continuous variables. For discrete observation H-POMDPs, the f_i and inequalities may use any function (e.g., sin(x_1) > log(x_2) · x_3); for continuous observations, they are restricted to linear inequalities and linear or piecewise constant f_i as described in Section 2.
Unary operations such as scalar multiplication c · f (for some constant c ∈ R) or negation −f on case statements are simply applied to each case partition f_i (1 ≤ i ≤ k). A binary operation on two case statements takes the cross-product of the logical partitions of each case statement and performs the corresponding operation on the resulting paired partitions. The cross-sum ⊕ of two cases is defined as the following:

{ φ_1 : f_1        { ψ_1 : g_1        { φ_1 ∧ ψ_1 : f_1 + g_1
  φ_2 : f_2 }  ⊕     ψ_2 : g_2 }  =     φ_1 ∧ ψ_2 : f_1 + g_2
                                        φ_2 ∧ ψ_1 : f_2 + g_1
                                        φ_2 ∧ ψ_2 : f_2 + g_2 }
Likewise ⊖ and ⊗ are defined by subtracting or multiplying partition values. Inconsistent partitions can be discarded when they are irrelevant to the function value. A symbolic case maximization is defined as below:

casemax( { φ_1 : f_1       { ψ_1 : g_1         { φ_1 ∧ ψ_1 ∧ (f_1 > g_1) : f_1
           φ_2 : f_2 } ,     ψ_2 : g_2 } )  =    φ_1 ∧ ψ_1 ∧ (f_1 ≤ g_1) : g_1
                                                 φ_1 ∧ ψ_2 ∧ (f_1 > g_2) : f_1
                                                 φ_1 ∧ ψ_2 ∧ (f_1 ≤ g_2) : g_2
                                                 ... }
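For intuition, here is a minimal sketch of case statements over a single continuous variable, with ⊕ and casemax implemented by intersecting interval partitions. This is our own deliberate simplification of the XADD machinery to 1-D linear pieces, not the actual data structure:

```python
# Toy 1-D case statements: each case is a list of (lo, hi, f) pieces where f is
# a linear function a*x + b represented as the pair (a, b).

def combine(case1, case2, op):
    """Cross-product of partitions, applying op to the paired values."""
    out = []
    for (lo1, hi1, f1) in case1:
        for (lo2, hi2, f2) in case2:
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo < hi:                      # drop inconsistent partitions
                out.append((lo, hi, op(f1, f2)))
    return out

def cross_sum(c1, c2):
    return combine(c1, c2, lambda f, g: (f[0] + g[0], f[1] + g[1]))

def casemax(c1, c2, eps=1e-9):
    """Max of two linear cases; splits an interval where the lines cross."""
    out = []
    for (lo1, hi1, (a1, b1)) in c1:
        for (lo2, hi2, (a2, b2)) in c2:
            l, h = max(lo1, lo2), min(hi1, hi2)
            if l >= h:
                continue
            if abs(a1 - a2) < eps:           # parallel: one line dominates
                out.append((l, h, (a1, b1) if b1 >= b2 else (a2, b2)))
                continue
            x = (b2 - b1) / (a1 - a2)        # crossing point of the two lines
            left = (a1, b1) if a1 * l + b1 >= a2 * l + b2 else (a2, b2)
            right = (a2, b2) if left == (a1, b1) else (a1, b1)
            if x <= l:
                out.append((l, h, right))
            elif x >= h:
                out.append((l, h, left))
            else:
                out.append((l, x, left))
                out.append((x, h, right))
    return out

f = [(0.0, 10.0, (1.0, 0.0))]                # f(x) = x on [0, 10]
g = [(0.0, 10.0, (-1.0, 8.0))]               # g(x) = 8 - x on [0, 10]
print(casemax(f, g))  # splits at x = 4: 8-x on [0,4], x on [4,10]
```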
The following SDP operations on case statements require more detail than can be provided here, hence we refer the reader to the relevant literature:
- Substitution fσ: takes a set σ of variables and their substitutions (which may be case statements themselves), and carries out all variable substitutions [9].
- Integration ∫_{x_1} f dx_1: there are two forms. If x_1 is involved in a δ-function (cf. the transition in Eq (3)) then the integral is equivalent to a symbolic substitution and can be applied to any case statement (cf. [9]). Otherwise, if f is in linearly constrained polynomial case form, then the approach of [8] can be applied to yield a result in the same form.
Case operations yield a combinatorial explosion in size if naively implemented, hence we use the data structure of the extended algebraic decision diagram (XADD) [9] as shown in Figure 1 (right) to compactly represent case statements and efficiently support the above case operations with them.
4.2 VI for Hybrid State and Discrete Observations
For H-POMDPs with only discrete observations o ∈ O and observation function p(o | x'_s, d'_s, a) as in the form of Eq (4), we introduce a symbolic version of Monahan's VI algorithm. In brief, we note that all VI operations needed in Section 3 apply directly to H-POMDPs, e.g., rewriting Eq (8):

g^h_{a,o,j}(x_s, d_s) = ∫_{x'_s} ⊕_{d'_s} [ p(o | x'_s, d'_s, a) ⊗ ( ⊗_{i=1}^{n} p(d'_si | x_s, d_s, a) ) ⊗ ( ⊗_{j=1}^{m} p(x'_sj | x_s, d_s, d'_s, a) ) ⊗ α^{h-1}_j(x'_s, d'_s) ] dx'_s     (11)
Algorithm 2: GenRelObs(Γ^{h-1}, a, b_i) --> <O^h, p(O^h | x'_s, d'_s, a)>

 1  begin
 2    foreach α_j(x'_s, d'_s) ∈ Γ^{h-1} and a ∈ A do
 3      // Perform exact 1-step DP backup of α-functions at horizon h-1
 4      α^a_j(x_s, d_s, x_o, d_o) := ∫_{x'_s} ⊕_{d'_s} p(x_o, d_o | x'_s, d'_s, a) ⊗ p(x'_s, d'_s | x_s, d_s, a) ⊗ α_j(x'_s, d'_s) dx'_s
 5    foreach α^a_j(x_s, d_s, x_o, d_o) do
 6      // Generate value of each α-vector at belief point b_i(x_s, d_s) as a function of observations
 7      α^a_j(x_o, d_o) := ∫_{x_s} ⊕_{d_s} b_i(x_s, d_s) ⊗ α^a_j(x_s, d_s, x_o, d_o) dx_s
 8    // Using casemax, generate observation partitions relevant to each policy (see text for details)
 9    O^h := extract-partition-constraints[ casemax(α^{a_1}_1(x_o, d_o), α^{a_2}_1(x_o, d_o), ..., α^{a_r}_j(x_o, d_o)) ]
10    foreach o_k ∈ O^h do
11      // Let φ_{o_k} be the partition constraints for observation o_k ∈ O^h
12      p(O^h = o_k | x'_s, d'_s, a) := ∫_{x_o} ⊕_{d_o} p(x_o, d_o | x'_s, d'_s, a) · I[φ_{o_k}] dx_o
13    return <O^h, p(O^h | x'_s, d'_s, a)>
14  end
Figure 2: (left) Beliefs b_1, b_2 for Cont. 1D-Power Plant; (right) derived observation partitions for b_2 at h = 2.
Crucially we note that since the continuous transition CPFs p(x'_sj | x_s, d_s, d'_s, a) are deterministic and hence defined with Dirac δ's (e.g., Eq 3) as described in Section 2, the integral ∫_{x'_s} can always be computed in closed case form as discussed in Section 4.1. In short, nothing additional is required for PBVI on H-POMDPs in this case; the key insight is simply that α-functions are now represented by case statements and can "grow" with the horizon as they partition the state space more and more finely.
4.3 PBVI for Hybrid State and Hybrid Observations
In general, it would be impossible to apply standard VI to H-POMDPs with continuous observations
since the number of observations is infinite. However, building on ideas in [2], in the case of PBVI,
it is possible to derive a finite set of continuous observation partitions that permit exact point-based
backups at a belief point. This additional operation (GenRelObs) appears on line 10 of PBVI in
Algorithm 1 in the case of continuous observations and is formally defined in Algorithm 2.
To demonstrate the generation of relevant continuous observation partitions, we use the second
iteration of the Cont. Obs. 1D-Power Plant along with two belief points represented as uniform
distributions: b_1 : U(t_s; 2, 6) and b_2 : U(t_s; 6, 11) as shown in Figure 2 (left). Letting h = 2, we will assume simply for expository purposes that |Γ^1| = 1 (i.e., it contains only one α-function) and that in lines 2-4 of Algorithm 2 we have computed the following two α-functions for a ∈ {open, close}:

α^open_1(t_s, t_o) = { (t_s < 15) ∧ (t_s − 10 < t_o < t_s) : 10
                       (t_s ≥ 15) ∧ (t_s − 10 < t_o < t_s) : −100
                       ¬(t_s − 10 < t_o < t_s)             : 0 }

α^close_1(t_s, t_o) = { (t_s − 10 < t_o < t_s)  : 0.1
                        ¬(t_s − 10 < t_o < t_s) : 0 }
We now need the α-vectors as a function of the observation space for a particular belief state, thus next we marginalize out x_s, d_s in lines 5-7. The resulting α-functions are shown as follows where for brevity from this point forward, 0 partitions are suppressed in the cases:

α^close_1(t_o) = { (14 < t_o < 18) : 0.025·t_o − 0.45
                   (8 < t_o < 14)  : −0.1
                   (4 < t_o < 8)   : −0.025·t_o − 0.1 }

α^open_1(t_o) = { (15 < t_o < 18) : 25·t_o − 450
                  (14 < t_o < 15) : −2.5·t_o − 37.5
                  (8 < t_o < 14)  : −72.5
                  (5 < t_o < 8)   : −25·t_o + 127.5
                  (4 < t_o < 5)   : 2.5·t_o − 10 }
Both α^close_1(t_o) and α^open_1(t_o) are drawn graphically in Figure 2 (right). These observation-dependent α's divide the observation space into regions which can yield the optimal policy according to the belief state b_2. Following [2], we need to find the optimal boundaries or partitions of the observation space; in their work, numerical solutions are proposed to find these boundaries in one dimension (multiple observations are handled through an independence assumption). Instead, here we leverage the symbolic power of the casemax operator defined in Section 4.1 to find all the partitions where each potentially correlated, multivariate observation α is optimal. For the two α's above, the following partitions of the observation space are derived by the casemax operator in line 9:
casemax( α^close_1(t_o), α^open_1(t_o) ) =
   { o_1 : (14 < t_o ≤ 18) : 0.025·t_o − 0.45
     o_1 : (8 < t_o ≤ 14)  : −0.1
     o_1 : (5.1 < t_o ≤ 8) : −0.025·t_o − 0.1
     o_2 : (5 < t_o ≤ 5.1) : −25·t_o + 127.5
     o_2 : (4 < t_o ≤ 5)   : 2.5·t_o − 10 }
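As a sanity check on the 5.1 boundary (our own arithmetic, not from the paper), the crossing point of the two competing pieces on (5, 8) can be computed directly; the paper's 5.1 is this value after rounding:

```python
# Crossing point of the two competing linear pieces on (5, 8):
#   alpha_close(t_o) = -0.025*t_o - 0.1  and  alpha_open(t_o) = -25*t_o + 127.5
t_star = (127.5 - (-0.1)) / (25 - 0.025)   # solve -0.025 t - 0.1 = -25 t + 127.5
print(round(t_star, 3))                    # 5.109, i.e. ~5.1 after rounding
```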
Here we have labeled with o_1 the observations where α^close_1 is maximal and with o_2 the observations where α^open_1 is maximal. What we really care about though are just the constraints identifying o_1 and o_2, and this is the task of extract-partition-constraints in line 9. This would associate with o_1 the partition constraint φ_{o_1} ≡ (5.1 < t_o ≤ 8) ∨ (8 < t_o ≤ 14) ∨ (14 < t_o ≤ 18) and with o_2 the partition constraint φ_{o_2} ≡ (4 < t_o ≤ 5) ∨ (5 < t_o ≤ 5.1); taking into account the 0 partitions and the 1D nature of this example, we can further simplify φ_{o_1} ≡ (t_o > 5.1) and φ_{o_2} ≡ (t_o ≤ 5.1).
Given these relevant observation partitions, our final task in lines 10-12 is to compute the probabilities of each observation partition φ_{o_k}. This is simply done by marginalizing over the observation function p(O^h | x'_s, d'_s, a) within each region defined by φ_{o_k} (achieved by multiplying by an indicator function I[φ_{o_k}] over these constraints). To better understand what is computed here, we can compute the probability p(o_k | b_i, a) of each observation for a particular belief, calculated as follows:
p(o_k | b_i, a) := ∫_{x_s} ∫_{x'_s} ⊕_{d_s} ⊕_{d'_s} p(o_k | x'_s, d'_s, a) ⊗ p(x'_s, d'_s | x_s, d_s, a) ⊗ α_j(x'_s, d'_s) ⊗ b_i(x_s, d_s) dx'_s dx_s     (12)

Specifically, for b_2, we obtain p(o_1 | b_2, a = close) = 0.0127 and p(o_2 | b_2, a = close) = 0.933 as shown in Figure 2 (right).
In summary, in this section we have shown how we can extend the exact dynamic programming
algorithm for the continuous state, discrete observation POMDP setting from Section 4.2 to compute exact 1-step point-based backups in the continuous observation setting; this was accomplished
through the crucial insight that despite the infinite number of observations, using Algorithm 2 we
can symbolically derive a set of relevant observations for each belief point that distinguish the optimal policy and hence value as graphically illustrated in Figure 2 (right). Next we present some
empirical results for 1- and 2-dimensional continuous state and observation spaces.
5 Empirical Results
We evaluated our continuous POMDP solution using XADDs on the 1D-Power Plant example and another variant of this problem with two variables, described below (see Footnote 3).
2D-Power Plant: We consider the more complex model of the power plant similar to [1] where the
pressure inside the water tank must be controlled to avoid mixing water into the steam (leading to
explosion of the tank). We model an observable pressure reading p_o as a function of the underlying pressure state p_s. Again we have two actions for opening and closing a pressure valve. The close
action has transition

p(p'_s | p_s, a = close) = δ( p'_s − { (p_s + 10 > 20)  : 20
                                       ¬(p_s + 10 > 20) : p_s + 10 } )

p(t'_s | t_s, a = close) = δ( t'_s − (t_s + 10) )
Footnote 3: Full problem specifications and Java code to reproduce these experiments are available online in Google Code: http://code.google.com/p/cpomdp .
Figure 3: (left) time vs. horizon, and (right) space (total # XADD nodes in α-functions) vs. horizon, for the 1-state/1-observation-variable and 2-state/2-observation-variable problems.
and yields high reward for staying within the safe temperature and pressure range:

R(t_s, p_s, a = close) = { (5 ≤ p_s ≤ 15) ∧ (95 ≤ t_s ≤ 105) : 50
                           ¬(5 ≤ p_s ≤ 15) ∧ (t_s ≤ 95)      : −1
                           (p_s ≥ 15)                        : −5
                           else                              : −3 }
Alternately, for the open action, the transition functions reduce the temperature by 5 units and the
pressure by 10 units as long as the pressure stays above zero. For the open reward function, we
assume that there is always a small constant penalty (-1) since no electricity is produced.
Observations are distributed uniformly within a region depending on their underlying state:
p(t_o | t'_s) = { (t_s + 80 < t_o < t_s + 105)  : 0.04
                  ¬(t_s + 80 < t_o < t_s + 105) : 0 }

p(p_o | p'_s) = { (p_s < p_o < p_s + 10)  : 0.1
                  ¬(p_s < p_o < p_s + 10) : 0 }
Finally for PBVI, we define two uniform beliefs as follows: b_1 : U[t_s; 90, 100] × U[p_s; 0, 10] and b_2 : U[t_s; 90, 130] × U[p_s; 10, 30].
In Figure 3, a time and space analysis of the two versions of Power Plant has been performed for up to horizon h = 6. This experimental evaluation relies on one additional approximation over the PBVI approach of Algorithm 1 in that it substitutes p(O^h | b, a) in place of p(O^h | x'_s, d'_s, a); while this yields correct observation probabilities for a point-based backup at a particular belief state b, the resulting α-functions represent an approximation for other belief states. In general, the PBVI
framework in this paper does not require this approximation, although when appropriate, using it
should increase computational efficiency.
Figure 3 shows that the computation time required per iteration generally increases since more complex α-functions lead to a larger number of observation partitions and thus a more expensive backup
operation. While an order of magnitude more time is required to double the number of state and
observation variables, one can see that the PBVI approach leads to a fairly constant amount of computation time per horizon, which indicates that long horizons should be computable for any problem
for which at least one horizon can be computed in an acceptable amount of time.
6 Conclusion
We presented the first exact symbolic operations for PBVI in an expressive subset of H-POMDPs
with continuous state and observations. Unlike related work that has extended to the continuous
state and observation setting [6], we do not approach the problem by sampling. Rather, following [2], the key contribution of this work was to define a discrete set of observation partitions on
the multivariate continuous observation space via symbolic maximization techniques and derive the
related probabilities using symbolic integration. An important avenue for future work is to extend
these techniques to the case of continuous state, observation, and action H-POMDPs.
Acknowledgments
NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the ARC through the ICT Centre of Excellence program. This work was
supported by the Fraunhofer ATTRACT fellowship STREAM and by the EC, FP7-248258-First-MM.
References
[1] Mario Agueda and Pablo Ibarguengoytia. An architecture for planning in uncertain domains. In Proceedings of the ICTAI 2002 Conference, Dallas, Texas, 2002.
[2] Jesse Hoey and Pascal Poupart. Solving POMDPs with continuous or large discrete observation spaces. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, Scotland, 2005.
[3] Leslie P. Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[4] G. E. Monahan. Survey of partially observable Markov decision processes: Theory, models, and algorithms. Management Science, 28(1):1-16, 1982.
[5] Joelle Pineau, Geoffrey J. Gordon, and Sebastian Thrun. Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research (JAIR), 27:335-380, 2006.
[6] J. M. Porta, N. Vlassis, M. T. J. Spaan, and P. Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7, 2006.
[7] Pascal Poupart, Kee-Eung Kim, and Dongho Kim. Closing the gap: Improved bounds on optimal POMDP solutions. In Proceedings of the 21st International Conference on Automated Planning and Scheduling (ICAPS-11), 2011.
[8] Scott Sanner and Ehsan Abbasnejad. Symbolic variable elimination for discrete and continuous graphical models. In Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI-12), Toronto, Canada, 2012.
[9] Scott Sanner, Karina Valdivia Delgado, and Leliane Nunes de Barros. Symbolic dynamic programming for discrete and continuous state MDPs. In Proceedings of the 27th Conference on Uncertainty in AI (UAI-2011), Barcelona, 2011.
[10] Trey Smith and Reid G. Simmons. Point-based POMDP algorithms: Improved analysis and implementation. In Proc. Int. Conf. on Uncertainty in Artificial Intelligence (UAI), 2005.
[11] M. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research (JAIR), 195-220, 2005.
4,150 | 4,757 |
Online allocation and homogeneous partitioning for piecewise constant mean-approximation

Odalric Ambrym Maillard (Montanuniversität Leoben, Franz-Josef Strasse 18, A-8700 Leoben, Austria) -- [email protected]
Alexandra Carpentier (Statistical Laboratory, CMS, Wilberforce Road, Cambridge CB3 0WB, UK) -- [email protected]
Abstract
In the setting of active learning for the multi-armed bandit, where the goal of a learner is to estimate with equal precision the mean of a finite number of arms, recent results show that it is possible to derive strategies based on finite-time confidence bounds that are competitive with the best possible strategy. We here consider an extension of this problem to the case when the arms are the cells of a finite partition P of a continuous sampling space X ⊂ R^d. Our goal is now to build a piecewise constant approximation of a noisy function (where each piece is one region of P and P is fixed beforehand) in order to maintain the local quadratic error of approximation on each cell equally low. Although this extension is not trivial, we show that a simple algorithm based on upper confidence bounds can be proved to be adaptive to the function itself in a near-optimal way, when |P| is chosen to be of minimax-optimal order on the class of α-Hölder functions.
1 Setting and Previous work
Let us consider some space X ⊂ R^d, and Y ⊂ R. We call X the input space or sampling space, Y the output space or value space. We consider the problem of estimating with uniform precision the function f : X ⊂ R^d → Y ⊂ R. We assume that we can query n times the function f, anywhere in the domain, and observe noisy samples of this function. These samples are collected sequentially, and our aim is to design an adaptive procedure that selects wisely where on the domain to query the function, according to the information provided by the previous samples. More formally:

Observed process. We consider an unknown Y-valued process defined on X, written ν : X → M_1^+(Y), where M_1^+(Y) refers to the set of all probability measures on Y, such that for all x ∈ X, the random variable Y(x) ~ ν(x) has mean f(x) = E[Y(x) | x] ∈ R. We write for convenience the model in the following way

Y(x) = f(x) + noise(x),

where noise(x) = Y(x) − E[Y(x) | x] is the centered random variable corresponding to the noise, with unknown variance σ²(x). We assume throughout this paper that f is α-Hölder.
Partition. We consider we can define a partition P of the input space X, with finitely many P regions {R_p}_{1≤p≤P} that are assumed to be convex and not degenerated, i.e. such that the interior of each region R_p has positive Lebesgue volume v_p. Moreover, with each region R_p is associated a sampling distribution in that region, written μ_p ∈ M_1^+(R_p). Thus, when we decide to sample in region R_p, a new sample X ∈ R_p is generated according to X ~ μ_p.
Allocation. We consider that we have a finite budget of n ∈ N samples that we can use in order to allocate samples as we wish among the regions {R_p}_{1≤p≤P}. For illustration, let us assume that we deterministically allocate T_{p,n} ∈ N samples in region R_p, with the constraint that the allocation {T_{p,n}}_{1≤p≤P} must sum to n. In region R_p, we thus sample points {X_{p,i}}_{1≤i≤T_{p,n}} at random according to the sampling distribution μ_p, and then get the corresponding values {Y_{p,i}}_{1≤i≤T_{p,n}}, where Y_{p,i} ~ ν(X_{p,i}). In the sequel, the distribution μ_p is assumed to be the uniform distribution over region R_p, i.e. the density of μ_p is dλ(x)·1_{x∈R_p}/λ(R_p) where λ denotes the Lebesgue measure. Note that this is not restrictive since we are in an active, not passive setting.
Piecewise constant mean-approximation. We use the collected samples in order to build a piecewise constant approximation f̂_n of the mean f, and measure the accuracy of approximation on a region R_p with the expected quadratic norm of the approximation error, namely

E[ ∫_{R_p} (f(x) − f̂_n(x))² λ(dx)/λ(R_p) ] = E_{μ_p,ν}[ (f(X) − m̂_{p,n})² ],

where m̂_{p,n} is the constant value that f̂_n takes on the region R_p. A natural choice for the estimator m̂_{p,n} is to use the empirical mean, which is unbiased and asymptotically optimal for this criterion. Thus we consider the following estimate (histogram)

f̂_n(x) = Σ_{p=1}^{P} m̂_{p,n} · I{x ∈ R_p}   where   m̂_{p,n} = (1/T_{p,n}) Σ_{i=1}^{T_{p,n}} Y_{p,i}.
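As a concrete (and much simplified) illustration of this estimator, the following Python sketch builds the histogram estimate on a regular grid of [0, 1]; all names are our own and the noisy function is arbitrary:

```python
# Sketch of the piecewise constant (histogram) estimator on a regular grid of
# [0, 1]: m_hat[p] is the empirical mean of the Y-samples that fell in cell p.
import numpy as np

def histogram_estimate(xs, ys, P):
    """Return cell means m_hat (NaN for empty cells) on P equal cells of [0,1]."""
    cells = np.minimum((np.asarray(xs) * P).astype(int), P - 1)
    m_hat = np.full(P, np.nan)
    for p in range(P):
        in_p = cells == p
        if in_p.any():
            m_hat[p] = np.asarray(ys)[in_p].mean()
    return m_hat

def f_hat(x, m_hat):
    P = len(m_hat)
    return m_hat[min(int(x * P), P - 1)]

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 200)
ys = np.sin(3 * xs) + 0.1 * rng.standard_normal(200)   # a noisy smooth f
m = histogram_estimate(xs, ys, P=8)
print(f_hat(0.3, m))
```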
Pseudo-loss. Note that, since the T_{p,n} are deterministic, the expected quadratic norm of the approximation error of this estimator can be written in the following form:

E_{μ_p,ν}[(f(X) − m̂_{p,n})²] = E_{μ_p,ν}[(f(X) − E_{μ_p}[f(X)])²] + E_{μ_p,ν}[(E_{μ_p}[f(X)] − m̂_{p,n})²]
                             = V_{μ_p}[f(X)] + V_{μ_p,ν}[m̂_{p,n}]
                             = V_{μ_p}[f(X)] + (1/T_{p,n}) V_{μ_p,ν}[Y(X)].

Now, using the following immediate decomposition

V_{μ_p,ν}[Y(X)] = V_{μ_p}[f(X)] + ∫_{R_p} σ²(x) μ_p(dx),

we deduce that the maximal expected quadratic norm of the approximation error over the regions {R_p}_{1≤p≤P}, which depends on the choice of the considered allocation strategy A = {T_{p,n}}_{1≤p≤P}, is thus given by the following so-called pseudo-loss:

L_n(A) = max_{1≤p≤P} [ ((T_{p,n} + 1)/T_{p,n}) · V_{μ_p}[f(X)] + (1/T_{p,n}) · E_{μ_p}[σ²(X)] ].     (1)

Our goal is to minimize this pseudo-loss. Note that this is a local measure of performance, as opposed to a more usual yet less challenging global quadratic error. Eventually, as the number of cells tends to ∞, this local measure of performance approaches sup_{x∈X} E_ν[(f(x) − f̂_n(x))²]. At this point, let us also introduce, for convenience, the notation Q_p(T_{p,n}) that denotes the term inside the max, in order to emphasize the dependency of the quadratic error on the allocation.
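A minimal sketch of Eq (1), assuming the per-region quantities are known (which they are not in the actual problem; the point is only to show how an allocation is scored and why uneven allocations can help):

```python
# Pseudo-loss of Eq (1) for a deterministic allocation {T_p}_p, given
# per-region quantities var_f[p] = V_{mu_p}(f(X)) and noise[p] = E_{mu_p} sigma^2(X)
# (hypothetical values, assumed known here purely for illustration).

def pseudo_loss(T, var_f, noise):
    return max((t + 1) / t * v + s / t for t, v, s in zip(T, var_f, noise))

var_f = [0.04, 0.01, 0.09]        # hypothetical per-region variation of f
noise = [0.25, 0.25, 0.25]        # hypothetical homoscedastic noise level
print(pseudo_loss([10, 10, 10], var_f, noise))   # 0.124 (uniform allocation)
print(pseudo_loss([9, 6, 15], var_f, noise))     # ~0.113 (skewed toward region 3)
```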
Previous work. There is a huge literature on the topic of functional estimation in the batch setting. Since it is a rather old and well studied question in statistics, many books have been written on this topic, such as Bosq and Lecoutre [1987], Rosenblatt [1991], Györfi et al. [2002], where piecewise constant mean-approximations are also called "partitioning estimates" or "regressograms" (first introduced by Tukey [1947]). The minimax-optimal rate of approximation on the class of α-Hölder functions is known to be in O(n^{−2α/(2α+d)}) (see e.g. Ibragimov and Hasminski [1981], Stone [1980], Györfi et al. [2002]). In such a setting, a dataset {(X_i, Y_i)}_{i≤n} is given to the learner, and a typical question is thus to try to find the best possible histogram in order to minimize the approximation error. Thus the dataset is fixed and we typically resort to techniques such as model selection, where each model corresponds to one histogram (see Arlot [2007] for an extensive study of such).
However, we here ask a very different question, that is how to optimally sample in an online setting
in order to minimize the approximation error of some histogram. Thus we choose the histogram
before we see any sample, then it is fixed and we need to decide which cell to sample from at each time step. Motivation for this setting comes naturally from some recent works in the setting of active learning for the multi-armed bandit problem Antos et al. [2010], Carpentier et al. [2011]. In these works, the objective is to estimate with equal precision the mean of a finite number of distributions (arms), which would correspond to the special case when X = {1, . . . , P} is a finite set in our setting. Intuitively, we reduce the problem to such a bandit problem with a finite set of arms (regions), and our setting answers the question whether it is possible to extend those results to the case when the arms do not correspond to a singleton, but rather to a continuous region. We show that the answer is positive, yet non trivial. This is non trivial due to the variance estimation in each region: points x in some region may have different means f(x), so that standard estimators for the variance are biased, contrary to the point-wise case, and thus finite-arm techniques may yield disastrous results. (Estimating the variance of the distribution in a continuous region actually needs to take into account not only the point-wise noise but also the variation of the function f and the noise level σ² in that region.) We describe a way, inspired from quasi Monte-Carlo techniques, to correct this bias so that we can handle the additional error. Also, it is worth mentioning that this setting can be informally linked to a notion of curiosity-driven learning (see Schmidhuber [2010], Baranes and Oudeyer [2009]), since we want to decide in which region of the space to sample, without explicit reward but optimizing the goal to understand the unknown environment.
Outline. Section 2 provides more intuition about the pseudo-loss and a result about the optimal oracle strategy when the domain is partitioned in a minimax-optimal way on the class of α-Hölder functions. Section 3 presents our assumptions, that are basically to have a sub-Gaussian noise and smooth mean and variance functions, then our estimator of the pseudo-loss together with its concentration properties, before introducing our sampling procedure, called OAHPA-pcma. Finally, the performance of this procedure is provided and discussed in Section 4.
2 The pseudo-loss: study and optimal strategies
2.1 More intuition on each term in the pseudo-loss
It is natural to look at what happens to each of the two terms that appear in equation (1) when one makes R_p shrink towards a point. More precisely, let x_p be the mean of X ~ μ_p and let us look at the limit of V_{μ_p}(f(X)) when v_p goes to 0. Assuming that f is differentiable, we get

lim_{v_p→0} V_{μ_p}(f(X)) = lim_{v_p→0} E_{μ_p}[ ( f(X) − f(x_p) − E[f(X) − f(x_p)] )² ]
                          = lim_{v_p→0} E_{μ_p}[ ( <X − x_p, ∇f(x_p)> − E[<X − x_p, ∇f(x_p)>] )² ]
                          = lim_{v_p→0} E_{μ_p}[ <X − x_p, ∇f(x_p)>² ]
                          = lim_{v_p→0} ∇f(x_p)^T · E_{μ_p}[ (X − x_p)(X − x_p)^T ] · ∇f(x_p).

Therefore, if we introduce Σ_p to be the covariance matrix of the random variable X ~ μ_p, then we simply have lim_{v_p→0} V_{μ_p}(f(X)) = lim_{v_p→0} ||∇f(x_p)||²_{Σ_p}.
Example with hyper-cubic regions. An important example is when R_p is a hypercube with side length v_p^{1/d} and μ_p is the uniform distribution over the region R_p. In that case (see Lemma 1), we have μ_p(dx) = dx / v_p, and

||∇f(x_p)||²_{Σ_p} = ||∇f(x_p)||² · v_p^{2/d} / 12.

More generally, when f is α-differentiable, i.e. such that for all a ∈ X there exists ∇^α f(a, ·) : S_d(0,1) → R such that for all x ∈ S_d(0,1), lim_{h→0} (f(a + hx) − f(a)) / h^α = ∇^α f(a, x), then it is not too difficult to show that for such hyper-cubic regions, we have

V_{μ_p}( f(X) ) = O( v_p^{2α/d} · sup_{u∈S(0,1)} |∇^α f(x_p, u)|² ).
On the other hand, by direct computation, the second term is such that lim_{v_p→0} E_{μ_p}[σ²(X)] = σ²(x_p). Thus, while V_{μ_p}[f(X)] vanishes, E_{μ_p}[σ²(X)] stays bounded away from 0 (unless ν is deterministic).
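The v_p^{2/d}/12 factor can be sanity-checked numerically; the following Monte-Carlo sketch (our own, for a linear f where the formula is exact) compares the empirical variance with the prediction:

```python
# Monte-Carlo check of the hypercube formula: for linear f and X uniform on a
# hypercube of volume v, Var(f(X)) = ||grad f||^2 * v^(2/d) / 12.
import numpy as np

rng = np.random.default_rng(1)
d, side = 3, 0.5                       # v = side^d, so v^(2/d) = side^2
grad = np.array([1.0, -2.0, 0.5])      # f(x) = <grad, x>
X = rng.uniform(0.0, side, size=(200_000, d))
empirical = (X @ grad).var()
predicted = (grad @ grad) * side**2 / 12.0
print(empirical, predicted)            # both ~0.1094
```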
2.2 Oracle allocation and homogeneous partitioning for piecewise constant mean-approximation

We now assume that we are allowed to choose the partition P depending on n, thus P = P_n, amongst all homogeneous partitions of the space, i.e. partitions such that all cells have the same volume, and come from a regular grid of the space. Thus the only free parameter is the number of cells P_n of the partition.
An exact yet not explicit oracle algorithm. The minimization of the pseudo-loss (1) does not yield a closed-form solution in general. However, we can still derive the order of the optimal loss (see [Carpentier and Maillard, 2012, Lemma 2] in the full version of the paper for an example of a minimax yet non-adaptive oracle algorithm given in closed-form solution):

Lemma 1. In the case when V_{μ_p}(f(X)) = Θ(P_n^{−β}) and ∫_{R_p} σ²(x) μ_p(dx) = Θ(P_n^{−η}), an optimal allocation and partitioning strategy A*_n satisfies

P*_n = Θ( n^{1 / max(1 + β − η, 1)} )   and   T*_{p,n} = ( V_{μ_p}(f(X)) + E_{μ_p} σ²(X) ) / ( L − V_{μ_p}(f(X)) ),

as soon as there exists, for such a range of P*_n, a constant L such that

Σ_{p=1}^{P*_n} ( V_{μ_p}(f(X)) + E_{μ_p} σ²(X) ) / ( L − V_{μ_p}(f(X)) ) = n.

The pseudo-loss of such an algorithm A*_n, optimal amongst the allocation strategies that use the partition P_n in P*_n regions, is then given by

L_n(A*_n) = Θ( n^{γ} )   where   γ = max(1 − β, 1 − η) / max(1 + β − η, 1) − 1.

The condition involving the constant L is here to ensure that the partition is not degenerate. It is morally satisfied as soon as the variance of f and the noise are bounded and n is large enough.
This Lemma applies to the important class W^{1,2}(R) of functions that admit a weak derivative that belongs to L²(R). Indeed these functions are Hölder with coefficient α = 1/2, i.e. we have W^{1,2}(R) ⊂ C^{0,1/2}(R). The standard Brownian motion is an example of a function that is 1/2-Hölder. More generally, for k = d/2 + ε with ε = 1/2 when d is odd and ε = 1 when d is even, we have the inclusion

W^{k,2}(R^d) ⊂ C^{0,α}(R^d),

where W^{k,2}(R^d) is the set of functions that admit a k-th weak derivative belonging to L²(R^d). Thus the previous Lemma applies to sufficiently smooth functions with smoothness linearly increasing with the dimension d of the input space X.
Important remark Note that this Lemma gives us a choice of the partition that is minimax-optimal,
and an allocation strategy on that partition that is not only minimax-optimal but also adaptive to the
function f itself. Thus it provides a way to decide, in a minimax sense, on a good number of
regions, and then to provide the best oracle way to allocate the budget.
We can deduce the following immediate corollary on the class of $\alpha$-Hölder functions observed in a
non-negligible noise of bounded variance (i.e. in the setting $\beta' = 0$ and $\alpha' = \frac{2\alpha}{d}$).
Corollary 1 Consider that f is $\alpha$-Hölder and the noise is of bounded variance. Then a minimax-optimal partition satisfies $P_n^{\star} = \Theta\big(n^{\frac{d}{d+2\alpha}}\big)$ and an optimal allocation achieves the rate $\mathcal{L}_n(\mathcal{A}^{\star}_n) = \Theta\big(n^{-\frac{2\alpha}{d+2\alpha}}\big)$. Moreover, the strategy of Lemma 1 is optimal amongst the allocation strategies that
use the partition $\mathcal{P}_n$ in $P_n^{\star}$ regions.
The rate $\Theta\big(n^{-\frac{2\alpha}{d+2\alpha}}\big)$ is minimax-optimal on the class of $\alpha$-Hölder functions (see Györfi et al. [2002],
Ibragimov and Hasminski [1981], Stone [1980]), and it is thus interesting to consider an initial number
of regions $P_n^{\star}$ that is of order $\Theta\big(n^{\frac{d}{d+2\alpha}}\big)$. After having built the partition, if the quantities
$\{\mathcal{V}_{\mu_p}[f]\}_{p\in\mathcal{P}}$ and $\{\mathbb{E}_{\mu_p}[\sigma^2]\}_{p\in\mathcal{P}}$ are known to the learner, it is optimal, in the aim of minimizing
the pseudo-loss, to allocate to each region the number of samples $T^{\star}_{p,n}$ provided in Lemma 1. Our
objective in this paper is, after having chosen beforehand a minimax-optimal partition, to allocate
the samples properly in the regions, without having any access to those quantities. It is then necessary to balance between exploration, i.e. allocating the samples in order to estimate $\{\mathcal{V}_{\mu_p}[f]\}_{p\in\mathcal{P}}$
and $\{\mathbb{E}_{\mu_p}[\sigma^2]\}_{p\in\mathcal{P}}$, and exploitation, i.e. using the estimates to target the optimal allocation.
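To make the oracle side concrete, the allocation of Lemma 1 can be computed numerically when the per-region quantities are known; the following is a minimal sketch (the bisection solver and its tolerances are our own choices, not part of the paper):

```python
import numpy as np

def oracle_allocation(v, s2, n, tol=1e-9):
    """Oracle allocation of Lemma 1 (sketch; hypothetical helper, not from the paper).

    v[p]  : variance of f under mu_p on region p, i.e. V_{mu_p}[f(X)]
    s2[p] : expected noise variance E_{mu_p}[sigma^2(X)]
    n     : total sampling budget
    Solves for L such that sum_p (v[p]+s2[p])/(L - v[p]) = n by bisection,
    then returns T*_p = (v[p]+s2[p])/(L - v[p]).
    """
    v, s2 = np.asarray(v, float), np.asarray(s2, float)
    g = lambda L: ((v + s2) / (L - v)).sum() - n   # decreasing in L on (max v, inf)
    lo = v.max() + tol
    hi = v.max() + (v + s2).sum() / n + 1.0
    while g(hi) > 0:            # safety: enlarge the bracket until g(hi) <= 0
        hi = 2 * hi
    for _ in range(100):        # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    L = 0.5 * (lo + hi)
    return (v + s2) / (L - v)
```

In practice one would round the returned $T^{\star}_{p,n}$ to multiples of K, since the online scheme below samples K points at a time.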
3 Online algorithms for allocation and homogeneous partitioning for piecewise constant mean-approximation
In this section, we now turn to the design of algorithms that are fully online, with the goal to be
competitive against the kind of oracle algorithms considered in Section 2.2. We now assume that the
space $\mathcal{X} = [0,1]^d$ is divided in $P_n$ hyper-cubic regions of same measure (the Lebesgue measure on
$[0,1]^d$) $v_p = v = \frac{1}{P_n}$. The goal of an algorithm is to minimize the quadratic error of approximation
of f by a constant over each cell, in expectation, which we write as
$$\max_{1\le p\le P_n} \mathbb{E}\Big[\int_{R_p}\big(f(x)-\hat f_n(x)\big)^2\,\frac{\mu(dx)}{\mu(R_p)}\Big]
\;=\;
\max_{1\le p\le P_n} \mathbb{E}\Big[\int_{R_p}\big(f(x)-\hat m_{p,n}\big)^2\,\frac{\mu(dx)}{\mu(R_p)}\Big]\,,$$
where $\hat f_n$ is the histogram estimate of the function f on the partition $\mathcal{P}$ and $\hat m_{p,n}$ is the empirical
mean defined on region $R_p$ with the samples $(X_i, Y_i)$ such that $X_i \in R_p$. To do so, an algorithm is
only allowed to specify at each time step t the next point $X_t$ where to sample, based on all the past
samples $\{(X_s, Y_s)\}_{s<t}$. The total budget n is known at the beginning, as well as $P_n$ and the regions
$\{R_p\}_{1\le p\le P_n}$.
We want to compare the strategy of an online learning algorithm to the strategy of an oracle that
perfectly knows the law $\mu$. We however restrict the power of the oracle by forcing it to only sample
uniformly inside a region Rp . Thus the oracle is only allowed to choose at each time step t in which
cell Rp to sample, but is not allowed to decide which point in the cell it can sample. The point Xt
has to be sampled uniformly in Rp .
Now, since a learning algorithm has no access to the true distribution $\mu$, we give slightly more power
to the learning algorithm by allowing it to resort to a refined partition. We allow it to divide each
region $R_p$ for $p \in \{1,\dots,P_n\}$ into K hyper-cubic sub-regions $\{R_{p,k}\}_{1\le k\le K}$ of same Lebesgue
measure, resulting in a total number $P_n^{+} \overset{def}{=} KP_n$ of hyper-cubic regions of same measure $v_{p,k} = \frac{1}{KP_n}$. Equivalently, this can be seen as letting the player use a refined partition with $P_n^{+}$ cells.
However, instead of sampling one point in $R_{p,k}$, the algorithm is only allowed to sample all the K
points in the chosen region $R_p$ at the same time, one uniformly in each sub-region $R_{p,k}$, still
using of course the same total budget of n points (and not nK). Thus the algorithm is free to choose
K, but once a region $R_p$ is chosen at time t, it cannot moreover choose which point to sample inside
that region; it can only sample a set of points in one shot. The reason to do so is that this will allow
us to estimate the unknown quantities such as the quadratic variation of f on each region, but we
do not want to give the learner too much power. This one-shot restriction is also for clarity purposes,
as otherwise one has to consider technical details and perform nasty computations that in the end
only affect second order terms. The effect of the factor K on the performance bound can be seen in
Section 4. For $P_n$ of minimax order, our result shows that K can be chosen to be a (large) constant.
3.1 Assumptions
In order to derive performance bounds for a learning algorithm that does not know the noise and
the local variance of the function, we now need some assumptions on the data. These are here to
ensure that concentration properties apply and that empirical moments are close to true moments
with high probability, depending on the number of samples. These add to the two other assumptions
on the structure of the histograms (uniform grid partitions) and on the active scheme (that is, we
can choose a bin but only get a random sample uniformly distributed in that bin).
We assume that the noise is exactly sub-Gaussian, meaning that for all $x \in \mathcal{X}$, the variance of $\mathrm{noise}(x)$,
written $\sigma^2(x) < \infty$, satisfies
$$\forall\lambda \in \mathbb{R}^{+}\qquad \log \mathbb{E}\exp\big[\lambda\,\mathrm{noise}(x)\big] \;\le\; \frac{\lambda^2\sigma^2(x)}{2}\,,$$
and we further assume that it satisfies the following slightly stronger second property (that is for
instance exactly verified for a Gaussian variable, looking at the moment generating function):
$$\forall\lambda,\gamma \in \mathbb{R}^{+}\qquad \log \mathbb{E}\exp\big[\lambda\,\mathrm{noise}(x)+\gamma\,\mathrm{noise}(x)^2\big] \;\le\; \frac{\lambda^2\sigma^2(x)}{2\big(1-2\gamma\sigma^2(x)\big)}-\frac{1}{2}\log\big(1-2\gamma\sigma^2(x)\big)\,.$$
The function f is assumed to be $(L,\alpha)$-Hölder, meaning that it satisfies
$$\forall x, x' \in \mathcal{X}\qquad |f(x)-f(x')| \;\le\; L\,\|x-x'\|^{\alpha}\,.$$
Similarly, the function $\sigma^2$ is assumed to be $(M,\beta)$-Hölder, i.e. it satisfies
$$\forall x, x' \in \mathcal{X}\qquad |\sigma^2(x)-\sigma^2(x')| \;\le\; M\,\|x-x'\|^{\beta}\,.$$
We assume that $\mathcal{Y}$ is a convex and compact subset of $\mathbb{R}$, thus w.l.o.g. that it is [0,1], and that it is
known that $\|\sigma^2\|_{\infty}$, which is thus finite, is bounded by the constant 1.
3.2 Empirical estimation of the quadratic approximation error on each cell
We define the sampling distribution $\tilde\mu_p$ in the region $R_p$ for each $p \in \{1,\dots,P_n\}$ as a quasi-uniform
sampling scheme using the uniform distribution over the sub-regions. More precisely, at time $t \le n$,
if we decide to sample in the region $R_p$ according to $\tilde\mu_p$, we sample uniformly in each sub-region
one sample, resulting in a new batch of samples $\{(X_{t,k}, Y_{t,k})\}_{1\le k\le K}$, where $X_{t,k} \sim \mu_{p,k}$. Note that
due to this sampling process, the number of points $T_{p,t}$ sampled in region $R_p$ at time t is always
a multiple of K and that moreover for all $k, k' \in \{1,\dots,K\}$ we have $T_{p,k,t} = T_{p,k',t} = \frac{T_{p,t}}{K}$.
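For illustration, here is a minimal sketch of this quasi-uniform batch sampling for axis-aligned hypercube cells; the grid indexing scheme and the helper name are our assumptions:

```python
import numpy as np

def sample_batch(p_index, grid_per_axis, k_per_axis, d, rng=np.random.default_rng()):
    """Draw one quasi-uniform batch of K = k_per_axis**d points in region R_p (sketch).

    p_index       : multi-index (length d) of the cell on the regular grid
    grid_per_axis : number of cells per axis, so P_n = grid_per_axis**d
    k_per_axis    : sub-divisions per axis inside the cell, so K = k_per_axis**d
    Returns an array of shape (K, d): one point uniform in each sub-region R_{p,k}.
    """
    cell_side = 1.0 / grid_per_axis
    sub_side = cell_side / k_per_axis
    origin = np.asarray(p_index) * cell_side            # lower corner of R_p
    # lower corners of all K sub-regions, as a (K, d) array
    offsets = np.stack(np.meshgrid(*[np.arange(k_per_axis)] * d, indexing="ij"),
                       axis=-1).reshape(-1, d) * sub_side
    corners = origin + offsets
    return corners + rng.uniform(0.0, sub_side, size=corners.shape)
```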
Now this specific sampling is used in order to be able to estimate the variances $\mathcal{V}_{\mu_p}[f]$ and $\mathbb{E}_{\mu_p}[\sigma^2]$,
so that the best proportions $T^{\star}_{p,n}$ can be computed as accurately as possible. Indeed, as explained in
Lemma 1, we have
$$T^{\star}_{p,n} \;\overset{def}{=}\; \frac{\mathcal{V}_{\mu_p}[f(X)]+\mathbb{E}_{\mu_p}[\sigma^2(X)]}{L-\mathcal{V}_{\mu_p}[f(X)]}\,.$$
Variance estimation We now introduce two estimators. The first estimator is written $\hat V_{p,t}$ and is
built in the following way. First, let us introduce the empirical estimate $\hat f_{p,k,t}$ of the mean $f_{p,k} \overset{def}{=} \mathbb{E}_{\mu_{p,k}}[f(X)]$ of f in sub-region $R_{p,k}$. Similarly, to avoid some cumbersome notations, we introduce
$f_p \overset{def}{=} \mathbb{E}_{\mu_p}[f(X)]$ and $v_{p,k} \overset{def}{=} \mathcal{V}_{\mu_{p,k}}[f(X)]$ for the function f, and then $\sigma^2_{p,k} \overset{def}{=} \mathbb{E}_{\mu_{p,k}}[\sigma^2(X)]$
for the variance of the noise $\sigma^2$. We now define the empirical variance estimator to be
$$\hat V_{p,t} \;=\; \frac{1}{K-1}\sum_{k=1}^{K}\big(\hat f_{p,k,t}-\hat m_{p,t}\big)^2\,,$$
that is a biased estimator. Indeed, for a deterministic $T_{p,t}$, it is not difficult to show that we have
$$\mathbb{E}\big[\hat V_{p,t}\big] \;=\; \frac{1}{K-1}\sum_{k=1}^{K}\big(\mathbb{E}_{\mu_{p,k}}[f]-\mathbb{E}_{\mu_p}[f]\big)^2 \;+\; \frac{K}{T_{p,t}}\cdot\frac{1}{K}\sum_{k=1}^{K}\big(\mathcal{V}_{\mu_{p,k}}[f]+\mathbb{E}_{\mu_{p,k}}[\sigma^2]\big)\,.$$
The leading term in this decomposition, that is given by the first sum, is close to $\mathcal{V}_{\mu_p}[f]$ since, by
using the assumption that f is $(L,\alpha)$-Hölder, we have the following inequality
$$\Big|\frac{1}{K-1}\sum_{k=1}^{K}\big(\mathbb{E}_{\mu_{p,k}}[f]-\mathbb{E}_{\mu_p}[f]\big)^2-\mathcal{V}_{\mu_p}[f(X)]\Big| \;\le\; \frac{2L^2 d^{\alpha}}{(KP_n)^{2\alpha/d}}\,,$$
where we also used that the diameter of a sub-region $R_{p,k}$ is given by $\mathrm{diam}(R_{p,k}) = \frac{d^{1/2}}{(KP_n)^{1/d}}$.
Then, the second term also contributes to the bias, essentially due to the fact that $\mathbb{V}[\hat f_{p,k,t}] = \frac{1}{T_{p,k,t}}(v_{p,k}+\sigma^2_{p,k})$ and not $\frac{1}{T_{p,t}}(v_p+\sigma^2_p)$ (with $v_p \overset{def}{=} \mathcal{V}_{\mu_p}[f(X)]$ and $\sigma^2_p \overset{def}{=} \mathbb{E}_{\mu_p}[\sigma^2(X)]$).
In order to correct this term, we now introduce the second estimator $\hat\sigma^2_{p,k,t}$ that estimates the variance
of the outputs in a region $R_{p,k}$, i.e. $\mathcal{V}_{\mu_{p,k},\nu}[Y(X)] = \mathcal{V}_{\mu_{p,k}}[f(X)]+\mathbb{E}_{\mu_{p,k}}[\sigma^2]$. It is defined as
$$\hat\sigma^2_{p,k,t} \;\overset{def}{=}\; \frac{1}{T_{p,k,t}-1}\sum_{i=1}^{t}\Big(Y_i-\frac{1}{T_{p,k,t}}\sum_{j=1}^{t}Y_j\,\mathbb{I}\{X_j\in R_{p,k}\}\Big)^2\,\mathbb{I}\{X_i\in R_{p,k}\}\,.$$
Now, we combine the two previous estimators to form the following estimator
$$\hat Q_{p,t} \;=\; \hat V_{p,t}\;-\;\frac{1}{K}\sum_{k=1}^{K}\Big(\frac{1}{T_{p,k,t}}-\frac{1}{T_{p,t}}\Big)\,\hat\sigma^2_{p,k,t}\,.$$
The following proposition provides a high-probability bound on the difference between $\hat Q_{p,t}$ and
the quantity we want to estimate. We report the detailed proof in [Carpentier and Maillard, 2012].

Proposition 1 By the assumption that f is $(L,\alpha)$-Hölder, the bias of the estimator $\hat Q_{p,t}$, for a
deterministic $T_{p,t}$, is given by
$$\mathbb{E}\big[\hat Q_{p,t}\big]-Q_p(T_{p,t}) \;=\; \frac{1}{K}\sum_{k=1}^{K}\big(\mathbb{E}_{\mu_{p,k}}[f]-\mathbb{E}_{\mu_p}[f]\big)^2-\mathcal{V}_{\mu_p}[f(X)] \;\le\; \frac{2L^2 d^{\alpha}}{(KP_n)^{2\alpha/d}}\,.$$
Moreover, it satisfies that for all $\delta \in [0,1]$, there exists an event of probability higher than $1-\delta$ such
that, on this event, we have
$$\Big|\hat Q_{p,t}-\mathbb{E}\big[\hat Q_{p,t}\big]\Big| \;\le\; \sqrt{8\log(4/\delta)\sum_{k=1}^{K}\frac{\big(\hat\sigma^2_{p,k,t}\big)^2}{(K-1)^2\,T_{p,k,t}^2}} \;+\; o\Big(\frac{1}{K\sqrt{K}}\sum_{k=1}^{K}\frac{\hat\sigma^2_{p,k}}{T_{p,k,t}}\Big)\,.$$
We also state the following Lemma that we are going to use in the analysis, and that takes into
account the randomness of the stopping times $T_{p,k,t}$.

Lemma 2 Let $\{X_{p,k,u}\}_{p\le P,\,k\le K,\,u\le n}$ be samples potentially sampled in region $R_{p,k}$. We introduce
$q_{p,u}$ to be the equivalent of $Q_p(T_{p,t})$ with explicitly fixed value of $T_{p,t} = u$. Let also $\hat q_{p,u}$ be the
estimate of $\mathbb{E}[q_{p,u}]$, that is to say the equivalent of $\hat Q_{p,t}$ but computed with the first u samples in
each region $R_{p,k}$ (i.e. $T_{p,t} = u$). Let us define the event
$$\xi_{n,P,K}(\delta) \;=\; \bigcap_{p\le P}\,\bigcap_{u\le n}\Big\{\,\omega:\ \big|\hat q_{p,u}(\omega)-\mathbb{E}[q_{p,u}]\big| \;\le\; \frac{1}{K-1}\sqrt{\frac{AK\log(4nP/\delta)\,\bar V_{p,t}}{u}} \;+\; \frac{2L^2 d^{\alpha}}{(KP_n)^{2\alpha/d}}\Big\}\,,$$
where $\bar V_{p,t} = \bar V_p(T_{p,t}) \overset{def}{=} \frac{1}{K-1}\sum_{k=1}^{K}\hat\sigma^2_{p,k,t}$ and where $A \ge 4$ is a numerical constant. Then it
holds that
$$\mathbb{P}\big(\xi_{n,P,K}(\delta)\big) \;\ge\; 1-\delta\,.$$
Note that, with the notations of this Lemma, Proposition 1 above is thus about $\hat q_{p,u}$.
3.3 The Online allocation and homogeneous partitioning algorithm for piecewise constant
mean-approximation (OAHPA-pcma)
We are now ready to state the algorithm that we propose for minimizing the quadratic error of approximation of f. The algorithm is described in Algorithm 1. Although it looks similar, this algorithm is
quite different from a normal UCB algorithm, since $\hat Q_{p,t}$ decreases in expectation with $T_{p,t}$. Indeed,
its expectation is close to
$$\mathcal{V}_{\mu_p}[f] \;+\; \frac{1}{K\,T_{p,t}}\sum_{k=1}^{K}\big(\mathcal{V}_{\mu_{p,k}}[f]+\mathbb{E}_{\mu_{p,k}}[\sigma^2]\big)\,.$$
Algorithm 1 OAHPA-pcma.
1: Input: $A$, $L$, $\alpha$, horizon $n$; partition $\{R_p\}_{p\le P}$, with sub-partitions $\{R_{p,k}\}_{k\le K}$.
2: Initialization: Sample K points in every sub-region $\{R_{p,k}\}_{p\le P,\,k\le K}$.
3: for $t = K^2 P + 1$; $t \le n$; $t = t + K$ do
4:    Compute, for all p, $\hat Q_{p,t}$.
5:    Compute, for all p, $B_{p,t} = \hat Q_{p,t} + \frac{1}{K-1}\sqrt{\frac{AK\log(4nP/\delta)\,\bar V_{p,t}}{T_{p,t}}} + \frac{2L^2 d^{\alpha}}{(KP_n)^{2\alpha/d}}$.
6:    Select the region $p_t = \operatorname{argmax}_{1\le p\le P_n} B_{p,t}$ where to sample.
7:    Sample K samples in region $R_{p_t}$, one per sub-region $R_{p_t,k}$, according to $\tilde\mu_{p_t,k}$.
8: end for
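Read procedurally, the algorithm is a UCB-style loop over regions. The sketch below is our reading of Algorithm 1 and reuses the hypothetical helpers sketched earlier (sample_batch, q_hat), so treat it as illustrative rather than as the authors' implementation:

```python
import numpy as np

def oahpa_pcma(sample_region, P, K, n, A, L, alpha, d, delta):
    """Sketch of the OAHPA-pcma loop (our reading of Algorithm 1, not official code).

    sample_region(p) must return K fresh outputs, one drawn uniformly in each
    sub-region R_{p,k} of region R_p (e.g. via a sampler like sample_batch above).
    """
    ys = [[list() for _ in range(K)] for _ in range(P)]
    for p in range(P):                      # initialization: K points per sub-region
        for _ in range(K):
            for k, y in enumerate(sample_region(p)):
                ys[p][k].append(y)
    t = K * K * P
    while t + K <= n:
        B = np.empty(P)
        for p in range(P):
            T_p = sum(len(s) for s in ys[p])
            Q = q_hat([np.array(s) for s in ys[p]])            # \hat Q_{p,t}
            Vbar = np.mean([np.var(s, ddof=1) for s in ys[p]]) * K / (K - 1)
            bonus = np.sqrt(A * K * np.log(4 * n * P / delta) * Vbar / T_p) / (K - 1)
            B[p] = Q + bonus + 2 * L**2 * d**alpha / (K * P) ** (2 * alpha / d)
        p_t = int(np.argmax(B))                                # greedy on the UCB
        for k, y in enumerate(sample_region(p_t)):
            ys[p_t][k].append(y)
        t += K
    return ys
```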
4 Performance of the allocation strategy and discussion
Here is the main result of the paper; see the full version [Carpentier and Maillard, 2012] for the
proof. We remind that the objective is to minimize, for an algorithm A, the pseudo-loss $\mathcal{L}_n(\mathcal{A})$.
Theorem 1 (Main result) Let $\rho \overset{def}{=} \frac{\max_p T^{\star}_{p,n}}{\min_p T^{\star}_{p,n}}$ be the distortion factor of the optimal allocation strategy, and let $\varepsilon > 0$. Then with the choice of the number of regions $P_n = n^{\frac{d}{2\alpha+d}}\,\varepsilon^{2+\frac{d}{2\alpha}}$, and of the
number of sub-regions $K = C^{\frac{2d}{4\alpha+d}}\,\varepsilon^{-2-\frac{d}{\alpha}}$, where $C \overset{def}{=} \frac{8L^2\sqrt{d}\,Ad}{1-\varepsilon}$, the pseudo-loss of the OAHPA-pcma algorithm satisfies, under the assumptions of Section 3.1 and on an event of probability higher
than $1-\delta$,
$$\mathcal{L}_n(\mathcal{A}) \;\le\; \Big(1+\varepsilon\,\rho\,C'\sqrt{\log(1/\delta)}\Big)\,\mathcal{L}_n(\mathcal{A}^{\star}_n)\;+\;o\big(n^{-\frac{2\alpha}{2\alpha+d}}\big)\,,$$
for some numerical constant $C'$ not depending on n, where $\mathcal{A}^{\star}_n$ is the oracle of Lemma 1.
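As a rough aid, the parameter choices of Theorem 1 can be instantiated as follows; the constant C is problem-dependent and the rounding is our own convention (a sketch, not a prescription):

```python
def minimax_parameters(n, d, alpha, eps, C):
    """Number of regions P_n and sub-regions K suggested by Theorem 1 (sketch).

    C is the problem-dependent constant of the theorem (treated as given here);
    the returned values are rounded to usable integers.
    """
    P_n = max(1, round(n ** (d / (2 * alpha + d)) * eps ** (2 + d / (2 * alpha))))
    K = max(2, round(C ** (2 * d / (4 * alpha + d)) * eps ** (-2 - d / alpha)))
    return P_n, K
```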
Minimax-optimal partitioning and $\varepsilon$-adaptive performance Theorem 1 provides a high-probability bound on the performance of the OAHPA-pcma allocation strategy. It shows that this performance is competitive with that of an optimal (i.e. adaptive to the function f, see Lemma 1) allocation
$\mathcal{A}^{\star}$ on a partition with a number of cells $P_n$ chosen to be of minimax order $n^{\frac{d}{2\alpha+d}}$ for the class of
$\alpha$-Hölder functions. In particular, since $\mathcal{L}_n(\mathcal{A}^{\star}_n) = O\big(n^{-\frac{2\alpha}{d+2\alpha}}\big)$ on that class, we recover the same
minimax order as what is obtained in the batch learning setting, when using for instance wavelets
or kernel estimates (see e.g. Stone [1980], Ibragimov and Hasminski [1981]). But moreover, due to
the adaptivity of $\mathcal{A}^{\star}_n$ to the function itself, this procedure is also $\varepsilon$-adaptive to the function and not
only minimax-optimal on the class, on that partition (see Section 2.2). Naturally, the performance of
the method increases, in the same way as for any classical functional estimation method, when the
smoothness of the function increases. Similarly, in agreement with the classical curse of dimension,
the higher the dimension of the domain, the less efficient the method.
Limitations In this work, we assume that the smoothness $\alpha$ of the function is available to the
learner, which enables her to calibrate $P_n$ properly. Now it makes sense to combine the OAHPA-pcma procedure with existing methods that enable to estimate this smoothness online (under a
slightly stronger assumption than Hölder, such as Hölder functions that attain their exponents,
see Giné and Nickl [2010]). It is thus interesting, when no preliminary knowledge on the smoothness
of f is available, to spend some of the initial budget in order to estimate $\alpha$.
We have seen that the OAHPA-pcma procedure, although very simple, manages to get minimax-optimal results. Now the downside of the simplicity of the OAHPA-pcma strategy is two-fold.
The first limitation is that the factor $(1+\varepsilon\rho C'\sqrt{\log(1/\delta)}) = (1+O(\varepsilon))$ appearing in the bound
before $\mathcal{L}_n(\mathcal{A}^{\star})$ is not 1, but higher than 1. Of course it is generally difficult to get a constant 1 in
the batch setting (see Arlot [2007]), and similarly this is a difficult task in our online setting too: if
$\varepsilon$ is chosen to be small, then the error with respect to the optimal allocation is small. However, since
$P_n$ is expressed as an increasing function of $\varepsilon$, this implies that the minimax bound on the loss for
partition $\mathcal{P}$ increases also with $\varepsilon$. That said, in the view of the work on active learning in multi-armed
bandits that we extend, we would still prefer to get the optimal constant 1.
The second limitation is more problematic: since K is chosen irrespective of the region $R_p$, this
causes the presence of the factor $\rho$. Thus the algorithm will essentially no longer enjoy near-optimal
performance guarantees when the optimal allocation strategy is highly non-homogeneous.
Conclusion and future work In this paper, we considered online regression with histograms in
an active setting (we select in which bin to sample), and when we can choose the histogram in a
class of homogeneous histograms. Since the (unknown) noise is heteroscedastic and we compete
not only with the minimax allocation oracle on $\alpha$-Hölder functions but with the adaptive oracle
that uses a minimax-optimal histogram and allocates samples adaptively to the target function, this
is an extremely challenging (and very practical) setting. Our contribution can be seen as a non-trivial extension of the setting of active learning for multi-armed bandits to the case when each arm
corresponds to one continuous region of a sampling space, as opposed to a singleton, which can also
be seen as a problem of non-parametric function approximation. This new setting offers interesting
challenges: we provided a simple procedure, based on the computation of upper confidence bounds
of the estimation of the local quadratic error of approximation, and provided a performance analysis
that shows that OAHPA-pcma is first-order $\varepsilon$-optimal with respect to the function, for a partition
chosen to be minimax-optimal on the class of $\alpha$-Hölder functions. However, this simplicity also
has a drawback if one is interested in building an exactly first-order optimal procedure, and going
beyond these limitations is definitely not trivial: a more optimal but much more complex algorithm
would indeed need to tune a different factor $K_p$ in each cell in an online way, i.e. define some $K_{p,t}$
that evolves with time, and redefine sub-regions accordingly. Now, the analysis of the OAHPA-pcma
already makes use of powerful tools such as empirical-Bernstein bounds for variance estimation (and
not only for mean estimation), which make it non-trivial; in order to handle possibly evolving sub-regions and deal with the progressive refinement of the regions, we would need an even more intricate
analysis, due to the fact that we are online and active. This interesting next step is postponed to
future work.
Acknowledgements This research was partially supported by the Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework
Programme (FP7/2007-2013) under grant agreements no 270327 (CompLACS) and no 216886 (PASCAL2).
References
András Antos, Varun Grover, and Csaba Szepesvári. Active learning in heteroscedastic noise. Theoretical Computer Science, 411(29-30):2712-2728, 2010.
Sylvain Arlot. Rééchantillonnage et sélection de modèles. PhD thesis, Université Paris Sud - Paris XI, 2007.
A. Baranes and P.-Y. Oudeyer. R-IAC: Robust Intrinsically Motivated Exploration and Active Learning. IEEE Transactions on Autonomous Mental Development, 1(3):155-169, October 2009.
D. Bosq and J.P. Lecoutre. Théorie de l'estimation fonctionnelle, volume 21. Economica, 1987.
Alexandra Carpentier and Odalric-Ambrym Maillard. Online allocation and homogeneous partitioning for piecewise constant mean-approximation. HAL, 2012. URL http://hal.archives-ouvertes.fr/hal-00742893.
Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, and Peter Auer. Upper-confidence-bound algorithms for active learning in multi-armed bandits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 189-203. Springer Berlin / Heidelberg, 2011.
E. Giné and R. Nickl. Confidence bands in density estimation. The Annals of Statistics, 38(2):1122-1170, 2010.
L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A distribution-free theory of nonparametric regression. Springer Verlag, 2002.
I. Ibragimov and R. Hasminski. Statistical estimation: Asymptotic theory. 1981.
M. Rosenblatt. Stochastic curve estimation, volume 3. Institute of Mathematical Statistics, 1991.
J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.
C.J. Stone. Optimal rates of convergence for nonparametric estimators. The Annals of Statistics, pages 1348-1360, 1980.
J.W. Tukey. Non-parametric estimation II. Statistically equivalent blocks and tolerance regions - the continuous case. The Annals of Mathematical Statistics, 18(4):529-539, 1947.
4,151 | 4,758 |
Natural Images, Gaussian Mixtures and Dead Leaves
Daniel Zoran
Interdisciplinary Center for Neural Computation
Hebrew University of Jerusalem
Israel
http://www.cs.huji.ac.il/~daniez
Yair Weiss
School of Computer Science and Engineering
Hebrew University of Jerusalem
Israel
yweiss@cs.huji.ac.il
Abstract
Simple Gaussian Mixture Models (GMMs) learned from pixels of natural image
patches have been recently shown to be surprisingly strong performers in modeling
the statistics of natural images. Here we provide an in depth analysis of this simple
yet rich model. We show that such a GMM model is able to compete with even
the most successful models of natural images in log likelihood scores, denoising
performance and sample quality. We provide an analysis of what such a model
learns from natural images as a function of number of mixture components including covariance structure, contrast variation and intricate structures such as
textures, boundaries and more. Finally, we show that the salient properties of the
GMM learned from natural images can be derived from a simplified Dead Leaves
model which explicitly models occlusion, explaining its surprising success relative
to other models.
1 GMMs and natural image statistics models
Many models for the statistics of natural image patches have been suggested in recent years. Finding
good models for natural images is important to many different research areas - computer vision,
biological vision and neuroscience among others. Recently, there has been a growing interest in
comparing different aspects of models for natural images such as log-likelihood and multi-information
reduction performance, and much progress has been achieved [1,2, 3,4,5, 6]. Out of these results
there is one which is particularly interesting: simple, unconstrained Gaussian Mixture Models
(GMMs) with a relatively small number of mixture components learned from image patches are
extraordinarily good in modeling image statistics [6, 4]. This is a surprising result due to the simplicity
of GMMs and their ubiquity. Another surprising aspect of this result is that many of the current
models may be thought of as GMMs with an exponential or infinite number of components, having
different constraints on the covariance structure of the mixture components.
In this work we study the nature of GMMs learned from natural image patches. We start with a
thorough comparison to some popular and cutting edge image models. We show that indeed, GMMs
are excellent performers in modeling natural image patches. We then analyze what properties of
natural images these GMMs capture, their dependence on the number of components in the mixture
and their relation to the structure of the world around us. Finally, we show that the learned GMM
suggests a strong connection between natural image statistics and a simple variant of the dead
leaves model [7, 8] , explicitly modeling occlusions and explaining some of the success of GMMs in
modeling natural images.
[Figure 1: two bar plots comparing the models. Panels: (a) Log Likelihood, (b) Denoising; bars for Ind. G, PCA G, PCA L, ICA, 2xOCSC, GSM, MoGSM, KL and GMM.]
Figure 1: (a) Log likelihood comparison - note how the GMM is able to outperform (or equal) all
other models despite its simplicity. (b) Denoising performance comparison - the GMM outperforms
all other models here as well, and denoising performance is more or less consistent with likelihood
performance. See text for more details .
2 Natural image statistics models - a comparison
As a motivation for this work, we start by rigorously comparing current models for natural images with
GMMs. While some comparisons have been reported before with a limited number of components in
the GMM [6] , we want to compare to state-of-the-art models also varying the number of components
systematically.
Each model was trained on 8 x 8 or 16 x 16 patches randomly sampled from the Berkeley Segmentation
Database training images (a data set of millions of patches). The DC component of all patches
was removed, and we discard it in all calculations . In all experiments, evaluation was done on the
same, unseen test set of a 1000 patches sampled from the Berkeley test images. We removed patches
having standard deviation below 0.002 (intensity values are between 0 and 1) as these are totally flat
patches due to saturation and contain no structure (only 8 patches were removed from the test set).
We do not perform any further preprocessing. The models we compare are: White Gaussian Noise
(Ind. G), PCA/Gaussian (PCA G), PCA/Laplace (PCA L), ICA (ICA) [9,10,11], 2x overcomplete
sparse coding (2xOCSC) [9], Gaussian Scale Mixture (GSM), Mixture of Gaussian Scale Mixtures
(MoGSM) [6], Karklin and Lewicki (KL) [12] and the GMM (with 200 components).
We compare the models using three criteria - log likelihood on unseen data, denoising results on
unseen data and visual quality of samples from each model. The complete details of training, testing
and comparisons may be found in the supplementary material of this paper - we encourage the reader
to read these details. All models and code are available online at: www.cs.huji.ac.il/~daniez
Log likelihood The first experiment we conduct is a log likelihood comparison. For most of the
models above, a closed form calculation of the likelihood is possible, but for the 2 x OCSC and KL
models, we resort to Hamiltonian Importance Sampling (HAIS) [13]. HAIS allows us to estimate
likelihoods for these models accurately, and we have verified that the approximation given by HAIS
is relatively accurate in cases where exact calculations are feasible (see supplementary material for
details) . The results of the experiment may be seen in Figure 1a. There are several interesting results
in this figure. First, the important thing to note here is that the GMM outperforms all of the models
and is similar in performance to Karklin and Lewicki. In [6] a GMM with far fewer components (2-5)
has been compared to some other models (notably Restricted Boltzmann Machines, which the GMM
outperforms, and MoGSMs, which slightly outperform the GMMs in this work). Second, ICA with
its learned Gabor-like filters [10] gives a very minor improvement when compared to PCA filters
with the same marginals. This has been noted before in [1]. Finally, overcomplete sparse coding is
actually a bit worse than complete sparse coding - while this is counter intuitive, this result has been
reported before as well [14, 2].
Denoising We compare the denoising performance of the different models. We added independent
white Gaussian noise with known standard deviation σ_n = 25/255 to each of the patches in the
test set x. We then calculate the MAP estimate x̂ of each model given the noisy patch. This can
be done in closed form for some of the models, and for those models where the MAP estimate
does not have a closed form, we resort to numerical approximation (see supplementary material
for more details). The performance of each model was measured using Peak Signal to Noise Ratio
(PSNR): PSNR = 10 log₁₀(1/||x − x̂||²). Results can be seen in Figure 1b. Again, the GMM performs
extraordinarily well, outperforming all other models. As can be seen, results are consistent with the
log likelihood experiment - models with better likelihood tend to perform better in denoising [4].
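For reference, the PSNR above is straightforward to compute; a minimal sketch, assuming intensities in [0, 1] and taking the squared error as a mean over pixels (the usual convention):

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR for patches with intensities in [0, 1] (peak value 1).

    Follows the paper's definition PSNR = 10 log10(1 / ||x - x_hat||^2),
    with the squared error computed here as the mean over pixels.
    """
    mse = np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2)
    return 10.0 * np.log10(1.0 / mse)
```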
Sample Quality As opposed to log likelihood and denoising, generating samples from all the
models compared here is easy. While it is more of a subjective measure, the visual quality of samples
may be an indicator to how well interesting structures are captured by a model. Figure 2 depicts
16 x 16 samples from a subset of the models compared here. Note that the GMM samples capture a
lot of the structure of natural images such as edges and textures, visible on the far right of the figure.
The Karklin and Lewicki model produces rather structured patches as well. GSM seems to capture
the contrast variation of images, but the patches themselves have very little structure (similar results
obtained with MoGSM, not shown). PCA lacks any meaningful structure, other than 1/ f power
spectrum.
As can be seen in the results we have just presented, the GMM is a very strong performer in modeling
natural image patches. While we are not claiming Gaussian Mixtures are the best models for natural
images, we do think this is an interesting result, and as we shall see later, it relates intimately to the
structure of natural images.
3 Analysis of results
So far we have seen that despite their simplicity, GMMs are very capable models for natural images.
We now ask - what do these models learn about natural images, and how does this affect their
performance?
3.1 How many mixture components do we need?
While we try to learn our GMMs with as few a priori assumptions as possible, we do need to set
one important parameter - the number of components in the mixture. As noted above, many of the
current models of natural images can be written in the form of GMMs with an exponential or infinite
number of components and different kinds of constraints on the covariance structure. Given this,
it is quite surprising that a GMM with a relatively small number of components (as above) is able
to compete with these models. Here we again evaluate the GMM as in the previous section but
now systematically vary the number of components and the size of the image patch. Results for the
16 x 16 model are shown in figure 3, see supplementary material for other patch sizes.
As can be seen, moving from one component to two already gives a tremendous boost in performance,
already outperforming ICA but still not enough to outperform GSM, which is outperformed at around
16 components. As we add more and more components to the mixture, performance increases, but
seems to be converging to some upper bound (which is not reached here, see supplementary material
for smaller patch sizes where it is reached). This shows that a small number of components is indeed
[Figure 2 panels, left to right: PCA G, GSM, KL, GMM, Natural Images.]
Figure 2: Samples generated from some of the models compared in this work. PCA G produces no
structure other than the 1/f power spectrum. GSM captures the contrast variation of image patches nicely,
but the patches themselves have no structure. The GMM and KL models produce quite structured
patches - compare with the natural image samples on the right.
[Figure 3: curves of (a) Log Likelihood and (b) Denoising performance as a function of log2(Num Components), with horizontal reference lines for the other models.]
Figure 3: (a) Log likelihood of GMMs trained on natural image patches, as a function of the number
of components in the mixture. Models of 16 x 16 were trained on a training set. Likelihood was
calculated on an unseen test set of patches. Already at 2 components the GMM outperforms ICA and
at 16 components it outperforms the 16 component GSM model. Likelihood continues to improve
as we add more components. See supplementary material for other patch sizes. (b) Denoising
performance as a function of number of components - performance behaves qualitatively the same as
likelihood.
sufficient to achieve good performance and begs the questions - what do the first few components
learn that gives this boost in performance? What happens when we add more components to the
mixture, further improving performance? Before we answer these questions, we will shortly discuss
what are the properties of GMMs which we need to examine to gain this understanding.
3.2 GMMs as generative models
In order to gain a better understanding of GMMs it will be useful to think of them from a generative
perspective. The process of generating a sample from a GMM is a two step procedure; a non-linear
one, and a linear one. We pick one of the mixture components - the chances for the k-th component
to be picked are its mixing weight π_k. Having picked the k-th component, we now sample N
independent Gaussian variables with zero mean and unit variance, where N is the number of pixels
in a patch (minus one for the DC component). We arrange these coefficients into a vector z. From
the covariance matrix of the k-th component we calculate the eigenvector matrix V_k and eigenvalue
matrix D_k. Then, the new sample x is:

x = V_k D_k^{1/2} z

This tells us that we can think of each covariance matrix in the mixture as a dictionary with N
elements. The dictionary elements are the "directions" each eigenvector in patch space points to, and
each of those is scaled by the corresponding eigenvalue. These are linearly mixed to form our patch.
In other words, to gain a better understanding of what each mixture component is capturing, we need
to look at the eigenvectors and eigenvalues of its corresponding covariance matrix.
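The two-step generative procedure translates directly into code; a minimal sketch (the eigendecomposition route shown here is one standard way to realize x = V_k D_k^{1/2} z):

```python
import numpy as np

def sample_gmm_patch(pis, covs, rng=np.random.default_rng()):
    """Draw one zero-mean patch from a GMM, following the two-step view above.

    pis  : (M,) mixing weights, summing to one
    covs : (M, N, N) covariance matrices (DC component already removed)
    A sketch of the generative process, not the authors' released code.
    """
    k = rng.choice(len(pis), p=pis)                # non-linear step: pick a component
    w, V = np.linalg.eigh(covs[k])                 # eigenvalues w, eigenvectors V
    z = rng.standard_normal(len(w))                # N iid standard normal coefficients
    return V @ (np.sqrt(np.clip(w, 0, None)) * z)  # linear step: x = V D^{1/2} z
```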
3.3 Contrast
Figure 4 shows the eigenvectors and eigenvalues of the covariance matrices of a 2 component mixture
- as can be seen, the eigenvectors of both mixture components are very similar and they differ only
in their eigenvalue spectrum. The eigenvalue spectrum, on the other hand, is very similar in shape
but differs by a multiplicative constant (note the log scale). This behavior remains the same as we
add more and more components to the mixture - up to around 8-10 components (depending on the
patch size, not shown here) we get more components with similar eigenvector structure but different
eigenvalue distributions.
Modeling a patch as a mixture with the same eigenvectors but eigenvalues differing by a scalar
multiplier is in fact equivalent to saying that each patch is the product of a scalar z and a multivariate
Gaussian. This is exactly the Gaussian Scale Mixture model we compared to earlier! As can be
seen, 8- 10 components are already enough to equal the performance of the 16 component GSM.
This means that what the first few components of the mixture capture is the contrast variability of
natural image patches. This also means that factorial models like ICA have no hope of capturing this,
as contrast is a global scaling of all coefficients together (something which is highly unlikely under
factorial models).
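One way to check this GSM-like structure in a learned mixture is to compare components pairwise for shared eigenvectors and a near-constant eigenvalue ratio; a rough diagnostic, with thresholds chosen arbitrarily for illustration:

```python
import numpy as np

def gsm_like(cov_a, cov_b, tol=0.1):
    """Heuristic check whether two components differ mainly by a contrast scalar.

    Returns (aligned, ratio_spread): 'aligned' is True when the 10 leading
    eigenvectors of the two covariances roughly coincide (assumes N >= 10), and
    ratio_spread measures how constant the eigenvalue ratio is (0 = exact scalar).
    """
    wa, Va = np.linalg.eigh(cov_a)
    wb, Vb = np.linalg.eigh(cov_b)
    # overlap of the 10 leading eigenvectors (up to sign); 1.0 means identical
    overlap = np.abs(np.diag(Va[:, -10:].T @ Vb[:, -10:])).mean()
    ratio = wa[-10:] / wb[-10:]                     # assumes wb leading values > 0
    spread = ratio.std() / ratio.mean()
    return overlap > 1 - tol, spread
```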
[Figure 4: eigenvector grids and eigenvalue spectra (log scale, by index) for two components with mixing weights π₁ = 0.5611 and π₂ = 0.4389.]
Figure 4: Eigenvectors and eigenvalues of covariance matrices in a 2 component GMM trained on
natural images. Eigenvectors are sorted according to decreasing eigenvalue order, top left is the
largest eigenvalue. Note that the two components have approximately the same eigenvectors (up to
sign, and both resembling the Fourier basis) but different eigenvalue spectra. The eigenvalues mostly
differ by a scalar multiplication (note the log scale), hinting that this is, in fact, approximately a GSM
(see text for details).
3.4 Textures and boundaries
We have seen that the first components in the GMM capture the contrast variation of natural images,
but as we saw in Figure 3, likelihood continues to improve as we add more components, so we ask:
what do these extra components capture?
As we add more components to the mixture, we start revealing more specialized components which
capture different properties of natural images. Sorting the components by their mixing weights (where
the most likely ones are first), we observe that the first few tens of components are predominantly
Fourier like components, similar to what we have seen thus far, with varying eigenvalue spectra.
These capture textures at different scales and orientations. Figure 5 depicts two of these texture
components - note how their eigenvector structure is similar, but samples sampled from each of them
reveal that they capture different textures due to different eigenvalue spectra.
A more interesting family of components can be found in the mixture as we look into more rare
components. These components model boundaries of objects or textures - their eigenvectors are
structured such that most of the variability is on one side of an edge crossing the patch. These edges
come at different orientations, shifts and contrasts. Figure 5 depicts some of these components
at different orientations, along with two flat texture components for comparison. As can be seen,
we obtain a Fourier like structure which is concentrated on one side of the patch. Sampling from
the Gaussian associated with each mixture component (bottom row) reveals what each component
actually captures - patches with different textures on each side of an edge.
To see how these components relate to actual structure in natural images we perform the following
experiment. We take an unseen natural image, and for each patch in the image we calculate the most
likely component from the learned mixture. Figure 6 depicts those patches assigned to each of the
five components in Figure 5, where we show only non-overlapping patches for clarity (there are
many more patches assigned to each component in the image). The colors correspond to each of
the components in Figure 5. Note how the boundary components capture different orientations, and
prefer mostly borders with a specific ordering (top to bottom edge, and not vice versa for example),
while texture components tend to stay within object boundaries. The sources for these phenomena
will be discussed in the next section.
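The assignment experiment amounts to taking, for each patch, the component with the highest posterior; a minimal sketch (zero-mean components with DC removed, as in the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

def most_likely_component(patch, pis, covs):
    """Index of the mixture component with highest posterior for one patch.

    patch : flattened, DC-removed patch of length N
    pis   : (M,) mixing weights;  covs : (M, N, N) covariances
    """
    zero_mean = np.zeros(len(patch))
    scores = [np.log(pi) + multivariate_normal.logpdf(patch, mean=zero_mean, cov=c)
              for pi, c in zip(pis, covs)]
    return int(np.argmax(scores))
```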
4 The "mini" dead leaves model
4.1 Dead leaves models
We now show that many of the properties of natural scenes that were captured by the GMM model
can be derived from a variant of the dead leaves model [15]. In the original dead leaves model, two
dimensional textured surfaces (which are sometimes called "objects" or "leaves") are sampled from a
shape and size distribution and then placed on the image plane at random positions, occluding one
another to produce an image. With a good choice of parameters, such a model creates images which
Figure 5: Leading eigenvectors (top row) and samples (bottom row) from 5 different components
from a 16 x 16 GMM. From left to right: components 12 and 23, having a similar Fourier-like
eigenvector structure but different eigenvalue spectra, visible in the different textures generated from
each component. Three different "boundary"-like components: note how the eigenvector structure has
a Fourier-like structure which is concentrated on only one side of the patch, depicting an edge structure.
These come in different orientations, shifts and contrasts in the mixture. The color markings are in
reference to Figure 6.
share many properties with natural images such as scale invariance, heavy tailed filter responses and
bow-tie distributions for conditional pair-wise filter responses [16, 17, 8]. A recent work by Pitkow
[8] provides an interesting review and analysis of these properties.
4.2 Mini dead leaves
We propose here a simple model derived from the dead leaves model which we call the "Mini Dead
Leaves" model. This is a patch based version of the dead leaves model, and can be seen as an
approximation of what happens when sampling small patches from an image produced by the dead
leaves model.
In mini dead leaves we generate an image patch in the following manner: for each patch we randomly
decide if this patch would be a "flat" patch or an "edge" patch. This is done by flipping a coin with
probability p. Flat patches are then produced by sampling a texture from a given texture process. In
this case we use a multidimensional Gaussian with some stationary texture covariance matrix which is
multiplied by a scalar contrast variable. We then add to the texture a random scalar mean value, such
that the final patch x is of the form: x = μ + z·t, where μ ∼ N(0,1) is a scalar, t ∼ N(0,Σ) is a
Figure 6: Components assignment on natural images taken from the Berkeley test images. For each
patch in the image the most likely component from the mixture was calculated - presented here are
patches which were assigned to one of the components in Figure 5. Assignments are much denser
than presented here, but we show only non-overlapping patches for clarity. Color codes correspond
to the colors in Figure 5. Note how different components capture different structures in the image.
See text and Figure 5 for more details.
[Figure 7 panels (a)-(d); see caption below.]
Figure 7: (a) The mini dead leaves model. Patches are either "flat" or "edge" patches. Flat patches
are sampled from a multivariate Gaussian texture which is scaled by a contrast scalar, and a mean value
is added to it to form the patch. Edge patches are created by sampling two flat patches and an occlusion
mask, and setting the pixels on each side of the mask to come from a different flat patch. See text
for full details. (b) Samples generated from the mini dead leaves model with their DC removed. (c)
Leading eigenvectors of an edge component from a mini dead leaves model. (d) Leading eigenvectors
of a component from the GMM trained on natural images - note how similar the structure is to the
mini dead leaves analytical result. See text for details.
vector and the scalar z is sampled from a discrete set of variables z_k with a corresponding probability
π_k. This results in a GSM texture to which we add a random mean (DC) value. In all experiments
here, we use a GSM trained on natural images.
Edge patches are generated by sampling two independent flat patches from the texture process, f
and g, and then generating an occlusion mask to combine the two. We use a simple occlusion mask
generation process here: we choose a random angle θ and a random distance r measured from the
center of the patch, where both θ and r may be quantized - this defines the location of a straight
edge on the patch. Every pixel on one side of the edge is assumed to come from the same object, and
pixels from different sides of the patch come from different objects. We label all pixels belonging
to one object by L_1 and to the other object by L_2. We then generate the patch by taking x_i = f_i for
all pixels i ∈ L_1 and x_i = g_i for all i ∈ L_2. This results in a patch with two textured areas, one with
a mean value μ_1 and the other with μ_2. Figure 7a depicts the generative process for both kinds of
patches and Figure 7b depicts samples from the model.
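The generative process above is simple enough to sketch end to end; everything below follows the text where stated, while the ranges of θ and r and the mask parameterization are our own illustrative choices:

```python
import numpy as np

def mini_dead_leaves_patch(Sigma, z_vals, z_probs, p_flat=0.5,
                           size=8, rng=np.random.default_rng()):
    """Generate one mini dead leaves patch (a sketch of the process in the text).

    Sigma   : (N, N) stationary texture covariance, N = size*size
    z_vals, z_probs : discrete contrast scalars z_k and their probabilities pi_k
    p_flat  : probability of a 'flat' patch (the coin flip)
    """
    def flat():
        z = rng.choice(z_vals, p=z_probs)                      # contrast scalar
        t = rng.multivariate_normal(np.zeros(len(Sigma)), Sigma)
        return (rng.standard_normal() + z * t).reshape(size, size)  # mu + z*t

    if rng.random() < p_flat:
        return flat()
    f, g = flat(), flat()                                      # two independent textures
    theta, r = rng.uniform(0, np.pi), rng.uniform(-size / 4, size / 4)
    yy, xx = np.mgrid[0:size, 0:size] - (size - 1) / 2.0       # centered coordinates
    mask = (np.cos(theta) * xx + np.sin(theta) * yy) > r       # straight-edge occlusion
    return np.where(mask, f, g)
```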
4.3 Gaussian mixtures and dead leaves
It can be easily seen that the mini dead leaves model is, in fact, a GMM. For each configuration of
hidden variables (denoting whether the patch is "flat" or "edge", the scalar multiplier z and, if it is
an edge patch, the second scalar multiplier z_2, r and θ) we have a Gaussian for which we know the
covariance matrix exactly. Together, all configurations form a GMM - the interesting thing here is
how the structure of the covariance matrix given the hidden variable relates to natural images.
For flat patches, the covariance is trivial - it is merely the covariance of the stationary texture process Σ
multiplied by the corresponding contrast scalar z. Since we require the texture to be stationary, its
eigenvectors are the Fourier basis vectors [18] (up to boundary effects), much like the ones visible in
the first two components in Figure 5.
For edge patches, given the hidden variable we know which pixel belongs to which "object" in the
patch, that is, we know the shape of the occlusion mask exactly. If i and j are two pixels in different
objects, we know they will be independent, and as such uncorrelated, resulting in zero entries in the
covariance matrix. Thus, if we arrange the pixels by their object assignment, the eigenvectors of such
a covariance matrix would be of the form:

u = (v, 0)^T or u = (0, v)^T

where v is an eigenvector of the stationary (within-object) covariance and the rest of the entries are
zeros; thus eigenvectors of the covariance will be zero on one side of the occlusion mask and Fourier-like on the other side. Figure 7c depicts the eigenvectors of such an edge component covariance - note
the similar structure to Figures 7d and 5. This block structure is a common structure in the GMM
learned from natural images, showing that indeed such a dead leaves model is consistent with what
we find in GMMs learned on natural images.
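The block structure of the edge-component covariance can be written down directly from the mask; a small sketch:

```python
import numpy as np

def edge_component_covariance(Sigma, mask):
    """Block covariance of an edge component, given the occlusion mask (sketch).

    Sigma : (N, N) stationary within-object covariance
    mask  : boolean array of length N, True for pixels of object 1
    Pixels in different objects are independent, so cross-object entries are zero.
    """
    mask = np.asarray(mask, bool)
    same_object = np.equal.outer(mask, mask)   # True where pixels i, j share an object
    return np.where(same_object, Sigma, 0.0)
```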
[Figure 8 panels: (a) Log Likelihood Comparison, (b) Mini Dead Leaves - ICA, (c) Natural Images - ICA.]
Figure 8: (a) Log likelihood comparison with mini dead leaves data. We train a GMM with a varying
number of components from mini dead leaves samples, and test its likelihood on a test set. We
compare to a PCA, ICA and a GSM model, all trained on mini dead leaves samples - as can be seen,
the GMM outperforms these considerably. Both PCA and ICA seek linear transformations, but since
the underlying generative process is non-linear (see Figure 7a), they fail. The GSM captures the
contrast variation of the data, but does not capture occlusions, which are an important part of this
model. (b) and (c) ICA filters learned on mini dead leaves and natural image patches respectively,
note the high similarity.
4.4 From mini dead leaves to natural images
We repeat the log likelihood experiment from sections 2 and 3, comparing PCA, ICA and GSM
models to GMMs. This time, however, both the training set and test set are generated from the mini
dead leaves model. Results can be seen in Figure 8a. Both ICA and PCA do the best job that they
can in terms of finding linear projections that decorrelate the data (or make it as sparse as possible).
But because the true generative process for the mini dead leaves is not a linear transformation of
IID variables, neither of these does a very good job in terms of log likelihood. Interestingly, ICA
filters learned on mini dead leaves samples are astonishingly similar to those obtained when trained on
natural images - see Figure 8b and 8c. The GSM model can capture the contrast variation of the data
easily, but not the structure due to occlusion. A GMM with enough components, on the other hand, is
capable of explicitly modeling contrast and occlusion using covariance functions such as in Figure 7c,
and thus gives much better log likelihood to the dead leaves data. This exact same pattern of results
can be seen in natural image patches (Figure 2), suggesting that the main reason for the excellent
performance of GMMs on natural image patches is its ability to model both contrast and occlusions.
5 Discussion
In this paper we have provided some additional evidence for the surprising success of GMMs in
modeling natural images. We have investigated the causes for this success and the different properties
of natural images which are captured by the model. We have also presented an analytical generative
model for image patches which explains many of the features learned by the GMM from natural
images, as well as the shortcomings of other models.
One may ask - is the mini dead leaves model a good model for natural images? Does it explain
everything learned by the GMM? While the mini dead leaves model definitely explains some of the
properties learned by the GMM, at its current simple form presented here, it is not a much better
model than a simple GSM model. When adding the occlusion process into the model, the mini dead
leaves model gains ~0.1 bit/pixel when compared to the GSM texture process it uses on its own. This makes
it as good as a 32 component GMM, but significantly worse than the 200 components model (for
8 x 8 patches). There are two possible explanations for this. One is that the GSM texture process
is just not enough, and a richer texture process is needed (much like the one learned by the GMM).
The second is that the simple occlusion model we use here is too simplistic, and does not allow for
capturing the variable structures of occlusion present in natural images. Both of these may serve
as a starting point for a more efficient and explicit model for natural images, handling occlusions
and different texture processes explicitly. There have been several works in this direction already
[19,20,21], and we feel this may hold promise for creating links to higher level visual tasks such as
segmentation, recognition and more.
Acknowledgments
The authors wish to thank the Charitable Gatsby Foundation and the ISF for support.
References
[1] M. Bethge, "Factorial coding of natural images: how effective are linear models in removing higher-order
dependencies?" vol. 23, no. 6, pp. 1253-1268, June 2006.
[2] P. Berkes, R. Turner, and M. Sahani, "On sparsity and overcompleteness in image models," in NIPS, 2007.
[3] S. Lyu and E. P. Simoncelli, "Nonlinear extraction of iindependent componentsuof natural images using
radial Gaussianization," Neural Computation, vol. 21 , no. 6, pp. 1485-1519, Jun 2009.
[4] D. Zoran and Y. Weiss, "From learning models of natural image patches to whole image restoration," in
Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 479-486.
[5] B. Culpepper, J. Sohl-Dickstein, and B. Olshausen, "Building a better probabilistic model of images by
factorization," in Computer Vision (ICCV), 20111EEE International Conference on. IEEE, 2011.
[6] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge, "In all likelihood, deep belief is not enough," The Journal of
Machine Learning Research, vol. 999888, pp. 3071-3096, 2011.
[7] G. Matheron, Random sets and integral geometry.
Wiley New York, 1975, vol. 1.
[8] X. Pitkow, "Exact feature probabilities in images with occlusion," Journal of Vision, vol. 10, no. 14,2010.
[9] B. 01shausen et al., "Emergence of simple-cell receptive field properties by learning a sparse code for
natural images," Nature, vol. 381, no. 6583, pp. 607-609, 1996.
[10] A. J. Bell and T. J. Sejnowski, "The independent components of natural scenes are edge filters," Vision
Research, vol. 37, pp. 3327-3338, 1997.
[11] A. Hyvarinen and E. Oja, "Independent component analysis: algorithms and applications," Neural networks,
vol. 13, no. 4-5, pp. 411-430, 2000.
[12] Y. Karklin and M. Lewicki, "Emergence of complex cell properties by learning to generalize in natural
scenes," Nature, November 2008.
[13] J. Sohl-Dickstein and B. Culpepper, "Hamiltonian annealed importance sampling for partition function
estimation," 2011.
[14] M. Lewicki and B. Olshausen, "Probabilistic framework for the adaptation and comparison of image codes,"
JOSA A, vol. 16, no. 7, pp. 1587-1601 , 1999.
[15] A. Lee, D. Mumford, and J. Huang, "Occlusion models for natural images: A statistical study of a
scale-invariant dead leaves model," International Journal of Computer Vision, vol. 41, no. 1, pp. 35-59,
2001.
[16] C. Zetzsche, E. Barth, and B. Wegmann, "The importance of intrinsically two-dimensional image features
in biological vision and picture coding," in Digital images and human vision. MIT Press, 1993, p. 138.
[17] E. Simoncelli, "Bayesian denoising of visual images in the wavelet domain," Lecture Notes in Statistics. New York: Springer-Verlag, pp. 291-308, 1999.
[18] D. Field, "What is the goal of sensory coding?" Neural computation, vol. 6, no. 4, pp. 559-601, 1994.
[19] J. Lucke, R. Turner, M. Sahani, and M. Henniges, "Occlusive components analysis," Advances in Neural
Information Processing Systems, vol. 22, pp. 1069-1077, 2009.
[20] G. Puertas, J. Bornschein, and J. Lucke, "The maximal causes of natural scenes are edge filters," in NIPS,
vol. 23, 2010, pp. 1939-1947.
[21] N. Le Roux, N. Heess, J. Shotton, and J. Winn, "Learning a generative model of images by factoring
appearance and shape," Neural Computation, vol. 23, no. 3, pp. 593-650, 2011.
4,152 | 4,759 |
A lattice filter model of the visual pathway
Karol Gregor
Dmitri B. Chklovskii
Janelia Farm Research Campus, HHMI
19700 Helix Drive, Ashburn, VA
{gregork, mitya}@janelia.hhmi.org
Abstract
Early stages of visual processing are thought to decorrelate, or whiten, the incoming temporally varying signals. Motivated by the cascade structure of the visual
pathway (retina → lateral geniculate nucleus (LGN) → primary visual cortex, V1)
we propose to model its function using lattice filters - signal processing devices
for stage-wise decorrelation of temporal signals. Lattice filter models predict neuronal responses consistent with physiological recordings in cats and primates. In
particular, they predict temporal receptive fields of two different types resembling
so-called lagged and non-lagged cells in the LGN. Moreover, connection weights
in the lattice filter can be learned using Hebbian rules in a stage-wise sequential
manner reminiscent of the neuro-developmental sequence in mammals. In addition, lattice filters can model visual processing in insects. Therefore, the lattice filter
is a useful abstraction that captures temporal aspects of visual processing.
Our sensory organs face an ongoing barrage of stimuli from the world and must transmit as much
information about them as possible to the rest of the brain [1]. This is a formidable task because, in
sensory modalities such as vision, the dynamic range of natural stimuli (more than three orders of
magnitude) greatly exceeds the dynamic range of relay neurons (less than two orders of magnitude)
[2]. The reason why high fidelity transmission is possible at all is that the continuity of objects
in the physical world leads to correlations in natural stimuli, which imply redundancy. In turn,
such redundancy can be eliminated by compression performed by the front end of the visual system
leading to the reduction of the dynamic range [3, 4].
A compression strategy appropriate for redundant natural stimuli is called predictive coding [5, 6, 7].
In predictive coding, a prediction of the incoming signal value is computed from past values delayed
in the circuit. This prediction is subtracted from the actual signal value and only the prediction
error is transmitted. In the absence of transmission noise such compression is lossless as the original
signal could be decoded on the receiving end by inverting the encoder. If predictions are accurate, the
dynamic range of the error is much smaller than that of the natural stimuli. Therefore, minimizing
dynamic range using predictive coding reduces to optimizing prediction.
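To make the encode/decode round trip concrete, here is a minimal NumPy sketch; the toy signal and the fixed first-order prediction weight are assumptions for illustration, not taken from any of the systems discussed:

import numpy as np

rng = np.random.default_rng(0)

# A correlated toy signal: a smoothed random walk (illustrative only).
x = np.convolve(np.cumsum(rng.standard_normal(1000)), np.ones(5) / 5.0, mode="same")

w = 0.95  # assumed first-order prediction weight

# Encoder: transmit only the prediction error e_t = x_t - w * x_{t-1}.
e = np.empty_like(x)
e[0] = x[0]                     # first sample is sent as-is
e[1:] = x[1:] - w * x[:-1]

# Decoder: invert the encoder to recover the signal exactly.
x_rec = np.empty_like(x)
x_rec[0] = e[0]
for t in range(1, len(x)):
    x_rec[t] = e[t] + w * x_rec[t - 1]

assert np.allclose(x, x_rec)    # lossless reconstruction
print(x.std(), e[1:].std())     # the transmitted error has a much smaller dynamic range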
Experimental support for viewing the front end of the visual system as a predictive encoder comes
from the measurements of receptive fields [6, 7]. In particular, predictive coding suggests that, for
natural stimuli, the temporal receptive fields should be biphasic and the spatial receptive fields center-surround. These predictions are borne out by experimental measurements in retinal ganglion
cells [8], lateral geniculate nucleus (LGN) neurons [9], and fly second-order visual neurons called
large monopolar cells (LMCs) [2]. In addition, the experimentally measured receptive fields vary
with signal-to-noise ratio as would be expected from optimal prediction theory [6]. Furthermore,
experimentally observed whitening of the transmitted signal [10] is consistent with removing correlated components from the incoming signals [11].
As natural stimuli contain correlations on time scales greater than hundred milliseconds, experimentally measured receptive fields of LGN neurons are equally long [12]. Decorrelation over such long
time scales requires equally long delays. How can such extended receptive field be produced by
biological neurons and synapses whose time constants are typically less than a hundred milliseconds
[13]?
The field of signal processing offers a solution to this problem in the form of a device called a lattice
filter, which decorrelates signals in stages, sequentially adding longer and longer delays [14, 15, 16,
17]. Motivated by the cascade structure of visual systems [18], we propose to model decorrelation
in them by lattice filters. Naturally, visual systems are more complex than lattice filters and perform
many other operations. However, we show that the lattice filter model explains several existing
observations in vertebrate and invertebrate visual systems and makes testable predictions. Therefore,
we believe that lattice filters provide a convenient abstraction for modeling temporal aspects of visual
processing.
This paper is organized as follows. First, we briefly summarize relevant results from linear prediction
theory. Second, we explain the operation of the lattice filter in discrete and continuous time. Third,
we compare lattice filter predictions with physiological measurements.
1 Linear prediction theory
Despite the non-linear nature of neurons and synapses, the operation of some neural circuits in
vertebrates [19] and invertebrates [20] can be described by a linear systems theory. The advantage
of linear systems is that optimal circuit parameters may be obtained analytically and the results are
often intuitively clear. Perhaps not surprisingly, the field of signal processing relies heavily on the
linear prediction theory, offering a convenient framework [15, 16, 17]. Below, we summarize the
results from linear prediction that will be used to explain the operation of the lattice filter.
Consider a scalar sequence y = {yt } where time t = 1, . . . , n. Suppose that yt at each time
point depends on side information provided by vector zt . Our goal is to generate a series of linear
predictions, $\hat{y}_t$, from the vector $z_t$: $\hat{y}_t = w \cdot z_t$. We define a prediction error as

$$e_t = y_t - \hat{y}_t = y_t - w \cdot z_t \quad (1)$$

and look for values of $w$ that minimize the mean squared error:

$$\langle e^2 \rangle = \frac{1}{n_t} \sum_t e_t^2 = \frac{1}{n_t} \sum_t (y_t - w \cdot z_t)^2. \quad (2)$$
The weight vector $w$ is optimal for prediction of sequence $y$ from sequence $z$ if and only if the prediction error sequence $e = y - w \cdot z$ is orthogonal to each component of vector $z$:

$$\langle e\, z \rangle = 0. \quad (3)$$
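Indeed, (3) follows by setting the gradient of (2) with respect to $w$ to zero:

$$\frac{\partial}{\partial w}\langle e^2 \rangle = -\frac{2}{n_t}\sum_t (y_t - w \cdot z_t)\, z_t = -2\,\langle e\, z \rangle = 0,$$

so at the optimum the prediction error carries no component that could still be predicted from $z$.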
When the whole series y is given in advance, i.e. in the offline setting, these so-called normal
equations can be solved for w, for example, by Gaussian elimination [21]. However, in signal
processing and neuroscience applications, another setting called online is more relevant: At every
time step t, the prediction $\hat{y}_t$ must be made using only current values of $z_t$ and $w$. Furthermore, after a
prediction is made, $w$ is updated based on the prediction $\hat{y}_t$ and the observed $y_t$, $z_t$.
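Before turning to the online rule, note that the offline case reduces to a single linear solve. A minimal NumPy sketch on synthetic data (the dimensions, true weights, and noise level are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 3
Z = rng.standard_normal((n, d))           # rows are the side-information vectors z_t
w_true = np.array([0.5, -0.2, 0.1])
y = Z @ w_true + 0.01 * rng.standard_normal(n)

# Normal equations <e z> = 0 in matrix form: (Z^T Z) w = Z^T y.
w = np.linalg.solve(Z.T @ Z, Z.T @ y)

e = y - Z @ w
print(np.abs(Z.T @ e).max() / n)          # ~0: the error is orthogonal to each component of z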
In the online setting, an algorithm called stochastic gradient descent is often used, where, at each
time step, $w$ is updated in the direction of the negative gradient of $e_t^2$:

$$w \leftarrow w - \eta \nabla_w (y_t - w \cdot z_t)^2. \quad (4)$$
This leads to the following weight update, known as least mean square (LMS) [15], for predicting
sequence y from sequence z:
$$w \leftarrow w + \eta\, e_t z_t, \quad (5)$$
where $\eta$ is the learning rate. The value of $\eta$ sets the relative influence of more recent observations compared to more distant ones: the larger the learning rate, the faster the system adapts to recent observations and the less past it remembers.
In this paper, we are interested in predicting the current value $x_t$ of a sequence $x$ from its past values $x_{t-1}, \ldots, x_{t-k}$, restricted by the prediction order $k > 0$:

$$\hat{x}_t = w^k \cdot (x_{t-1}, \ldots, x_{t-k})^T. \quad (6)$$
This problem is a special case of the online linear prediction framework above, where $y_t = x_t$, $z_t = (x_{t-1}, \ldots, x_{t-k})^T$. Then the gradient update is given by:

$$w^k \leftarrow w^k + \eta\, e_t (x_{t-1}, \ldots, x_{t-k})^T. \quad (7)$$
While the LMS algorithm can find the weights that optimize linear prediction (6), the filter $w^k$ has
a long temporal extent making it difficult to implement with neurons and synapses.
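A minimal online sketch of (6)-(7); the AR(2) test signal, prediction order, and learning rate are arbitrary choices for illustration:

import numpy as np

rng = np.random.default_rng(2)

# AR(2) signal, so a short linear predictor suffices (illustrative choice).
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.8 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.standard_normal()

k, eta = 2, 0.2                    # prediction order and learning rate (assumed)
w = np.zeros(k)
e = np.zeros(n)
for t in range(k, n):
    z = x[t - k:t][::-1]           # (x_{t-1}, ..., x_{t-k})
    e[t] = x[t] - w @ z            # forward prediction error, eq. (6)
    w += eta * e[t] * z            # LMS update, eq. (7)

print(w)                           # should approach roughly (0.8, -0.2)
print(x.var(), e[n // 2:].var())   # prediction error variance << signal variance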
2 Lattice filters
One way to generate long receptive fields in circuits of biological neurons is to use a cascade architecture, known as the lattice filter, which calculates optimal linear predictions for temporal sequences and transmits prediction errors [14, 15, 16, 17]. In this section, we explain the operation of
a discrete-time lattice filter, then adapt it to continuous-time operation.
2.1 Discrete-time implementation
The first stage of the lattice filter, Figure 1, calculates the error of the first order optimal prediction
(i.e. only using the preceding element of the sequence), the second stage uses the output of the first
stage and calculates the error of the second order optimal prediction (i.e. using only two previous
values) etc. To make such stage-wise error computations possible the lattice filter calculates at every
stage not only the error of optimal prediction of $x_t$ from past values $x_{t-1}, \ldots, x_{t-k}$, called the forward error,

$$f_t^k = x_t - w^k \cdot (x_{t-1}, \ldots, x_{t-k})^T, \quad (8)$$

but, perhaps non-intuitively, also the error of optimal prediction of a past value $x_{t-k}$ from the more recent values $x_{t-k+1}, \ldots, x_t$, called the backward error:

$$b_t^k = x_{t-k} - w'^k \cdot (x_{t-k+1}, \ldots, x_t)^T, \quad (9)$$

where $w^k$ and $w'^k$ are the weights of the optimal predictions.
For example, the first stage of the filter calculates the forward error $f_t^1$ of optimal prediction of $x_t$ from $x_{t-1}$: $f_t^1 = x_t - u^1 x_{t-1}$, as well as the backward error $b_t^1$ of optimal prediction of $x_{t-1}$ from $x_t$: $b_t^1 = x_{t-1} - v^1 x_t$, Figure 1. Here, we assume that the coefficients $u^1$ and $v^1$ that give optimal linear prediction are known, and return to learning them below.
Each following stage of the lattice filter performs a stereotypic operation on its inputs, Figure 1. The $k$-th stage ($k > 1$) receives forward, $f_t^{k-1}$, and backward, $b_t^{k-1}$, errors from the previous stage, delays the backward error by one time step, and computes a forward error

$$f_t^k = f_t^{k-1} - u^k b_{t-1}^{k-1} \quad (10)$$

of the optimal linear prediction of $f_t^{k-1}$ from $b_{t-1}^{k-1}$. In addition, each stage computes a backward error

$$b_t^k = b_{t-1}^{k-1} - v^k f_t^{k-1} \quad (11)$$

of the optimal linear prediction of $b_{t-1}^{k-1}$ from $f_t^{k-1}$.
As can be seen in Figure 1, the lattice filter contains forward prediction error (top) and backward
prediction error (bottom) branches, which interact at every stage via cross-links. Operation of the
lattice filter can be characterized by the linear filters acting on the input, x, to compute forward
or backward errors of consecutive order, so called prediction-error filters (blue bars in Figure 1).
Because of delays in the backward error branch the temporal extent of the filters grows from stage
to stage.
In the next section, we will argue that prediction-error filters correspond to the measurements of
temporal receptive fields in neurons. For detailed comparison with physiological measurements we
will use the result that, for bi-phasic prediction-error filters, such as the ones in Figure 1, the first
bar of the forward prediction-error filter has larger weight, by absolute value, than the combined
weights of the remaining coefficients of the corresponding filter. Similarly, in backward predictionerror filters, the last bar has greater weight than the rest of them combined. This fact arises from
the observation that forward prediction-error filters are minimum phase, while backward predictionerror filters are maximum phase [16, 17].
Figure 1: Discrete-time lattice filter performs stage-wise computation of forward and backward prediction errors. In the first stage, the optimal prediction of $x_t$ from $x_{t-1}$ is computed by delaying the input by one time step and multiplying it by $u^1$. The upper summation unit subtracts the predicted $x_t$ from the actual value and outputs the prediction error $f_t^1$. Similarly, the optimal prediction of $x_{t-1}$ from $x_t$ is computed by multiplying the input by $v^1$. The lower summation unit subtracts the optimal prediction from the actual value and outputs the backward error $b_t^1$. In each following stage $k$, the optimal prediction of $f_t^{k-1}$ from $b_t^{k-1}$ is computed by delaying $b_t^{k-1}$ by one time step and multiplying it by $u^k$. The upper summation unit subtracts the prediction from the actual $f_t^{k-1}$ and outputs the prediction error $f_t^k$. Similarly, the optimal prediction of $b_{t-1}^{k-1}$ from $f_t^{k-1}$ is computed by multiplying it by $v^k$. The lower summation unit subtracts the optimal prediction from the actual value and outputs the backward error $b_t^k$. Black connections have unitary weights and red connections have learnable negative weights. One can view forward and backward error calculations as applications of so-called prediction-error filters (blue) to the input sequence. Note that the temporal extent of the filters gets longer from stage to stage.
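As a sanity check of (10)-(11), the following sketch runs a discrete-time lattice filter with fixed (not learned) cross-link weights; the weights and the test signal are assumptions for illustration:

import numpy as np

def lattice_errors(x, u, v):
    """Stage-wise forward/backward errors of a discrete-time lattice filter.

    u[k], v[k] are the cross-link weights of stage k+1. Returns lists f, b with
    f[k] and b[k] holding the order-k forward and backward error sequences."""
    f = [np.asarray(x, dtype=float)]        # f^0 = x
    b = [np.asarray(x, dtype=float)]        # b^0 = x
    for uk, vk in zip(u, v):
        b_del = np.roll(b[-1], 1)           # one-step delay of b^{k-1}
        b_del[0] = 0.0                      # zero initial condition
        f.append(f[-1] - uk * b_del)        # eq. (10)
        b.append(b_del - vk * f[-2])        # eq. (11); f[-2] is f^{k-1}
    return f, b

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2000))    # strongly correlated toy signal
f, b = lattice_errors(x, u=[0.9, 0.1], v=[0.9, 0.1])
print([float(fk.var()) for fk in f])        # first-stage error variance is far below the signal variance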
Next, we derive a learning rule for finding the optimal coefficients $u$ and $v$ in the online setting. The coefficient $u^k$ is used for predicting $f_t^{k-1}$ from $b_{t-1}^{k-1}$ to obtain the error $f_t^k$. By substituting $y_t = f_t^{k-1}$, $z_t = b_{t-1}^{k-1}$ and $e_t = f_t^k$ into (5), the update of $u^k$ becomes

$$u^k \leftarrow u^k + \eta\, f_t^k b_{t-1}^{k-1}. \quad (12)$$

Similarly, $v^k$ is updated by

$$v^k \leftarrow v^k + \eta\, b_t^k f_t^{k-1}. \quad (13)$$
Interestingly, the updates of the weights are given by the product of the activities of outgoing and
incoming nodes of the corresponding cross-links. Such updates are known as Hebbian learning rules
thought to be used by biological neurons [22, 23].
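A minimal sketch of the Hebbian adaptation (12)-(13) running jointly with the error computation (10)-(11); the AR(1) signal, number of stages, and learning rate are assumptions for illustration:

import numpy as np

def lattice_learn(x, K, eta):
    """Online adaptation of lattice cross-link weights via eqs. (12)-(13)."""
    u = np.zeros(K)
    v = np.zeros(K)
    b_del = np.zeros(K)                     # delayed backward errors b^k_{t-1}
    for xt in x:
        f = b = float(xt)                   # f^0_t = b^0_t = x_t
        for k in range(K):
            f_new = f - u[k] * b_del[k]     # eq. (10)
            b_new = b_del[k] - v[k] * f     # eq. (11)
            u[k] += eta * f_new * b_del[k]  # Hebbian update, eq. (12)
            v[k] += eta * b_new * f         # Hebbian update, eq. (13)
            b_del[k] = b                    # store b^k_t for the next time step
            f, b = f_new, b_new             # advance to order k+1
    return u, v

rng = np.random.default_rng(4)
n = 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + 0.1 * rng.standard_normal()

u, v = lattice_learn(x, K=2, eta=0.1)
print(u, v)     # first-stage weights should approach the lag-1 correlation (about 0.9)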
Finally, we give a simple proof that, in the offline setting when the entire sequence $x$ is known, $f^k$ and $b^k$, given by equations (10, 11), are indeed errors of optimal $k$-th order linear prediction. Let $D$ be the one-step time delay operator, $(Dx)_t = x_{t-1}$. The induction statement at $k$ is that $f^k$ and $b^k$ are $k$-th order forward and backward errors of optimal linear prediction, which is equivalent to $f^k$ and $b^k$ being of the form $f^k = x - w_1^k Dx - \ldots - w_k^k D^k x$ and $b^k = D^k x - w_1'^k D^{k-1} x - \ldots - w_k'^k x$ and, from the normal equations (3), satisfying $\langle f^k D^i x \rangle = 0$ and $\langle D b^k D^i x \rangle = \langle b^k D^{i-1} x \rangle = 0$ for $i = 1, \ldots, k$. That this is true for $k = 1$ follows directly from the definitions of $f^1$ and $b^1$. Now we assume that this is true for $k - 1 \geq 1$ and show it is true for $k$. It is easy to see from the forms of $f^{k-1}$ and $b^{k-1}$, and from $f^k = f^{k-1} - u^k D b^{k-1}$, that $f^k$ has the correct form $f^k = x - w_1^k Dx - \ldots - w_k^k D^k x$. Regarding orthogonality, for $i = 1, \ldots, k - 1$ we have $\langle f^k D^i x \rangle = \langle (f^{k-1} - u^k D b^{k-1}) D^i x \rangle = \langle f^{k-1} D^i x \rangle - u^k \langle (D b^{k-1}) D^i x \rangle = 0$, using the induction assumptions of orthogonality at $k - 1$. For the remaining $i = k$ we note that $f^k$ is the error of the optimal linear prediction of $f^{k-1}$ from $D b^{k-1}$ and therefore $0 = \langle f^k D b^{k-1} \rangle = \langle f^k (D^k x - w_1'^{k-1} D^{k-1} x - \ldots - w_{k-1}'^{k-1} Dx) \rangle = \langle f^k D^k x \rangle$ as desired. The $b^k$ case can be proven similarly.
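The orthogonality conditions can also be verified numerically; a small sketch using a batch least-squares fit (the AR(2) signal and the order are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(5)
n = 50000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + 0.1 * rng.standard_normal()

k = 2
Z = np.column_stack([x[k - i : n - i] for i in range(1, k + 1)])  # columns: x_{t-1}, x_{t-2}
y = x[k:]
w = np.linalg.solve(Z.T @ Z, Z.T @ y)   # optimal k-th order forward predictor
f = y - Z @ w                           # forward error f^k

# <f^k D^i x> should vanish for i = 1, ..., k.
print([float(abs(np.mean(f * x[k - i : n - i]))) for i in range(1, k + 1)])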
2.2 Continuous-time implementation
The last hurdle remaining for modeling neuronal circuits which operate in continuous time with a
lattice filter is its discrete-time operation. To obtain a continuous-time implementation of the lattice
filter we cannot simply take the time step size to zero as prediction-error filters would become
infinitesimally short. Here, we adapt the discrete-time lattice filter to continuous-time operation in
two steps.
First, we introduce a discrete-time Laguerre lattice filter [24, 17] which uses Laguerre polynomials
rather than the shift operator to generate its basis functions, Figure 2. The input signal passes
through a leaky integrator whose leakage constant $\lambda$ defines a time-scale distinct from the time step (14). A delay, $D$, at every stage is replaced by an all-pass filter, $L$, (15) with the same constant $\lambda$, which preserves the magnitude of every Fourier component of the input but shifts its phase in a frequency-dependent manner. Such an all-pass filter reduces to a single time-step delay when $\lambda = 0$.
The optimality of a general discrete-time Laguerre lattice filter can be proven similarly to that for
the discrete-time filter, simply by replacing operator D with L in the proof of section 2.1.
Figure 2: Continuous-time lattice filter using Laguerre polynomials. Compared to the discrete-time version, it contains a leaky integrator, $L_0$ (16), and replaces delays with all-pass filters, $L$ (17).
Second, we obtain a continuous-time formulation of the lattice filter by replacing $t - 1 \to t - \Delta t$, defining the inverse time scale $\alpha = (1 - \lambda)/\Delta t$ and taking the limit $\Delta t \to 0$ while keeping $\alpha$ fixed.
As a result, $L_0$ and $L$ are given by:

Discrete time:
$$L_0(x)_t = \lambda L_0(x)_{t-1} + x_t \quad (14)$$
$$L(x)_t = \lambda (L(x)_{t-1} - x_t) + x_{t-1} \quad (15)$$

Continuous time:
$$dL_0(x)/dt = -\alpha L_0(x) + x \quad (16)$$
$$L(x) = x - 2\alpha L_0(x) \quad (17)$$
Representative impulse responses of the continuous Laguerre filter are shown in Figure 2. Note that,
similarly to the discrete-time case, the area under the first (peak) phase is greater than the area under
the second (rebound) phase in the forward branch and the opposite is true in the backward branch.
Moreover, the temporal extent of the rebound is greater than that of the peak not just in the forward
branch like in the basic discrete-time implementation but also in the backward branch. As will be
seen in the next section, these predictions are confirmed by physiological recordings.
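The discrete-time building blocks (14)-(15) are easy to inspect directly; a minimal sketch, in which the value of $\lambda$ is an arbitrary choice:

import numpy as np

def leaky_integrator(x, lam):
    """L0 of eq. (14): L0(x)_t = lam * L0(x)_{t-1} + x_t."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = lam * (y[t - 1] if t > 0 else 0.0) + x[t]
    return y

def all_pass(x, lam):
    """L of eq. (15): L(x)_t = lam * (L(x)_{t-1} - x_t) + x_{t-1}."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y_prev = y[t - 1] if t > 0 else 0.0
        x_prev = x[t - 1] if t > 0 else 0.0
        y[t] = lam * (y_prev - x[t]) + x_prev
    return y

impulse = np.zeros(200)
impulse[0] = 1.0
print(leaky_integrator(impulse, 0.8)[:4])          # 1.0, 0.8, 0.64, ... (geometric decay)
print(float(np.sum(all_pass(impulse, 0.8) ** 2)))  # ~1.0: the all-pass preserves energy
print(all_pass(impulse, 0.0)[:3])                  # 0, 1, 0: reduces to a one-step delay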
3 Experimental evidence for the lattice filter in visual pathways
In this section we demonstrate that physiological measurements from visual pathways in vertebrates
and invertebrates are consistent with the predictions of the lattice filter model. For the purpose of
modeling visual pathways, we identify summation units of the lattice filter with neurons and propose
that neural activity represents forward and backward errors. In the fly visual pathway neuronal
activity is represented by continuously varying graded potentials. In the vertebrate visual system,
all neurons starting with ganglion cells are spiking and we identify their firing rate with the activity
in the lattice filter.
3.1 Mammalian visual pathway
In mammals, visual processing is performed in stages. In the retina, photoreceptors synapse onto
bipolar cells, which in turn synapse onto retinal ganglion cells (RGCs). RGCs send axons to the
LGN, where they synapse onto LGN relay neurons projecting to the primary visual cortex, V1.
In addition to this feedforward pathway, at each stage there are local circuits involving (usually
inhibitory) inter-neurons such as horizontal and amacrine cells in the retina. Neurons of each class
5
come in many types, which differ in their connectivity, morphology and physiological response. The
bewildering complexity of these circuits has posed a major challenge to visual neuroscience.
[Figure 3 graphic: temporal filters of an RGC (blue) and an LGN (red) cell over 0-200 ms (left), and, reproduced from Alonso et al., J. Neurosci., 21(11):4002-4015, 2001, distributions of peak time, zero-crossing time, and rebound index for geniculate and simple cells (right).]
Figure 3: Electrophysiologically measured temporal receptive fields get progressively longer
along the cat visual pathway. Left: A cat LGN cell (red) has a longer receptive field than a
corresponding RGC cell (blue) (adapted from [12] which also reports population data). Right (A,B):
Extent of the temporal receptive fields of simple cells in cat V1 is greater than that of corresponding
LGN cells as quantified by the peak (A) and zero-crossing (B) times. Right (C): In the temporal
receptive fields of cat LGN and V1 cells the peak can be stronger or weaker than the rebound
(adapted from [25]).
Here, we point out several experimental observations related to temporal processing in the visual
system consistent with the lattice filter model. First, measurements of temporal receptive fields
demonstrate that they get progressively longer at each consecutive stage: i) LGN neurons have longer receptive fields than corresponding pre-synaptic ganglion cells [12], Figure 3 left; ii) simple cells in V1 have longer receptive fields than corresponding pre-synaptic LGN neurons [25], Figure 3 right (A,B). These observations are consistent with the progressively greater temporal extent of the prediction-error filters (blue plots in Figure 2).
Second, the weight of the peak (integrated area under the curve) may be either greater or less than that of the rebound, both in LGN relay cells [26] and in simple cells of V1 [25], Figure 3 right (C).
Neurons with peak weight exceeding that of rebound are often referred to as non-lagged while the
others are known as lagged found both in cat [27, 28, 29] and monkey [30]. The reason for this
becomes clear from the response to a step stimulus, Figure 4(top).
By comparing experimentally measured receptive fields with those of the continuous lattice filter,
Figure 4, we identify non-lagged neurons with the forward branch and lagged neurons with the
backward branch. Another way to characterize step-stimulus response is whether the sign of the
transient is the same (non-lagged) or different (lagged) relative to sustained response.
Third, measurements of cross-correlation between RGC and LGN cell spikes in lagged and non-lagged neurons reveal a difference in the transfer function indicative of a difference in the underlying circuitry [31]. This is consistent with the backward branch circuit of the Laguerre lattice filter, Figure 2, being different from that of the forward branch (which results in a different transfer function).
In particular, a combination of different glutamate receptors such as AMPA and NMDA, as well as
GABA receptors are thought to be responsible for observed responses in lagged cells [32]. However, further investigation of the corresponding circuitry, perhaps using connectomics technology, is
desirable.
Fourth, the cross-link weights of the lattice filter can be learned using Hebbian rules, (12,13) which
are biologically plausible [22, 23]. Interestingly, if these weights are learned sequentially, starting
from the first stage, they do not need to be re-learned when additional stages are added or learned.
This property maps naturally on the fact that in the course of mammalian development the visual
pathway matures in a stage-wise fashion - starting with the retina, then LGN, then V1 - and implying
that the more peripheral structures do not need to adapt to the maturation of the downstream ones.
Figure 4: Comparison of electrophysiologically measured responses of cat LGN cells with the
continuous-time lattice filter model. Top: Experimentally measured temporal receptive fields and
step-stimulus responses of LGN cells (adapted from [26]). Bottom: Typical examples of responses
in the continuous-time lattice filter model. Lattice filter coefficients were u1 = v 1 = 0.4, u2 = v 2 =
0.2 and 1/? = 50ms to model the non-lagged cell and u1 = v 1 = u2 = v 2 = 0.2 and 1/? = 60ms
to model the lagged cell. To model photoreceptor contribution to the responses, an additional leaky
integrator L0 was added to the circuit of Figure 2.
While Hebbian rules are biologically plausible, one may get an impression from Figure 2 that they
must apply to inhibitory cross-links. We point out that this circuit is meant to represent only the computation performed rather than the specific implementation in terms of neurons. As the same linear
computation can be performed by circuits with a different arrangement of the same components,
there are multiple implementations of the lattice filter. For example, activity of non-lagged OFF
cells may be seen as representing minus forward error. Then the cross-links between the non-lagged
OFF pathway and the lagged ON pathway would be excitatory. In general, classification of cells
into lagged and non-lagged seems independent of their ON/OFF and X/Y classification [31, 28, 29],
but see [33].
3.2 Insect visual pathway
In insects, two cell types, L1 and L2, both post-synaptic to photoreceptors play an important role
in visual processing. Physiological responses of L1 and L2 indicate that they decorrelate visual
signals by subtracting their predictable parts. In fact, receptive fields of these neurons were used as
the first examples of predictive coding in neuroscience [6]. Yet, as the numbers of synapses from
photoreceptors to L1 and L2 are the same [34] and their physiological properties are similar, it has
been a mystery why insects have not just one but a pair of such seemingly redundant neurons per
facet. Previously, it was suggested that L1 and L2 provide inputs to the two pathways that map onto
ON and OFF pathways in the vertebrate retina [35, 36].
Here, we put forward a hypothesis that the role of L1 and L2 in visual processing is similar to that of
the two branches of the lattice filter. We do not incorporate the ON/OFF distinction in the effectively
linear lattice filter model but anticipate that such combined description will materialize in the future.
As was argued in Section 2, in forward prediction-error filters, the peak has greater weight than
the rebound, while in backward prediction-error filters the opposite is true. Such difference implies
that in response to a step-stimulus the signs of sustained responses compared to initial transients
are different between the branches. Indeed, Ca2+ imaging shows that responses of L1 and L2 to
step-stimulus are different as predicted by the lattice filter model [35], Figure 5b. Interestingly, the
activity of L1 seems to represent minus forward error and L2 - plus backward error, suggesting that
the lattice filter cross-links are excitatory. To summarize, the predictions of the lattice filter model
seem to be consistent with the physiological measurements in the fly visual system and may help
understand its operation.
Figure 5: Response of the lattice filter and fruit fly LMCs to a step-stimulus. Left: Responses
of the first order discrete-time lattice filter to a step stimulus. Right: Responses of fly L1 and L2
cells to a moving step stimulus (adapted from [35]). Predicted and the experimentally measured
responses have qualitatively the same shape: a transient followed by sustained response, which has
the same sign for the forward error and L1 and the opposite sign for the backward error and L2.
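The left panel of Figure 5 is easy to reproduce for the first-order discrete-time lattice filter; in this sketch the cross-link weight is an arbitrary choice:

import numpy as np

w = 0.8                                   # assumed weight u^1 = v^1
x = np.ones(20)                           # step stimulus (x_t = 0 for t < 0)
x_prev = np.concatenate(([0.0], x[:-1]))  # one-step delayed input

f1 = x - w * x_prev   # forward error: onset transient and sustained part share a sign
b1 = x_prev - w * x   # backward error: onset transient opposes the sustained part
print(f1[:3])         # [1.0, 0.2, 0.2]
print(b1[:3])         # [-0.8, 0.2, 0.2]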
4 Discussion
Motivated by the cascade structure of the visual pathway, we propose to model its operation with
the lattice filter. We demonstrate that the predictions of the continuous-time lattice filter model are
consistent with the course of neural development and the physiological measurement in the LGN,
V1 of cat and monkey, as well as fly LMC neurons. Therefore, lattice filters may offer a useful
abstraction for understanding aspects of temporal processing in visual systems of vertebrates and
invertebrates.
Previously, [11] proposed that lagged and non-lagged cells could be a result of rectification by
spiking neurons. Although we agree with [11] that LGN performs temporal decorrelation, our explanation does not rely on non-linear processing but rather on the cascade architecture and, hence, is
fundamentally different. Our model generates the following predictions that are not obvious in [11]:
i) Not only are LGN receptive fields longer than RGC but also V1 receptive fields are longer than
LGN; ii) Even a linear model can generate a difference in the peak/rebound ratio; iii) The circuit
from RGC to LGN should be different for lagged and non-lagged cells consistent with [31]; iv) The
lattice filter circuit can self-organize using Hebbian rules, which gives a mechanistic explanation of
receptive fields beyond the normative framework of [11].
In light of the redundancy reduction arguments given in the introduction, we note that, if the only
goal of the system were to compress incoming signals using a given number of lattice filter stages,
then after the compression is performed only one kind of prediction error, forward or backward, needs to be transmitted. Therefore, having two channels, in the absence of noise, may seem redundant. However, transmitting both forward and backward errors gives one the flexibility to continue
decorrelation further by adding stages performing relatively simple operations.
We are grateful to D.A. Butts, E. Callaway, M. Carandini, D.A. Clark, J.A. Hirsch, T. Hu, S.B.
Laughlin, D.N. Mastronarde, R.C. Reid, H. Rouault, A. Saul, L. Scheffer, F.T. Sommer, X. Wang
for helpful discussions.
References
[1] F. Rieke, D. Warland, R.R. van Steveninck, and W. Bialek. Spikes: Exploring the Neural Code. MIT Press, 1999.
[2] S.B. Laughlin. Matching coding, circuits, cells, and molecules to signals: general principles of retinal design in the fly's eye. Progress in Retinal and Eye Research, 13(1):165-196, 1994.
[3] F. Attneave. Some informational aspects of visual perception. Psychological Review, 61(3):183, 1954.
[4] H. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12(3):241-253, 2001.
[5] R.M. Gray. Linear Predictive Coding and the Internet Protocol. Now Publishers, 2010.
[6] M.V. Srinivasan, S.B. Laughlin, and A. Dubs. Predictive coding: a fresh view of inhibition in the retina. Proceedings of the Royal Society of London. Series B. Biological Sciences, 216(1205):427-459, 1982.
[7] T. Hosoya, S.A. Baccus, and M. Meister. Dynamic predictive coding by the retina. Nature, 436:71, 2005.
[8] H.K. Hartline, H.G. Wagner, and E.F. MacNichol Jr. The peripheral origin of nervous activity in the visual system. Studies on Excitation and Inhibition in the Retina: A Collection of Papers from the Laboratories of H. Keffer Hartline, page 99, 1974.
[9] N.A. Lesica, J. Jin, C. Weng, C.I. Yeh, D.A. Butts, G.B. Stanley, and J.M. Alonso. Adaptation to stimulus contrast and correlations during natural visual stimulation. Neuron, 55(3):479-491, 2007.
[10] Y. Dan, J.J. Atick, and R.C. Reid. Efficient coding of natural scenes in the lateral geniculate nucleus: experimental test of a computational theory. The Journal of Neuroscience, 16(10):3351-3362, 1996.
[11] D.W. Dong and J.J. Atick. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6(3):345-358, 1995.
[12] X. Wang, J.A. Hirsch, and F.T. Sommer. Recoding of sensory information across the retinothalamic synapse. The Journal of Neuroscience, 30(41):13567-13577, 2010.
[13] C. Koch. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 2005.
[14] F. Itakura and S. Saito. On the optimum quantization of feature parameters in the parcor speech synthesizer. In Conference Record, 1972 International Conference on Speech Communication and Processing, Boston, MA, pages 434-437, 1972.
[15] B. Widrow and S.D. Stearns. Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1985.
[16] S. Haykin. Adaptive Filter Theory. Prentice-Hall, Englewood Cliffs, NJ, 2003.
[17] A.H. Sayed. Fundamentals of Adaptive Filtering. Wiley-IEEE Press, 2003.
[18] D.J. Felleman and D.C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1):1-47, 1991.
[19] X. Wang, F.T. Sommer, and J.A. Hirsch. Inhibitory circuits for visual processing in thalamus. Current Opinion in Neurobiology, 2011.
[20] S.B. Laughlin, J. Howard, and B. Blakeslee. Synaptic limitations to contrast coding in the retina of the blowfly Calliphora. Proceedings of the Royal Society of London. Series B. Biological Sciences, 231(1265):437-467, 1987.
[21] D.C. Lay. Linear Algebra and Its Applications. Addison-Wesley/Longman, New York/London, 2000.
[22] D.O. Hebb. The Organization of Behavior: A Neuropsychological Theory. Lawrence Erlbaum, 2002.
[23] O. Paulsen and T.J. Sejnowski. Natural patterns of activity and long-term synaptic plasticity. Current Opinion in Neurobiology, 10(2):172-180, 2000.
[24] Z. Fejzo and H. Lev-Ari. Adaptive Laguerre-lattice filters. Signal Processing, IEEE Transactions on, 45(12):3006-3016, 1997.
[25] J.M. Alonso, W.M. Usrey, and R.C. Reid. Rules of connectivity between geniculate cells and simple cells in cat primary visual cortex. The Journal of Neuroscience, 21(11):4002-4015, 2001.
[26] D. Cai, G.C. DeAngelis, and R.D. Freeman. Spatiotemporal receptive field organization in the lateral geniculate nucleus of cats and kittens. Journal of Neurophysiology, 78(2):1045-1061, 1997.
[27] D.N. Mastronarde. Two classes of single-input X-cells in cat lateral geniculate nucleus. I. Receptive-field properties and classification of cells. Journal of Neurophysiology, 57(2):357-380, 1987.
[28] J. Wolfe and L.A. Palmer. Temporal diversity in the lateral geniculate nucleus of cat. Visual Neuroscience, 15(04):653-675, 1998.
[29] A.B. Saul and A.L. Humphrey. Spatial and temporal response properties of lagged and nonlagged cells in cat lateral geniculate nucleus. Journal of Neurophysiology, 64(1):206-224, 1990.
[30] A.B. Saul. Lagged cells in alert monkey lateral geniculate nucleus. Visual Neuroscience, 25:647-659, 2008.
[31] D.N. Mastronarde. Two classes of single-input X-cells in cat lateral geniculate nucleus. II. Retinal inputs and the generation of receptive-field properties. Journal of Neurophysiology, 57(2):381-413, 1987.
[32] P. Heggelund and E. Hartveit. Neurotransmitter receptors mediating excitatory input to cells in the cat lateral geniculate nucleus. I. Lagged cells. Journal of Neurophysiology, 63(6):1347-1360, 1990.
[33] J. Jin, Y. Wang, R. Lashgari, H.A. Swadlow, and J.M. Alonso. Faster thalamocortical processing for dark than light visual targets. The Journal of Neuroscience, 31(48):17471-17479, 2011.
[34] M. Rivera-Alba, S.N. Vitaladevuni, Y. Mischenko, Z. Lu, S. Takemura, L. Scheffer, I.A. Meinertzhagen, D.B. Chklovskii, and G.G. de Polavieja. Wiring economy and volume exclusion determine neuronal placement in the drosophila brain. Current Biology, 21(23):2000-2005, 2011.
[35] D.A. Clark, L. Bursztyn, M.A. Horowitz, M.J. Schnitzer, and T.R. Clandinin. Defining the computational structure of the motion detector in drosophila. Neuron, 70(6):1165-1177, 2011.
[36] M. Joesch, B. Schnell, S.V. Raghu, D.F. Reiff, and A. Borst. ON and OFF pathways in drosophila motion vision. Nature, 468(7321):300-304, 2010.
4,153 | 476 |
Generalization Performance in PARSEC-A
Structured Connectionist Parsing Architecture
Ajay N. Jain*
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890
ABSTRACT
This paper presents PARSEC-a system for generating connectionist
parsing networks from example parses. PARSEC is not based on formal
grammar systems and is geared toward spoken language tasks. PARSEC
networks exhibit three strengths important for application to speech processing: 1) they learn to parse, and generalize well compared to hand-coded grammars; 2) they tolerate several types of noise; 3) they can
learn to use multi-modal input. Presented are the PARSEC architecture
and performance analyses along several dimensions that demonstrate
PARSEC's features. PARSEC's performance is compared to that of traditional grammar-based parsing systems.
1 INTRODUCTION
While a great deal of research has been done developing parsers for natural language, adequate solutions for some of the particular problems involved in spoken language have not
been found. Among the unsolved problems are the difficulty in constructing task-specific
grammars, lack of tolerance to noisy input, and inability to effectively utilize non-symbolic information. This paper describes PARSEC-a system for generating connectionist
parsing networks from example parses.
*Now with Alliant Techsystems Research and Technology Center ([email protected]).
Figure 1: PARSEC's high-level architecture
PARSEC networks exhibit three strengths:
? They automatically learn to parse, and generalize well compared to hand-coded
grammars.
? They tolerate several types of noise without any explicit noise modeling.
? They can learn to use multi-modal input such as pitch in conjunction with syntax and
semantics.
The PARSEC network architecture relies on a variation of supervised back-propagation
learning. The architecture differs from some other connectionist approaches in that it is
highly structured, both at the macroscopic level of modules, and at the microscopic level
of connections. Structure is exploited to enhance system performance. 1
Conference registration dialogs formed the primary development testbed for PARSEC. A
separate speech recognition effort in conference registration provided data for evaluating
noise-tolerance and also provided an application for PARSEC in speech-to-speech translation (Waibel et al. 1991).
PARSEC differs from early connectionist work in parsing (e.g. Fanty 1985; Selman 1985)
in its emphasis on learning. It differs from recent connectionist approaches (e.g. Elman
1990; Miikkulainen 1990) in its emphasis on performance issues such as generalization
and noise tolerance in real tasks. This papers presents the PARSEC architecture, its training algorithms, and performance analyses that demonstrate PARSEC's features.
2 PARSEC ARCHITECTURE
The PARSEC architecture is modular and hierarchical. Figure 1 shows the high-level
architecture. PARSEC can learn to parse complex English sentences including multiple
clauses, passive constructions, center-embedded constructions, etc. The input to PARSEC
is presented sequentially, one word at a time. PARSEC produces a case-based representation of a parse as the input sentence develops.
1PARSEC is a generalization of a previous connectionist parsing architecture (Jain 1991). For a
detailed exposition of PARSEC, please refer to Jain' s PhD thesis (in preparation).
Figure 2: Basic structure of a PARSEC module
The parse for the sentence, "I will send you a form immediately." is:

([statement]
  ([clause]
    ([agent]     I)
    ([action]    will send)
    ([recipient] you)
    ([patient]   a form)
    ([time]      immediately)))
Input words are represented as binary feature patterns (primarily syntactic with some
semantic features). These feature representations are hand-crafted.
Each module of PARSEC can perform either a transformation or a labeling of its input.
The output function of each module is represented across localist connectionist units. The
actual transformations are made using non-connectionist subroutines.2 Figure 2 shows the
basic structure of a PARSEC module. The bold ovals contain units that learn via backpropagation.
There are four steps in generating a PARSEC network: 1) create an example parse file; 2)
define a lexicon; 3) train the six modules; 4) assemble the full network. Of these, only the
first two steps require substantial human effort, and this effort is small relative to that
required for writing a grammar by hand. Training and assembly are automatic.
2.1 PREPROCESSING MODULE
This module marks alphanumeric sequences, which are replaced by a single special
marker word. This prevents long alphanumeric strings from overwhelming the length constraint on phrases. Note that this is not always a trivial task since words such as "a" and
"one" are lexically ambiguous.
"It costs three hundred twenty one dollars."
INPUT:
OUTPUT: "It costs ALPHANUM dollars."
2These transformations could be carried out by connectionist networks, but at a substantial computational cost for training and a risk of undergeneralization.
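As a concrete illustration of the Prep module's job, the sketch below collapses maximal runs of number words into a single marker. It is a minimal stand-in, not PARSEC's implementation: the ALPHANUM marker name comes from the example above, the number-word set is a toy lookup table, and, as the text notes, the real module must resolve lexically ambiguous words such as "a" and "one" from context, which a lookup table cannot do.

```python
NUMBER_WORDS = {
    "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
    "ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
    "eighty", "ninety", "hundred", "thousand",
}

def mark_alphanumerics(words):
    """Toy stand-in for the Prep module: replace each maximal run of
    number words (or digit strings) with one ALPHANUM marker word."""
    out, in_run = [], False
    for w in words:
        if w.lower() in NUMBER_WORDS or w.isdigit():
            if not in_run:                  # only emit one marker per run
                out.append("ALPHANUM")
            in_run = True
        else:
            out.append(w)
            in_run = False
    return out

print(" ".join(mark_alphanumerics(
    "It costs three hundred twenty one dollars .".split())))
# -> It costs ALPHANUM dollars .
```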
2.2 PHRASE MODULE
The Phrase module processes the evolving output of the Prep module into phrase blocks.
Phrase blocks are non-recursive contiguous pieces of a sentence. They correspond to simple noun phrases and verb groups.3 Phrase blocks are represented as grouped sets of units
in the network. Phrase blocks are denoted by brackets in the following:
"I will send you a new form in the morning."
INPUT:
OUTPUT: "[I] [will send] [you] [a new form] [in the morning]."
2.3 CLAUSE MAPPING MODULE
The Clause module uses the output of the Phrase module as input and assigns the clausal
structure. The result is an unambiguous bracketing of the phrase blocks that is used to
transform the phrase block representation into representations for each clause:
INPUT:
"[I] [would like] [to register] [for the conference]."
OUTPUT: "([I] [would like]) ([to register] [for the conference]}."
2.4 ROLE LABELING MODULE
The Roles module associates case-role labels with each phrase block in each clause. It also
denotes attachment structure for prepositional phrases ("MOD-I" indicates that the current phrase block modifies the previous one):
INPUT:  "([The titles] [of papers] [are printed] [in the forms])"
OUTPUT: "([The titles] [of papers] [are printed] [in the forms])"
            PATIENT     MOD-1       ACTION        LOCATION
2.S INTERCLAUSE AND MOOD MODULES
The Interclause and Mood modules are similar to the Roles module. They both assign
labels to constituents, except they operate at higher levels. The Interclause module indicates, for example, subordinate and relative clause relationships. The Mood module indicates the overall sentence mood (declarative or interrogative in the networks discussed
here).
3 GENERALIZATION
Generalization in large connectionist networks is a critical issue. This is especially the
case when training data is limited. For the experiments reported here, the training data was
limited to twelve conference registration dialogs containing approximately 240 sentences
with a vocabulary of about 400 words. Despite the small corpus, a large number of English
constructs were covered (including passives, conditional constructions, center-embedded
relative clauses, etc.).
A set of 117 disjoint sentences was obtained to test coverage. The sentences were generated by a group of people different from those that developed the 12 dialogs. These sentences used the same vocabulary as the 12 dialogs.
3Abney has described a similar linguistic unit called a chunk (Abney 1991).
3.1 EARLY PARSEC VERSIONS
Straightforward training of a PARSEC network resulted in poor generalization performance, with only 16% of the test sentences being parsed correctly. One of the primary
sources for error was positional sensitivity acquired during training of the three transformational modules. In the Phrase module, for example, each of the phrase boundary detector units was supposed to learn to indicate a boundary between words in specific positions.
Each of the units of the Phrase module is performing essentially the same job, but the network doesn't "know" this and cannot learn this from a small sample set. By sharing the
connection weights across positions, the network is forced to be position insensitive (similar to TDNN's as in Waibel et al. 1989). After modifying PARSEC to use shared weights
and localized connectivity in the lower three modules, generalization performance
increased to 27%. The primary source of error shifted to the Roles module.
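A minimal sketch of the position-insensitive detection idea: a single weight vector, shared across all positions in TDNN fashion, scores every adjacent word pair for a phrase boundary. The two-word window and the logistic output unit are assumptions made for illustration; PARSEC's actual unit types and connectivity differ.

```python
import numpy as np

def boundary_scores(word_feats, w, b):
    """Position-independent phrase-boundary detection: the same weight
    vector w (shared across all positions) is applied to a sliding window
    of word feature vectors. word_feats is (T, d); w has length 2*d."""
    T, d = word_feats.shape
    scores = np.empty(T - 1)
    for t in range(T - 1):
        window = np.concatenate([word_feats[t], word_feats[t + 1]])
        scores[t] = 1.0 / (1.0 + np.exp(-(w @ window + b)))
    return scores  # scores[t] ~ P(boundary between word t and word t+1)
```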
Part of the problem could be ascribed to the representation of phrase blocks. They were
represented across rows of units that each define a word. In the phrase block "the big dog,"
"dog" would have appeared in row 3. This changes to row 2 if the phrase block is just "the
dog." A network had to learn to respond to the heads of phrase blocks even though they
moved around. An augmented phrase block representation in which the last word of the
phrase block was copied to position 0 solved this problem. With the augmented phrase
block representation coupled with the previous improvements, PARSEC achieved 44%
coverage.
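The augmented representation is easy to state in code: whatever the block's length, its head word is duplicated into a fixed slot. The sketch below assumes a maximum block length and a simple list-of-slots encoding rather than PARSEC's unit-level feature patterns.

```python
def augmented_phrase_block(words, max_len=4):
    """Augmented phrase-block sketch: slot 0 always holds the last (head)
    word, so downstream units find the head at a fixed position no matter
    how long the block is; the remaining slots hold the block itself."""
    slots = [None] * (max_len + 1)
    slots[0] = words[-1]                      # head word copied to slot 0
    for i, w in enumerate(words[:max_len]):
        slots[i + 1] = w
    return slots

print(augmented_phrase_block(["the", "big", "dog"]))
# -> ['dog', 'the', 'big', 'dog', None]
```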
3.2 PARSEC: FINAL VERSION
The final version of PARSEC uses all of the previous enhancements plus a technique
called Programmed Constructive Learning (PCL). In PCL, hidden units are added to a
network one at a time as they are needed. Also, there is a specific series of hidden unit
types for each module of a PARSEC network. The hidden unit types progress from being
highly local in input connectivity to being more broad. This forces the networks to learn
general predicates before specializing and using possibly unreliable information.
The final version of PARSEC was used to generate another parsing network.4 Its performance was 67% (78% including near-misses). Table 1 summarizes these results.
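To make the PCL idea concrete, here is a toy constructive learner: hidden units are added one at a time when the loss plateaus, and each new unit's connectivity mask is drawn from a schedule that starts local and grows broader. The width schedule, the random mask placement, and the plain logistic units are all assumptions; PARSEC's module-specific hidden unit types are richer than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PCLNet:
    """Toy Programmed-Constructive-Learning-style network (assumptions:
    binary output, logistic units, a fixed receptive-field schedule)."""
    def __init__(self, n_inputs, widths=(3, 5, 9)):
        self.n_inputs, self.widths = n_inputs, list(widths)
        self.hidden = []                 # list of [mask, weights]
        self.out_w, self.out_b = np.zeros(0), 0.0

    def add_unit(self):
        w = self.widths[min(len(self.hidden), len(self.widths) - 1)]
        start = rng.integers(0, self.n_inputs - w + 1)
        mask = np.zeros(self.n_inputs); mask[start:start + w] = 1.0
        self.hidden.append([mask, rng.normal(0, 0.1, self.n_inputs) * mask])
        self.out_w = np.append(self.out_w, 0.0)

    def forward(self, X):
        H = np.column_stack([sigmoid(X @ w) for _, w in self.hidden])
        return sigmoid(H @ self.out_w + self.out_b), H

    def train_epoch(self, X, y, lr=0.5):
        p, H = self.forward(X)
        err = p - y                               # dL/dz for logistic loss
        self.out_b -= lr * err.mean()
        self.out_w -= lr * (H.T @ err) / len(y)
        for j, (mask, w) in enumerate(self.hidden):
            dh = err * self.out_w[j] * H[:, j] * (1 - H[:, j])
            w -= lr * (X.T @ dh / len(y)) * mask  # masked: local connectivity
        return -np.mean(y*np.log(p+1e-9) + (1-y)*np.log(1-p+1e-9))

X = rng.normal(size=(200, 12)); y = (X[:, 3] + X[:, 4] > 0).astype(float)
net = PCLNet(12); net.add_unit()
prev = np.inf
for epoch in range(400):
    loss = net.train_epoch(X, y)
    if prev - loss < 1e-4 and len(net.hidden) < 4:  # plateau -> grow
        net.add_unit()
    prev = loss
print(f"final loss {loss:.3f} with {len(net.hidden)} hidden units")
```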
3.3 COMPARISON TO HAND-CODED GRAMMARS
PARSEC's performance was compared to that of three independently constructed grammars. Two of the grammars were commissioned as part of a contest where the first prize
($700) went to the grammar-writer with best coverage of the test set and the second prize
($300) went to the other grammar writer.5 The third grammar was independently constructed as part of the JANUS system (described later). The contest grammars achieved
25% and 38% coverage, and the other grammar achieved just 5% coverage of the test set
4This final parsing network was not trained all the way to completion. Training to completion hurts generalization performance.
5Some contest participants had 8 weeks to complete their grammars, and they both spent over 60 hours doing so. The grammar writers work in Machine Translation and Computational Linguistics and were quite experienced.
Table 1: PARSEC's comparative performance

|            | Coverage  | Noise | Ungram. |
|------------|-----------|-------|---------|
| PARSEC V4  | 67% (78%) | 77%   | 66%     |
| Grammar 1  | 38% (39%) |       | 34%     |
| Grammar 2  | 25% (26%) |       | 38%     |
| Grammar 3  | 5% (5%)   | 70%   | 2%      |
(see Table 1). All of the hand-coded grammars produced NIL parses for the majority of
test sentences. In the table, numbers in parentheses include near-misses.
PARSEC's performance was substantially better than the best of the hand-coded grammars. PARSEC has a systematic advantage in that it is trained on the incremental parsing
task and is exposed to partial sentences during training. Also, PARSEC's constructive
learning approach coupled with weight sharing emphasizes local constraints wherever
possible, and distant variations in input structure do not adversely affect parsing.
4 NOISE TOLERANCE
The second area of performance analysis for PARSEC was noise tolerance. Preliminary
comparisons between PARSEC and a rule-based parser in the JANUS speech-to-speech
translation system were promising (Waibel et al. 1991). More extensive evaluations corroborated the early observations. In addition, PARSEC was evaluated on synthetic
ungrammatical sentences. Experiments on spontaneous speech using DARPA's ATIS task
are ongoing.
4.1 NOISE IN SPEECH-TO-SPEECH TRANSLATION
In the JANUS system, speech recognition is provided by an LPNN (Tebelskis et al. 1991),
parsing can be done by a PARSEC network or an LR parser, translation is accomplished
by processing the interlingual output of the parser using a standard language generation
module, and speech generation is provided by off-the-shelf devices. The system can be run
using a single (often noisy) hypothesis from the LPNN or a ranked list of hypotheses.
When run in single-hypothesis mode, JANUS using PARSEC correctly translated 77% of
the input utterances, and J ANUS using the LR parser (Grammar 3 in the table) achieved
70%. The PARSEC network was able to parse a number of incorrect recognitions well
enough that a successful translation resulted. However, when run in multi-hypothesis
mode, the LR parser achieved 86% compared to PARSEC's 80%. The LR parser utilized a
very tight grammar and was able to robustly reject hypotheses that deviated from expectations. This allowed the LR parser to "choose" the correct hypothesis more often than PARSEC. PARSEC tended to accept noisy utterances that produced incorrect translations. Of
course, given that the PARSEC network's coverage was so much higher than that of the
grammar used by the LR parser, this result is not surprising.
4.2 SYNTHETIC UNGRAMMATICALITY
Using the same set of grammars for comparison, the parsers were tested on ungrammatical
input from the CR task. These sentences were corrupted versions of sentences used for
[Figure 3 plots omitted: smoothed pitch contours for "Okay." (duration 409.1 msec, mean freq 113.2) and "Okay?" (duration 377.0 msec, mean freq 137.3).]
Figure 3: Smoothed pitch contours.
training. Training sentences were used to decouple the effects of noise from coverage.
Table 1 shows the results. They essentially mirror those of the coverage tests. PARSEC is
substantially less sensitive to such effects as subject/verb disagreement, missing determiners, and other non-catastrophic irregularities.
Some researchers have augmented grammar-based systems to be more tolerant of noise
(e.g. Saito and Tomita 1988). However, the PARSEC network in the test reported here was
trained only on grammatical input and still produced a degree of noise tolerance for free.
In the same way that one can explicitly build noise tolerance into a grammar-based system, one can train a PARSEC network on input that includes specific types of noise. The
result should be some noise tolerance beyond what was explicitly trained.
5 MULTI-MODAL INPUT
A somewhat elusive goal of spoken language processing has been to utilize information
from the speech signal beyond just word sequences in higher-level processing. It is well
known that humans use such information extensively in conversation. Consider the utterances "Okay." and "Okay?" Although semantically distinct, they cannot be distinguished
based on word sequence, but pitch contours contain the necessary information (Figure 3).
In a grammar-based system, it is difficult to incorporate real-valued vector input in a useful way. In a PARSEC network, the vector is just another set of input units. The Mood
module of a PARSEC network was augmented to contain an additional set of units that
contained pitch information. The pitch contours were smoothed output from the OGI Neural Network Pitch Tracker (Barnard et al. 1991). PARSEC added another hidden unit to
utilize the new information.
The trained PARSEC network was tolerant of speaker variation, gender variation, utterance variation (length and content), and a combination of these factors. Although not
explicitly trained to do so, the network correctly processed sentences that were grammatical questions but had been pronounced with the declining pitch of a typical statement.
Within the JANUS system, the augmented PARSEC network brings new functionality.
Intonation affects translation in JANUS when using the augmented PARSEC network.
The sentence, "This is the conference office." is translated to "Kaigi jimukyoku desu."
"This is the conference office?" is translated to "Kaigi jimukyoku desuka?" This required
no changes in the other modules of the JANUS system. It also should be possible to use
other types of information from the speech signal to aid in robust parsing (e.g. energy patterns to disambiguate clausal structure).
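A hedged sketch of this multi-modal input: the smoothed pitch contour is resampled to a fixed number of extra input units and concatenated with the symbolic word features. The unit count and the linear resampling are assumptions; the paper only states that the Mood module received an additional set of pitch-carrying units.

```python
import numpy as np

def mood_input(word_features, pitch_contour, n_pitch_units=16):
    """Concatenate symbolic word features with a fixed-length resampling
    of the smoothed pitch contour (unit count is an assumption)."""
    t = np.linspace(0.0, 1.0, n_pitch_units)
    src = np.linspace(0.0, 1.0, len(pitch_contour))
    pitch_units = np.interp(t, src, pitch_contour)   # linear resampling
    return np.concatenate([np.ravel(word_features), pitch_units])
```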
6 CONCLUSION
PARSEC is a system for generating connectionist parsing networks from training examples. Experiments using a conference registration conversational task showed that PARSEC: 1) learns and generalizes well compared to hand-coded grammars; 2) tolerates noise:
recognition errors and ungrammaticality; 3) successfully learns to combine intonational
information with syntactic/semantic information. Future work with PARSEC will be continued by extending it to new languages, larger English tasks, and speech tasks that
involve tighter coupling between speech recognition and parsing. There are numerous
issues in NLP that will be addressed in the context of these research directions.
Acknowledgements
The author gratefully acknowledges the support of DARPA, the National Science Foundation, A1R Interpreting Telephony Laboratories, NEC Corp., and Siemens Corp.
References
Abney, S. P. 1991. Parsing by chunks. In Principle-Based Parsing, ed. R. Berwick, S. P. Abney, C. Tenny. Kluwer Academic Publishers.
Barnard, E., R. A. Cole, M. P. Yea, and F. A. Alleva. 1991. Pitch detection with a neural-net classifier. IEEE Transactions on Signal Processing 39(2): 298-307.
Elman, J. L. 1989. Representation and Structure in Connectionist Networks. Tech. Rep. CRL 8903. Center for Research in Language, University of California, San Diego.
Fanty, M. 1985. Context Free Parsing in Connectionist Networks. Tech. Rep. TR174. Computer Science Department, University of Rochester.
Jain, A. N., and A. H. Waibel. 1990. Robust connectionist parsing of spoken language. In Proceedings of the 1990 IEEE International Conference on Acoustics, Speech, and Signal Processing.
Jain, A. N. In preparation. PARSEC: A Connectionist Learning Architecture for Parsing Speech. PhD Thesis. School of Computer Science, Carnegie Mellon University.
Miikkulainen, R. 1990. A PDP architecture for processing sentences with relative clauses. In Proceedings of the 13th Annual Conference of the Cognitive Science Society.
Saito, H., and M. Tomita. 1988. Parsing noisy sentences. In Proceedings of INFO JAPAN '88: International Conference of the Information Processing Society of Japan, 553-59.
Selman, B. 1985. Rule-Based Processing in a Connectionist System for Natural Language Understanding. Ph.D. Thesis, University of Toronto. Available as Tech. Rep. CSRI68.
Tebelskis, J., A. Waibel, B. Petek, and O. Schmidbauer. 1991. Continuous speech recognition using linked predictive neural networks. In Proceedings of the 1991 IEEE International Conference on Acoustics, Speech, and Signal Processing.
Waibel, A., T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. 1989. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing 37(3): 328-339.
Waibel, A., A. N. Jain, A. E. McNair, H. Saito, A. G. Hauptmann, and J. Tebelskis. 1991. JANUS: A speech-to-speech translation system using connectionist and symbolic processing strategies. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing.
4,154 | 4,760 |
Semantic Kernel Forests from Multiple Taxonomies
Sung Ju Hwang
University of Texas
Austin, TX 78701
Kristen Grauman
University of Texas
Austin, TX 78701
Fei Sha
University of Southern California
Los Angeles, CA 90089
[email protected]
[email protected]
[email protected]
Abstract
When learning features for complex visual recognition problems, labeled image
exemplars alone can be insufficient. While an object taxonomy specifying the categories' semantic relationships could bolster the learning process, not all relationships are relevant to a given visual classification task, nor does a single taxonomy
capture all ties that are relevant. In light of these issues, we propose a discriminative feature learning approach that leverages multiple hierarchical taxonomies
representing different semantic views of the object categories (e.g., for animal
classes, one taxonomy could reflect their phylogenic ties, while another could reflect their habitats). For each taxonomy, we first learn a tree of semantic kernels,
where each node has a Mahalanobis kernel optimized to distinguish between the
classes in its children nodes. Then, using the resulting semantic kernel forest, we
learn class-specific kernel combinations to select only those relationships relevant
to recognize each object class. To learn the weights, we introduce a novel hierarchical regularization term that further exploits the taxonomies? structure. We
demonstrate our method on challenging object recognition datasets, and show that
interleaving multiple taxonomic views yields significant accuracy improvements.
1 Introduction
Object recognition research has made impressive gains in recent years, with particular success
in using discriminative learning algorithms to train classifiers tuned to each category of interest
(e.g., [1, 2]). As the basic "image features + labels + classifier" paradigm has reached a level of
maturity, we believe it is time to reach beyond it towards models that incorporate richer semantic
knowledge about the object categories themselves.
One appealing source of such external knowledge is a taxonomy. A hierarchical semantic taxonomy
is a tree that groups classes together in its nodes according to some human-designed merging or
splitting criterion. For example, well-known taxonomies include WordNet, which groups words
into sets of cognitive synonyms and their super-subordinate relations [3], and the phylogenetic tree
of life, which groups biological species based on their physical or genetic properties. Critically,
such trees implicitly embed cues about human perception of categories, how they relate to one
another, and how those relationships vary at different granularities. Thus, in the context of visual
object recognition, such a structure has the potential to guide the selection of meaningful low-level
features, essentially augmenting the standard supervision provided by image labels. Some initial
steps have been made based on this intuition, typically by leveraging the WordNet hierarchy as a
prior on inter-class visual similarity [4, 5, 6, 7, 8, 9, 10, 11].
Two fundamental issues, however, complicate the use of a semantic taxonomy for learning visual
objects. First, a given taxonomy may offer hints about visual relatedness, but its structure need not
entirely align with useful splits for recognition. (For example, monkey and dog are fairly distant
semantically according to WordNet, yet they share a number of visual features. An apple and applesauce are semantically close, yet are easily separable with basic visual features.) Second, given the
complexity of visual objects, it is highly unlikely that some single optimal semantic taxonomy exists
to lend insight for recognition. While previous work relies on a single taxonomy out of convenience,
[Figure 1 drawings omitted: three toy taxonomies over the classes Dalmatian, wolf, Siamese cat, and leopard. The Biological view splits Animal into canine (Dalmatian, wolf) and feline (Siamese cat, leopard); the Appearance view splits by texture into spotted (Dalmatian, leopard) and pointy ears (Siamese cat, wolf); the Habitat view splits by tameness into domestic (Dalmatian, Siamese cat) and wild (wolf, leopard).]
Figure 1: Main idea: For a given set of classes, we assume multiple semantic taxonomies exist, each one representing a different "view" of the inter-class semantic relationships. Rather than commit to a single taxonomy, which may or may not align well with discriminative visual features, we learn a tree of kernels for each taxonomy that captures the granularity-specific similarity at each node. Then we show how to exploit the inter-taxonomic structure when learning a combination of these kernels from multiple taxonomies (i.e., a "kernel forest") to best serve the object recognition tasks.
in reality objects can be organized along many semantic dimensions or "views". (For example, a
Dalmatian belongs to the same group as the wolf according to a biological taxonomy, as both are canines. However, in terms of visual attributes, it can be grouped with the leopard, as both are spotted;
in terms of habitat, it can be grouped with the Siamese cat, as both are domestic. See Figure 1.)
Motivated by these issues, we present a discriminative feature learning approach that leverages multiple taxonomies capturing different semantic views of the object categories. Our key insight is
that some combination of the semantic views will be most informative to distinguish a given visual
category. Continuing with the sketch in Figure 1, that might mean that the first taxonomy helps
learn dog- and cat-like features, while the second taxonomy helps elucidate spots and pointy corner
features, while the last reveals context cues such as proximity to humans or indoor scene features.
While each view differs in its implicit human-designed splitting criterion, all separate some classes
from others, thereby lending (often complementary) discriminative cues. Thus, rather than commit
to a single representation, we aim to inject pieces of the various taxonomies as needed.
To this end, we propose semantic kernel forests. Our method takes as input training images labeled
according to their object category, as well as a series of taxonomies, each of which hierarchically
partitions those same labels (object classes) by a different semantic view. For each taxonomy, we
first learn a tree of semantic kernels: each node in a tree has a Mahalanobis-based kernel optimized to
distinguish between the classes in its children nodes. The kernels in one tree isolate image features
useful at a range of category granularities. Then, using the resulting semantic kernel forest from
all taxonomies, we apply a form of multiple kernel learning (MKL) to obtain class-specific kernel
combinations, in order to select only those relationships relevant to recognize each object class. We
introduce a novel hierarchical regularization term into the MKL objective that further exploits the
taxonomies? structure. The output of the method is one learned kernel per object class, which we
can then deploy for one-versus-all multi-class classification on novel images.
Our main contribution is to simultaneously exploit multiple semantic taxonomies for visual feature learning. Whereas past work focuses on building object hierarchies for scalable classification [12, 13] or using WordNet to gauge semantic distance [5, 6, 8, 9], we learn discriminative kernels that capitalize on the cues in diverse taxonomy views, leading to better recognition accuracy.
The primary technical contributions are i) an approach to generate semantic base kernels across taxonomies, ii) a method to integrate the complementary cues from multiple suboptimal taxonomies,
and iii) a novel regularizer for multiple kernel learning that exploits hierarchical structure from the
taxonomy, allowing kernel selection to benefit from semantic knowledge of the problem domain.
We demonstrate our approach with challenging images from the Animals with Attributes and ImageNet datasets [14, 7] together with taxonomies spanning cognitive synsets, visual attributes, behavior, and habitats. Our results show that the taxonomies can indeed boost feature learning, letting
us benefit from humans? perceived distinctions as implicitly embedded in the trees. Furthermore,
we show that interleaving the forest of multiple taxonomic views leads to the best performance,
particularly when coupled with the proposed novel regularization.
2 Related Work
Leveraging hierarchies for object recognition Most work in object recognition that leverages
category hierarchy does so for the sake of efficient classification [15, 16, 12, 13, 17]. Making coarse
to fine predictions along a tree of classifiers efficiently rules out unlikely classes at an early stage.
Since taxonomies need not be ideal structures for this goal, recent work focuses on novel ways to
optimize the tree structure itself [12, 13, 17], while others consider splits based on initial inter-class
confusions [16]. A parallel line of work explores unsupervised discovery of hierarchies for image
organization and browsing, from images alone [18, 19] or from images and tags [20]. Whereas all
such work exploits tree structures to improve efficiency (whether in classification or browsing), our
goal is for externally defined semantic hierarchies to enhance recognition accuracy.
More related to our problem setting are techniques that exploit the inter-class relationships in a
taxonomy [5, 6, 8, 9, 10, 11]. One idea is to combine the decisions of classifiers along the semantic
hierarchy [5, 4]. Alternatively, the semantic "distance" between nodes can be used to penalize
misclassifications more meaningfully [9], or to share labeled exemplars between similar classes [8].
Metric learning and feature selection can also benefit from an object hierarchy, either by preferring
to use disjoint feature sets to discriminate super- and sub-classes [10], by using a taxonomy-induced
loss for structured sparsity [21], or by sharing parameters between metrics along the same path [11].
All prior work commits to a single taxonomy, however, which as discussed above may restrict the
semantics' impact and will not always align well with the visual data.
Classification with multiple semantic views Combining information from multiple "views" of
data is a well-researched topic in the machine learning, multimedia, and computer vision communities. In multi-view learning, the training data typically consists of paired examples coming from
different modalities, e.g., text and images, or speech and video; basic approaches include recovering the underlying shared latent space for both views [22, 20], bootstrapping classifiers formed
independently per feature space [23, 24], or accounting for the view dependencies during clustering [25, 26]. When the classification tasks themselves are grouped, multi-task learning methods
leverage the parallel tasks to regularize parameters learned for the individual classifiers or features
(e.g., [27, 28, 29]). Broadly speaking, our problem has a similar spirit to such settings, since we want
to leverage multiple parallel taxonomies over the data; however, our goal to aggregate portions of
the taxonomies during feature learning is quite distinct. More specifically, while previous methods
attempt to find a single structure to accommodate both views, we seek complementary information
from the semantic views and assemble task-specific discriminative features.
Learning kernel combinations Multiple kernel learning (MKL) algorithms [30] have shown
promise for image recognition (e.g., [31, 32]) and are frequently employed in practice as a principled way to combine feature types. Our approach also employs a form of MKL, but rather than
pool kernels stemming from different low-level features or kernel hyperparameters, it pools kernels
stemming from different semantic sources. Furthermore, our addition of a novel regularizer exploits
the hierarchical structure from which the kernels originate.
3 Approach
We cast the problem of learning semantic features from multiple taxonomies as learning to combine
kernels. The base kernels capture features specific to individual taxonomies and granularities within
those taxonomies, and they are combined discriminatively to improve classification, weighing each
taxonomy and granularity only to the extent useful for the target classification task.
We describe the two main components of the approach in turn: learning the base kernels, which we
call a semantic kernel forest (Sec. 3.1), and learning their combination across taxonomies (Sec. 3.2),
where we devise a new hierarchical regularizer for MKL.
In what follows, we assume that we are given a labeled dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, where $(x_i, y_i)$ stands for the $i$th instance (feature vector) and its class label $y_i$, as well as a set of tree-structured taxonomies $\{\mathcal{T}_t\}_{t=1}^{T}$. Each taxonomy $\mathcal{T}_t$ is a collection of nodes. The leaf nodes correspond to class labels, and the inner nodes correspond to superclasses, or, more generally, semantically meaningful groupings of categories. We index those nodes with double subscripts $tn$, where $t$ refers to the $t$th taxonomy and $n$ to the $n$th node in that taxonomy. Without loss of generality, we assign the leaf nodes (i.e., the class nodes) a number between 1 and $C$, where $C$ is the number of class labels.
3.1 Learning a semantic kernel forest
Our first step is to learn a forest of base kernels. These kernels are granularity- and view-specific;
that is, they are tuned to similarities implied by the given taxonomies. While base kernels are learned
independently per taxonomy, they are learned jointly within each taxonomy, as we describe next.
Formally, for each taxonomy $\mathcal{T}_t$, we learn a set of Gaussian kernels for the superclass at every internal node $tn$ for which $n \ge C + 1$. The Gaussian kernels are parameterized as

$$K_{tn}(x_i, x_j) = \exp\{-\gamma_{tn}\, d^2_{M_{tn}}(x_i, x_j)\} = \exp\{-\gamma_{tn}\,(x_i - x_j)^\top M_{tn}\,(x_i - x_j)\}, \qquad (1)$$
where the Mahalanobis distance metric $M_{tn}$ is used in lieu of the conventional Euclidean metric. Note that for leaf nodes, where $n \le C$, we do not learn base kernels.
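Eq. (1) is straightforward to compute once a metric $M_{tn}$ is in hand. The sketch below evaluates the Mahalanobis-based Gaussian kernel for two sets of points; the Cholesky factorization and the small diagonal jitter are implementation conveniences, not part of the paper's formulation.

```python
import numpy as np

def mahalanobis_rbf(X, Z, M, gamma):
    """K(x, z) = exp(-gamma * (x - z)^T M (x - z)) for all pairs, eq. (1).
    X: (n, d), Z: (m, d), M: (d, d) positive semidefinite metric."""
    L = np.linalg.cholesky(M + 1e-8 * np.eye(M.shape[0]))  # M ~ L L^T
    XL, ZL = X @ L, Z @ L
    d2 = (np.square(XL).sum(1)[:, None] + np.square(ZL).sum(1)[None, :]
          - 2.0 * XL @ ZL.T)               # squared Mahalanobis distances
    return np.exp(-gamma * np.maximum(d2, 0.0))
```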
We want the base kernels to encode similarity between examples using features that reflect their
respective granularity in the taxonomy. Certainly, the kernel $K_{tn}$ should home in on features that are helpful to distinguish the node $tn$'s subclasses. Beyond that, however, we specifically want it to use features that are as different as possible from the features used by its ancestors. Doing so ensures that the subsequent combination step can choose a sparse set of "disconnected" features.
To that end, we apply our Tree of Metrics (ToM) technique [10] to learn the Mahalanobis parameters Mtn . In ToM, metrics are learned by balancing two forces: i) discriminative power and ii) a
preference for different features to be chosen between parent and child nodes. The latter exploits the
taxonomy semantics, based on the intuition that features used to distinguish more abstract classes
(dog vs. cat) should differ from those used for finer-grained ones (Siamese vs. Persian cat).
Briefly, for each node $tn$, the training data is reduced to $\mathcal{D}_n = \{(x_i, y_{in})\}$, where $y_{in}$ is the label of $n$'s child on the path to the leaf node $y_i$. If $y_i$ is not a descendant of the superclass at the node $n$, then $x_i$ is excluded from $\mathcal{D}_n$. The metrics are learned jointly, with each node mutually encouraging the others to use non-overlapping features. ToM achieves this by augmenting a large margin nearest neighbor [33] loss function $\sum_n \ell(\mathcal{D}_n; M_{tn})$ with the following disjoint sparsity regularizer:

$$\Omega_d(M) = \lambda \sum_{n \ge C+1} \mathrm{Trace}[M_{tn}] + \mu \sum_{n \ge C+1} \sum_{m \sim n} \big\| \mathrm{diag}(M_{tn}) + \mathrm{diag}(M_{tm}) \big\|_2^2, \qquad (2)$$
where $m \sim n$ denotes that node $m$ is either an ancestor or descendant of $n$. The first part of the regularizer encourages sparsity in the diagonal elements of $M_{tn}$, and the second part incurs a penalty when two different metrics "compete" for the same diagonal element, i.e., to use the same feature dimension. The resulting optimization problem is convex and can be solved efficiently [10].
After learning the metrics $\{M_{tn}\}$ in each taxonomy, we construct base kernels as in eq. (1). The bandwidths $\gamma_{tn}$ are set from the average distances on training data. We call the collection $\mathcal{F} = \{K_{tn}\}$ of all base kernels the semantic kernel forest. Figure 1 shows an illustrative example.
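Given the learned metrics, assembling the forest is mechanical. The sketch below builds one training-set Gram matrix per node and sets each bandwidth from the mean squared Mahalanobis distance, one common reading of "average distances"; the exact bandwidth convention used in the paper may differ.

```python
import numpy as np

def semantic_kernel_forest(X, metrics):
    """Build one base Gram matrix per taxonomy node, eq. (1).
    X: (N, d) training features; metrics: {(t, n): M_tn}, assumed to come
    from ToM training. Returns dicts of Gram matrices and bandwidths."""
    kernels, gammas = {}, {}
    for key, M in metrics.items():
        L = np.linalg.cholesky(M + 1e-8 * np.eye(M.shape[0]))
        XL = X @ L
        sq = np.square(XL).sum(axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * XL @ XL.T, 0.0)
        gammas[key] = 1.0 / max(d2.mean(), 1e-12)   # inverse mean distance
        kernels[key] = np.exp(-gammas[key] * d2)
    return kernels, gammas
```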
While ToM has shown promising results in learning metrics in a single taxonomy, its reliance on
linear Mahalanobis metrics is inherently limited. A straightforward convex combination of ToMs
would result in yet another linear mapping, incapable of capturing nonlinear inter-taxonomic interactions. In contrast, our kernel approach retains ToM's granularity-specific features but also enables
nontrivial (nonlinear) combinations, especially when coupled with a novel hierarchical regularizer,
which we will define next.
3.2 Learning class-specific kernels across taxonomies
Base kernels in the semantic kernel forest are learned jointly within each taxonomy but independently across taxonomies. To leverage multiple taxonomies and to capture different semantic views
of the object categories, we next combine them discriminatively to improve classification.
Basic setting To learn class-specific features (or kernels), we compose a one-versus-rest supervised
learning problem. Additionally, instead of combining all the base kernels in the forest F , we preselect a subset of them based on the taxonomy structure. Specifically, from each taxonomy, we
select base kernels that correspond to the nodes on the path from the root to the leaf node class. For
example, in the Biological taxonomy of Figure 1, for the category Dalmatian, this path includes the
nodes (superclasses) canine and animal. Thus, for class c, the linearly combined kernel is given by
$$F_c(x_i, x_j) = \sum_{t} \sum_{n \to c} \beta_{ctn}\, K_{tn}(x_i, x_j), \qquad (3)$$
where $n \to c$ indexes the nodes that are ancestors of $c$, which is a leaf node (recall that the first $C$ nodes in every taxonomy are reserved for leaf class nodes). The combination coefficients $\beta_{ctn}$ are constrained to be nonnegative to ensure the positive semidefiniteness of the resulting kernel $F_c(\cdot, \cdot)$.
We apply the kernel $F_c(\cdot, \cdot)$ to construct the one-versus-rest binary classifier to distinguish instances of class $c$ from all other classes. We then optimize $\beta_c = \{\beta_{ctn}\}$ such that the classifier attains the lowest empirical misclassification risk. The resulting optimization (in its dual formulation) is
analogous to standard multiple kernel learning [30]:

$$\min_{\beta_c} \max_{\alpha_c} \; \sum_i \alpha_{ci} - \frac{1}{2} \sum_i \sum_j \alpha_{ci}\,\alpha_{cj}\, q_{ci}\, q_{cj}\, F_c(x_i, x_j) \qquad (4)$$
$$\text{s.t.} \quad \sum_i \alpha_{ci}\, q_{ci} = 0, \qquad 0 \le \alpha_{ci} \le C, \ \forall i,$$

where $\alpha_c$ denotes the Lagrange multipliers for the binary SVM classifier, $C$ is the regularizer for the SVM's hinge loss function, and $q_{ci} = \pm 1$ is the indicator variable of whether or not $x_i$'s label is $c$.
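For fixed $\beta_c$, $F_c$ is just a precomputed kernel, so the inner maximization reduces to training an ordinary binary C-SVM. The sketch below uses scikit-learn's SVC as a stand-in solver and reads the dual objective value back out of the fitted model; this is an illustrative reduction, not the authors' solver.

```python
import numpy as np
from sklearn.svm import SVC

def g_of_beta(kernels, beta_c, path_c, y_c, C=1000.0):
    """Evaluate the inner maximization of eq. (4) for a fixed beta_c.
    kernels: {(t, n): Gram matrix}; path_c: keys on c's root-to-leaf
    paths (eq. 3); y_c: numpy array of +1/-1 one-vs-rest labels."""
    F = sum(beta_c[k] * kernels[k] for k in path_c)      # eq. (3)
    svm = SVC(C=C, kernel="precomputed").fit(F, y_c)
    alpha = np.abs(svm.dual_coef_).ravel()               # alpha_i at SVs
    sv = svm.support_
    Q = (y_c[sv, None] * y_c[None, sv]) * F[np.ix_(sv, sv)]
    return alpha.sum() - 0.5 * alpha @ Q @ alpha         # dual objective
```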
Hierarchical regularization Next, we extend the basic setting to incorporate richer modeling
assumptions. We hypothesize that kernels at higher-level nodes should be preferred to lower-level
nodes. Intuitively, higher-level kernels relate to more classes, thus are likely essential to reduce loss.
We leverage this intuition and knowledge about the relative priority of the kernels from each taxonomy's hierarchical structure. We design a novel structural regularization that prefers larger weights
for a parent node compared to its children. Formally, the proposed MKL-H regularizer is given by:
$$\Omega(\beta_c) = \lambda \sum_{t,\, n \to c} \beta_{ctn} + \mu \sum_{t,\, n \to c} \max\!\big(0,\ \beta_{ctn} - \beta_{ct p_n} + 1\big). \qquad (5)$$
The first part prefers a sparse set of kernels. The second part (in the form of a hinge loss) encodes our desire to have the weight assigned to a node $n$ be less than the weight assigned to the node's parent $p_n$. We also introduce a margin of 1 to further increase the difference between the two weights.
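Eq. (5) itself is a few lines once the tree structure is encoded as a parent map. In the sketch below, beta_c is a dictionary keyed by (taxonomy, node) and parent maps each key to its parent key, or None at a root; both encodings are assumptions made for illustration.

```python
def mkl_h_penalty(beta_c, parent, lam=1.0, mu=1.0):
    """Eq. (5): lam * sum of weights (sparsity) plus mu * hinge terms that
    charge any node whose weight is not at least 1 below its parent's."""
    sparsity = lam * sum(beta_c.values())
    hier = mu * sum(max(0.0, b - beta_c[parent[k]] + 1.0)
                    for k, b in beta_c.items()
                    if parent.get(k) is not None)
    return sparsity + hier
```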
Hierarchical regularization was previously explored in [34], where a mixed (1, 2)-norm is used to
regularize the relative sizes between the parent and the children. The main idea there is to discard
children nodes if the parent is not selected. Our regularizer is similar, but is simpler and more computationally efficient. (Additionally, our preliminary studies show [34] has no empirical advantage
over our approach in improving recognition accuracy.)
3.3 Numerical optimization
Our learning problem is cast as a convex optimization that balances the discriminative loss in eq. (4)
and the regularizer in eq. (5):
$$\min_{\beta_c} f(\beta_c) = g(\beta_c) + \Omega(\beta_c), \quad \text{s.t. } \beta_c \ge 0, \qquad (6)$$

where we use the function $g(\cdot)$ to encapsulate the inner maximization problem over $\alpha_c$ in eq. (4).
We use the projected subgradient method to solve eq. (6), for its ease of implementation and practical
effectiveness [35]. Specifically, at iteration $t$, let $\beta_c^t$ be the current value of $\beta_c$. We compute a subgradient $s^t$ of $f(\beta_c)$ at $\beta_c^t$, then perform the following update,

$$\beta_c^{t+1} \leftarrow \max\!\big(0,\ \beta_c^t - \eta_t\, s^t\big), \qquad (7)$$

where the $\max(\cdot)$ function implements the projection operation so that the update does not fall outside the feasible region $\beta_c \ge 0$. For the step size $\eta_t$, we use the modified Polyak step size [36].
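A minimal version of the solver loop, eqs. (6)-(7): take a subgradient step, then clip at zero. The step-size rule shown is one standard modified Polyak formula using the best value seen so far plus a slack delta; the exact variant in [36] may differ in its details.

```python
import numpy as np

def projected_subgradient(f_and_subgrad, beta0, iters=200, delta=1e-3):
    """Minimize f(beta) subject to beta >= 0 via projected subgradient.
    f_and_subgrad(beta) must return (objective value, subgradient)."""
    beta = np.asarray(beta0, dtype=float).copy()
    best_beta, best_f = beta.copy(), np.inf
    for _ in range(iters):
        f_val, s = f_and_subgrad(beta)
        if f_val < best_f:
            best_f, best_beta = f_val, beta.copy()
        # modified Polyak step: eta = (f - f_best + delta) / ||s||^2
        eta = (f_val - best_f + delta) / max(float(np.dot(s, s)), 1e-12)
        beta = np.maximum(0.0, beta - eta * s)   # projection onto beta >= 0
    return best_beta
```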
4 Experiments
We validate our approach on multiple image datasets, and compare to several informative baselines.
4.1 Image datasets and taxonomies
We consider two publicly available image collections: Animals with Attributes (AWA) [14] and
ImageNet [7].1 We form two datasets from AWA. The first consists of the four classes shown in
1 attributes.kyb.tuebingen.mpg.de/ and image-net.org/challenges/LSVRC/2011/
[Figure 2 drawings omitted: panels (a) WordNet, (b) Appearance, (c) Behavior, and (d) Habitat show the four AWA-10 taxonomies, and panels (e) WordNet, (f) Appearance, and (g) Attributes show the three ImageNet-20 taxonomies, each a tree whose leaves are the dataset's class names.]
Figure 2: Taxonomies for the AWA-10 (a-d) and ImageNet-20 (e-g) datasets.
Fig. 1, and totals 2,228 images; the second contains the ten classes in [14], and totals 6,180 images. We refer to them as AWA-4 and AWA-10, respectively. The third dataset, ImageNet-20, consists of 28,957 total images spanning 20 classes from ILSVRC2010. We chose classes that are non-animals
(to avoid overlap with AWA) and that have attribute labels [37].
To obtain multiple taxonomies per dataset, we use attribute labels and WordNet. Attributes are human understandable properties shared among object classes, e.g., "furry", "flat", "carnivorous" [14].
AWA and ImageNet have 85 and 25 attribute labels, respectively. To form semantic taxonomies
based on attributes, we first manually divide the attribute labels into subsets according to their mutual
semantic relevance (e.g., "furry" and "shiny" are attributes relevant for an Appearance taxonomy, while "land-dwelling" and "aquatic" are relevant for a Habitat taxonomy). Then, for each subset of
attributes, we perform agglomerative clustering using Euclidean distance on vectors of the training
images' real-valued attributes. We restrict the tree height (6 for ImageNet and 3 for AWA) to ensure
that the branching factor at the root is not too high. To extract a WordNet taxonomy, we find all
nodes in WordNet that contain the object class names on their word lists, and then build a hierarchy
by pruning nodes with only one child and resolving multiple parentship.
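The attribute-based taxonomy construction can be sketched with standard hierarchical clustering. Below, class_attr holds one mean real-valued attribute vector per class for a chosen attribute subset, and SciPy's average-linkage agglomerative clustering is a stand-in for the paper's procedure; the linkage method and the way tree levels are read off are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree

def attribute_taxonomy(class_attr, levels=3):
    """class_attr: (C, A) matrix, one attribute vector per class.
    Returns a (C, levels) array of cluster labels, coarse to fine, as a
    proxy for the internal nodes of an agglomerative taxonomy."""
    Z = linkage(class_attr, method="average", metric="euclidean")
    return cut_tree(Z, n_clusters=list(range(2, 2 + levels)))
```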
For AWA-10, we use 4 taxonomies: one from WordNet, and three based on attribute subsets reflecting Appearance, Behavior, and Habitat ties. For ImageNet-20, we use 3 taxonomies: one from
WordNet, one reflecting Appearance as found by hierarchical clustering on the visual features, and
one reflecting Attributes using annotations from [37]. For the AWA-4 taxonomies, we simply generate all 3 possible 2-level binary trees, which, based on manual observation, yield taxonomies
reflecting Biological, Appearance, and Habitat ties between the animals. See Figures 1 and 2.
We stress that these taxonomies are created externally with human knowledge, and thus they inject
perceived object relationships into the feature learning problem. This is in stark contrast to prior
work that focuses on optimizing hierarchies for efficiency, without requiring interpretability of the
trees themselves [16, 12, 13, 17].
4.2 Baseline methods for comparison
We compare our method to three key baselines: 1) Raw feature kernel: an RBF kernel computed
on the original image features, with the $\gamma$ parameter set to the inverse of the mean Euclidean distance $\bar{d}$ among training instances. 2) Raw feature kernel + MKL: MKL combination of multiple such RBF kernels constructed by varying $\gamma$, which is a traditional approach to generate base kernels (e.g., [30]). For this baseline, we generate the same number $N$ of base kernels as in the semantic kernel forest, with $\gamma = \sigma / \bar{d}$ for $\sigma \in \{2^{1-m}, \ldots, 2^{N-m}\}$, where $m = \frac{N}{2}$. 3) Perturbed semantic
kernel tree: a semantic kernel tree trained with taxonomies that have randomly swapped leaves.
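For completeness, the bandwidth family of the second baseline is trivial to generate. The sketch below reads the schedule as scaling the inverse mean distance by powers of two around the center exponent m = N/2; whether the scale multiplies or divides the mean distance is ambiguous in the extracted text, so treat this as one plausible reading.

```python
def baseline_gammas(mean_dist, n_kernels):
    """Bandwidths gamma = 2^(k-m) / mean_dist for k = 1..N, m = N/2
    (an assumed reading of the baseline's schedule)."""
    m = n_kernels // 2
    return [(2.0 ** (k - m)) / mean_dist for k in range(1, n_kernels + 1)]

print(baseline_gammas(mean_dist=1.5, n_kernels=6))
```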
| Method | AWA-4 | AWA-10 | ImageNet-20 |
|---|---|---|---|
| Raw feature kernel | 47.67 ± 2.22 | 30.80 ± 1.36 | 28.20 ± 1.45 |
| Raw feature kernel + MKL | 48.50 ± 1.89 | 31.13 ± 2.81 | 27.67 ± 1.50 |
| Perturbed semantic kernel tree + MKL-H | N/A | 31.53 ± 2.07 | 28.20 ± 2.02 |
| Perturbed semantic kernel forest + MKL-H | N/A | 33.20 ± 2.96 | 30.77 ± 1.53 |
| Semantic kernel tree + Avg | 47.17 ± 2.40 | 31.92 ± 1.21 | 28.97 ± 1.61 |
| Semantic kernel tree + MKL | 48.89 ± 1.06 | 32.43 ± 1.93 | 29.74 ± 1.26 |
| Semantic kernel tree + MKL-H | 50.06 ± 1.12 | 32.68 ± 1.79 | 29.90 ± 0.70 |
| Semantic kernel forest + MKL | 49.67 ± 1.11 | 34.60 ± 1.78 | 30.97 ± 1.14 |
| Semantic kernel forest + MKL-H | 52.83 ± 1.68 | 35.87 ± 1.22 | 32.30 ± 1.00 |

Table 1: Multi-class classification accuracy on all datasets, across 5 train/test splits. (The perturbed semantic kernel tree baseline is not applicable for AWA-4, since all possible groupings are present in the taxonomies.)
[Figure 3 bar charts omitted: per-class accuracy improvements for AWA-10 (legend means: WordNet 1.73, Appearance 1.00, Behavior 2.53, Habitat 2.27, All 5.07) and ImageNet-20 (legend means: WordNet 0.73, Visual 1.97, Attributes 2.40, All 4.10).]
Figure 3: Per-class accuracy improvements of each individual taxonomy and the semantic kernel forest ("All") over the raw feature kernel baseline. Numbers in legends denote mean improvement. Best viewed in color.
The first two baselines will show the accuracy attainable using the same image features and basic
classification tools (SVM, MKL) as our approach, but lacking the taxonomy insights. The last
baseline will test if weakening the semantics in the taxonomy has a negative impact on accuracy.
We evaluate several variants of our approach, in order to analyze the impact of each component: 1)
Semantic kernel tree + Avg: an equal-weight average of the semantic kernels from one taxonomy.
2) Semantic kernel tree + MKL: the same kernels, but combined with MKL using sparsity regularization only (i.e., $\mu = 0$ in eq. 5). 3) Semantic kernel tree + MKL-H: the same as previous,
but adding the proposed hierarchical regularization (eq. 5). 4) Semantic kernel forest + MKL:
semantic forest kernels from multiple taxonomies combined with MKL. 5) Semantic kernel forest
+ MKL-H: the same as previous, but adding our hierarchical regularizer.
4.3 Implementation details
For all results, we use 30/30/30 images per class for training/validation/testing, and generate 5
such random splits. We report average multi-class recognition accuracy and standard errors for
95% confidence interval. For single taxonomy results, we report the average over all individual
taxonomies. For all methods, the raw image features are bag-of-words histograms obtained on SIFT,
provided with the datasets. We reduce their dimensionality to 100 with PCA to speed up the ToM
training, following [10]. To train ToM, we sample 400 random constraints and cross-validate the
regularization parameters $\lambda, \mu \in \{0.1, 1, 10\}$. For MKL/MKL-H, we use $C = 1000$ for the C-SVM parameter, and cross-validate the sparsity and hierarchical parameters $\lambda, \mu \in \{0, 0.1, 1, 10\}$.
4.4 Results
Quantitative results Table 1 shows the multi-class classification accuracy on all three datasets.
Our semantic kernel forests approach significantly outperforms all three baselines. It improves accuracy for 9 of the 10 AWA-10 classes, and 16 of the 20 classes in ImageNet-20 (see Figure 3).
These gains clearly show the impact of injecting semantics into discriminative feature learning. The
forests' advantage over the individual trees supports our core claim regarding the value of interleaving semantic cues from multiple taxonomies. Further, the proposed hierarchical regularization
(MKL-H) outperforms the generic MKL, particularly for the multiple taxonomy forests.
We stress that semantic kernel forests' success is not simply due to having access to a variety of
kernels, as we can see by comparing our method to both the raw feature MKL and perturbed tree
[Figure 4 panels omitted: AWA-4 confusion matrices for (a) Biological (38.33), (b) Appearance (50.83), (c) Habitat (43.33), and (d) All (55.00), plus AWA-10 kernel-weight maps for (e) l1-only MKL (34.33) and (f) l1 + hierarchical MKL-H (35.67).]
Figure 4: (a-d): AWA-4 confusion matrices for individual taxonomies (a-c) and the combined taxonomies (d). Y-axis shows true classes; x-axis shows predicted classes. (e-f): Example $\beta_c$'s to show the characteristics of the two regularizers. Each entry is a learned kernel weight (brighter = higher weight). Y-axis shows object classes; x-axis shows kernel node names.
results, all of which use the same number of kernels. Instead, the advantage is leveraging the
implicit discriminative criteria embedded in the external semantic groupings. In addition, we note
that even perturbed taxonomies can be semantic; some of their groupings of classes may happen
to be meaningful, especially when there are fewer categories. Hence, their advantage over the raw
feature kernels is understandable. Nonetheless, perturbed taxonomies are semantically weaker than
the originals, and our kernel trees with the true single or multiple taxonomies perform better.
MKL-H has the most impact for the multiple taxonomy forests, and relatively little on the single
kernel tree. This makes sense. For a single taxonomy, a single kernel is solely responsible for
discriminating a class from the others, making all kernels similarly useful. In contrast, in the forest,
two classes are related at multiple different nodes, making it necessary to select out useful views;
here, the hierarchical regularizer plays the role of favoring kernels at higher levels, which might
have more generalization power due to the training set size and number of classes involved.
The per-class and per-taxonomy comparisons in Figure 3 further elucidate the advantage of using
multiple complementary taxonomies. A single semantic kernel tree often improves accuracy on
some classes, but at the expense of reduced accuracy on others. This illustrates that the structure of
an individual taxonomy is often suboptimal. For example, the Habitat taxonomy on AWA-10 helps
distinguish humpback whale well from the others (it branches early from the other animals due to
its distinctive "oceanic" background), but it hurts accuracy for giant panda. The WordNet taxonomy
does exactly the opposite, improving giant panda via the Biological taxonomy, but hurting humpback whale. The semantic kernel forest takes the best of both through its learned combination. The
only cases in which it fails are when the majority of the taxonomies strongly degrade performance,
as is to be expected given the linear MKL combination (e.g., see the classes marimba and rule).
Further qualitative analysis. Figure 4 (a-d) shows the confusion matrices for AWA-4 using only
the root level kernels. We see how each taxonomy specializes the features, exactly in the manner
sketched in Sec. 1. The combination of all taxonomies achieves the highest accuracy (55.00), better
than the maximally performing individual taxonomy (Appearance, 50.83). Figure 4 (e-f) shows
the learned kernel combination weights βc for each class c in AWA-10, using the two different
regularizers. In (e), the L1 regularizer selects a sparse set of useful kernels. For example, the
humpback whale drops the kernels belonging to the whole Behavior taxonomy block, and gives the
strongest weight to "hairless" and "habitat". However, by failing to select some of the upper-level
nodes, it focuses only on the most confusing fine-grained problems. In contrast, with the proposed
regularization (f), we see more emphasis on the upper nodes (e.g., the "behavior" and "placental"
kernels), which helps accuracy.
5 Conclusion
We proposed a semantic kernel forest approach to learn discriminative visual features that leverage
information from multiple semantic taxonomies. The results show that it improves object recognition accuracy, and give good evidence that committing to a single external knowledge source is
insufficient. In future work, we plan to explore non-additive and/or local per-instance kernel combination techniques for integrating the semantic views.
Acknowledgements This research is supported in part by NSF IIS-1065243 and NSF IIS-1065390.
References
[1] N. Dalal and B. Triggs. Histograms of Oriented Gradients for Human Detection. In CVPR, 2005.
[2] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-Constrained Linear Coding for Image
Classification. In CVPR, 2010.
[3] C. Fellbaum, editor. WordNet An Electronic Lexical Database. MIT Press, May 1998.
[4] A. Zweig and D. Weinshall. Exploiting Object Hierarchy: Combining Models from Different Category
Levels. In ICCV, 2007.
[5] M. Marszalek and C. Schmid. Semantic hierarchies for visual object recognition. In CVPR, 2007.
[6] A. Torralba, R. Fergus, and W. T. Freeman. 80 million Tiny Images: a Large Dataset for Non-Parametric
Object and Scene Recognition. PAMI, 30(11):1958–1970, 2008.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A Large-Scale Hierarchical Image
Database. In CVPR, 2009.
[8] R. Fergus, H. Bernal, Y. Weiss, and A. Torralba. Semantic label sharing for learning with many categories.
In ECCV, 2010.
[9] J. Deng, A. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us?
In ECCV, 2010.
[10] S. J. Hwang, K. Grauman, and F. Sha. Learning a tree of metrics with disjoint visual features. In NIPS,
2011.
[11] N. Verma, D. Mahajan, S. Sellamanickam, and V. Nair. Learning hierarchical similarity metrics. In CVPR,
2012.
[12] S. Bengio, J. Weston, and D. Grangier. Label Embedding Trees for Large Multi-Class Task. In NIPS,
2010.
[13] J. Deng, S. Satheesh, A. Berg, and L. Fei Fei. Fast and balanced: Efficient label tree learning for large
scale object recognition. In NIPS, 2011.
[14] C. Lampert, H. Nickisch, and S. Harmeling. Learning to Detect Unseen Object Classes by Between-Class
Attribute Transfer. In CVPR, 2009.
[15] M. Marszalek and C. Schmid. Constructing category hierarchies for visual recognition. In ECCV, 2008.
[16] G. Griffin and P. Perona. Learning and using taxonomies for fast visual categorization. In CVPR, 2008.
[17] T. Gao and D. Koller. Discriminative learning of relaxed hierarchy for large-scale visual recognition. In
ICCV, 2011.
[18] J. Sivic, B. Russell, A. Zisserman, W. Freeman, and A. Efros. Unsupervised discovery of visual object
class hierarchies. In CVPR, 2008.
[19] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR,
2008.
[20] L.-J. Li, C. Wang, Y. Lim, D. Blei, and L. Fei-Fei. Building and using a semantivisual image hierarchy.
In CVPR, 2010.
[21] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML,
2010.
[22] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical Correlation Analysis: An Overview with
Application to Learning Methods. Neural Computation, 16(12), 2004.
[23] A. Blum and T. Mitchell. Combining Labeled and Unlabeled Data with Co-training. In COLT: Proceedings of the Workshop on Computational Learning Theory, 1998.
[24] C. Christoudias, K. Saenko, L. Morency, and T. Darrell. Co-adaptation of audio-visual speech and gesture
classifiers. In International Conference on Multimodal Interaction, 2006.
[25] I. Dhillon, S. Mallela, and R. Kumar. A divisive information-theoretic feature clustering algorithm for
text classification. Journal of Machine Learning Research, 3:1265–1287, 2003.
[26] A. Gupta and S. Dasgupta. Hybrid hierarchical clustering: Forming a tree from multiple views. In
Workshop on Learning With Multiple Views, 2005.
[27] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, 2006.
[28] N. Loeff and A. Farhadi. Scene Discovery by Matrix Factorization. In ECCV, 2008.
[29] S. J. Hwang, F. Sha, and K. Grauman. Sharing features between objects and their attributes. In CVPR,
2011.
[30] F. Bach, G. Lanckriet, and M. Jordan. Multiple Kernel Learning, Conic Duality, and the SMO Algorithm.
In ICML, 2004.
[31] M. Varma and D. Ray. Learning the discriminative power-invariance trade-off. In ICCV, 2007.
[32] P. Gehler and S. Nowozin. On feature combination for multiclass object classification. In ICCV, 2009.
[33] K. Weinberger, J. Blitzer, and L. Saul. Distance Metric Learning for Large Margin Nearest Neighbor
Classification. In NIPS, 2006.
[34] F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In NIPS, 2008.
[35] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[36] S. Boyd and A. Mutapcic. Subgradient methods. 2007.
[37] O. Russakovsky and L. Fei-Fei. Attribute learning in large-scale datasets. In ECCV, 2010.
|
4760 |@word briefly:1 dalal:1 norm:1 seal:3 triggs:1 hu:3 seek:1 r:1 accounting:1 attainable:1 incurs:1 thereby:1 accommodate:1 initial:2 series:1 contains:1 tuned:2 genetic:1 biolog:1 past:1 outperforms:2 regarding:1 current:1 bmr:1 comparing:1 si:1 yet:3 stemming:2 ctn:5 distant:1 happen:1 partition:1 informative:2 subsequent:1 enables:1 numerical:1 hypothesize:1 designed:2 drop:1 update:2 ota:1 v:2 alone:2 cue:6 leaf:8 weighing:1 selected:1 fewer:1 bart:1 lamp:1 ith:1 core:1 htu:2 blei:1 coarse:1 node:42 lending:1 preference:1 org:1 simpler:1 phylogenetic:1 rc:2 along:4 dn:3 height:1 shiny:1 constructed:1 maturity:1 descendant:2 consists:3 qualitative:1 wild:1 combine:4 compose:1 comb:1 ray:1 manner:1 introduce:3 inter:6 hippo:1 ra:10 indeed:1 behavior:8 themselves:3 nor:1 frequently:1 multi:9 mpg:1 ry:1 expected:1 freeman:2 researched:1 encouraging:1 little:1 farhadi:1 ua:1 domestic:2 provided:2 xx:2 underlying:1 hardoon:1 lowest:1 what:2 tic:1 weinshall:1 monkey:1 giant:4 bootstrapping:1 sung:1 quantitative:1 every:2 subclass:1 tie:4 exactly:2 grauman:4 classifier:10 um:1 ro:3 appear:1 encapsulate:1 bertsekas:1 positive:1 local:1 jungle:2 id:3 subscript:1 path:4 solely:1 marszalek:2 pami:1 might:2 chose:1 emphasis:1 nz:1 specifying:1 challenging:2 co:5 ease:1 limited:1 factorization:1 range:1 practical:1 responsible:1 harmeling:1 testing:1 practice:1 block:1 implement:1 differs:1 spot:1 pontil:1 empirical:2 significantly:1 projection:1 boyd:1 word:3 confidence:1 refers:1 lic:1 integrating:1 convenience:1 close:1 selection:3 unlabeled:1 context:2 risk:1 ast:2 optimize:2 conventional:1 lexical:1 straightforward:1 independently:3 convex:3 chimp:1 ke:2 feline:3 splitting:2 insight:3 rule:3 regularize:2 varma:1 embedding:1 hurt:1 analogous:1 elucidate:2 hierarchy:16 deploy:1 target:1 play:1 programming:1 lanckriet:1 pa:4 element:2 recognition:20 particularly:2 rry:2 labeled:5 database:2 gehler:1 role:1 solved:1 capture:4 wang:2 region:1 ensures:1 eva:1 oe:1 russell:1 highest:1 trade:1 principled:1 intuition:3 balanced:1 complexity:1 trained:1 serve:1 distinctive:1 efficiency:2 easily:1 po:5 multimodal:1 cat:20 tx:2 various:1 regularizer:13 train:3 distinct:1 committing:1 describe:2 fast:2 sc:1 tell:1 aggregate:1 outside:1 quite:1 richer:2 larger:1 solve:1 valued:1 cvpr:11 gi:4 unseen:1 commit:2 jointly:3 itself:1 advantage:5 rr:2 net:1 propose:2 interaction:2 coming:1 adaptation:1 relevant:6 combining:4 flo:4 validate:3 christoudias:1 los:1 exploiting:1 parent:5 double:1 darrell:1 categorization:1 bernal:1 object:34 help:4 oo:3 blitzer:1 ac:6 gong:1 augmenting:2 exemplar:2 ard:4 mutapcic:1 nearest:2 op:7 sa:2 eq:7 recovering:1 c:2 predicted:1 differ:1 guided:1 attribute:21 human:8 subordinate:1 assign:1 generalization:1 kristen:1 preliminary:1 biological:7 im:5 leopard:15 pl:2 exploring:1 proximity:1 exp:2 mapping:1 claim:1 efros:1 achieves:2 vary:1 early:2 torralba:2 perceived:2 failing:1 injecting:1 applicable:1 bag:1 label:15 utexas:2 bridge:1 grouped:3 gauge:1 tool:1 mit:1 feisha:1 clearly:1 always:1 gaussian:2 aim:1 super:2 rather:3 mtm:1 pn:1 avoid:1 ck:4 modified:1 varying:1 hippopotamus:2 encode:1 focus:4 yo:1 improvement:5 contrast:4 attains:1 baseline:9 detect:1 preselect:1 helpful:1 am:2 sense:1 kim:1 humpback:5 typically:2 unlikely:2 weakening:1 perona:2 relation:1 ancestor:3 favoring:1 koller:1 selects:1 semantics:4 sketched:1 issue:3 classification:17 dual:1 among:2 colt:1 animal:8 constrained:2 plan:1 fairly:1 mutual:1 equal:1 construct:2 evgeniou:1 having:1 manually:1 
whale:6 capitalize:1 unsupervised:3 yu:1 icml:2 future:1 others:6 report:2 hint:1 employ:1 ktn:4 randomly:1 oriented:1 simultaneously:1 recognize:2 individual:8 usc:1 attempt:1 detection:1 organization:1 interest:1 highly:1 certainly:1 light:1 regularizers:2 fu:1 zee:3 necessary:1 respective:1 tree:37 iv:1 continuing:1 euclidean:3 divide:1 re:1 taylor:1 instance:4 modeling:1 wb:3 ar:2 retains:1 maximization:1 subset:4 entry:1 too:1 bonsai:1 dependency:1 perturbed:7 nickisch:1 combined:5 ju:1 st:8 fundamental:1 explores:1 discriminating:1 international:1 preferring:1 bu:4 dong:1 off:1 pool:2 enhance:1 together:2 reflect:3 ear:1 choose:1 huang:1 priority:1 ket:1 external:3 cognitive:2 corner:1 inject:2 leading:1 dr:2 stark:1 li:5 potential:1 de:2 semidefiniteness:1 sec:3 coding:1 includes:1 coefficient:1 vi:1 piece:1 view:23 root:3 placental:3 doing:1 analyze:1 reached:1 portion:1 xing:1 parallel:3 panda:7 annotation:1 predator:2 daisy:1 contribution:2 formed:1 publicly:1 accuracy:19 reserved:1 efficiently:2 sy:2 yield:2 correspond:3 characteristic:1 raw:8 critically:1 apple:1 finer:1 cc:3 russakovsky:1 acc:1 strongest:1 reach:1 sharing:3 complicate:1 manual:1 nonetheless:1 pp:3 involved:1 gain:2 dataset:4 oceanic:1 mitchell:1 recall:1 knowledge:6 color:1 dimensionality:1 improves:3 organized:1 cj:1 lim:1 rim:2 ea:1 reflecting:4 fellbaum:1 higher:4 ta:3 supervised:1 tom:8 zisserman:1 maximally:1 wei:1 formulation:1 strongly:1 generality:1 furthermore:2 implicit:2 stage:1 correlation:1 sketch:1 su:2 nonlinear:3 overlapping:1 mkl:31 hwang:3 believe:1 aquatic:7 scientific:1 building:2 name:2 k22:1 contain:1 multiplier:1 requiring:1 true:2 regularization:11 assigned:2 hence:1 excluded:1 dhillon:1 furry:2 semantic:67 mahajan:1 mahalanobis:5 ll:2 during:2 branching:1 encourages:1 basketball:1 illustrative:1 oc:1 rat:7 criterion:3 stress:2 tt:3 demonstrate:2 confusion:3 tn:7 theoretic:1 l1:3 image:29 novel:9 physical:1 overview:1 million:1 discussed:1 extend:1 he:1 significant:1 refer:1 hurting:1 ai:1 similarly:1 erc:2 grangier:1 shawe:1 aq:1 access:1 impressive:1 supervision:1 similarity:5 align:3 base:14 recent:2 optimizing:1 belongs:1 discard:1 incapable:1 binary:3 success:2 life:1 qci:3 yi:5 devise:1 relaxed:1 employed:1 deng:3 mallela:1 paradigm:1 ii:4 resolving:1 multiple:34 siamese:5 persian:3 branch:1 technical:1 gesture:1 offer:1 cross:2 zweig:1 bach:2 spotted:2 paired:1 va:1 impact:5 prediction:1 scalable:1 basic:6 variant:1 regression:1 essentially:1 metric:14 vision:1 iteration:1 kernel:109 histogram:2 penalize:1 whereas:2 want:3 fine:2 addition:2 interval:1 background:1 source:3 modality:1 ot:2 rest:2 swapped:1 isolate:1 induced:1 chimpanzee:2 meaningfully:1 legend:1 leveraging:3 spirit:1 effectiveness:1 jordan:1 call:2 structural:1 ee:3 leverage:8 granularity:8 ideal:1 split:4 iii:1 yang:1 bengio:1 variety:1 xj:7 misclassifications:1 brighter:1 lasso:1 restrict:2 suboptimal:2 bandwidth:1 inner:2 idea:3 reduce:2 multiclass:1 polyak:1 br:2 tub:1 drum:1 texas:2 angeles:1 whether:2 motivated:1 sunflower:1 pca:1 ul:1 penalty:1 speech:2 speaking:1 prefers:2 useful:6 generally:1 se:2 awa:19 ten:1 category:18 tth:1 reduced:2 generate:5 exist:1 nsf:2 canonical:1 disjoint:3 per:9 rb:3 diverse:1 broadly:1 promise:1 dasgupta:1 group:5 key:2 four:1 reliance:1 ilsvrc2010:1 pb:2 blum:1 ce:1 prey:2 button:1 subgradient:3 year:1 compete:1 inverse:1 taxonomic:4 parameterized:1 electronic:1 home:1 loeff:1 decision:1 ble:1 confusing:1 griffin:1 dwelling:1 capturing:2 entirely:1 ct:3 hi:4 distinguish:7 
assemble:1 nonnegative:1 nontrivial:1 mtn:8 constraint:1 fei:11 scene:3 flat:1 encodes:1 sake:1 tag:1 speed:1 min:2 kumar:1 performing:1 separable:1 relatively:1 structured:3 according:5 combination:17 disconnected:1 belonging:1 across:5 pan:1 appealing:1 making:3 intuitively:1 iccv:4 pr:1 computationally:1 mutually:1 previously:1 turn:1 lta:3 needed:1 letting:1 ge:3 end:2 lieu:1 available:1 operation:1 apply:3 hierarchical:22 generic:1 weinberger:1 original:2 denotes:1 clustering:5 include:2 ensure:2 porteous:1 hinge:2 procyonid:2 exploit:9 commits:1 especially:2 build:1 implied:1 objective:1 fa:1 sha:3 primary:1 parametric:1 diagonal:2 traditional:1 southern:1 gradient:1 distance:7 separate:1 oa:4 majority:1 athena:1 strawberry:1 degrade:1 topic:1 originate:1 agglomerative:1 extent:1 tuebingen:1 spanning:2 itu:1 ru:6 index:2 relationship:8 insufficient:2 balance:1 hairless:3 fe:5 taxonomy:110 relate:2 expense:1 trace:1 negative:1 ba:8 design:1 implementation:2 understandable:2 satheesh:1 canine:3 allowing:1 perform:3 upper:2 observation:1 datasets:10 lity:1 rn:4 community:1 dog:3 cast:2 optimized:2 imagenet:11 dalmatian:13 sivic:1 california:1 smo:1 learned:10 distinction:1 boost:1 nip:6 beyond:2 perception:1 ev:1 indoor:1 sparsity:6 challenge:1 pig:3 tb:2 max:4 interpretability:1 lend:1 video:1 power:3 misclassification:1 ia:1 overlap:1 force:1 hybrid:1 indicator:1 sian:2 nth:1 representing:2 improve:3 habitat:15 ne:1 conic:1 axis:4 created:1 specializes:1 coupled:2 extract:1 schmid:2 szedmak:1 text:2 prior:3 discovery:3 acknowledgement:1 relative:2 embedded:2 loss:7 lacking:1 discriminatively:2 mixed:1 versus:3 lv:1 validation:1 integrate:1 editor:1 verma:1 nowozin:1 classifying:1 tiny:1 share:2 balancing:1 austin:2 pi:3 land:5 supported:1 last:2 eccv:5 carnivore:2 synset:1 guide:1 lle:1 weaker:1 neighbor:2 fall:1 saul:1 sparse:3 benefit:3 dimension:2 stand:1 kdiag:1 made:2 collection:3 projected:1 avg:2 welling:1 pruning:1 implicitly:2 relatedness:1 preferred:1 reveals:1 rid:1 isw:3 discriminative:14 xi:12 alternatively:1 fergus:2 un:1 latent:1 table:2 reality:1 additionally:2 learn:13 promising:1 transfer:1 ca:5 inherently:1 forest:28 improving:2 complex:1 constructing:1 domain:1 diag:1 da:6 main:4 hierarchically:1 linearly:1 synonym:1 whole:1 hyperparameters:1 lampert:1 n2:1 child:8 complementary:4 fig:1 en:4 n:1 additive:1 sub:1 fails:1 pe:5 third:1 interleaving:3 grained:2 externally:2 embed:1 specific:9 kyb:1 sift:1 er:9 explored:1 ton:3 svm:4 gupta:1 list:1 evidence:1 grouping:4 exists:1 essential:1 socher:1 workshop:2 merging:1 adding:2 ci:4 texture:1 illustrates:1 opposite:1 margin:3 browsing:2 locality:1 yin:2 fc:4 simply:2 appearance:12 likely:1 raccoon:3 explore:1 visual:26 gao:1 lagrange:1 desire:1 forming:1 bo:3 ch:3 wolf:12 relies:1 nair:1 weston:1 superclass:4 goal:3 viewed:1 rbf:2 towards:1 shared:2 feasible:1 lsvrc:1 specifically:4 semantically:4 wordnet:15 morency:1 multimedia:1 specie:1 duality:1 invariance:1 discriminate:1 total:3 la:3 toed:2 meaningful:3 saenko:1 divisive:1 pointy:2 select:5 formally:2 berg:2 internal:1 support:1 latter:1 relevance:1 incorporate:2 evaluate:1 audio:1 argyriou:1
|
4,155 | 4,761 |
Human memory search as a random walk
in a semantic network
Joseph L. Austerweil
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Joshua T. Abbott
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
tom [email protected]
Abstract
The human mind has a remarkable ability to store a vast amount of information in
memory, and an even more remarkable ability to retrieve these experiences when
needed. Understanding the representations and algorithms that underlie human
memory search could potentially be useful in other information retrieval settings,
including internet search. Psychological studies have revealed clear regularities in
how people search their memory, with clusters of semantically related items tending to be retrieved together. These findings have recently been taken as evidence
that human memory search is similar to animals foraging for food in patchy environments, with people making a rational decision to switch away from a cluster
of related information as it becomes depleted. We demonstrate that the results
that were taken as evidence for this account also emerge from a random walk on a
semantic network, much like the random web surfer model used in internet search
engines. This offers a simpler and more unified account of how people search
their memory, postulating a single process rather than one process for exploring a
cluster and one process for switching between clusters.
1 Introduction
Human memory has a vast capacity, storing all the semantic knowledge, facts, and experiences
that people accrue over a lifetime. Given this huge repository of data, retrieving any one piece of
information from memory is a challenging computational problem. In fact, it is the same problem
faced by libraries [1] and internet search engines [6] that need to efficiently organize information to
facilitate retrieval of those items most likely to be relevant to a query. It thus becomes interesting to
try to understand exactly what kind of algorithms and representations are used when people search
their memory.
One of the main tasks that has been used to explore memory search is the semantic fluency task, in
which people retrieve as many items belonging to a particular category (e.g., animals) as they can
in a limited time period. Early studies using semantic fluency tasks suggested a two-part memory
retrieval process: clustering, in which the production of words forms semantic subcategories, and
switching, in which a transition is made from one subcategory to another [13, 21]. This decomposition of behavior has been useful for diagnosing individual participants with particular clinical
conditions such as Alzheimer's and Parkinson's disease, which result in different patterns of deficits
in these processes [9, 22].
Recently, it has been suggested that the clustering patterns observed in semantic fluency tasks could
reflect an optimal foraging strategy, with people searching for items distributed in memory in a way
that is similar to animals searching for food in environments with patchy food resources [7]. The idea
behind this approach is that each cluster corresponds to a "patch" and people strategically choose to
leave patches when the rate at which they retrieve relevant concepts drops below their average rate
of retrieval. Quantitative analyses of human data provide support for this account, finding shorter
delays in retrieving relevant items after a change in clusters and a relationship between when people
leave a cluster and their average retrieval time.
In this paper, we argue that there may be a simpler explanation for the patterns seen in semantic
fluency tasks, requiring only a single cognitive process rather than separate processes for exploring
a cluster and deciding to switch between clusters. We show that the results used to argue for the
optimal foraging account can be reproduced by a random walk on a semantic network derived from
human semantic associations. Intriguingly, this is exactly the kind of process assumed by the PageRank algorithm [12], providing a suggestive link between human memory and internet search and a
new piece of evidence supporting the claim [6] that this algorithm might be relevant to understanding
human semantic memory.
The plan of the paper is as follows. Section 2 provides relevant background information on studies of
human memory search with semantic fluency tasks and outlines the retrieval phenomena predicted
by an optimal foraging account. Section 3 presents the parallels between searching the internet
and search in human memory, and provides a structural analysis of semantic memory. Section 4
evaluates our proposal that a random walk in a semantic network is consistent with the observed
behavior in semantic fluency tasks. Finally, Section 5 discusses the implications of our work.
2 Semantic fluency and optimal foraging
Semantic fluency tasks (also known as free recall from natural categories) are a classic methodological paradigm for examining how people recall relevant pieces of information from memory
given a retrieval cue [2, 14, 19]. Asking people to retrieve as many examples of a category as
possible in a limited time is a simple task to carry out in clinical settings, and semantic fluency
has been used to study memory deficits in patients with Alzheimer's, Parkinson's, and Huntington's
disease [9, 20, 21, 22]. Both early and recent studies [2, 14, 21] have consistently found that
clusters appear in the sequences of words that people produce, with bursts of semantically related
words produced together and noticeable pauses between these bursts. For example, Troyer et al. [21]
had people retrieve examples of animals, and divided those animals into 22 nonexclusive clusters
("pets", "African animals", etc.). These clusters could be used to analyze patterns in people's
responses: if an item shares a cluster with the item immediately before it, it is considered part of the
same cluster; otherwise, the current item defines a transition between clusters. For example, given
the sequence "dog-cat-giraffe", "dog" and "cat" are considered elements of the same cluster, while
"giraffe" is considered a point of transition to a new patch. Observing fast transitions between items
within a cluster but slow transitions between clusters led to the proposal that memory search might
be decomposed into separate "clustering" and "switching" processes [21].
The clusters that seem to appear in semantic memory suggest an analogy to the distribution of
animal food sources in a patchy environment. When animals search for food, they must consider
the costs and benefits of staying within a patch as opposed to searching for a new patch. Optimal
foraging theory [16] explores the ideal strategies for solving this problem. In particular, the marginal
value theorem shows that a forager's overall rate of return is optimized if it leaves a patch when the
instantaneous rate (the marginal value) of finding food within the patch falls below the long-term
average rate of finding food over the entire environment [3]. In a recent proposal, Hills et al. [7]
posited that search in human semantic memory is similarly guided by an optimal foraging policy.
The corresponding prediction is that people should leave a "patch" in memory (i.e., a semantically
related cluster) when the marginal value of resource gain (finding more relevant items) falls
below the expected rate of searching elsewhere in memory.
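For reference, the marginal value theorem can be stated compactly; the formulation below is the standard one from Charnov (1976), written in our own notation (g and τ are not symbols used in this paper):

```latex
% Standard marginal value theorem (Charnov, 1976); notation is ours.
% g(t): expected cumulative gain after spending time t in a patch.
% \tau: expected travel time between patches.
% The optimal residence time t^* satisfies
g'(t^{*}) \;=\; \frac{g(t^{*})}{\tau + t^{*}}
% i.e., leave the patch exactly when the instantaneous rate of gain
% falls to the long-term average rate over the whole environment.
```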
To investigate these predictions, Hills et al. [7] had people perform a semantic fluency task ("Name
as many animals as you can in 3 minutes") and analyzed the search paths taken through memory
[Figure 1 graphic omitted from the extracted text. Panel (a): item IRT / average IRT versus order of entry relative to patch switch. Panel (b): average IRT versus mean IRT for the entry prior to switch. Panel (c): number of words produced versus Abs(last item IRT − average IRT).]
Figure 1: Human results from the Hills et al. [7] animal naming task. (a) The mean ratio between
the inter-item response time (IRT) for an item and the participant's long-term average IRT over the
entire task, relative to the order of entry for the item (where "1" refers to the relative IRT between
the first word in a patch and the last word in the preceding patch). The dotted line indicates where
item IRTs would be the same as the participant's average IRT for the entire task. (b) The long-term
average IRT versus the mean IRT prior to a switch for each participant. (c) The relationship between
a participant's deviation from the marginal value theorem policy for patch departures (horizontal axis) and the total number of words a participant produced.
in terms of the sequences of animal names produced, assessed with the predetermined animal subcategories of Troyer et al. [21]. As a first measure of correspondence with optimal foraging theory,
the ratios between inter-item response times (IRTs) of items and the long-term average IRTs for each
participant were examined at different retrieval positions relative to a patch switch. Figure 1 (a)
displays the results of this analysis. The first word in a patch (indicated by an order of entry of "1")
takes longer to produce than the overall long-term average IRT (indicated by the dotted line), and the
second word in a patch (indicated by "2") takes much less time to produce. These results are in line
with the marginal value theorem where IRTs up until a patch switch should increase monotonically
towards the long-term average IRT and go above this average only for patch switch IRTs since it
takes extra time to find a new patch. Hills et al. offered a two-part process model to account for this
phenomenon: When the IRT following a word exceeds the long-term average IRT, search switches
from local to global cues (e.g. switching between using semantic similarity or overall frequency as
search cues).
To formally examine how close the IRTs for words immediately preceding a patch switch were to
the long-term average IRT, the per-participant average IRT for these pre-switch words was plotted
against the per-participant long-term average IRT (see Figure 1 (b)). The difference between these
IRTs is very small, with a majority of participants' pre-switch IRTs taking less time than their long-term average IRT, as predicted by the marginal value theorem. As a further analysis of these pre-switch IRTs, the absolute difference between the pre-switch IRT and long-term average IRT was
plotted against the number of words a participant produced along with a regression line through
this data (see Figure 1 (c)). Participants with a larger absolute difference (indicating they either left
patches too soon or too late) produced fewer words, as predicted by the marginal value theorem.
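A minimal sketch of these per-participant quantities is given below, assuming each participant's responses have been reduced to (IRT, is-switch) pairs in production order; this is our reconstruction of the analysis, not the authors' code.

```python
# Per-participant statistics behind Figure 1(b-c): long-term average IRT,
# mean IRT of the entry immediately preceding each patch switch, and the
# deviation between the two.
import numpy as np

def mvt_statistics(responses):
    # responses: list of (irt, is_switch) pairs, where is_switch marks
    # the first word of a new patch.
    irts = np.array([irt for irt, _ in responses], dtype=float)
    long_term_avg = irts.mean()
    pre_switch_irts = [responses[i - 1][0]
                       for i in range(1, len(responses))
                       if responses[i][1]]
    mean_pre_switch = float(np.mean(pre_switch_irts))
    deviation = abs(mean_pre_switch - long_term_avg)
    return long_term_avg, mean_pre_switch, deviation, len(responses)

# Toy participant: fast within-patch IRTs, slower IRTs at patch switches.
toy = [(2.0, False), (1.5, False), (5.0, True), (1.8, False), (6.0, True)]
print(mvt_statistics(toy))
```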
3 The structure of semantic memory
The explanation proposed by Hills et al. [7] for the patterns observed in people's behavior in semantic fluency tasks is relatively complex, assuming two separate processes and a strategic decision to
switch between them. In the remainder of the paper we consider a simpler alternative explanation,
based on the structure of semantic memory. Specifically, we consider the consequences of a single
search process operating over a richer representation: a semantic network.
A semantic network represents the relationships between words (or concepts) as a directed graph,
where each word is represented as a node and nodes are connected together with edges that represent
pairwise association [4]. Semantic networks derived from human behavior can be used to explore
questions about the structure of human memory [5, 14, 17, 18]. We will focus on a network derived
from a word association task, in which people were asked to list the words that come to mind for
a particular cue. For example, when given the cue "doctor", a person might produce the associates
"nurse", "hospital", and "sick" [11]. This task was repeated with a large number of participants,
with each response that was produced more than once being used as a cue in turn. The result is a
semantic network with 5018 nodes, from "a" to "zucchini".
If the clusters that appear in people's responses in the semantic fluency task are reflected in the
structure of this semantic network, a simple process that moves around the semantic network without explicitly knowing that it contains clusters might be sufficient to capture the phenomena reported
by Hills et al. [7]. We explored whether the distance between the nodes corresponding to different
animals in the semantic network could be predicted by their cluster membership. The 141 participants in the study conducted by Hills et al. produced 373 unique animals, of which 178 were
included in the semantic network. However, 13 of these were "sources", not having been produced
as associates for any other words, and we eliminated these from our analysis (as well as the other
analyses we report later in the paper). The result was a set of 165 nodes that each had incoming
and outgoing edges. We analyzed whether the relationship between these animals in the semantic
network showed evidence of the clustering seen in semantic fluency tasks, based on the clusters
identified by Troyer et al. [21].
Our analysis was performed using an additive clustering model [15]. Letting S be the 165 × 165
matrix of similarities obtained by taking s_{ij} = exp{−d_{ij}}, where d_{ij} is the length of the shortest
path between animal nodes i and j in the semantic network, the similarity matrix according to
additive clustering is

$$S = F W F^{\top} \qquad (1)$$
where F is a feature matrix (f_{ac} = 1 if animal a has feature c) and W is a diagonal matrix of
(non-negative) cluster weights. The features in the matrix F were defined to be the twenty-two
hand-coded subcategorization of animals from Troyer et al. [21], and W was found by maximizing
the posterior distribution over weights obtained by assuming Gaussian error in reconstructing S and
a Gaussian prior on W (as in [10]).
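The sketch below shows one simple way to approximate this fit, treating the Gaussian prior as a ridge term on a non-negative least-squares problem; this is our reconstruction of the procedure, not the authors' code, and the toy data at the end is synthetic.

```python
# Fit non-negative diagonal weights W so that F W F' approximates S (Eq. 1).
import numpy as np
from scipy.optimize import nnls

def fit_cluster_weights(S, F, prior_strength=1e-2):
    n, c = F.shape
    # Each feature contributes the rank-one block f_c f_c' to S.
    design = np.stack([np.outer(F[:, j], F[:, j]).ravel()
                       for j in range(c)], axis=1)          # (n*n, c)
    # Ridge rows encode the zero-mean Gaussian prior on the weights.
    A = np.vstack([design, np.sqrt(prior_strength) * np.eye(c)])
    b = np.concatenate([S.ravel(), np.zeros(c)])
    w, _ = nnls(A, b)
    return w

rng = np.random.default_rng(0)
F = (rng.random((165, 22)) < 0.2).astype(float)   # 22 binary cluster features
w_true = rng.random(22)
S = F @ np.diag(w_true) @ F.T
w_hat = fit_cluster_weights(S, F, prior_strength=0.0)
print(float(np.max(np.abs(w_hat - w_true))))       # near zero on this toy case
```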
The empirical similarity matrix S and its reconstruction using the clusters are shown in Figure 2
(a) and (b) respectively. The two similarity matrices contain similar block structure, which supports
the hypothesis that the clusters of animals are implicitly captured by the semantic network. If the
distance between animals in different clusters is greater than the distance between animals in the
same cluster, as these results suggest, then a simple search process that is sensitive to this distance
may be able to account for the results reported by Hills et al. [7].
Figure 2: Visualizing the similarity between pairs of animals in our semantic network (darker colors
represent stronger similarities). (a) Similarity matrix derived by exponentiating the negative shortest path distance between each pair of animals. (b) Similarity matrix obtained using the additive
clustering model where the features are the Troyer et al. [21] clusters and weights are inferred using Nelder-Mead simplex search [8]. The rows and columns of the two matrices were reordered to
display animals in the clusters with largest weight first.
4 Random walks and semantic fluency
One of the simplest processes that can operate over a semantic network is a random walk, stochastically jumping from one node to another by following edges. Intuitively, this might provide a reasonable model for searching through semantic memory, being a meandering rather than a directed
search. Random walks on semantic networks have previously been proposed as a possible account
of behavior on fluency tasks: Griffiths et al. [6] argued that the responses that people produce when
asked to generate a word that begins with a particular letter were consistent with the stationary distribution of a random walk on the same semantic network used in the analysis presented in the previous
section.
In addition to being simple, random walks on semantic networks have an interesting connection to
methods used for information retrieval. The PageRank algorithm [12], a component of the original
Google search engine, considers web pages as nodes and links as directed edges from one node to
another. The PageRank algorithm is the result of a simple observation about web pages (and more
broadly, any directed graph): important web pages are linked to by other important web pages. The
link structure of n web pages on the Internet can be characterized by an n ? n matrix L, where Lij
is 1 if there is a link from web page j to web page i, and 0 otherwise. If an internet user clicks
uniformly at random over the outgoing links, then the probability that the user will click on page i
given she is currently on page j is
$$M_{ij} = \frac{L_{ij}}{\sum_{k=1}^{n} L_{kj}} \qquad (2)$$
where the denominator is the out-degree or number of web pages that page j links to. Thus, M is the
transition matrix of a Markov chain and under mild conditions, the probability that a "random surfer"
will be on any page regardless of where she starts is given by the vector p that solves p = Mp.
This is the eigenvector of M corresponding to its largest eigenvalue (which is 1 as M is a stochastic
matrix).
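A minimal sketch of this computation is given below; the restart ("damping") term is the standard PageRank modification for guaranteeing ergodicity, mentioned again in Section 4.1, and the tiny link matrix is our own example.

```python
# Stationary distribution of the random-surfer chain by power iteration.
import numpy as np

def stationary_distribution(L, damping=1.0, tol=1e-10, max_iter=10000):
    n = L.shape[0]
    out_degree = L.sum(axis=0)           # out-degree of page j (column sums)
    out_degree[out_degree == 0] = 1.0    # guard against dangling pages
    M = L / out_degree                   # M[i, j] = L[i, j] / sum_k L[k, j]
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_next = damping * (M @ p) + (1.0 - damping) / n
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Page 2 receives page 1's only outgoing link and ends up ranked highest.
L = np.array([[0., 0., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
print(stationary_distribution(L))                # pure chain: solves p = Mp
print(stationary_distribution(L, damping=0.85))  # with PageRank-style restarts
```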
Viewed in this light, the finding reported by Griffiths et al. [6] is that the prominence of words in
human memory can be predicted by running the PageRank algorithm on a semantic network. However, as Griffiths et al. pointed out, multiple mechanisms exist that could produce this result, with
only one possibility being that memory search is a random walk on a semantic network. Exploring
whether this kind of random walk can reproduce the phenomena identified by Hills et al. [7] in a
completely different memory task would provide further support for this possibility.
In the remainder of this section, we explore some variations on a simple random walk that result
in four different models. We then evaluate our models of memory search by applying the analyses
used by Hills et al. [7] to their behavior.
4.1 Random walk models
In the experiment reported by Hills et al. [7] participants were asked to produce as many unique
animals as possible in three minutes. A simple generative model for this sequential process is a
Markov chain that starts at state X_0 = "animal", and then at step n randomly generates the next state
X_{n+1} according to a probability distribution that only depends on the current state X_n (and possibly
the cue C = "animal"). We define a space of four possible models by varying two dimensions for
how we define the transition probabilities.
The first dimension is the transition model, which can either be uniform, where the next state is
chosen uniformly at random from the outgoing links of the current node (i.e., using the transition
matrix M defined above), or weighted, where the probability of the next state is weighted according
to the frequency of transitions in the word-association data [11]. This captures the fact that stronger
associations (e.g., "cat" and "mouse") are produced more frequently than weaker associations (e.g.,
"cat" and "house"), even though "cat" was produced given either word.
The second dimension is the effect of the cue at each step, which was either non-jumping (it has
no effect except for initializing the chain at "animal") or jumping¹, where the cue causes us to
jump back to "animal" and transition from there, P(X_{n+1} | X_n = "animal"), with probability ρ (but
otherwise transition normally with probability 1 − ρ). A jumping process is actually also a part of the
PageRank algorithm, which incorporates modifications to the graph that are equivalent to randomly
restarting the random surfer in order to deal with violations of ergodicity [12].

¹ We note this is a qualitatively different operation than the Hills et al. [7] proposal of "jumping" between
different search cues. Instead, this dimension explores the effect of priming the search process by returning to
the initial state.
Formally, the space of models is defined by

$$P(X_{n+1} \mid C = \text{"animal"}, X_n = x_n) = \rho\, P(X_{n+1} \mid X_n = \text{"animal"}) + (1 - \rho)\, P(X_{n+1} \mid X_n = x_n) \qquad (3)$$

where P(X_{n+1} | X_n) is either uniform or weighted, and ρ = 0 is non-jumping or 0 < ρ ≤ 1 is
jumping.
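A sketch of the resulting process is shown below; `graph` maps each word to its list of associates and `weights`, when supplied, holds the matching association frequencies. This is our reconstruction of Eq. (3), not the authors' simulation code, and the toy graph is invented.

```python
# One step of the four models in Eq. (3): rho = 0 gives the non-jumping
# models; weights=None gives the uniform transition model.
import random

def step(graph, state, cue="animal", rho=0.0, weights=None):
    if rho > 0 and random.random() < rho:
        state = cue                           # jump back to the cue node
    neighbors = graph[state]
    if weights is None:
        return random.choice(neighbors)       # uniform over outgoing links
    # weights[state] must align with graph[state], one value per neighbor.
    return random.choices(neighbors, weights=weights[state])[0]

def walk(graph, n_steps, **kwargs):
    state, path = "animal", ["animal"]
    for _ in range(n_steps):
        state = step(graph, state, **kwargs)
        path.append(state)
    return path

toy_graph = {"animal": ["dog", "cat"], "dog": ["cat", "animal"],
             "cat": ["dog", "house"], "house": ["cat"]}
print(walk(toy_graph, 10, rho=0.05))          # uniform jumping model
```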
4.2 Computing inter-item retrieval times
Random walk simulations for the models defined above will produce a list of the nodes visited at
each iteration. A method of mapping this output to reaction times is necessary in order to make
an appropriate comparison with human results. In our analyses we consider only the first time an
animal node is visited, which we denote as τ(k) for the k-th unique animal seen on the random walk
(out of the K unique animals seen on the random walk). For example, a simulation may produce the
following output:
X_1 = "animal", X_2 = "dog", X_3 = "house", X_4 = "dog", X_5 = "cat".
Here, K = 2 with k = 1 and k = 2 referring to "dog" and "cat" respectively. Our τ(k) function
would return τ(1) = 2 and τ(2) = 5 for this example since we only care about the first time "dog"
is visited (at timestep n = 2) and "house" (at timestep n = 3) is not an animal.
An additional assumption that we made is that the amount of time the Markov chain spends to
"emit" an animal is the length of the word. As participants in Hills et al. [7] typed their responses,
this accounts for it taking longer for participants to type longer than shorter words. Thus, according
to the random walk models, the inter-item retrieval time (IRT) between animal k and k − 1 is

$$\mathrm{IRT}(k) = \tau(k) - \tau(k - 1) + L(X_{\tau(k)}) \qquad (4)$$
where τ(k) is the first hitting time of animal X_{τ(k)} and L(X) is the length of word X. In our
example above, the IRT between "cat" (k = 2) and "dog" (k = 1) is:

IRT("cat") = τ("cat") − τ("dog") + L("cat") = 5 − 2 + 3 = 6.
With this mapping defined, we can now perform the same set of analyses in Hills et al. [7] on IRTs
between animal words for our random walker simulations.
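The sketch below implements Eq. (4) over a simulated walk and reproduces the worked example above; the set of animal words is assumed given.

```python
# First-visit times tau(k) of unique animals along a walk, and
# IRT(k) = tau(k) - tau(k-1) + len(word), per Eq. (4).
def inter_item_retrieval_times(path, animals):
    irts, seen, last_tau = [], set(), None
    for n, word in enumerate(path, start=1):   # timesteps are 1-indexed
        if word in animals and word not in seen:
            seen.add(word)
            if last_tau is not None:
                irts.append(n - last_tau + len(word))
            last_tau = n
    return irts

# Worked example from the text: tau("dog") = 2, tau("cat") = 5, so
# IRT("cat") = 5 - 2 + 3 = 6.
path = ["animal", "dog", "house", "dog", "cat"]
print(inter_item_retrieval_times(path, {"dog", "cat"}))   # -> [6]
```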
4.3 Evaluating the models
We ran 1000 simulations of each of the four models for a duration of 1750 iterations. The number of
iterations was selected to produce a similar mean number of animals to those produced by participants in Hills et al. [7]. Human participants produced an average of 36.8 animals, while the uniform
non-jumping, uniform jumping, weighted non-jumping, and weighted jumping models produced an
average of 30.6, 39.3, 21.0, and 29.1 animals respectively.² The jumping models had a probability of
ρ = 0.05 of making a jump back to "animals", selected primarily to illustrate the impact of adding
this additional component to the search process.
All four models were subjected to the same analyses as Hills et al. [7] applied to the human data
(Figure 1). The model results are presented in Figure 3. The left column shows the mean ratio
between the inter-item retrieval time (IRT) for an item and the mean IRT over all 1750 iterations in
the simulations, relative to the order of entry for the item. Here we see that the first word starting
a patch (the bar labeled "1") has the highest overall retrieval time. This was interpreted by Hills et
al. as indicating the time it takes to switch clusters and generate a word from a new cluster. The
² A slightly lower overall total number of animals is to be expected, given the limited number of animals
among the words included in our semantic network.
[Figure 3 graphic omitted from the extracted text. For each of the four models, rows (a)-(d), three panels plot: item IRT / average IRT versus order of entry relative to patch switch; average IRT versus mean IRT for the entry prior to switch; and number of words produced versus Abs(last item IRT − average IRT).]
Figure 3: The model results after 1000 simulations of the four random walk models: (a) the uniform
transition model with no jumps, (b) the non-uniform transition model with no jumps, (c) the uniform
transition model with a jump probability of 0.05, and (d) the non-uniform transition model with a
jump probability of 0.05. The left-most column displays the mean ratio between the inter-item
retrieval time (IRT) for an item and the long-term average IRT over the entire task, for each model
simulation, relative to the order of entry for the item (where "1" refers to the relative IRT between
the first word in a patch and the last word in the preceding patch). The dotted line indicates where
item IRTs would be the same as a simulation's average IRT for the entire task. The middle column
displays the long-term average IRT versus a simulation's mean IRT prior to a switch. The right-most
column displays the relationship between a simulation's deviation from the marginal value theorem
policy for patch departures (horizontal axis) and the total number of words the simulation produced.
emergence of the same phenomenon is seen across all four of our models, which suggests that the
structure of semantic memory, together with a simple undirected search process, is sufficient to
capture this effect. The introduction of jumps primarily reduces the difference between the IRTs
before and after a cluster switch. Additionally, we reproduced the same statistical tests as Hills et
al. [7] on the models, demonstrating that, like people, all four models take a significantly longer
amount of time for the word immediately following a patch (all t(999) > 44, p < 0.0001) and take
a significantly shorter amount of time for the second item after a patch (all t(999) < −49, p <
0.0001).
The second and third columns of Figure 3 show how the simulated results produced by the four
models relate to the predictions of the marginal value theorem. Intriguingly, all four models produce
the basic phenomena taken as evidence for the use of the marginal value theorem in memory search.
There is a strong correlation between the IRT at the point of a cluster switch and the mean IRT
(R² = 0.67, 0.67, 0.52 and 0.57 for the four models in the order of Figure 3, all F(1, 998) >
1000, p < 0.0001), and a negative relationship between acting in the way stipulated by the marginal
value theorem and the number of responses produced (R² = 0.02, 0.01, 0.10, and R² = 0.01 for the
four models in the same order as before, and all F(1, 998) > 10, p < 0.001).
that behavior consistent with following the marginal value theorem can be produced by surprisingly
simple search algorithms, at least when measured along these metrics.
5 Discussion
Understanding how people organize and search through information stored in memory has the potential to inform how we construct automated information retrieval systems. In this paper, we considered two different accounts for the appearance of semantically-related clusters when people retrieve
a sequence of items from memory. These accounts differ in the number of processes they postulate and in the rationality they attribute to those processes. The idea that human memory search
might follow the principles of optimal foraging [7] builds on previous work suggesting that there are
two separate processes involved in semantic fluency tasks (generating from a cluster and switching
between clusters [21]) and views the shift between processes as being governed by the rational principles embodied in the marginal value theorem. In contrast, the proposal that memory search might
just be a random walk on a semantic network [6] postulates a single, undirected process. Our results
show that four random walk models qualitatively reproduce a set of results predicted by optimal
foraging theory, providing an alternative explanation for clustering in semantic fluency tasks.
Finding that a random walk on a semantic network can account for some of the relatively complex
phenomena that appear in the semantic fluency task provides further support for the idea that memory search might simply be a random walk. This result helps to clarify the possible mechanisms
that could account for PageRank predicting the prominence of words in semantic memory [6], since
PageRank is simply the stationary distribution of the Markov chain defined by this random walk.
This simple mechanism seems particularly attractive given its existing connections to ideas that
appear in the information retrieval literature.
Demonstrating that the random walk models can produce behavior consistent with optimal foraging
in semantic fluency tasks generates some interesting directions for future research. Having two
competing accounts of the same phenomena suggests that the next step in exploring semantic
fluency is designing an experiment that distinguishes between these accounts. Considering whether
the optimal foraging account can also predict the prominence of words in semantic memory,
where the random walk model is already known to succeed, is one possibility, as is exploring
the predictions of the two accounts across a wider range of memory search tasks. However, one
of the most intriguing directions for future research is considering how these different proposals
fare in accounting for changes in semantic fluency in clinical populations. Given that conditions
such as Alzheimer's and Parkinson's disease differentially affect clustering and switching [9, 22],
considering the different failure conditions of these models might help to answer practical as well
as theoretical questions about human memory.
Acknowledgments. This work was supported by grants IIS-0845410 from the National Science Foundation
and FA-9550-10-1-0232 from the Air Force Office of Scientific Research.
References
[1] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[2] W. A. Bousfield and C. H. W. Sedgewick. An analysis of sequences of restricted associative
responses. Journal of General Psychology, 30:149–165, 1944.
[3] E.L. Charnov et al. Optimal foraging, the marginal value theorem. Theoretical Population
Biology, 9(2):129–136, 1976.
[4] A. M. Collins and E. F. Loftus. A spreading-activation theory of semantic processing. Psychological Review, 82(6):407, 1975.
[5] T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. Topics in semantic representation. Psychological Review, 114:211–244, 2007.
[6] T.L. Griffiths, M. Steyvers, and A. Firl. Google and the mind. Psychological Science,
18(12):1069–1076, 2007.
[7] T.T. Hills, M.N. Jones, and P.M. Todd. Optimal foraging in semantic memory. Psychological
Review, 119(2):431–440, 2012.
[8] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright. Convergence properties of the
Nelder-Mead simplex method in low dimensions. SIAM Journal on Optimization, 9:112–147,
1998.
[9] M.D. Lezak. Neuropsychological assessment. Oxford University Press, USA, 1995.
[10] D. J. Navarro and T. L. Griffiths. Latent features in similarity judgments: A nonparametric
Bayesian approach. Neural Computation, 20:2597–2628, 2008.
[11] D.L. Nelson, C.L. McEvoy, and T.A. Schreiber. The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, 36(3):402–407, 2004.
[12] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing
order to the web. Technical Report 1999-66, Stanford InfoLab, November 1999.
[13] J.G. Raaijmakers and R.M. Shiffrin. Search of associative memory. Psychological Review,
88(2):93, 1981.
[14] A. K. Romney, D. D. Brewer, and W. H. Batchelder. Predicting clustering from semantic
structure. Psychological Science, 4(1):28–34, 1993.
[15] R. N. Shepard and P. Arabie. Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psychological Review, 86(2):87, 1979.
[16] D.W. Stephens and J.R. Krebs. Foraging theory. Princeton University Press, 1986.
[17] M. Steyvers, R.M. Shiffrin, and D.L. Nelson. Word association spaces for predicting semantic
similarity effects in episodic memory. Experimental cognitive psychology and its applications:
Festschrift in honor of Lyle Bourne, Walter Kintsch, and Thomas Landauer, pages 237–249,
2004.
[18] M. Steyvers and J.B. Tenenbaum. The large-scale structure of semantic networks: Statistical
analyses and a model of semantic growth. Cognitive Science, 29(1):41–78, 2005.
[19] L.L. Thurstone. Primary mental abilities. Psychometric Monographs, 1938.
[20] A.I. Tröster, D.P. Salmon, D. McCullough, and N. Butters. A comparison of the category
fluency deficits associated with Alzheimer's and Huntington's disease. Brain and Language,
37(3):500–513, 1989.
[21] A. K. Troyer, M. Moscovitch, and G. Winocur. Clustering and switching as two components of
verbal fluency: Evidence from younger and older healthy adults. Neuropsychology, 11(1):138,
1997.
[22] A. K. Troyer, M. Moscovitch, G. Winocur, L. Leach, and M. Freedman. Clustering and switching on verbal fluency tests in Alzheimer's and Parkinson's disease. Journal of the International
Neuropsychological Society, 4(2):137–143, 1998.
|
Tractable Objectives for Robust Policy Optimization
Katherine Chen
University of Alberta
Michael Bowling
University of Alberta
[email protected]
[email protected]
Abstract
Robust policy optimization acknowledges that risk-aversion plays a vital role in
real-world decision-making. When faced with uncertainty about the effects of actions, the policy that maximizes expected utility over the unknown parameters of
the system may also carry with it a risk of intolerably poor performance. One
might prefer to accept lower utility in expectation in order to avoid, or reduce
the likelihood of, unacceptable levels of utility under harmful parameter realizations. In this paper, we take a Bayesian approach to parameter uncertainty, but
unlike other methods avoid making any distributional assumptions about the form
of this uncertainty. Instead we focus on identifying optimization objectives for
which solutions can be efficiently approximated. We introduce percentile measures: a very general class of objectives for robust policy optimization, which
encompasses most existing approaches, including ones known to be intractable.
We then introduce a broad subclass of this family for which robust policies can
be approximated efficiently. Finally, we frame these objectives in the context of a
two-player, zero-sum, extensive-form game and employ a no-regret algorithm to
approximate an optimal policy, with computation only polynomial in the number
of states and actions of the MDP.
1 Introduction
Reinforcement learning is focused on learning optimal policies from trajectories of data. One common approach is to build a Markov decision process (MDP) with parameters (i.e., rewards and transition probabilities) learned from data, and then find an optimal policy: a sequence of actions that
would maximize expected cumulative reward in that MDP. However, optimal policies are sensitive
to the estimated reward and transition parameters. The optimal performance on the estimated MDP
is unlikely to be actually attained under the true, but unknown, parameter values. Furthermore, optimizing for the estimated parameter realization may risk unacceptable performance under other less
likely parameter realizations. For example, consider a data-driven medical decision support setting:
given one-step trajectory data from a controlled trial, the goal is to identify an effective treatment
policy. The policy that maximizes expected utility under a single estimated model, or even averaged
over a distribution of models, may still result in poor outcomes for a substantial minority of patients.
What is called for is a policy that is more robust to the uncertainties of individual patients.
There are two main approaches for finding robust policies in MDPs with parameter uncertainty. The
first assumes rewards and transitions belong to a known and compact uncertainty set, which also
includes a single nominal parameter setting that is thought most likely to occur [19]. Robustness, in
this context, is a policy's performance under worst-case parameter realizations from the set and is
something one must trade-off against how well a policy performs under the nominal parameters. In
many cases, the robust policies found are overly conservative because they do not take into account
how likely it is for an agent to encounter worst-case parameters. The second approach takes a
Bayesian perspective on parameter uncertainty, where a prior distribution over the parameter values
is assumed to be given, with a goal to optimize the performance for a particular percentile [4].
Unfortunately, the approach assumes specific distributions of parameter uncertainty in order to be
tractable, e.g., rewards from Gaussians and transition probabilities from independent Dirichlets. In
fact, percentile optimization with general parameter uncertainty is NP-hard [3].
In this paper we focus on the Bayesian setting where a distribution over the parameters of the MDP
is given. Rather than restricting the form of the distribution in order to achieve tractable algorithms,
we consider general parameter uncertainty, and instead explore the space of possible objectives. We
introduce a generalization of percentile optimization with objectives defined by a measure over percentiles instead of a single percentile. This family of objectives subsumes tractable objectives such
as optimizing for expected value, worst-case, or Conditional Value-at-Risk; as well as intractable
objectives such as optimizing for a single specific percentile (percentile optimization or Value-at-Risk). We then introduce a particular family of percentile measures, which can be efficiently approximated. We show this by framing the problem as a two-player, zero-sum, extensive-form game,
and then employing a form of counterfactual regret minimization to find near-optimal policies in
time polynomial in the number of states and actions in the MDP. We give a further generalization
of this family by proving a general, but sufficient, condition under which percentile measures admit
efficient optimization. Finally, we empirically demonstrate our algorithm on a synthetic uncertain
MDP setting inspired by finding robust policies for diabetes management.
2 Background
We begin with an overview of Markov decision processes and existing techniques for dealing with
uncertainty in the parameters of the underlying MDP. In section 3, we show that many of the objectives described here are special cases of percentile measures.
2.1 Markov Decision Processes
A finite-horizon Markov decision process is a tuple M = ⟨S, A, R, P, H⟩. S is a finite set of states, A is a finite set of actions, and H is the horizon. The decision agent starts in an initial state s₀, drawn from an initial state distribution P(s₀). System dynamics are defined by P(s, a, s′) = P(s′|s, a), which indicates the probability of transitioning from one state s ∈ S to another state s′ ∈ S after taking action a ∈ A. The immediate reward for being in a state and taking an action is defined by the reward function R : S × A → ℝ. We will assume the rewards are bounded so that |R(s, a)| ≤ Δ/2.
We denote Π^HR as the set of all history-dependent randomized policies, i.e., those that map sequences of state-action pairs and the current state to a probability distribution over actions. We denote Π^MR as the set of all Markov randomized policies, i.e., those that map only the current
state and timestep to a probability distribution over actions. For a fixed MDP M, the objective is to
compute a policy π that maximizes expected cumulative reward,

    V^π_M = E[ Σ_{t=0}^{H} R(s_t, a_t) | M, s₀ ∼ P(s₀), π ]    (1)
For a fixed MDP, the set of Markov random policies (in fact, Markov deterministic policies) contains a maximizing policy. This is called the optimal policy for the fixed MDP: π* = argmax_{π ∈ Π^MR} V^π_M.
However, for MDPs with parameter uncertainty, Markov random policies may not be a sufficient
class. We will return to this issue again when discussing our own work.
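To make the fixed-MDP objective in (1) concrete, here is a minimal sketch (ours, not from the paper) of finite-horizon policy evaluation and backward-induction planning; all function and variable names are our own.

```python
import numpy as np

def evaluate_policy(P, R, H, pi, p0):
    """Expected cumulative reward V^pi_M of a Markov policy on a fixed MDP.

    P: (S, A, S) transitions, R: (S, A) rewards,
    pi: (H+1, S, A) per-timestep action probabilities, p0: (S,) start dist.
    """
    V = np.zeros(P.shape[0])
    for t in range(H, -1, -1):            # backward induction, t = H..0
        Q = R + P @ V                     # Q[s, a] = R[s, a] + E[V(s')]
        V = (pi[t] * Q).sum(axis=1)
    return p0 @ V

def optimal_policy(P, R, H):
    """Deterministic Markov policy maximizing the objective in (1)."""
    S, A, _ = P.shape
    pi, V = np.zeros((H + 1, S, A)), np.zeros(S)
    for t in range(H, -1, -1):
        Q = R + P @ V
        pi[t, np.arange(S), Q.argmax(axis=1)] = 1.0
        V = Q.max(axis=1)
    return pi
```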
2.2 MDPs with Parameter Uncertainty
In this paper, we are interested in the situation where the MDP parameters, R and P , are not known.
In general, we call this an uncertain MDP. The form of this uncertainty and associated optimization
objectives has been the topic of a number of papers.
Uncertainty Set Approach. One formulation for parameter uncertainty assumes that the parameters
are taken from uncertainty sets R ∈ 𝓡 and P ∈ 𝓟 [12]. In the robust MDP approach the desired
policy maximizes performance in the worst-case parameters of the uncertainty sets:
    π* = argmax_π min_{R ∈ 𝓡, P ∈ 𝓟} V^π_M    (2)
The robust MDP objective has been criticized for being overly-conservative as it focuses entirely on
the worst-case [19]. A further refinement is to assume that a nominal fixed MDP model is also given,
which is thought to be a good guess for the true model. A mixed optimization objective is then proposed that trades-off between the nominal performance and robust (worst-case) performance [19].
However, neither the robust MDP objective nor the mixed objective considers a policy's performance
in parameter realizations other than the nominal- and worst-cases, and neither considers the relative
likelihood of encountering these parameter realizations.
Xu and Mannor [20] propose a further alternative by placing parameter realizations into nested
uncertainty sets, each associated with a probability of drawing a parameter realization from the set.
They then propose a distributional robustness approach, which maximizes the expected performance
over the worst-case distribution of parameters that satisfies the probability bounds on uncertainty
sets. This approach is a step between the specification of uncertainty sets and a Bayesian approach
with a fully specified MDP parameter distribution.
Bayesian Uncertainty Approach. The alternative formulation to uncertainty sets is to assume that the true parameters of the MDP, R* and P*, are distributed according to a known distribution P(R, P). A worst-case analysis in such a formulation is non-sensical, except in the case of distributions with bounded support (i.e., Uniform distributions), in which case it offers nothing over uncertainty sets. A natural alternative is to look at percentile optimization [4]. For a fixed η, the objective is to seek a policy that will maximize the performance on η percent of parameter realizations.
Formally, this results in the following optimization:
    π* = argmax_π max_{y ∈ ℝ} y   subject to   P_M[V^π_M ≥ y] ≥ η    (3)
The optimal policy π* guarantees the optimal value y* is achieved with probability η given the distribution over parameters P(R, P). Delage and Mannor showed that for general reward and/or
transition uncertainty, percentile optimization is NP-hard (even for a small fixed horizon) [3]. They
did show that for Gaussian reward uncertainty, the optimization can be efficiently solved as a second
order cone program. They also showed that for transitions with independent Dirichlet distributions
that are sufficiently-peaked (e.g., given enough observations), optimizing an approximation of the
expected performance over the parameters approximately optimizes for percentile performance [4].
Objectives from Financial Economics. Value-at-Risk (VaR) and Conditional Value-at-Risk
(CVaR) are optimization objectives used to assess the risk of financial portfolios. Value-at-Risk
is equivalent to percentile optimization and is intractable for general forms of parameter uncertainty.
Additionally, it is not a coherent risk measure in that it does not follow subadditivity, a key coherence property that states that the risk of a combined portfolio must be no larger than the sum of the
risks of its components. In contrast, Conditional Value-at-Risk at the η% level is defined as the "average of the η · 100% worst losses" [1]. It is both a coherent and a tractable objective [13]. In section 3
we show that CVaR is also encompassed by percentile measures.
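To make the two financial risk objectives concrete, here is a small sketch (ours) computing VaR and CVaR at level η from Monte Carlo samples of a policy's value; the sample distribution and all names are illustrative only.

```python
import numpy as np

def var_cvar(values, eta):
    """Value-at-Risk and Conditional Value-at-Risk at level eta.

    values: samples of a policy's expected return, one per sampled MDP.
    VaR is the eta-quantile; CVaR is the mean of the eta*100% worst values.
    """
    v = np.sort(np.asarray(values))
    k = max(1, int(np.ceil(eta * len(v))))
    return v[k - 1], v[:k].mean()

rng = np.random.default_rng(0)
samples = rng.normal(10.0, 3.0, size=100_000)
print(var_cvar(samples, 0.10))  # CVaR <= VaR: it averages the worst tail
```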
Restrictions on Parameter Uncertainty. One commonality among previous approaches is that
they all make heavy restrictions on the form of parameter uncertainty in order to obtain efficient algorithms. A common requirement, for example, is that the uncertainty between states is uncoupled
or independent; or that reward and transition uncertainty themselves are uncoupled or independent.
A very recent paper relaxes this coupling in the context of uncertainty sets, however the relaxation
still takes a very specific form allowing for a finite number of deviations [9]. Another common
assumption is that the uncertainty is non-stationary, i.e., a state's parameter realization can vary independently with each visit. The Delage and Mannor work on percentile optimization [4] makes
the more natural assumption that the uncertain parameters are stationary, but in turn requires very
specific choices for the uncertainty distributions themselves. In this work, we avoid making assumptions on the form of parameter uncertainty beyond the ability to sample from the distribution.
Instead, we focus on identifying the possible optimality criteria which admit efficient algorithms.
3 Percentile Measures
We take the Bayesian approach to uncertainty where the true MDP parameters are assumed to be
distributed according to a known distribution, i.e., the true MDP M? is distributed according to an
3
0.1
0.5
0
0
0.2
0.4
0.6
Percentile
0.8
1
k of N = 1 of 1
k of N = 1 of 2
k of N = 1 of 5
k of N = 1 of 10
0.05
0
0
(a) Percentile optimization
0.5
Percentile
k of N = 100 of 1000
k of N = 250 of 1000
k of N = 400 of 1000
k of N = 1000 of 1000
0.1
Density
p = 0.1
p=0.25
p=0.40
Density
Density
1
1
0.05
0
0
(b) 1 of N
0.5
Percentile
1
(c) k of N
Figure 1: Examples of percentile measures.
arbitrary distribution P(M). We begin by delineating a family of objectives for robust policy optimization, which generalizes the concept of percentile optimization. While percentile optimization
is already known to be NP-hard, in section 4, we will restrict our focus to a subclass of our family
that does admit efficient algorithms. Rather than seeking to maximize one specific percentile of
MDPs, our family of objectives maximizes an integral of a policy's performance over all percentiles ρ ∈ [0, 1] of MDPs M, as weighted by a percentile measure μ. Formally, given a measure μ over the interval [0, 1], a μ-robust policy is the solution to the following optimization:
    π* = argmax_{π ∈ Π} sup_{y ∈ F} ∫ y(ρ) dμ(ρ)    (4)
    subject to P_M[V^π_M ≥ y(ρ)] ≥ ρ for all ρ ∈ [0, 1]
where F is the class of real-valued, bounded, μ-integrable functions on the interval [0, 1].
There are many possible ways to choose the measure μ, each of which corresponds to a different robustness interpretation and degree. In fact, our distribution measures framework encompasses optimization objectives for the expected, robust, and percentile MDP problems as well as for VaR and CVaR. In particular, if μ is the Lebesgue measure (i.e., a uniform density over the unit interval), all percentiles are equally weighted and the μ-robust policy will optimize the expected cumulative reward over the distribution P(M). In other words, it maximizes E_M[V^π_M]. This objective was explored by Mannor et al. [10], where they concluded that the common approach of computing an optimal policy for the expected MDP, i.e., maximizing V^π_{E[M]}, results in a biased optimization of the desired value expectation under general transition uncertainty. Alternatively, when μ = δ_{0.1}, where δ_ρ is the Dirac delta at ρ, the optimization problem becomes identical to the VaR and percentile optimization problems where ρ = 0.1, the 10th percentile. The measures for the 10th, 25th, and 40th percentiles are shown in Figure 1a. When μ = δ_0, the optimization problem becomes the worst-case robust MDP problem, over the support of the distribution P(M). Finally, if μ is a decreasing step function at ρ, this corresponds to the CVaR objective at the ρ% level, with equal weighting for the bottom ρ percentiles and zero weighting elsewhere.
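Assuming access only to Monte Carlo samples of V^π_M, the objective in (4) can be estimated by weighting the empirical quantile function with a discretized measure. The sketch below (ours, with illustrative data) recovers the expected value under the Lebesgue measure and CVaR under a step measure.

```python
import numpy as np

def mu_objective(values, weights):
    """Estimate int y(rho) dmu(rho) for a discretized percentile measure.

    values:  Monte Carlo samples of V^pi_M under P(M).
    weights: nonnegative weights over the sorted sample's percentiles
             (length len(values)); the empirical quantile function plays
             the role of y(rho).
    """
    y = np.sort(np.asarray(values))
    w = np.asarray(weights, dtype=float)
    return float(y @ (w / w.sum()))

v = np.random.default_rng(1).normal(size=10_000)
n = len(v)
uniform = np.full(n, 1.0 / n)            # Lebesgue measure: expected value
cvar_20 = (np.arange(n) < n // 5) * 1.0  # step measure: CVaR at the 20% level
print(mu_objective(v, uniform), mu_objective(v, cvar_20))
```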
4 k-of-N Measures
There is little reason to restrict ourselves to percentile measures that put uniform weight on all
percentiles, or Dirac deltas on the worst-case or specific percentiles. One can imagine creating
other density functions over percentiles, and not all of these percentile measures will necessarily
be intractable like percentile optimization. In this section we introduce a subclass of percentile
measures, called k-of-N measures, and go on to show that we can efficiently approximate μ-robust policies for this entire subclass.
We start by imagining a sampling scheme for evaluating the robustness of a fixed policy π. Consider sampling N = 1000 MDPs from the distribution P(M). For each MDP we can evaluate the policy π and then rank the MDPs based on how much expected cumulative reward π attains on each. If we choose to evaluate our policy based on the very worst of these MDPs, that is, the k = 1 of the N = 1000 MDPs, then we get a loose estimate of the percentile value of π in the neighborhood of the 1/1000th percentile for the distribution P(M). If we sample just N = 1 MDP, then we get an estimate of π's expected return over the distribution. Each choice of N results in a different density, and corresponding measure, over the percentiles on the interval [0, 1]. Figure 1b depicts the shape of the density when we hold k = 1 while increasing the number of MDPs we sample, N. We see that as N increases, the policy puts more weight on optimizing for lower percentiles of MDPs. Thus we can smoothly transition from finding policies that perform well in expectation (no robustness) to policies that care almost only about worst-case performance (overly conservative robustness). Alternatively, after sampling N MDPs we could instead choose the expected cumulative reward of a random MDP from the k ≥ 1 least-favorable MDPs for π. For every choice of k and N, this gives a different density function and associated measure. Figure 1c shows the density function for N = 1000 while increasing k. The densities themselves act as approximate step-functions whose weight falls off in the neighborhood of the percentile ρ = k/N. Furthermore, as N increases, the shape of the density more closely approximates a step-function, and thus more closely approximates the CVaR objective. For a particular N and k, we call this measure the k-of-N measure, or μ_{k-of-N}.
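The sampling scheme above translates directly into a Monte Carlo estimator of a policy's μ_{k-of-N} value. In this sketch (ours), policy_value and sample_mdp are placeholder callables, not code from the paper.

```python
import numpy as np

def k_of_n_value(policy_value, sample_mdp, k, N, trials=1000, seed=0):
    """Monte Carlo estimate of a fixed policy's value under mu_{k-of-N}.

    policy_value(mdp) -> expected return of the policy on that MDP.
    sample_mdp(rng)   -> one MDP drawn from the uncertainty distribution.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        vals = sorted(policy_value(sample_mdp(rng)) for _ in range(N))
        total += vals[rng.integers(k)]  # uniform pick among the k worst
    return total / trials
```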
Proposition 1. For any 1 ≤ k ≤ N, the density g of the measure μ_{k-of-N} is g(ρ) ∝ 1 − I_ρ(k, N − k), where I_x(α, β) = B(x; α, β)/B(α, β) is the regularized incomplete Beta function.
The proof can be found in the supplemental material.
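As a numeric sanity check of Proposition 1 (ours, under the sampling scheme described above), one can compare a histogram of the percentile picked by the k-of-N scheme against the normalized density (N/k)(1 − I_ρ(k, N − k)); scipy.special.betainc computes the regularized incomplete Beta function.

```python
import numpy as np
from scipy.special import betainc  # regularized incomplete Beta I_x(a, b)

k, N, trials = 3, 10, 200_000
rng = np.random.default_rng(0)
# The N sampled MDPs have i.i.d. Uniform(0,1) percentiles under P(M);
# the scheme picks uniformly among the k worst of them.
u = np.sort(rng.random((trials, N)), axis=1)
picked = u[np.arange(trials), rng.integers(k, size=trials)]

hist, edges = np.histogram(picked, bins=20, range=(0.0, 1.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
g = (N / k) * (1.0 - betainc(k, N - k, mids))  # Proposition 1, normalized
for m, h, d in zip(mids[:5], hist[:5], g[:5]):
    print(f"rho={m:.3f}  empirical={h:.2f}  proposition={d:.2f}")
```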
4.1 k-of-N Game
Our sampling description of the k-of-N measure can be reframed as a two-player zero-sum
extensive-form game with imperfect information, as shown in Figure 2. Each node in the tree represents a game state or history labeled with the player whose turn it is to act, with each branch being
a possible action.
In our game formulation, chance, denoted as player c, first selects N MDPs according to P(M). The adversary, denoted as player 2, has only one decision in the game, which is to select a subset of k MDPs out of the N, from which chance selects one MDP M uniformly at random. At this point, the decision maker, denoted as player 1, has no knowledge of the sampled MDPs, the choice made by the adversary, or the final selected MDP. Hence, player 1 might be in any one of the circled nodes and can not distinguish one from the other. Such histories are partitioned into one set, termed an information set, and the player's policy must be identical for all histories in an information set.

[Figure 2: the k-of-N game tree.]

The decision maker now alternates
turns with chance, observing states sampled by chance according to
the chosen MDP's transition function, but not ever observing the chosen MDP itself, i.e., histories with the same sequence of sampled states and chosen actions belong to the same information set for player 1. After the horizon has been reached, the utility of the leaf node is just the sum of the immediate rewards of the decision maker's actions according to the chosen MDP's reward function.
The decision maker's behavioral strategy in the game maps information sets of the game to a distribution over actions. Since the only information is the observed state-action sequence, the strategy can be viewed as a policy in Π^HR (or possibly Π^MR, as we will discuss below).
Because the k-of-N game is zero-sum, a Nash equilibrium policy in the game is one that maximizes
its expected utility against its best-response adversary. The best-response adversary for any policy
is the one that chooses the k least favorable MDPs for that policy. Thus a policy's value against its best-response is, in fact, its value under the measure μ_{k-of-N}. Hence, a Nash equilibrium policy for the k-of-N game is a μ_{k-of-N}-robust policy. Furthermore, an ε-Nash equilibrium policy is a 2ε-approximation of a μ_{k-of-N}-robust policy.
4.2 Solving k-of-N Games
In the past five years there have been dramatic advances in solving large zero-sum extensive-form
games with imperfect information [21, 5, 8]. These algorithmic advancements have made it possible to solve games five orders of magnitude larger than previously possible. Counterfactual regret
minimization (CFR) is one such approach [21]. CFR is an efficient form of regret minimization
for extensive-form games. Its use in solving extensive-form games is based on the principle that
two no-regret learning algorithms in self-play will have their average strategies converge to a Nash
equilibrium. However, the k-of-N game presents a difficulty due to the imbalance in the size of the
two players' strategies. While player one's strategy is tractable (the size of a policy in the underlying MDP), player two's strategy involves decisions at infinitely many information sets (one for each
sampled set of N MDPs).
A recent variant of CFR, called CFR-BR, specifically addresses the challenge of an adversary having
an intractably large strategy space [6]. It combines two ideas. First, it avoids representing the entirety
of the second player's strategy space, by having the player always play according to a best-response to the first player's strategy. So, the repeated games now involve a CFR algorithm playing against
its own best-response. Note that best-response is also a regret-minimizing strategy, and so such
repeated play still converges to a Nash equilibrium. Second, it avoids having to compute or store
a complete best-response by employing sampling over chance outcomes to focus the best-response
and regret updates on a small subtree of the game on each iteration. The approach removes all
dependence on the size of the adversary's strategy space in either computation time or memory. Furthermore, it can be shown that the player's current strategy is approaching almost-always a Nash
equilibrium strategy, and so there is no need to worry about strategy averaging. CFR-BR has the
following convergence guarantee.
Theorem 1 (Theorems 4 and 6 [6]). For any p ∈ (0, 1], after T* iterations of chance-sampled CFR-BR where T* is chosen uniformly at random from {1, . . . , T}, with probability (1 − p), player 1's strategy on iteration T* is part of an ε-Nash equilibrium with

    ε = (1 + 2/√p) · (2HΔ |I₁| √|A₁|) / (p√T)

where HΔ is the maximum difference in total reward over H steps, and |I₁| is the number of information sets for player 1.
The key property of this theorem is that the bound is decreasing with the number of iterations T and
there is no dependence on the size of the adversary's strategy space. The random stopping time of the algorithm is unusual and is needed for the high-probability guarantee. Johanson and colleagues note, "In practice, our stopping time is dictated by convenience and availability of computational resources, and so is expected to be sufficiently random." [6]; we follow this practice.
The application of chance-sampled CFR-BR to k-of-N games is straightforward. The algorithm
is iterative. On each iteration, N MDPs are sampled from the uncertainty distribution. The best-response for this subtree of the game involves simply evaluating the player's current MDP policy on the N MDPs and choosing the least-favorable k. Chance samples again, by choosing a single MDP from the least-favorable k. The player's regrets are then updated using the transitions and rewards
for the selected MDP, resulting in a new policy for the next iteration. See the supplemental material
for complete details.
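The loop just described can be sketched as follows for the reward-uncertainty case, where Markov policies suffice. This is our simplified rendering (regret matching with reach-weighted advantages on the sampled MDP, and a uniform start distribution as an assumption), not the authors' implementation.

```python
import numpy as np

def cfr_br_k_of_n(sample_mdp, S, A, H, k, N, iters=2000, seed=0):
    """Sketch of CFR-BR for the k-of-N game under reward uncertainty only.

    sample_mdp(rng) -> (P, R) with P: (S, A, S), R: (S, A). The player's
    policy is Markov: one regret table per (timestep, state).
    """
    rng = np.random.default_rng(seed)
    regrets = np.zeros((H + 1, S, A))

    def policy():  # regret matching at every information set (t, s)
        pos = np.maximum(regrets, 0.0)
        tot = pos.sum(axis=2, keepdims=True)
        return np.where(tot > 0, pos / np.where(tot > 0, tot, 1.0), 1.0 / A)

    def value(P, R, pi):  # expected return on a fixed MDP, uniform start
        V = np.zeros(S)
        for t in range(H, -1, -1):
            V = (pi[t] * (R + P @ V)).sum(axis=1)
        return V.mean()

    for _ in range(iters):
        pi = policy()
        mdps = sorted((sample_mdp(rng) for _ in range(N)),
                      key=lambda m: value(m[0], m[1], pi))
        P, R = mdps[rng.integers(k)]        # adversary's k worst; chance picks one
        Vs = [np.zeros(S) for _ in range(H + 2)]
        for t in range(H, -1, -1):          # values under pi, bottom-up
            Vs[t] = (pi[t] * (R + P @ Vs[t + 1])).sum(axis=1)
        reach = np.full(S, 1.0 / S)         # reach probabilities, top-down
        for t in range(H + 1):
            Q = R + P @ Vs[t + 1]
            regrets[t] += reach[:, None] * (Q - Vs[t][:, None])
            reach = ((reach[:, None] * pi[t])[:, :, None] * P).sum(axis=(0, 1))
        # CFR-BR tracks the current policy, so no strategy averaging is needed.
    return policy()
```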
Markovian Policies and Imperfect Recall. There still remains one important detail that we have not discussed: the nature and size of player 1's strategy space. In finite horizon MDPs with no parameter uncertainty, an optimal policy exists in the space of Markovian policies (Π^MR), policies that depend only on the number of timesteps remaining and the current state, but not on the history of past states and actions. Under transition uncertainty, this is no longer true. The sequence of past states and actions provides information about the uncertain transition parameters, which is informative for future transitions. For this case, optimal policies are not in general Markovian policies, as they will depend upon the entire history of states and actions (Π^HR). As a result, the number of information sets (i.e., decision points) in an optimal policy is |I₁| = |S|((|S||A|)^H − 1)/(|S||A| − 1),
and so polynomial in the number of states and actions for any fixed horizon, but exponential in the
horizon itself. While being exponential in the horizon may seem like a problem, there are many
interesting real-world problems with short time horizons. One such class of problems is Adaptive treatment strategies (ATS) for sequential medical treatment decisions [11, 15]. Many ATS
problems have time horizons of H ≤ 3, e.g., CATIE (H = 2) [16, 17] and STAR*D (H = 3) [14].
Under reward uncertainty (where rewards are not observed by the agent while acting), the sequence
of past states and actions is not informative, and so Markovian policies again suffice.¹ In this case,
the number of information sets |I1 | = |S|H, and so polynomial in both states and the horizon.
However, such an information-set structure for the player results in a game with imperfect recall,
¹ Markovian policies are also sufficient under a non-stationary uncertainty model, where the transition parameters are resampled independently on repeated visits to states (see the end of Section 2.2).
where the player forgets information (past states and actions) it previously knew. Perfect recall is a
fundamental requirement for extensive-form game solvers. However, a recent result has presented
sufficient conditions under which the perfect recall assumption can be relaxed and CFR will still
minimize overall regret [7]. These conditions are exactly satisfied in the case of reward uncertainty:
the forgotten information (i) does not influence future rewards, (ii) does not influence future transition probabilities, (iii) is never known by the opponent, (iv) is not remembered later by the player.
Therefore, we can construct the extensive-form game with the player restricted to Markovian policies and still solve it with CFR-BR.
CFR-BR for k-of-N Games. We can now analyze the use of CFR-BR for computing approximate
μ_{k-of-N}-robust policies.
Theorem 2. For any ε > 0 and p ∈ (0, 1], let

    T = (1 + 2/√p)² · 16H²Δ²|I₁|²|A| / (p²ε²).

With probability 1 − p, when applying CFR-BR to the k-of-N game, its current strategy at iteration T*, chosen uniformly at random in the interval [1, T], is an ε-approximation to a μ_{k-of-N}-robust policy. The total time complexity is O((HΔ/ε)² |I₁|³|A|³ N log N / p³), where |I₁| ∈ O(|S|H) for arbitrary reward uncertainty and |I₁| ∈ O(|S|^{H+1}|A|^H) for arbitrary transition and reward uncertainty.
Proof. The proof follows almost directly from Theorem 1 and our connection between k-of-N games and the μ_{k-of-N} measure. The choice of T by Theorem 1 guarantees the policy is an ε/2-Nash approximation, which in turn guarantees the policy is within ε of optimal in the worst-case, and so is an ε-approximation to a μ_{k-of-N}-robust policy. Each iteration requires N policy evaluations each requiring O(|I₁||A|) time; these are then sorted in O(N log N) time; and finally the regret update in O(|I₁||A|) time. Theorem 2 gives us our overall time bound.
5 Non-Increasing Measures
We have defined a family of percentile measures, μ_{k-of-N}, that represent optimization objectives that
differ in how much weight they place on different percentiles and can be solved efficiently. In this
section, we go beyond our family of measures and provide a very broad but still sufficient condition
for which a measure can be solved efficiently. We conjecture that a form of this condition is also
necessary, but leave that for future work.
Theorem 3. Let μ be an absolutely continuous measure with density function g_μ, such that g_μ is non-increasing and piecewise Lipschitz continuous with m pieces and Lipschitz constant L. A μ-robust policy can be approximated with high probability in time polynomial in {|A|, |S|, Δ, L, m, 1/ε, 1/p} for
(i) arbitrary reward uncertainty with time also polynomial in the horizon or (ii) arbitrary transition
and reward uncertainty with a fixed horizon.
The proof is in the supplemental material. Note that previously known measures with efficient
solutions (i.e., worst-case, expectation-maximization, and CVaR) satisfy the property that the weight
placed on a particular percentile is never smaller than a larger percentile. Our k-of-N measures also
have this property. Percentile measures (δ_ρ with ρ > 0), though, do not: they place infinitely more weight on the ρ percentile than on any of the percentiles less than ρ. At the very least, we have captured the condition that separates the currently known-to-be-easy measures from the currently known-to-be-hard ones.
6 Experiments
We now explore our k-of-N approach in a simplified version of a diabetes management task. Our
results aim to demonstrate two things: first, that CFR-BR can find k-of-N policies for MDP problems with general uncertainty in rewards and transitions; and second, that optimizing for different
percentile measures creates policies that differ accordingly.
[Figure 3 panels, left to right: the percentile measure densities for the 1-of-1, 1-of-5, and 10-of-50 measures; an inverse CDF (quantile) plot of V^π_M by percentile for these policies and the mean MDP policy; and the per-percentile difference in value relative to the 1-of-1 policy.]
Figure 3: Evaluation of k-of-N percentile measures on the diabetes management task.
Our simplified diabetes management MDP simulates the daily life of a diabetic patient distilled into
a small MDP with |S| = 9 states, |A| = 3 actions and a time horizon of H = 3. States are a combination of blood glucose level and meal size. Three times daily, corresponding to meal times, the
patient injects themselves with a dose of insulin to bring down the rise in blood glucose that comes
with consuming carbohydrates at each meal. A good treatment policy keeps blood glucose in the
moderate range all day. The uncertain reward function is sampled from an independent multivariate Normal distribution and transition probabilities are sampled from Dirichlet distributions, but both could have been drawn from other distributions. The Dirichlet parameter vector is the product of a fixed set of per-state parameters with an MDP-wide multiplicative factor q ~ Unif[1, 5] to simulate variation in patient sensitivity to insulin, and results in transition uncertainty between states that is not independent. For full details on the problem setup, see the supplemental material.
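A sketch (ours) of the generative process just described; the per-state Dirichlet parameters and Normal reward parameters below are placeholders, not the values used in the paper.

```python
import numpy as np

def sample_uncertain_mdp(rng, base_alpha, mean_R, cov_R):
    """One MDP draw mirroring the experiment's uncertainty model (sketch).

    base_alpha: (S, A, S) fixed per-state Dirichlet parameters (ours);
    mean_R/cov_R: Normal reward parameters (ours). A shared multiplier
    q ~ Unif[1, 5] couples the transition uncertainty across states.
    """
    S, A, _ = base_alpha.shape
    q = rng.uniform(1.0, 5.0)
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(q * base_alpha[s, a])
    R = rng.multivariate_normal(mean_R, cov_R).reshape(S, A)
    return P, R

rng = np.random.default_rng(0)
S, A = 9, 3
alpha = np.full((S, A, S), 0.5)
P, R = sample_uncertain_mdp(rng, alpha, np.zeros(S * A), np.eye(S * A))
```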
We used CFR-BR to find optimal policies for the 1-of-1, 1-of-5, and 10-of-50 percentile measures.
The densities for these measures are shown in Figure 3(left). We also computed the policy that optimizes V^π_{E(M)}, that is, the optimal policy for the mean MDP. We evaluated the performance of all of these policies empirically on over 10,000 sampled MDPs and show the empirical quantile function (inverse CDF) in Figure 3(center). To highlight the differences between these policies,
we show the performance of the policies relative to the 1-of-1-robust policy over the full range of
percentiles in Figure 3(right). From the difference plot, we see that the optimal policy for the mean
MDP, although optimal for the mean MDP's specific parameters, does not perform well over the
uncertainty distribution (as noted in [10]). All of the k-of-N policies are more robust, performing
better on the lower percentiles, while 1-of-1 is almost a uniform improvement. We also see that
1-of-5 and 10-of-50 policies perform quite differently despite having the same k/N ratio. Because
the 10-of-50 policy has a sharper drop-off in density at the 20th percentile compared to the 1-of-5
policy, we see that 10-of-50 policies give up more performance in higher percentile MDPs for a bit
more performance in the lowest 20 percentile MDPs compared to the 1-of-5 policy.
7 Conclusion
This is the first work we are aware of to do robust policy optimization with general parameter uncertainty. We describe a broad family of robustness objectives that can be efficiently optimized,
and present an algorithm based on techniques for Nash approximation in imperfect information
extensive-form games. We believe this approach will be useful for adaptive treatment strategy optimization, where small sample sizes cause real parameter uncertainty and the short time horizons
make even transition uncertainty tractable. The next step in this direction is to extend these robustness techniques to large, or continuous state-action spaces. Abstraction has proven useful for finding
good policies in other large extensive-form-games [2, 18], and so will likely prove effective here.
8 Acknowledgements
We would like to thank Kevin Waugh, Anna Koop, the Computer Poker Research Group at the
University of Alberta, and the anonymous reviewers for their helpful discussions. This research was
supported by NSERC, Alberta Innovates Technology Futures, and the use of computing resources
provided by WestGrid and Compute/Calcul Canada.
References
[1] Carlo Acerbi. Spectral Measures of Risk: a Coherent Representation of Subjective Risk Aversion. Journal
of Banking and Finance, 2002.
[2] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. Proceedings of the Eighteenth International
Joint Conference on Artificial Intelligence (IJCAI), 2003.
[3] Erick Delage. Distributionally Robust Optimization in context of Data-driven Problems. PhD thesis,
Stanford University, 2009.
[4] Erick Delage and Shie Mannor. Percentile Optimization in Uncertain Markov decision processes with Application to Efficient Exploration. Proceedings of the 24th International Conference on Machine Learning
(ICML), 2007.
[5] Samid Hoda, Andrew Gilpin, Javier Peña, and Tuomas Sandholm. Smoothing techniques for computing
Nash equilibria of sequential games. Mathematics of Operations Research, 35(2):494–512, 2010.
[6] Michael Johanson, Nolan Bard, Neil Burch, and Michael Bowling. Finding optimal abstract strategies in
extensive-form games. Proceedings of the 26th Conference on Artificial Intelligence (AAAI), 2012.
[7] Marc Lanctot, Richard Gibson, Neil Burch, Martin Zinkevich, and Michael Bowling. No-regret learning
in extensive-form games with imperfect recall. Proceedings of the 29th International Conference on
Machine Learning (ICML), 2012.
[8] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo Sampling for Regret
Minimization in Extensive Games. Advances in Neural Information Processing Systems (NIPS), 2009.
[9] Shie Mannor, Ofir Mebel, and Huan Xu. Lighting Does Not Strike Twice: Robust MDPs with Coupled
Uncertainty. Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[10] Shie Mannor, Duncan Simester, Peng Sun, and John N. Tsitsiklis. Bias and variance in value function
estimation. Management Science, 53(2):308–322, February 2007.
[11] Susan A. Murphy and James R. McKay. Adaptive treatment strategies: an emerging approach for improving treatment effectiveness. (Newsletter of the American Psychological Association Division 12, section
III: The Society for the Science of Clinical Psychology), 2003.
[12] Arnab Nilim and Laurent El Ghaoui. Robust Control of Markov decision processes with Uncertain Transition matrices. Operations Research, 53(5):780–798, October 2006.
[13] R. Tyrrell Rockafellar and Stanislav Uryasev. Conditional value-at-risk for general loss distributions.
Journal of Banking and Finance, 26:1443–1471, 2002.
[14] A. J. Rush, M. Fava, S. R. Wisniewski, P.W. Lavori, M. H. Trivedi, H. A. Sackeim, M. E. Thase, A. A.
Nierenberg, F. M. Quitkin, T.M. Kashner, D.J. Kupfer, J. F. Rosenbaum, J. Alpert, J. W. Stewart, P. J. McGrath, M. M. Biggs, K. Shores-Wilson, B. D. Lebowitz, L. Ritz, and G. Niederehe. Sequenced treatment
alternatives to relieve depression (STAR*D): rationale and design. Controlled Clinical Trials, 25(1):119–142, 2004.
[15] Susan A. Shortreed, Eric Laber, Daniel J. Lizotte, T. Scott Stroup, Joelle Pineau, and Susan A. Murphy. Informing sequential clinical decision-making through Reinforcement learning: an empirical study.
Machine Learning, 84(1-2):109–136, July 2011.
[16] T. Scott Stroup, J.P. McEvoy, M.S. Swartz, M.J. Byerly, I.D. Glick, J.M. Canive, M. McGee, G.M. Simpson, M.D. Stevens, and J.A. Lieberman. The National Institute of Mental Health clinical antipsychotic
trials of intervention effectiveness (CATIE) project: schizophrenia trial design and protocol development.
Schizophrenia Bulletin, 29(1):15–31, 2003.
[17] M.S. Swartz, D.O. Perkins, T.S. Stroup, J.P. McEvoy, J.M. Nieri, and D.D. Haal. Assessing clinical and functional outcomes in the clinical antipsychotic trials of intervention effectiveness (CATIE) schizophrenia trial. Schizophrenia Bulletin, 29(1):33–43, 2003.
[18] Kevin Waugh, Martin Zinkevich, Michael Johanson, Morgan Kan, David Schnizlein, and Michael Bowling. A practical use of imperfect recall. Proceedings of the Eighth Symposium on Abstraction, Reformulation and Approximation (SARA), 2009.
[19] Huan Xu and Shie Mannor. On robustness/performance tradeoffs in linear programming and markov
decision processes. Operations Research, 2005.
[20] Huan Xu and Shie Mannor. Distributionally Robust Markov decision processes. Advances in Neural
Information Processing Systems (NIPS), 2010.
[21] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret Minimization in
Games with Incomplete Information. Advances in Neural Information Processing Systems (NIPS), 2008.
Nonconvex Penalization Using Laplace Exponents
and Concave Conjugates
Zhihua Zhang and Bojun Tu
College of Computer Science & Technology
Zhejiang University
Hangzhou, China 310027
{zhzhang, tubojun}@zju.edu.cn
Abstract
In this paper we study sparsity-inducing nonconvex penalty functions using Lévy processes. We define such a penalty as the Laplace exponent of a subordinator. Accordingly, we propose a novel approach for the construction of sparsity-inducing nonconvex penalties. Particularly, we show that the nonconvex logarithmic (LOG) and exponential (EXP) penalty functions are the Laplace exponents of Gamma and compound Poisson subordinators, respectively. Additionally, we explore the concave conjugate of nonconvex penalties. We find that the LOG and EXP penalties are the concave conjugates of negative Kullback-Leibler (KL) distance functions. Furthermore, the relationship between these two penalties is due to the asymmetry of the KL distance.
1 Introduction
Variable selection plays a fundamental role in statistical modeling for high-dimensional data sets,
especially when the underlying model has a sparse representation. The approach based on penalty
theory has been widely used for variable selection in the literature. A principled approach is due to the lasso of [17], which uses the ℓ₁-norm penalty. Recently, some nonconvex alternatives,
such as the bridge penalty, the nonconvex exponential penalty (EXP) [3, 8], the logarithmic penalty
(LOG) [19, 13], the smoothly clipped absolute deviation (SCAD) penalty [6] and the minimax concave plus (MCP) penalty [20], have been demonstrated to have attractive properties theoretically and
practically.
There has also been work on nonconvex penalties within a Bayesian framework. Zou and Li [23]
derived their local linear approximation (LLA) algorithm by combining the EM algorithm with an
inverse Laplace transformation. In particular, they showed that the bridge penalty can be obtained
by mixing the Laplace distribution with a stable distribution. However, Zou and Li [23] proved that
both MCP and SCAD can not be cast into this framework. Other authors have shown that the prior
induced from the LOG penalty has an interpretation as a scale mixture of Laplace distributions with
an inverse gamma density [5, 9, 12, 2]. Recently, Zhang et al. [22] extended this class of Laplace
variance mixtures by using a generalized inverse Gaussian density. Additionally, Griffin and Brown
[11] devised a family of normal-exponential-gamma priors.
Our work is motivated by recent developments of Bayesian nonparametric methods in feature selection [10, 18, 4, 15]. Especially, Polson and Scott [15] proposed a nonparametric approach for
normal variance mixtures using Lévy processes, which embeds finite dimensional normal variance
mixtures in infinite ones. We develop a Bayesian nonparametric approach for the construction
of sparsity-inducing nonconvex penalties. Particularly, we show that Laplace transformations of
Lévy processes can be viewed as pseudo-priors and the corresponding Laplace exponents then form
sparsity-inducing nonconvex penalties. Moreover, we exemplify that the LOG and EXP penalties
can be respectively regarded as Laplace exponents of Gamma and compound Poisson subordinators.
In addition, we show that both LOG and EXP can be constructed via the Kullback-Leibler distance.
This construction recovers an inherent connection between LOG and EXP. Moreover, it provides us
with an approach for adaptively updating tuning hyperparameters, which is a very important computational issue in nonconvex sparse penalization. Typically, the multi-stage LLA and SparseNet
algorithms with nonconvex penalties [21, 13] implement a two-dimensional grid search, so they incur higher computational costs. However, we do not claim that our method will always be optimal
for generalization performance.
2 Lévy Processes for Nonconvex Penalty Functions
Suppose we are given a set of training data {(x_i, y_i) : i = 1, . . . , n}, where the x_i ∈ ℝ^p are the input vectors and the y_i are the corresponding outputs. Moreover, we assume that Σ_{i=1}^n x_i = 0 and Σ_{i=1}^n y_i = 0. We now consider the following linear regression model:

    y = Xb + ε,

where y = (y_1, . . . , y_n)^T is the n×1 output vector, X = [x_1, . . . , x_n]^T is the n×p input matrix, and ε is a Gaussian error vector N(ε|0, σI_n). We aim to find a sparse estimate of the regression vector b = (b_1, . . . , b_p)^T under the MAP framework.
We particularly study the use of Laplace variance mixtures in sparsity modeling. For this purpose, we define a hierarchical model:

    [b_j | λ_j, σ] ~ L(b_j | 0, σ(2λ_j)^{−1}) independently,   [λ_j] ~ p(λ_j) i.i.d.,   p(σ) = "Constant",

where the λ_j's are known as the local shrinkage parameters and L(b|u, σ) denotes a Laplace distribution with the density

    L(b|u, σ) = (1/(4σ)) exp(−|b − u|/(2σ)).
The classical regularization framework is based on a penalty function induced from the marginal prior p(b_j|σ). Let

    Φ(|b|) = − log p(b|σ),

where p(b|σ) = ∫_0^∞ L(b|0, σλ^{−1}) p(λ) dλ. Then the penalized regression problem is

    min_b { F(b) ≜ (1/2)‖y − Xb‖₂² + σ Σ_{j=1}^{p} Φ(|b_j|) }.
Using some direct calculations, we can obtain that dΦ(|b|)/d|b| > 0 and d²Φ(|b|)/d|b|² < 0. This implies that Φ(|b|) is nondecreasing and concave in |b|. In other words, Φ(|b|) forms a class of nonconvex penalty functions for b.
Motivated by the use of Bayesian nonparametrics in sparsity modeling, we now explore Laplace scale mixtures by relating λ with a subordinator. We thus have a Bayesian nonparametric formulation for the construction of joint priors of the b_j's.
2.1 Subordinators and Laplace Exponents
Before we go into the presentation, we give some notions and lemmas that will be used later. Let f ∈ C^∞(0, ∞) with f ≥ 0. We say f is completely monotone if (−1)^n f^{(n)} ≥ 0 for all n ∈ ℕ, and a Bernstein function if (−1)^n f^{(n)} ≤ 0 for all n ∈ ℕ. The following lemma will be useful.
Lemma 1 Let ν be a Lévy measure such that ∫_0^∞ min(u, 1) ν(du) < ∞.
(1) f is a Bernstein function if and only if the mapping s ↦ exp(−tf(s)) is completely monotone for all t ≥ 0.
(2) f is a Bernstein function if and only if it has the representation

    f(s) = a + γs + ∫_0^∞ [1 − exp(−su)] ν(du)  for all s > 0,    (1)

where a, γ ≥ 0.
Our work is based on the notion of subordinators. Roughly speaking, a subordinator is a one-dimensional Lévy process that is non-decreasing (a.s.) [16]. An important property for subordinators
is given in the following lemma.
Lemma 2 If T = (T(t) : t ≥ 0) is a subordinator, then the Laplace transformation of its density takes the form

    E[e^{−sT(t)}] = ∫_0^∞ e^{−sT(t)} p(T(t)) dT(t) = e^{−tΦ(s)},

where

    Φ(s) = γs + ∫_0^∞ [1 − e^{−su}] ν(du)  for s > 0.    (2)

Here γ ≥ 0 and ν is the Lévy measure defined in Lemma 1.
Conversely, if Φ is an arbitrary mapping from (0, ∞) → (0, ∞) of the form (2), then e^{−tΦ(s)} is the Laplace transformation of the density of a subordinator.
Lemmas 1 and 2 can be found in [1, 16]. The function Φ in (2) is usually called the Laplace exponent of the subordinator, and it satisfies Φ(0) = 0. Lemma 1 implies that the Laplace exponent Φ is a Bernstein function and the corresponding Laplace transformation exp(−tΦ(s)) is completely monotone.
Recall that the Laplace exponent Φ(s) is nonnegative, nondecreasing and concave on (0, ∞). Thus, if we let s = |b|, then Φ(|b|) defines a nonconvex penalty function of b on (−∞, ∞). Moreover, such a Φ(|b|) is nondifferentiable at the origin because Φ′(0⁺) > 0 and Φ′(0⁻) < 0. Thus, it is able to induce sparsity. In this regard, exp(−tΦ(|b|)) forms a pseudo-prior for b.¹ Lemma 2 shows that the prior can be defined by a Laplace transformation. In summary, we have the following theorem.
Theorem 1 Let Φ(s) be a nonzero Bernstein function of s on (0, ∞) with Φ(0) = 0. Then Φ(|b|) is a nondifferentiable and nonconvex function of b on (−∞, ∞). Furthermore,

    exp(−tΦ(|b|)) = ∫_0^∞ exp(−|b|T(t)) p(T(t)) dT(t),  t ≥ 0,

where (T(t) : t ≥ 0) is some subordinator.
The subordinator T(t) plays the same role as the local shrinkage parameter λ, which is also called a latent variable. Moreover, we will see that t plays the role of a tuning hyperparameter. Theorem 1 shows an explicit relationship between the local shrinkage parameter and the corresponding tuning hyperparameter; i.e., the former is a stochastic process of the latter. It is also worth noting that

    exp(−tΦ(|b|)) = 2 ∫_0^∞ L(b|0, (2T(t))^{−1}) T(t)^{−1} p(T(t)) dT(t).

Thus, if ∫_0^∞ T(t)^{−1} p(T(t)) dT(t) = 1/C < ∞, then p*(T(t)) ≜ C T(t)^{−1} p(T(t)) defines a new proper density for T(t). In this case, the proper prior (C/2) exp(−tΦ(|b|)) is a Laplace scale mixture, i.e., the mixture of L(b|0, (2T(t))^{−1}) with p*(T(t)). If ∫_0^∞ T(t)^{−1} p(T(t)) dT(t) = ∞, then p*(T(t)) ≜ T(t)^{−1} p(T(t)) defines an improper density for T(t). Thus, the improper prior exp(−tΦ(|b|)) is a mixture of L(b|0, (2T(t))^{−1}) with p*(T(t)).
¹ If ∫_0^∞ exp(−tΦ(s)) ds is infinite, exp(−tΦ(|b|)) is an improper density w.r.t. the Lebesgue measure. Otherwise, it can form a proper density. In any case, we use the terminology of pseudo-priors for exp(−tΦ(|b|)).
2.2 The MAP Estimation
Based on the subordinator given in the previous subsection, we rewrite the hierarchical representation for the joint prior of the b_j under the regression framework. That is,

    [b_j | λ_j, σ] ~ L(b_j | 0, σ(2λ_j)^{−1}) independently,   p*(λ_j) ∝ σλ_j^{−1} p(λ_j),

which is equivalent to

    [b_j, λ_j | σ] ∝ exp(−(λ_j/σ)|b_j|) p(λ_j).

Here T(t_j) = λ_j. The joint marginal pseudo-prior of the b_j's is

    p*(b|σ) = ∏_{j=1}^{p} ∫_0^∞ exp(−(λ_j/σ)|b_j|) P(λ_j) dλ_j = ∏_{j=1}^{p} exp(−t_j Φ(|b_j|/σ)).
Thus, the MAP estimate of b is based on the following optimization problem:

    min_b { (1/2)‖y − Xb‖₂² + σ Σ_{j=1}^{p} t_j Φ(|b_j|/σ) }.

Clearly, the t_j's are tuning hyperparameters and the λ_j's are latent variables. Moreover, it is interesting that λ_j (= T(t_j)) is defined as a subordinator w.r.t. t_j.
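One standard way to attack this MAP problem, sketched below under our own naming, is local linear approximation: repeatedly majorize the concave penalty by its tangent and solve the resulting weighted lasso by proximal gradient. This is an illustration under our assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

def map_estimate(X, y, t, sigma, dphi, outer=20, inner=200):
    """LLA sketch for min_b 0.5*||y - Xb||^2 + sigma * sum_j t_j*Phi(|b_j|/sigma).

    Each outer step majorizes the concave penalty by its tangent, giving a
    weighted lasso with weights w_j = t_j * Phi'(|b_j|/sigma), solved
    approximately by proximal gradient (ISTA).
    """
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the quadratic loss
    b = np.zeros(p)
    for _ in range(outer):
        w = t * dphi(np.abs(b) / sigma)  # tangent weights of the surrogate
        for _ in range(inner):
            z = b - X.T @ (X @ b - y) / L
            b = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft threshold
    return b

# LOG penalty (3): Phi(s) = log(alpha*s + 1)/beta with beta = log(1 + alpha).
alpha = 5.0
beta = np.log(1.0 + alpha)
dphi = lambda s: alpha / (beta * (alpha * s + 1.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
b_true = np.zeros(20); b_true[:3] = 2.0
y = X @ b_true + 0.1 * rng.normal(size=50)
print(map_estimate(X, y, t=np.ones(20), sigma=0.1, dphi=dphi).round(2))
```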
3 Gamma and Compound Poisson Subordinators
In [15], the authors discussed the use of α-stable subordinators and inverted-beta subordinators. In this section we study applications of the Gamma and compound Poisson subordinators in constructing nonconvex penalty functions. We establish an interesting connection between these two subordinators and the nonconvex logarithmic (LOG) and exponential (EXP) penalties. In particular, these two penalties are the Laplace exponents of the two subordinators, respectively.

3.1 The LOG Penalty and the Gamma Subordinator
The log-penalty function is defined by

Φ(|b|) = (1/ρ) log(α|b| + 1),   α, ρ > 0.   (3)
Clearly, Φ(|b|) is a Bernstein function of |b| on (0, ∞). Thus, it is the Laplace exponent of a subordinator. In particular, we have the following theorem.

Theorem 2 Let Φ(s) be defined by (3) with s = |b|. Then,

(1/ρ) log(αs + 1) = ∫₀^∞ [1 - exp(-su)] ν(du),

where the Lévy measure ν is

ν(du) = (1/(ρu)) exp(-u/α) du.
Furthermore,

exp(-tΦ(s)) = (αs + 1)^{-t/ρ} = ∫₀^∞ exp(-sT(t)) p(T(t)) dT(t),

where {T(t) : t ≥ 0} is a Gamma subordinator and each T(t) has density

p(T(t) = λ) = (α^{-t/ρ} / Γ(t/ρ)) λ^{t/ρ - 1} exp(-α⁻¹ λ).

As we see, T(t) follows the Gamma distribution Ga(T(t) | t/ρ, α). Thus, {T(t) : t ≥ 0} is called the Gamma subordinator.
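As a quick numerical sanity check of this Laplace-transform identity (an illustrative sketch of ours, not part of the original derivation; the parameter values α = 2, ρ = 1.5, t = 3, s = 0.7 are arbitrary choices), one can integrate the Gamma density against e^{-sλ} and compare with the closed form:

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

alpha, rho, t, s = 2.0, 1.5, 3.0, 0.7

# E[exp(-s*T(t))] with T(t) ~ Ga(shape=t/rho, scale=alpha)
lhs, _ = quad(lambda lam: np.exp(-s * lam) * gamma.pdf(lam, a=t / rho, scale=alpha),
              0.0, np.inf)

# Closed form exp(-t*Phi(s)) = (alpha*s + 1)^(-t/rho)
rhs = (alpha * s + 1.0) ** (-t / rho)

print(lhs, rhs)  # both approximately 0.1736
assert np.isclose(lhs, rhs, rtol=1e-6)
```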
We also note that the corresponding pseudo-prior is

exp(-tΦ(|b|)) = (α|b| + 1)^{-t/ρ} ∝ ∫₀^∞ L(b | 0, T(t)⁻¹) T(t)⁻¹ p(T(t)) dT(t).

Furthermore, if t > ρ, we can form the pseudo-prior as a proper distribution, which is the mixture of L(b | 0, T(t)⁻¹) with the Gamma distribution Ga(T(t) | ρ⁻¹t - 1, α).
3.2 The EXP Penalty and the Compound Poisson Subordinator
We call {K(t) : t ≥ 0} a Poisson process of intensity λ > 0 if K takes values in N ∪ {0} and each K(t) ~ Po(K(t) | λt), namely,

P(K(t) = k) = ((λt)^k / k!) e^{-λt},   for k = 0, 1, 2, . . .

Let {Z(k) : k ∈ N} be a sequence of i.i.d. real random variables with common law μ_Z, and let K be a Poisson process of intensity λ that is independent of all the Z(k). Then T(t) ≜ Z(1) + · · · + Z(K(t)) for t ≥ 0 follows a compound Poisson distribution (denoted T(t) ~ Po(T(t) | λt, μ_Z)). We then call {T(t) : t ≥ 0} the compound Poisson process. It is well known that Poisson processes are subordinators. A compound Poisson process is a subordinator if and only if the Z(k) are nonnegative random variables [16].
In this section we employ the compound Poisson process to explore the EXP penalty, which is

Φ(|b|) = (1/ρ)(1 - exp(-α|b|)),   α, ρ > 0.   (4)
It is easily seen that Φ(|b|) is a Bernstein function of |b| on (0, ∞). Moreover, we have

Theorem 3 Let Φ(s) be defined by (4) where |b| = s. Then

Φ(s) = ∫₀^∞ [1 - exp(-su)] ν(du)

with the Lévy measure ν(du) = ρ⁻¹ δ_α(u) du. Furthermore,

exp(-tΦ(s)) = ∫₀^∞ exp(-sT(t)) P(T(t)) dT(t),

where {T(t) : t ≥ 0} is a compound Poisson subordinator, each T(t) ~ Po(T(t) | t/ρ, δ_α(·)), and δ_u(·) is the Dirac delta measure.
Note that ∫_R (1 - exp(-α|b|)) db = ∞, so ρ⁻¹(1 - exp(-α|b|)) is an improper prior of b.
As we see, there are two parameters α and ρ in both the LOG and EXP penalties. Usually, for the LOG penalty one sets ρ = log(1 + α), because the corresponding Φ(|b|) goes from ‖b‖₁ to ‖b‖₀ as α varies from 0 to ∞. For the same reason, one sets ρ = 1 - exp(-α) for the EXP penalty. Thus, α measures the sparseness. It makes sense to set α = p (i.e., the dimension of the input vector) in the following experiments. Interestingly, the following theorem shows a limiting property of the subordinators.
Theorem 4 Assume that α > 0 and ρ > 0.

(1) If ρ = log(1 + α), then lim_{α→0} Ga(T(t) | t/ρ, α) →d δ_t(T(t)).

(2) If ρ = 1 - e^{-α}, then lim_{α→0} Po(T(t) | t/ρ, δ_α(·)) →d δ_t(T(t)).
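A short numerical illustration of part (1) (our own sketch; the value of t and the grid of α's are arbitrary): with ρ = log(1 + α), the Gamma distribution Ga(t/ρ, α) has mean tα/log(1 + α) → t and variance tα²/log(1 + α) → 0 as α → 0, so it concentrates at t:

```python
import numpy as np

t = 2.0
for alpha in [1.0, 0.1, 0.01, 0.001]:
    rho = np.log(1.0 + alpha)
    shape, scale = t / rho, alpha     # Ga(T(t) | t/rho, alpha)
    mean = shape * scale              # -> t as alpha -> 0
    var = shape * scale ** 2          # -> 0 as alpha -> 0
    print(f"alpha={alpha:7.3f}  mean={mean:.4f}  var={var:.6f}")
```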
In this section we have established an interesting connection between the LOG and EXP penalties, based on the relationship between the Gamma and compound Poisson subordinators. Subordinators help us establish a direct connection between the tuning hyperparameters t_j and the latent variables λ_j (= T(t_j)). However, when we implement the MAP estimation, it is challenging to select these tuning hyperparameters. Recently, Palmer et al. [14] considered the application of concave conjugates in developing variational EM algorithms for non-Gaussian latent variable models. In the next section we rederive the nonconvex LOG and EXP penalties via the concave conjugate. This derivation is able to deal with the challenge.

4 A View of Concave Conjugates
Our derivation of the LOG and EXP penalties is based on the Kullback-Leibler (KL) distance. Given two nonnegative vectors a = (a₁, . . . , a_p)ᵀ and s = (s₁, . . . , s_p)ᵀ, the KL distance between them is

KL(a, s) = ∑_{j=1}^p [ a_j log(a_j/s_j) - a_j + s_j ],

where 0 log(0/0) = 0. It is well known that KL(a, s) ≥ 0 and KL(a, s) = 0 if and only if a = s, but typically KL(a, s) ≠ KL(s, a).
Theorem 5 Let a = (a₁, . . . , a_p)ᵀ be a nonnegative vector and |b| = (|b₁|, . . . , |b_p|)ᵀ. Then,

∑_{j=1}^p a_j Φ(|b_j|) ≜ (1/α) ∑_{j=1}^p a_j log(α|b_j| + 1) = min_{w≥0} { wᵀ|b| + (1/α) KL(a, w) }

when w_j = a_j / (1 + α|b_j|), and

∑_{j=1}^p a_j Φ(|b_j|) ≜ (1/α) ∑_{j=1}^p a_j [1 - exp(-α|b_j|)] = min_{w≥0} { wᵀ|b| + (1/α) KL(w, a) }

when w_j = a_j exp(-α|b_j|).
When setting a_j = η t_j, we readily recover the LOG and EXP penalties. Thus, Theorem 5 illustrates a very interesting connection between the LOG and EXP penalties. Since KL(a, w) is strictly convex in either w or a, the LOG and EXP penalties are respectively the concave conjugates of -α⁻¹ KL(a, w) and -α⁻¹ KL(w, a).
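The variational identity in Theorem 5 is easy to check numerically. The following sketch (our own; the random vectors and α = 3 are arbitrary choices) compares the closed-form LOG penalty, a generic numerical minimization over w, and the objective evaluated at the stated minimizer w_j = a_j/(1 + α|b_j|):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p, alpha = 5, 3.0
a = rng.uniform(0.5, 2.0, p)
b = rng.normal(size=p)

def kl(u, v):
    """KL(u, v) = sum_j [u_j log(u_j/v_j) - u_j + v_j]."""
    return np.sum(u * np.log(u / v) - u + v)

closed = np.sum(a * np.log(alpha * np.abs(b) + 1.0)) / alpha   # sum_j a_j Phi(|b_j|)

obj = lambda w: w @ np.abs(b) + kl(a, w) / alpha
res = minimize(obj, x0=np.ones(p), bounds=[(1e-9, None)] * p)
w_star = a / (1.0 + alpha * np.abs(b))                         # minimizer from Theorem 5

print(closed, res.fun, obj(w_star))   # all three agree up to solver tolerance
```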
The construction method for the nonconvex penalties provides us with a new approach for solving the corresponding penalized regression model. In particular, to solve the nonconvex penalized regression problem

min_b { J(b, a) ≜ (1/2) ‖y - Xb‖₂² + ∑_{j=1}^p a_j Φ(|b_j|) },   (5)
we equivalently formulate it as

min_b min_{w≥0} { (1/2) ‖y - Xb‖₂² + wᵀ|b| + (1/α) D(w, a) }.   (6)

Here D(w, a) is either KL(a, w) or KL(w, a). Moreover, we are also interested in adaptive estimation of a when solving problem (6). Accordingly, we develop a new training algorithm, which consists of two steps.
We are given initial values w⁽⁰⁾, e.g., w⁽⁰⁾ = (1, . . . , 1)ᵀ. After the kth estimates (b⁽ᵏ⁾, a⁽ᵏ⁾) of (b, a) are obtained, the (k+1)th iteration of the algorithm is defined as follows.

The first step calculates w⁽ᵏ⁾ via

w⁽ᵏ⁾ = argmin_{w>0} { ∑_{j=1}^p w_j |b_j⁽ᵏ⁾| + (1/α) D(w, a⁽ᵏ⁾) }.

In particular, w_j⁽ᵏ⁾ = a_j⁽ᵏ⁾ / (1 + α|b_j⁽ᵏ⁾|) in LOG, while w_j⁽ᵏ⁾ = a_j⁽ᵏ⁾ exp(-α|b_j⁽ᵏ⁾|) in EXP.
The second step then calculates (b⁽ᵏ⁺¹⁾, a⁽ᵏ⁺¹⁾) via

(b⁽ᵏ⁺¹⁾, a⁽ᵏ⁺¹⁾) = argmin_{b,a} { (1/2) ‖y - Xb‖₂² + |b|ᵀ w⁽ᵏ⁾ + (1/α) D(w⁽ᵏ⁾, a) }.

Note that given w⁽ᵏ⁾, b and a are independent. Thus, this step can be partitioned into two parts. Namely, a⁽ᵏ⁺¹⁾ = w⁽ᵏ⁾ and

b⁽ᵏ⁺¹⁾ = argmin_b { (1/2) ‖y - Xb‖₂² + ∑_{j=1}^p w_j⁽ᵏ⁾ |b_j| }.

Recall that the LOG and EXP penalties are differentiable and strictly concave in |b| on [0, ∞). Thus, the above algorithm enjoys the same convergence property as the LLA algorithm studied by Zou and Li [23] (see Theorem 1 and Proposition 1 therein).
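For concreteness, the following is a minimal sketch of the two-step procedure (the implementation choices are our own: ISTA as the inner weighted-ℓ1 solver, a fixed iteration budget, and zero initialization of b; none of these is prescribed above):

```python
import numpy as np

def soft_threshold(z, tau):
    """Coordinate-wise soft-thresholding, the prox of the weighted l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def weighted_lasso(X, y, w, n_iter=500):
    """ISTA for min_b 0.5*||y - Xb||_2^2 + sum_j w_j |b_j|."""
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L, w / L)
    return b

def nonconvex_regression(X, y, a0, alpha, penalty="LOG", n_outer=10):
    """Alternate the closed-form w-update with a weighted Lasso, then set a = w."""
    a, b = a0.copy(), np.zeros(X.shape[1])
    for _ in range(n_outer):
        if penalty == "LOG":
            w = a / (1.0 + alpha * np.abs(b))   # w_j = a_j / (1 + alpha*|b_j|)
        else:                                   # "EXP"
            w = a * np.exp(-alpha * np.abs(b))  # w_j = a_j * exp(-alpha*|b_j|)
        b = weighted_lasso(X, y, w)
        a = w                                   # a^(k+1) = w^(k)
    return b
```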
5 Experimental Analysis

We conduct an experimental analysis of our algorithms with the LOG and EXP penalties given in the previous section. We also implement the Lasso, adaptive Lasso (AdLasso) and MCP-based methods. All these methods are solved by the coordinate descent algorithm. For the LOG and EXP algorithms, we fix α = p (the dimension of the input vector), and set w⁽⁰⁾ = τ1, where τ is selected by cross-validation and 1 is the vector of ones. For Lasso, AdLasso and MCP, we use cross-validation to select the tuning parameters (λ in Lasso; λ and γ in AdLasso and MCP).
In this simulation example, we use the data model

y = xᵀb + σε,

where ε ~ N(0, 1), and b is a 200-dimensional vector with only 10 non-zeros such that b_i = b_{100+i} = 0.2i, i = 1, . . . , 5. Each data point x is sampled from a multivariate normal distribution with zero mean and covariance matrix Σ = {0.7^{|i-j|}}_{1≤i,j≤200}. We choose σ such that the signal-to-noise ratio (SNR), defined as

SNR = √(bᵀΣb) / σ,

is a specified value.
is a specified value. Our experiment is performed on n = 100 and two different SNR values. We
? denote the solution given by each algorithm. The
generate N = 1000 test data for each test. Let b
Standardized Prediction Error (SPE) is defined as
?N
? 2
(yi ? xTi b)
SPE = i=1
N ?2
? which is correctly set to zero
and the Feature Selection Error (FSE) is proportion of coefficients in b
or non-zero based on true b.
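The simulation setup translates into a few lines of NumPy (a sketch under our reading of the protocol above; the random seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, N, snr = 200, 100, 1000, 3.0

# True coefficients: b_i = b_{100+i} = 0.2*i for i = 1..5 (10 non-zeros).
b = np.zeros(p)
for i in range(1, 6):
    b[i - 1] = b[100 + i - 1] = 0.2 * i

# Covariance Sigma_{ij} = 0.7^{|i-j|}; Cholesky factor used for sampling x.
idx = np.arange(p)
Sigma = 0.7 ** np.abs(idx[:, None] - idx[None, :])
L = np.linalg.cholesky(Sigma)

sigma = np.sqrt(b @ Sigma @ b) / snr   # noise level implied by the SNR definition

def sample(m):
    X = rng.normal(size=(m, p)) @ L.T
    return X, X @ b + sigma * rng.normal(size=m)

X_train, y_train = sample(n)
X_test, y_test = sample(N)

def spe(b_hat):
    """Standardized prediction error on the test set."""
    return np.sum((y_test - X_test @ b_hat) ** 2) / (N * sigma ** 2)
```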
Figure 1 reports the average results over 20 repeats. From the figure, we see that both LOG and EXP outperform the other methods in prediction accuracy and sparseness in most cases. Our methods usually take about 10 iterations to converge. Thus, they are computationally more efficient than AdLasso and MCP.

In the second experiment, we apply our methods to regression problems on four datasets from the UCI Machine Learning Repository and on the cookie (near-infrared (NIR) spectroscopy of biscuit doughs) dataset [7]. For the four UCI datasets, we randomly select 70% of the data for training and the rest for testing, and repeat this process 20 times. We report the mean and standard deviation of the root mean square error (RMSE) and the model sparsity (proportion of zero coefficients in the model) in Tables 1 and 2. For the NIR dataset, we follow the setup of the original dataset: 40 instances for training and 32 instances for testing. We form four different datasets for the four responses ('fat', 'sucrose', 'dry flour' and 'water'), and report the RMSE on the test set and the model sparsity in Table 3. We can see that all the methods are competitive in prediction accuracy, but the nonconvex LOG, EXP and MCP penalties have a stronger ability in feature selection.
Figure 1: Box-and-whisker plots of the SPE and FSE results for SNR = 3.0 and SNR = 10.0. Here (a), (b), (c), (d), (e) are for LOG, EXP, Lasso, AdLasso, and MCP, respectively.
Table 1: Root Mean Square Error on real datasets

Method    Abalone          Housing          Pyrim            Triazines
LOG       2.207(±0.077)    4.880(±0.405)    0.138(±0.032)    0.156(±0.018)
EXP       2.208(±0.077)    4.883(±0.405)    0.130(±0.033)    0.153(±0.020)
Lasso     2.208(±0.078)    4.886(±0.414)    0.118(±0.035)    0.146(±0.017)
AdLasso   2.208(±0.078)    4.887(±0.413)    0.127(±0.028)    0.146(±0.017)
MCP       2.209(±0.078)    4.889(±0.412)    0.122(±0.036)    0.148(±0.017)
Table 2: Sparsity on real datasets

Method    Abalone         Housing         Pyrim            Triazines
LOG       12.50(±0.00)    11.54(±5.70)    57.22(±35.32)    68.17(±31.19)
EXP       10.63(±4.46)    8.08(±5.15)     88.15(±5.69)     76.25(±21.84)
Lasso     1.88(±4.46)     3.08(±5.10)     36.48(±24.52)    62.08(±14.65)
AdLasso   8.75(±5.73)     8.07(±7.08)     34.62(±28.81)    63.58(±15.18)
MCP       12.50(±0.00)    11.54(±6.66)    41.48(±23.88)    73.00(±18.77)
Table 3: Root Mean Square Error and Sparsity on the NIR datasets

          NIR(fat)          NIR(sucrose)      NIR(dry flour)    NIR(water)
Method    RMSE   Sparsity   RMSE   Sparsity   RMSE   Sparsity   RMSE   Sparsity
LOG       0.334  99.14      1.45   98.71      0.992  99.71      0.400  98.14
EXP       0.307  97.29      1.47   97.71      0.908  98.86      0.484  94.14
Lasso     0.437  68.86      2.54   53.43      0.785  92.29      0.378  65.57
AdLasso   0.835  88.14      2.22   86.14      0.862  99.14      0.407  85.86
MCP       0.943  94.14      2.07   95.43      0.839  99.71      0.504  96.29

6 Conclusion
In this paper we have introduced subordinators of Lévy processes into the definition of nonconvex penalties. This leads to a Bayesian nonparametric approach for constructing sparsity-inducing penalties. In particular, we have illustrated the construction of the LOG and EXP penalties. Along this line, it would be interesting to investigate other penalty functions via subordinators and compare the performance of these penalties. We will conduct a comprehensive study in future work.
Acknowledgments

This work has been supported in part by the Natural Science Foundation of China (No. 61070239).
References

[1] D. Applebaum. Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge, UK, 2004.
[2] A. Armagan, D. Dunson, and J. Lee. Generalized double Pareto shrinkage. Technical report, Duke University Department of Statistical Science, February 2011.
[3] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proceedings of the 15th International Conference on Machine Learning, pages 82-90. Morgan Kaufmann Publishers, San Francisco, California, 1998.
[4] F. Caron and A. Doucet. Sparse Bayesian nonparametric regression. In Proceedings of the 25th International Conference on Machine Learning, page 88, 2008.
[5] V. Cevher. Learning with compressible priors. In Advances in Neural Information Processing Systems 22, pages 261-269, 2009.
[6] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96:1348-1361, 2001.
[7] B. G. Osborne, T. Fearn, A. R. Miller, and S. Douglas. Application of near-infrared reflectance spectroscopy to compositional analysis of biscuits and biscuit dough. Journal of the Science of Food and Agriculture, 35(1):99-105, 1984.
[8] C. Gao, N. Wang, Q. Yu, and Z. Zhang. A feasible nonconvex relaxation approach to feature selection. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI'11), 2011.
[9] P. J. Garrigues and B. A. Olshausen. Group sparse coding with a Laplacian scale mixture prior. In Advances in Neural Information Processing Systems 22, 2010.
[10] Z. Ghahramani, T. Griffiths, and P. Sollich. Bayesian nonparametric latent feature models. In World Meeting on Bayesian Statistics, 2006.
[11] J. E. Griffin and P. J. Brown. Bayesian adaptive lassos with non-convex penalization. Technical report, University of Kent, 2010.
[12] A. Lee, F. Caron, A. Doucet, and C. Holmes. A hierarchical Bayesian framework for constructing sparsity-inducing priors. Technical report, University of Oxford, UK, 2010.
[13] R. Mazumder, J. Friedman, and T. Hastie. SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 106(495):1125-1138, 2011.
[14] J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao. Variational EM algorithms for non-Gaussian latent variable models. In Advances in Neural Information Processing Systems 18, 2006.
[15] N. G. Polson and J. G. Scott. Local shrinkage rules, Lévy processes, and regularized regression. Journal of the Royal Statistical Society (Series B), 74(2):287-311, 2012.
[16] K.-I. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge, UK, 1999.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[18] M. K. Titsias. The infinite gamma-Poisson feature model. In Advances in Neural Information Processing Systems 20, 2007.
[19] J. Weston, A. Elisseeff, B. Schölkopf, and M. Tipping. Use of the zero-norm with linear models and kernel methods. Journal of Machine Learning Research, 3:1439-1461, 2003.
[20] C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38:894-942, 2010.
[21] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. Journal of Machine Learning Research, 11:1081-1107, 2010.
[22] Z. Zhang, S. Wang, D. Liu, and M. I. Jordan. EP-GIG priors and applications in Bayesian sparse learning. Journal of Machine Learning Research, 13:2031-2061, 2012.
[23] H. Zou and R. Li. One-step sparse estimates in nonconcave penalized likelihood models. The Annals of Statistics, 36(4):1509-1533, 2008.
Multiclass Learning with Simplex Coding

Youssef Mroueh†,‡, Tomaso Poggio†,‡, Lorenzo Rosasco†,‡, Jean-Jacques E. Slotine♦
† CBCL, McGovern Institute, MIT; ‡ LCSL, MIT-IIT; ♦ ME, BCS, MIT
{ymroueh, lrosasco, tp}@mit.edu, [email protected]
Abstract

In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows us to generalize to multiple classes a relaxation approach commonly used in binary classification. In this framework, we develop a relaxation error analysis that avoids constraints on the considered hypotheses class. Moreover, using this setting we derive the first provably consistent regularized method with training/tuning complexity that is independent of the number of classes. We introduce tools from convex analysis that can be used beyond the scope of this paper.

1 Introduction
As bigger and more complex datasets become available, multiclass learning is becoming increasingly important in machine learning. While theory and algorithms for solving binary classification problems are well established, the problem of multicategory classification is much less understood. Practical multiclass algorithms often reduce the problem to a collection of binary classification problems. Binary classification algorithms are often based on a relaxation approach: classification is posed as a non-convex minimization problem and then relaxed to a convex one, defined by suitable convex loss functions. In this context, results in statistical learning theory quantify the error incurred by relaxation and in particular derive comparison inequalities explicitly relating the excess misclassification risk to the excess expected loss. We refer to [2, 27, 14, 29] and [18], Chapter 3, for an exhaustive presentation as well as generalizations.

Generalizing the above approach and results to more than two classes is not straightforward. Over the years, several computational solutions have been proposed (among others, see [10, 6, 5, 25, 1, 21]). Indeed, most of these methods can be interpreted as a kind of relaxation. Most proposed methods have complexity which is more than linear in the number of classes, and simple one-vs-all in practice offers a good alternative both in terms of performance and speed [15]. Far fewer works have focused on deriving theoretical guarantees. Results in this sense have been pioneered by [28, 20]; see also [11, 7, 23]. In these works the error due to relaxation is studied asymptotically and under constraints on the function class to be considered. More quantitative results in terms of comparison inequalities are given in [4] under similar restrictions (see also [19]). Notably, the above results show that seemingly intuitive extensions of binary classification algorithms might lead to methods which are not consistent. Further, it is interesting to note that the restrictions on the function class, needed to prove the theoretical guarantees, make the computations in the corresponding algorithms more involved and are in fact often ignored in practice.

In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, in which a relaxation error analysis can be developed avoiding constraints on the considered hypotheses class. Moreover, we show that in this framework it is possible to derive the first provably consistent regularized method with training/tuning complexity that is independent of the number of classes. Interestingly, using the simplex coding, we can naturally generalize results, proof techniques and methods from the binary case, which is recovered as a special case of our theory. Due to space restrictions, in this paper we focus on extensions of the least squares and SVM loss functions, but our analysis can be generalized to a large class of simplex loss functions, including extensions of the logistic and exponential loss functions (used in boosting). Tools from convex analysis are developed in the supplementary material and can be useful beyond the scope of this paper, in particular in structured prediction.

The rest of the paper is organized as follows. In Section 2 we discuss the problem statement and background. In Section 3 we present the simplex coding framework, which we analyze in Section 4. Algorithmic aspects and numerical experiments are discussed in Sections 5 and 6, respectively. Proofs and supplementary technical results are given in the appendices.
2 Problem Statement and Previous Work
Let (X, Y) be two random variables with values in two measurable spaces X and Y = {1, . . . , T}, T ≥ 2. Denote by ρ_X the law of X on X, and by ρ_j(x) the conditional probabilities for j ∈ Y. The data is a sample S = (x_i, y_i)_{i=1}^n, from n identical and independent copies of (X, Y). We can think of X as a set of possible inputs and of Y as a set of labels describing a set of semantic categories/classes the input can belong to. A classification rule is a map b : X → Y, and its error is measured by the misclassification risk R(b) = P(b(X) ≠ Y) = E(1_{[b(X)≠Y]}(X, Y)). The optimal classification rule that minimizes R is the Bayes rule b_ρ(x) = argmax_{y∈Y} ρ_y(x), x ∈ X. Computing the Bayes rule by directly minimizing the risk R is not possible since the probability distribution is unknown. One might think of minimizing the empirical risk (ERM), R_S(b) = (1/n) ∑_{i=1}^n 1_{[b(x_i)≠y_i]}(x_i, y_i), which is an unbiased estimator of R, but the corresponding optimization problem is in general not feasible.

In binary classification, one of the most common ways to obtain computationally efficient methods is based on a relaxation approach. We recall this approach in the next section and describe its extension to multiclass in the rest of the paper.
Relaxation Approach to Binary Classification. If T = 2, we can set Y = {±1}. Most modern machine learning algorithms for binary classification consider a convex relaxation of the ERM functional R_S. More precisely: 1) the indicator function in R_S is replaced by a non-negative loss V : Y × R → R₊ which is convex in the second argument and is sometimes called a surrogate loss; 2) the classification rule b is replaced by a real-valued measurable function f : X → R. A classification rule is then obtained by considering the sign of f. It often suffices to consider a special class of loss functions, namely large margin loss functions V : R → R₊ of the form V(-yf(x)). This last expression is suggested by the observation that the misclassification risk, using the labels ±1, can be written as R(f) = E(Θ(-Y f(X))), where Θ is the Heaviside step function. The quantity m = -yf(x), sometimes called the margin, is a natural point-wise measure of the classification error. Among other examples of large margin loss functions (such as the logistic and exponential loss), we recall the hinge loss V(m) = |1 + m|₊ = max{1 + m, 0} used in the support vector machine, and the square loss V(m) = (1 + m)² used in regularized least squares (note that (1 - yf(x))² = (y - f(x))²). Using large margin loss functions it is possible to design effective learning algorithms replacing the empirical risk with the regularized empirical risk

E_S^λ(f) = (1/n) ∑_{i=1}^n V(y_i, f(x_i)) + λ R(f),   (1)

where R is a suitable regularization functional and λ is the regularization parameter (see Section 5).
2.1 Relaxation Error Analysis

As we replace the misclassification loss with a convex surrogate loss, we are effectively changing the problem: the misclassification risk is replaced by the expected loss, E(f) = E(V(-Y f(X))). The expected loss can be seen as a functional on a large space of functions F = F_{V,ρ}, which depends on V and ρ. Its minimizer, denoted by f_ρ, replaces the Bayes rule as the target of our algorithm. The question arises of the price we pay by considering a relaxation approach: "What is the relationship between f_ρ and b_ρ?" More generally, "What is the price we incur by estimating the expected risk rather than the misclassification risk?" The relaxation error for a given loss function can be quantified by the following two requirements:

1) Fisher consistency. A loss function is Fisher consistent if sign(f_ρ(x)) = b_ρ(x) almost surely (this property is related to the notion of classification calibration [2]).
2) Comparison inequalities. The excess misclassification risk and the excess expected loss are related by a comparison inequality

R(sign(f)) - R(b_ρ) ≤ ψ(E(f) - E(f_ρ)),

for any function f ∈ F, where ψ = ψ_{V,ρ} is a suitable function that depends on V, and possibly on the data distribution. In particular, ψ should be such that ψ(s) → 0 as s → 0, so that if f_n is a (possibly random) sequence of functions such that E(f_n) → E(f_ρ) (possibly in probability), then the corresponding sequence of classification rules c_n = sign(f_n) is Bayes consistent, i.e. R(c_n) → R(b_ρ) (possibly in probability). If ψ is explicitly known, then bounds on the excess expected loss yield bounds on the excess misclassification risk.

The relaxation error in the binary case has been thoroughly studied in [2, 14]. In particular, Theorem 2 in [2] shows that if a large margin surrogate loss is convex, differentiable and decreasing in a neighborhood of 0, then the loss is Fisher consistent. Moreover, in this case it is possible to give an explicit expression for the function ψ. In particular, for the hinge loss the target function is exactly the Bayes rule and ψ(t) = |t|. For least squares, f_ρ(x) = 2ρ₁(x) - 1, and ψ(t) = √t. The comparison inequality for the square loss can be improved for a suitable class of probability distributions satisfying the so-called Tsybakov noise condition [22], ρ_X({x ∈ X : |f_ρ(x)| ≤ s}) ≤ B_q s^q, s ∈ [0, 1], q > 0. Under this condition the probability of points such that ρ_y(x) is close to 1/2 decreases polynomially. In this case the comparison inequality for the square loss is given by ψ(t) = c_q t^{(q+1)/(q+2)}; see [2, 27].
Previous Work in Multiclass Classification. From a practical perspective, over the years, several computational solutions to multiclass learning have been proposed. Among others, we mention for example [10, 6, 5, 25, 1, 21]. Indeed, most of the above methods can be interpreted as a kind of relaxation of the original multiclass problem. Interestingly, the study in [15] suggests that the simple one-vs-all scheme should be a practical benchmark for multiclass algorithms, as it seems experimentally to achieve performance that is similar to or better than more sophisticated methods. As we previously mentioned, from a theoretical perspective a general account of a large class of multiclass methods has been given in [20], building on results in [2] and [28]. Notably, these results show that seemingly intuitive extensions of binary classification algorithms can lead to inconsistent methods. These results, see also [11, 23], are developed in a setting where a classification rule is found by applying a suitable prediction/decoding map to a function f : X → R^T, where f is found considering a loss function V : Y × R^T → R₊. The considered functions have to satisfy the constraint ∑_{y∈Y} f^y(x) = 0, for all x ∈ X. The latter requirement is problematic as it makes the computations in the corresponding algorithms more involved. It is in fact often ignored, so that practical algorithms often come with no consistency guarantees. In all the above papers relaxation is studied in terms of Fisher and Bayes consistency, and the explicit form of the function ψ is not given. More quantitative results in terms of explicit comparison inequalities are given in [4] (see also [19]), but these also need to impose the "sum to zero" constraint on the considered function class.
3 A Relaxation Approach to Multicategory Classification

In this section we propose a natural extension of the relaxation approach that avoids constraining the class of functions to be considered, and allows us to derive explicit comparison inequalities. See Remark 1 for related approaches.

Figure 1: Decoding with the simplex coding, T = 3.
Simplex Coding. We start by considering a suitable coding/decoding strategy. A coding map turns a label y ∈ Y into a code vector. The corresponding decoding map, given a vector, returns a label in Y. Note that this is what we implicitly did while treating binary classification: we encoded the label space Y = {1, 2} using the coding ±1, so that the natural decoding strategy is simply sign(f(x)). The coding/decoding strategy we study here is described by the following definition.
Definition 1 (Simplex Coding). The simplex coding is a map C : Y → R^{T-1}, C(y) = c_y, where the code vectors C = {c_y | y ∈ Y} ⊂ R^{T-1} satisfy: 1) ‖c_y‖² = 1, ∀y ∈ Y; 2) ⟨c_y, c_{y'}⟩ = -1/(T-1), for y ≠ y' with y, y' ∈ Y; and 3) ∑_{y∈Y} c_y = 0. The corresponding decoding is the map D : R^{T-1} → {1, . . . , T}, D(α) = argmax_{y∈Y} ⟨α, c_y⟩, ∀α ∈ R^{T-1}.
The simplex coding has been considered in [8], [26], and [16]. It corresponds to T maximally separated vectors on the hypersphere S^{T-2} in R^{T-1}, the vertices of a simplex (see Figure 1). For binary classification it reduces to the ±1 coding, and the decoding map is equivalent to taking the sign of f. The decoding map has a natural geometric interpretation: an input point is mapped to a vector f(x) by a function f : X → R^{T-1}, and then assigned to the class having the closest code vector (for y, y' ∈ Y and α ∈ R^{T-1}, we have ‖c_y - α‖² ≤ ‖c_{y'} - α‖² ⟺ ⟨c_{y'}, α⟩ ≤ ⟨c_y, α⟩).
Relaxation for Multiclass Learning. We use the simplex coding to propose an extension of binary classification. Following the binary case, the relaxation can be described in two steps:

1. using the simplex coding, the indicator function is upper bounded by a non-negative loss function V : Y × R^{T-1} → R₊, such that 1_{[b(x)≠y]}(x, y) ≤ V(y, C(b(x))), for all b : X → Y, and x ∈ X, y ∈ Y;

2. rather than C ∘ b we consider functions with values in R^{T-1}, f : X → R^{T-1}, so that V(y, C(b(x))) is replaced by V(y, f(x)), for all b : X → Y, f : X → R^{T-1} and x ∈ X, y ∈ Y.
In the next section we discuss several loss functions satisfying the above conditions, and we study in particular the extensions of the least squares and SVM loss functions.

Multiclass Simplex Loss Functions. Several loss functions for binary classification can be naturally extended to multiple classes using the simplex coding. Due to space restrictions, in this paper we focus on extensions of the least squares and SVM loss functions, but our analysis can be generalized to a large class of loss functions, including extensions of the logistic and exponential loss functions (used in boosting). The simplex least squares loss (S-LS) is given by V(y, f(x)) = ‖c_y - f(x)‖², and reduces to the usual least squares approach to binary classification for T = 2. One natural extension of the SVM's hinge loss in this setting would be to consider the simplex half-space SVM loss (SH-SVM), V(y, f(x)) = |1 - ⟨c_y, f(x)⟩|₊. We will see in the following that, while this loss function would induce efficient algorithms, in general it is not Fisher consistent unless further constraints are assumed. These latter constraints would considerably slow down the computations. We then consider a second loss function, the simplex cone SVM (SC-SVM), which is defined as V(y, f(x)) = ∑_{y'≠y} |1/(T-1) + ⟨c_{y'}, f(x)⟩|₊. The latter loss function is related to the one considered in the multiclass SVM proposed in [10]. We will see that it is possible to quantify the relaxation error of this loss function without requiring further constraints. Both of the above SVM loss functions reduce to the binary SVM hinge loss if T = 2.
Remark 1 (Related approaches). An SVM loss is considered in [8], where V(y, f(x)) = ∑_{y'≠y} |ε - ⟨f(x), v_{y'}(y)⟩|₊ and v_{y'}(y) = (c_y - c_{y'})/‖c_y - c_{y'}‖, with ε = ⟨c_y, v_{y'}(y)⟩ = √(T/(2(T-1))). More recently, [26] considered the loss function V(y, f(x)) = |‖c_y - f(x)‖ - ε|₊, and a simplex multiclass boosting loss was introduced in [16]; in our notation, V(y, f(x)) = ∑_{j≠y} e^{-⟨c_y - c_j, f(x)⟩}. While all those losses introduce a certain notion of margin that makes use of the geometry of the simplex coding, it is not clear how to derive explicit comparison theorems; moreover, the computational complexity of the resulting algorithms scales linearly with the number of classes for the losses considered in [16, 26], and as O((nT)^γ), γ ∈ {2, 3}, for the losses considered in [8].
Figure 2: Level sets of the different losses considered for T = 3. A classification is correct if an input (x, y) is mapped to a point f(x) that lies in the neighborhood of the vertex c_y. The shape of the neighborhood is defined by the loss: it takes the form of a cone supported on a vertex in the case of SC-SVM, a half space delimited by the hyperplane orthogonal to the vertex in the case of SH-SVM, and a sphere centered at the vertex in the case of S-LS.
4 Relaxation Error Analysis

If we consider the simplex coding, a function f taking values in R^{T-1}, and the decoding operator D, the misclassification risk can also be written as R(D(f)) = ∫_X (1 - ρ_{D(f(x))}(x)) dρ_X(x). Then, following a relaxation approach, we replace the misclassification loss by the expected risk induced by one of the loss functions V defined in the previous section. As in the binary case, we consider the expected loss E(f) = ∫ V(y, f(x)) dρ(x, y). Let L^p(X, ρ_X) = {f : X → R^{T-1} | ‖f‖_p^p = ∫ ‖f(x)‖^p dρ_X(x) < ∞}, p ≥ 1.
The following theorem studies the relaxation error for the SH-SVM, SC-SVM, and S-LS loss functions.

Theorem 1. For the SH-SVM, SC-SVM, and S-LS loss functions, there exists a p such that E : L^p(X, ρ_X) → R₊ is convex and continuous. Moreover,

1. The minimizer f_ρ of E over F = {f ∈ L^p(X, ρ_X) | f(x) ∈ K a.s.} exists and D(f_ρ) = b_ρ.

2. For any f ∈ F, R(D(f)) - R(D(f_ρ)) ≤ C_T (E(f) - E(f_ρ))^α, where the expressions of p, K, f_ρ, C_T, and α are given in Table 1.

Loss     p   K          f_ρ                C_T            α
SH-SVM   1   conv(C)    c_{b_ρ}            T - 1          1
SC-SVM   1   R^{T-1}    c_{b_ρ}            T              1
S-LS     2   R^{T-1}    ∑_{y∈Y} ρ_y c_y    √(2(T-1)/T)    1/2

Table 1: conv(C) is the convex hull of the set C defined in Definition 1.
The proof of this theorem is given in Appendix B: see Theorems 1 and 2 there for S-LS, and Theorems 3 and 4 for SC-SVM and SH-SVM, respectively.
The above theorem can be improved for least squares under certain classes of distributions. Toward this end we introduce the following notion of misclassification noise that generalizes Tsybakov's noise condition.

Definition 2. Fix q > 0. We say that the distribution ρ satisfies the multiclass noise condition with parameter B_q if

ρ_X({x ∈ X | 0 ≤ min_{j≠D(f_ρ(x))} ((T-1)/T) ⟨c_{D(f_ρ(x))} - c_j, f_ρ(x)⟩ ≤ s}) ≤ B_q s^q,   (2)

where s ∈ [0, 1].
If a distribution ρ is characterized by a very large q, then, for each x ∈ X, f_ρ(x) is arbitrarily close to one of the coding vectors. For T = 2, the above condition reduces to the binary Tsybakov noise condition. Indeed, let c₁ = 1 and c₂ = -1; if f_ρ(x) > 0, (1/2)(c₁ - c₂)f_ρ(x) = f_ρ(x), and if f_ρ(x) < 0, (1/2)(c₂ - c₁)f_ρ(x) = -f_ρ(x).

The following result improves the exponent for the simplex least squares loss to (q+1)/(q+2) > 1/2:
Theorem 2. For each f ∈ L²(X, ρ_X), if (2) holds, then for S-LS we have the following inequality:

R(D(f)) - R(D(f_ρ)) ≤ K ( (2(T-1)/T) (E(f) - E(f_ρ)) )^{(q+1)/(q+2)},   (3)

with K = 2(B_q + 1)^{(q+1)/(q+2)}.
Remark 2. Note that the comparison inequalities show a tradeoff between the exponent α and the constant C_T for the S-LS and SVM losses. While the constant is of order T for the SVM losses, it is of order 1 for S-LS; on the other hand, the exponent is 1 for the SVM losses and 1/2 for S-LS. The latter can be enhanced to 1 for close-to-separable classification problems by virtue of the Tsybakov noise condition.

Remark 3. The comparison inequalities given in Theorems 1 and 2 can be used to derive generalization bounds on the excess misclassification risk. For least squares, min-max sharp bounds for vector-valued regression are known [3]. Standard techniques for deriving sample complexity bounds in binary classification, extended to multiclass SVM losses in [7], could be adapted to our setting. The obtained bounds are not known to be tight. Better bounds, akin to those in [18], will be the subject of future work.
5 Computational Aspects and Regularization Algorithms

The simplex coding framework allows us to extend batch and online kernel methods to the multiclass setting.

Computing the Simplex Coding. We begin by noting that the simplex coding can be easily computed via the recursion

C[i+1] = [ 1        uᵀ
           v   √(1 - 1/i²) C[i] ],   C[2] = [1  -1],

where u = (-1/i, . . . , -1/i)ᵀ (a column vector in R^i) and v = (0, . . . , 0)ᵀ (a column vector in R^{i-1}); see Algorithm C.1. Indeed, we have the following result (see Appendix C.1 for the proof).

Lemma 1. The T columns of C[T] are a set of T - 1 dimensional vectors satisfying the properties of Definition 1.

The above algorithm stems from the observation that the simplex in R^{T-1} can be obtained by projecting the simplex in R^T onto the hyperplane orthogonal to the element (1, 0, . . . , 0) of the canonical basis in R^T.
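A direct transcription of this recursion in Python, with checks of the three properties of Definition 1 (an illustrative sketch; the function names are our own):

```python
import numpy as np

def simplex_coding(T):
    """Return the (T-1) x T matrix whose columns are the code vectors c_1, ..., c_T."""
    C = np.array([[1.0, -1.0]])                              # C[2]
    for i in range(2, T):
        top = np.hstack([[[1.0]], np.full((1, i), -1.0 / i)])
        bottom = np.hstack([np.zeros((i - 1, 1)),
                            np.sqrt(1.0 - 1.0 / i ** 2) * C])
        C = np.vstack([top, bottom])                         # C[i+1]
    return C

def decode(C, alpha):
    """D(alpha) = argmax_y <alpha, c_y>, returned as a 0-based class index."""
    return int(np.argmax(C.T @ alpha))

T = 5
C = simplex_coding(T)
G = C.T @ C
assert np.allclose(np.diag(G), 1.0)                            # unit norms
assert np.allclose(G[~np.eye(T, dtype=bool)], -1.0 / (T - 1))  # pairwise inner products
assert np.allclose(C.sum(axis=1), 0.0)                         # codes sum to zero
```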
Regularized Kernel Methods. We consider regularized methods of the form (1), induced by simplex loss functions, where the hypothesis space is a vector-valued reproducing kernel Hilbert space H (VV-RKHS) and the regularizer is the corresponding norm ‖f‖²_H. See Appendix D.2 for a brief introduction to VV-RKHSs.

In the following, we consider a class of kernels K such that if f minimizes (1) for R(f) = ‖f‖²_H [12], then f(x) = ∑_{i=1}^n K(x, x_i) a_i, a_i ∈ R^{T-1}, where we note that the coefficients are vectors in R^{T-1}. In the case that the kernel is induced by a finite dimensional feature map, k(x, x') = ⟨Φ(x), Φ(x')⟩, where Φ : X → R^p and ⟨·, ·⟩ is the inner product in R^p, we can write each function in H as f(x) = WΦ(x), where W ∈ R^{(T-1)×p}.
It is known [12] that the representer theorem [9] can be easily extended to the vector-valued setting, so that the minimizer of a simplex version of Tikhonov regularization is given by f_S^λ(x) = ∑_{j=1}^n k(x, x_j) a_j, a_j ∈ R^{T-1}, for all x ∈ X, where the explicit expression of the coefficients depends on the considered loss function. We use the following notation: K ∈ R^{n×n}, K_{ij} = k(x_i, x_j), ∀i, j ∈ {1, . . . , n}; A ∈ R^{n×(T-1)}, A = (a₁, . . . , a_n)ᵀ.
Simplex Regularized Least Squares (S-RLS). S-RLS is obtained by substituting the simplex least squares loss in the Tikhonov functional. It is easy to see [15] that in this case the coefficients must satisfy (K + λnI)A = Ŷ or, in the linear case, (X̂ᵀX̂ + λnI)Wᵀ = X̂ᵀŶ, where X̂ ∈ R^{n×p}, X̂ = (Φ(x₁), . . . , Φ(x_n))ᵀ and Ŷ ∈ R^{n×(T-1)}, Ŷ = (c_{y₁}, . . . , c_{y_n})ᵀ.
Interestingly, the classical results from [24] can be extended to show that the value f_S^i(x_i), obtained by computing the solution f_S^i with the i-th point removed from the training set (the leave-one-out solution), can be computed in closed form. Let f_loo^λ ∈ R^{n×(T-1)}, f_loo^λ = (f_S^1(x₁), . . . , f_S^n(x_n))ᵀ. Let K(λ) = (K + λnI)⁻¹ and C(λ) = K(λ)Ŷ. Define M(λ) ∈ R^{n×(T-1)} such that M(λ)_{ij} = 1/K(λ)_{ii}, ∀j = 1, . . . , T-1. One can show that f_loo^λ = Ŷ - C(λ) ⊙ M(λ), where ⊙ is the Hadamard product [15]. Then, the leave-one-out error (1/n) ∑_{i=1}^n 1_{[y_i ≠ D(f_S^i(x_i))]}(y_i, x_i) can be minimized at essentially no extra cost by precomputing the eigendecomposition of K (or X̂ᵀX̂).
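The closed-form leave-one-out rule translates directly into code. Below is a sketch (the function names and the toy check are our own; the check holds exactly when the multiplier λn is kept fixed while a point is removed, which is the convention under which the identity is derived):

```python
import numpy as np

def srls_fit(K, Y, lam):
    """S-RLS coefficients A solving (K + lam*n*I) A = Y_hat."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), Y)

def srls_loo(K, Y, lam):
    """Leave-one-out values f_loo = Y_hat - C(lam) * M(lam) (Hadamard product)."""
    n = K.shape[0]
    Kinv = np.linalg.inv(K + lam * n * np.eye(n))   # K(lam)
    Cmat = Kinv @ Y                                 # C(lam)
    M = (1.0 / np.diag(Kinv))[:, None]              # M(lam)_{ij} = 1 / K(lam)_{ii}
    return Y - Cmat * M

# Toy check against an explicit refit with the i-th point removed.
rng = np.random.default_rng(0)
n, T = 30, 4
X = rng.normal(size=(n, 6))
K = X @ X.T
Y = np.eye(T - 1)[rng.integers(0, T - 1, size=n)]   # stand-in for the codes c_{y_i}
lam = 0.1
mu = lam * n                                        # regularization held fixed

i, keep = 7, np.arange(n) != 7
A_i = np.linalg.solve(K[np.ix_(keep, keep)] + mu * np.eye(n - 1), Y[keep])
assert np.allclose(srls_loo(K, Y, lam)[i], K[i, keep] @ A_i)
```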
Simplex Cone Support Vector Machine (SC-SVM). Using standard reasoning it is easy to show (see Appendix C.2) that, for the SC-SVM, the coefficients in the representer theorem are given by a_i = -∑_{y≠y_i} α_i^y c_y, i = 1, . . . , n, where α_i = (α_i^y)_{y∈Y} ∈ R^T, i = 1, . . . , n, solve the quadratic programming (QP) problem

max_{α₁,...,α_n ∈ R^T} { -(1/2) ∑_{y,y',i,j} α_i^y K_{ij} G_{yy'} α_j^{y'} + (1/(T-1)) ∑_{i=1}^n ∑_{y=1}^T α_i^y }   (4)

subject to 0 ≤ α_i^y ≤ C₀ (1 - δ_{y,y_i}), ∀ i = 1, . . . , n, y ∈ Y,

where G_{y,y'} = ⟨c_y, c_{y'}⟩ for all y, y' ∈ Y, C₀ = 1/(2nλ), α_i = (α_i^y)_{y∈Y} ∈ R^T for i = 1, . . . , n, and δ_{i,j} is the Kronecker delta.
Simplex Halfspaces Support Vector Machine (SH-SVM). A similar, yet more complicated, procedure can be derived for the SH-SVM. Here we omit this derivation and observe instead that if we neglect the convex hull constraint from Theorem 1, which requires f(x) ∈ conv(C) for almost all x ∈ X, then the SH-SVM has an especially simple formulation, at the price of losing consistency guarantees. In fact, in this case the coefficients are given by a_i = α_i c_{y_i}, i = 1, . . . , n, where α_i ∈ R, i = 1, . . . , n, solve the quadratic programming (QP) problem

max_{α₁,...,α_n ∈ R} { -(1/2) ∑_{i,j} α_i K_{ij} G_{y_i y_j} α_j + ∑_{i=1}^n α_i }

subject to 0 ≤ α_i ≤ C₀, ∀ i = 1, . . . , n,

where C₀ = 1/(2nλ). The latter formulation can be solved at the same complexity as the binary SVM (worst case O(n³)) but lacks consistency.
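Since this dual is box-constrained with a positive semidefinite quadratic form, even a simple projected gradient ascent suffices for small problems. The following sketch (our own; the step size and iteration count are arbitrary choices, not part of the paper) takes the Gram matrix K and the n × (T-1) matrix whose rows are the codes c_{y_i}:

```python
import numpy as np

def sh_svm_dual(K, y_codes, lam, n_steps=2000):
    """Projected gradient ascent on: max sum(alpha) - 0.5*alpha'Q alpha,
    subject to 0 <= alpha_i <= C0, where Q_ij = K_ij * <c_{y_i}, c_{y_j}>
    and C0 = 1/(2*n*lam)."""
    n = K.shape[0]
    Q = K * (y_codes @ y_codes.T)            # Q_ij = K_ij * G_{y_i y_j}
    C0 = 1.0 / (2.0 * n * lam)
    step = 1.0 / (np.linalg.norm(Q, 2) + 1e-12)
    alpha = np.zeros(n)
    for _ in range(n_steps):
        alpha = np.clip(alpha + step * (1.0 - Q @ alpha), 0.0, C0)
    return alpha                             # coefficients: a_i = alpha_i * c_{y_i}
```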
Online/Incremental Optimization. The regularized estimators induced by the simplex loss functions can be computed by means of online/incremental first-order (sub)gradient methods. Indeed, when considering finite dimensional feature maps, these strategies offer computationally feasible solutions for training estimators on large datasets where neither a p by p nor an n by n matrix fits in memory. Following [17], we can alternate a step of stochastic descent on a data point, W_tmp = (1 - η_i λ) W_i - η_i ∂(V(y_i, f_{W_i}(x_i))), and a projection onto the Frobenius ball, W_{i+1} = min(1, 1/(√λ ‖W_tmp‖_F)) W_tmp (see Algorithm C.5 for details). The algorithm depends on the loss function through the computation of the (point-wise) subgradient ∂(V). The latter can be easily computed for all the loss functions previously discussed. For the S-LS loss we have ∂(V(y_i, f_W(x_i))) = -2(c_{y_i} - W x_i) x_iᵀ, while for the SC-SVM loss we have ∂(V(y_i, f_W(x_i))) = (∑_{k∈I_i} c_k) x_iᵀ, where I_i = {y ≠ y_i | ⟨c_y, W x_i⟩ > -1/(T-1)}. For the SH-SVM loss we have ∂(V(y_i, f_W(x_i))) = -c_{y_i} x_iᵀ if ⟨c_{y_i}, W x_i⟩ < 1, and 0 otherwise.
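A compact sketch of the resulting online procedure for the S-LS loss (assumptions of ours: the Pegasos-style step size η_i = 1/(λi) and the epoch loop are our choices; the text above does not fix them):

```python
import numpy as np

def online_sls(X, y_codes, lam, n_epochs=5, seed=0):
    """Stochastic subgradient descent with the Frobenius-ball projection."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = np.zeros((y_codes.shape[1], p))           # W in R^{(T-1) x p}
    it = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            it += 1
            eta = 1.0 / (lam * it)
            x, c = X[i], y_codes[i]
            grad = -2.0 * np.outer(c - W @ x, x)  # S-LS subgradient
            W = (1.0 - eta * lam) * W - eta * grad
            scale = 1.0 / (np.sqrt(lam) * np.linalg.norm(W) + 1e-12)
            W *= min(1.0, scale)                  # project onto the Frobenius ball
    return W

def predict(W, X, C):
    """Decode each point: argmax_y <W x, c_y>; C is the (T-1) x T code matrix."""
    return np.argmax(X @ W.T @ C, axis=1)
```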
5.1 Comparison of Computational Complexity

The cost of solving S-RLS for fixed λ is in the worst case O(n³) (for example via a Cholesky decomposition). If we are interested in computing the regularization path for N regularization parameter values, then, as noted in [15], it may be convenient to perform an eigendecomposition of the kernel matrix rather than solving the systems N times. For explicit p-dimensional feature maps the cost is O(np²), so the cost of computing the regularization path for the simplex RLS algorithm is O(min(n³, np²)) and hence independent of T. One can contrast this complexity with that of a naive one-versus-all (OVA) approach, which would lead to an O(Nn³T) complexity. Simplex SVMs can be solved using solvers available for binary SVMs, which are considered to have complexity O(n^γ) with γ ∈ {2, 3} (the actual complexity scales with the number of support vectors). For SC-SVM, though, we have nT rather than n unknowns and the complexity is O((nT)^γ). SH-SVM, in which we omit the convex hull constraint, can be trained with the same complexity as the binary SVM (worst case O(n³)) but lacks consistency. Note that, unlike for S-RLS, there is no straightforward way to compute the regularization path and the leave-one-out error for any of the above SVMs. The online algorithms induced by the different simplex loss functions are essentially the same. In particular, each iteration depends linearly on the number of classes.
6 Numerical Results

We conduct several experiments to evaluate the performance of our batch and online algorithms on 5 UCI datasets, as listed in Table 2, as well as on Caltech101 and Pubfig83. We compare the performance of our algorithms to one-versus-all SVM (libsvm), as well as to simplex-based boosting [16]. For the UCI datasets we use the raw features; on Caltech101 we use hierarchical features (hmax), and on Pubfig83 we use the feature maps from [13]. In all cases the parameter selection is based either on a hold-out (ho) set (80% training, 20% validation) or on the leave-one-out error (loo). For the model selection of λ in S-LS, 100 values are chosen in the range [λ_min, λ_max] (where λ_min and λ_max correspond to the smallest and biggest eigenvalues of K). In the case of a Gaussian kernel (rbf), we use a heuristic that sets the width σ of the Gaussian to the 25-th percentile of pairwise distances between distinct points in the training set. In Table 2 we collect the resulting classification accuracies.
Method                   Landsat   Optdigit   Pendigit   Letter    Isolet    Ctech     Pubfig83
SC-SVM Online (ho)       65.15%    89.57%     81.62%     52.82%    88.58%    63.33%    84.70%
SH-SVM Online (ho)       75.43%    85.58%     72.54%     38.40%    77.65%    45%       49.76%
S-LS Online (ho)         63.62%    91.68%     81.39%     54.29%    92.62%    58.39%    83.61%
S-LS Batch (loo)         65.88%    91.90%     80.69%     54.96%    92.55%    66.35%    86.63%
S-LS rbf Batch (loo)     90.15%    97.09%     98.17%     96.48%    97.05%    69.38%    86.75%
SVM batch ova (ho)       72.81%    92.13%     86.93%     62.78%    90.59%    70.13%    85.97%
SVM rbf batch ova (ho)   95.33%    98.07%     98.88%     97.12%    96.99%    51.77%    85.60%
Simplex boosting [16]    86.65%    92.82%     92.94%     59.65%    91.02%    -         -

Table 2: Accuracies of our algorithms on several datasets.
As suggested by the theory, the consistent methods SC-SVM and S-LS have a large performance advantage over SH-SVM (where we omitted the convex hull constraint). Batch methods are overall superior to online methods. Online SC-SVM achieves the best results among the online methods. More generally, we see that rbf S-LS has the best performance amongst the simplex methods, including simplex boosting [16], and achieves essentially the same performance as one-versus-all SVM-rbf.
References

[1] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141, 2000.
[2] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[3] A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 2006.
[4] D. Chen and T. Sun. Consistency of multiclass empirical risk minimization methods based on convex loss. Journal of Machine Learning Research, 2006.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, 2001.
[6] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.
[7] Yann Guermeur. VC theory of large margin multi-category classifiers. Journal of Machine Learning Research, 8:2551-2594, 2007.
[8] Simon I. Hill and Arnaud Doucet. A framework for kernel-based multi-category classification. J. Artif. Int. Res., 30(1):525-564, December 2007.
[9] G. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation of stochastic processes and smoothing by splines. Ann. Math. Stat., 41:495-502, 1970.
[10] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 2004.
[11] Y. Liu. Fisher consistency of multicategory support vector machines. Eleventh International Conference on Artificial Intelligence and Statistics, 289-296, 2007.
[12] C. A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17:177-204, 2005.
[13] N. Pinto, Z. Stone, T. Zickler, and D. D. Cox. Scaling-up biologically-inspired computer vision: A case-study on Facebook. 2011.
[14] M. D. Reid and R. C. Williamson. Composite binary losses. JMLR, 11, September 2010.
[15] R. Rifkin and A. Klautau. In defense of one-versus-all classification. Journal of Machine Learning Research, 2004.
[16] M. Saberian and N. Vasconcelos. Multiclass boosting: Theory and algorithms. In NIPS 2011, 2011.
[17] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th ICML, ICML '07, pages 807-814, New York, NY, USA, 2007. ACM.
[18] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, New York, 2008.
[19] B. Tarigan and S. A. van de Geer. A moment bound for multi-category support vector machines. JMLR 9, 2171-2185, 2008.
[20] A. Tewari and P. L. Bartlett. On the consistency of multiclass classification methods. In Proceedings of the 18th Annual Conference on Learning Theory, volume 3559, pages 143-157. Springer, 2005.
[21] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6(2):1453-1484, 2005.
[22] Alexandre B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135-166, 2004.
[23] Elodie Vernet, Robert C. Williamson, and Mark D. Reid. Composite multiclass losses. In Proceedings of Neural Information Processing Systems (NIPS 2011), 2011.
[24] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, PA, 1990.
[25] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. Proceedings of the Seventh European Symposium on Artificial Neural Networks, 1999.
[26] Tong Tong Wu and Kenneth Lange. Multicategory vertex discriminant analysis for high-dimensional data. Ann. Appl. Stat., 4(4):1698-1721, 2010.
[27] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315, 2007.
[28] T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225-1251, 2004.
[29] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56-134, 2004.
4,159 | 4,765 |
Globally Convergent Dual MAP LP Relaxation
Solvers using Fenchel-Young Margins
Tamir Hazan
TTI Chicago
[email protected]
Alexander G. Schwing
ETH Zurich
[email protected]
Raquel Urtasun
TTI Chicago
[email protected]
Marc Pollefeys
ETH Zurich
[email protected]
Abstract
While finding the exact solution for the MAP inference problem is intractable for
many real-world tasks, MAP LP relaxations have been shown to be very effective
in practice. However, the most efficient methods that perform block coordinate
descent can get stuck in sub-optimal points as they are not globally convergent.
In this work we propose to augment these algorithms with an ε-descent approach
and present a method to efficiently optimize for a descent direction in the ε-subdifferential using a margin-based formulation of the Fenchel-Young duality theorem. Furthermore, the presented approach provides a methodology to construct
a primal optimal solution from its dual optimal counterpart. We demonstrate the
efficiency of the presented approach on spin glass models and protein interaction
problems and show that our approach outperforms state-of-the-art solvers.
1 Introduction
Graphical models are a common method to describe the dependencies of a joint probability distribution over a set of discrete random variables. Finding the most likely configuration of a distribution
defined by such a model, i.e., the maximum a-posteriori (MAP) assignment, is one of the most
important inference tasks. Unfortunately, it is a computationally hard problem for many interesting applications. However, it has been shown that linear programming (LP) relaxations recover the
MAP assignment in many cases of interest (e.g., [13, 23]).
Due to the large amount of variables and constraints, solving inference problems in practice still remains a challenge for standard LP solvers. Development of specifically tailored algorithms has since
become a growing area of research. Many of these designed solvers consider the dual program, thus
they are based on local updates that follow the graphical model structure, which ensures suitability
for very large problems. Unfortunately, the dual program is non-smooth, hence introducing difficulties to existing solvers. For example, block coordinate descent algorithms, typically referred to as
convex max-product, monotonically decrease the dual objective and converge very fast, but are not
guaranteed to reach the global optimum of the dual program [3, 6, 11, 14, 17, 20, 22, 24, 25]. Different approaches to overcome the sub-optimality of the convex max-product introduced different
perturbed programs for which convergence to the dual optimum is guaranteed, e.g., smoothing, proximal methods and augmented Lagrangian methods [6, 7, 8, 16, 18, 19, 27]. However, since these
algorithms consider a perturbed program they are typically slower than the convex max-product
variants [8, 18].
In this work we propose to augment the convex max-product algorithm with a steepest ε-descent approach to monotonically decrease the dual objective until reaching the global optimum of the dual program. To perform the ε-descent we explore the ε-subgradients of the dual program, and provide a method to search for a descent direction in the ε-subdifferential using a margin-based formulation of the Fenchel-Young duality theorem. This characterization also provides a new algorithm to
construct a primal optimal solution for the LP relaxation from a dual optimal solution. We demonstrate the effectiveness of our approach on spin glass models and protein-protein interactions taken
from the probabilistic inference challenge (PIC 2011)¹. We illustrate that the method exhibits nice convergence properties while possessing optimality certificates.

We begin by introducing the notation, MAP LP relaxations and their dual programs. We subsequently describe the subgradients of the dual and provide an efficient procedure to recover a primal optimal solution. We explore the ε-subgradients of the dual objective, and introduce an efficient globally convergent dual solver based on the ε-margin of the Fenchel-Young duality theorem. Finally, we extend our approach to graphical models over general region graphs.
2 Background
Graphical models encode joint distributions over discrete product spaces $\mathcal{X} = \mathcal{X}_1\times\cdots\times\mathcal{X}_n$. The joint probability is defined by combining energy functions over subsets of variables. Throughout this work we consider two types of functions: single variable functions, $\theta_i(x_i)$, which correspond to the $n$ vertices in the graph, $i\in\{1,\dots,n\}$, and functions over subsets of variables $\theta_\alpha(x_\alpha)$, for $\alpha\subset\{1,\dots,n\}$, that correspond to the graph hyperedges. The joint distribution is then given by $p(x)\propto\exp\big(\sum_{i\in V}\theta_i(x_i) + \sum_{\alpha\in E}\theta_\alpha(x_\alpha)\big)$. In this paper we focus on estimating the MAP, i.e., finding the assignment that maximizes the probability, or equivalently minimizes the energy which is the negative log probability. Estimating the MAP can be written as a program of the form [10]:

$$\operatorname{argmax}_{x_1,\dots,x_n}\; \sum_{i\in V}\theta_i(x_i) + \sum_{\alpha\in E}\theta_\alpha(x_\alpha). \tag{1}$$
Due to its combinatorial nature, this problem is NP-hard for general graphical models. It is tractable
only in some special cases such as tree structured graphs, where specialized dynamic programming
algorithms (e.g., max-product belief propagation) are guaranteed to recover the optimum.
The MAP program in Eq. (1) has a linear form, thus it is naturally represented as an integer linear
program. Its tractable relaxation is obtained by replacing the integral constraints with non-negativity
constraints as follows:
$$\max_{b_i, b_\alpha}\; \sum_{\alpha, x_\alpha} b_\alpha(x_\alpha)\theta_\alpha(x_\alpha) + \sum_{i, x_i} b_i(x_i)\theta_i(x_i) \tag{2}$$
$$\text{s.t.}\;\; b_i(x_i), b_\alpha(x_\alpha)\ge 0,\qquad \sum_{x_\alpha} b_\alpha(x_\alpha)=1,\qquad \sum_{x_i} b_i(x_i)=1,\qquad \sum_{x_\alpha\setminus x_i} b_\alpha(x_\alpha)=b_i(x_i).$$

Whenever the maximizing argument to the above linear program happens to be integral, i.e., the optimal beliefs satisfy $b_i(x_i), b_\alpha(x_\alpha)\in\{0,1\}$, the program value equals the MAP value. Moreover, the maximum arguments of the optimal beliefs point toward the MAP assignment [26].
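To make the relaxation concrete, the following minimal sketch (ours, not from the paper) instantiates the LP of Eq. (2) for a single edge over two binary variables and solves it with SciPy; the toy potentials and the ordering of the belief vector are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: two binary variables i, j joined by a single edge a = (i, j).
theta_i = np.array([0.0, 1.0])          # theta_i(x_i)
theta_j = np.array([0.5, 0.0])          # theta_j(x_j)
theta_a = np.array([[2.0, 0.0],
                    [0.0, 2.0]])        # theta_a(x_i, x_j), attractive

# Belief vector: [b_i(0), b_i(1), b_j(0), b_j(1), b_a(00), b_a(01), b_a(10), b_a(11)]
c = -np.concatenate([theta_i, theta_j, theta_a.ravel()])   # linprog minimizes

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],      # b_i normalizes
    [0, 0, 1, 1, 0, 0, 0, 0],      # b_j normalizes
    [0, 0, 0, 0, 1, 1, 1, 1],      # b_a normalizes
    [-1, 0, 0, 0, 1, 1, 0, 0],     # sum_{x_j} b_a(0, x_j) = b_i(0)
    [0, -1, 0, 0, 0, 0, 1, 1],     # sum_{x_j} b_a(1, x_j) = b_i(1)
    [0, 0, -1, 0, 1, 0, 1, 0],     # sum_{x_i} b_a(x_i, 0) = b_j(0)
    [0, 0, 0, -1, 0, 1, 0, 1],     # sum_{x_i} b_a(x_i, 1) = b_j(1)
])
b_eq = np.array([1, 1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print("relaxation value:", -res.fun)        # the optimum is integral on a tree,
print("beliefs:", np.round(res.x, 3))       # so it is also the MAP value
```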
We denote by N(i) the edges that contain vertex i and by N(α) the vertices in the edge α. Following [22, 27] we consider the re-parametrized dual

$$q(\lambda) = \sum_i \max_{x_i}\Big\{\theta_i(x_i) - \sum_{\alpha\in N(i)}\lambda_{i\to\alpha}(x_i)\Big\} + \sum_\alpha \max_{x_\alpha}\Big\{\theta_\alpha(x_\alpha) + \sum_{i\in N(\alpha)}\lambda_{i\to\alpha}(x_i)\Big\}. \tag{3}$$

The dual program value upper bounds the primal program described in Eq. (2). Therefore to compute the primal optimal value one can minimize the dual upper bound. Using block coordinate descent on the dual objective amounts to optimizing blocks of dual variables while holding the remaining ones fixed. This results in the convex max-product message-passing update rules [6, 17]:

Repeat until convergence, for every $i = 1,\dots,n$:

$$\forall x_i, \alpha\in N(i):\quad \mu_{\alpha\to i}(x_i) = \max_{x_\alpha\setminus x_i}\Big\{\theta_\alpha(x_\alpha) + \sum_{j\in N(\alpha)\setminus i}\lambda_{j\to\alpha}(x_j)\Big\}$$
$$\forall x_i, \alpha\in N(i):\quad \lambda_{i\to\alpha}(x_i) = \frac{1}{1+|N(i)|}\Big(\theta_i(x_i) + \sum_{\beta\in N(i)}\mu_{\beta\to i}(x_i)\Big) - \mu_{\alpha\to i}(x_i)$$
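The following sketch (our own, not the authors' code) runs these updates on a small chain; `dual` implements Eq. (3) and should decrease monotonically under the sweeps, though it may stop at a corner as discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3                                   # a chain of 4 nodes, 3 states each
theta_i = rng.normal(size=(n, k))
edges = [(i, i + 1) for i in range(n - 1)]    # e = (i, j) with i < j
theta_e = {e: rng.normal(size=(k, k)) for e in edges}
lam = {(v, e): np.zeros(k) for e in edges for v in e}   # lambda_{v -> e}(x_v)

def reparam(e, lam):
    """theta_e(x_i, x_j) + lambda_{i->e}(x_i) + lambda_{j->e}(x_j)."""
    i, j = e
    return theta_e[e] + lam[i, e][:, None] + lam[j, e][None, :]

def dual(lam):
    """q(lambda) of Eq. (3): vertices send their messages out, edges absorb them."""
    q = sum((theta_i[v] - sum(lam[v, e] for e in edges if v in e)).max()
            for v in range(n))
    return q + sum(reparam(e, lam).max() for e in edges)

print("initial dual:", dual(lam))
for sweep in range(100):                      # block coordinate descent
    for v in range(n):
        nb = [e for e in edges if v in e]
        mu = {}
        for e in nb:
            t = reparam(e, lam)               # strip v's own message, then
            t -= lam[v, e][:, None] if e[0] == v else lam[v, e][None, :]
            mu[e] = t.max(axis=1) if e[0] == v else t.max(axis=0)
        s = (theta_i[v] + sum(mu.values())) / (1 + len(nb))
        for e in nb:
            lam[v, e] = s - mu[e]             # the closed-form block optimum
print("final dual (monotonically decreased):", dual(lam))
```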
¹ http://www.cs.huji.ac.il/project/PASCAL/index.php
The convex max-product algorithm is guaranteed to converge since it minimizes the dual function,
which is lower bounded by the primal program. Interestingly, the convex max-product shares the
same complexity as the max-product belief propagation, which is attained by replacing the coefficient 1/(1 + |N (i)|) by 1. It has, however, two fundamental problems. First, it can get stuck
in non-optimal stationary points. This happens since the dual objective is non-smooth, thus the
algorithm can reach a corner, for which the dual objective stays fixed when changing only a few
variables. For example, consider the case of a minimization problem where we try to descend from
a pyramid while taking only horizontal and vertical paths. We eventually stay at the same height.
The second drawback of convex max-product is that it does not always produce a primal optimal solution, $b_i(x_i), b_\alpha(x_\alpha)$, even when it reaches a dual optimal solution.
In the next section, we consider the dual subgradients, and provide an efficient algorithm for detecting corners, as well as for decoding a primal optimal solution from a dual optimal solution. This is
an intermediate step which facilitates the margin analysis of the Fenchel-Young duality theorem in
Sec. 4. It provides an efficient way to get out of corners, and to reach the optimal dual value.
3 The Subgradients of the Dual Objective and Steepest Descent
Subgradients are generalizations of gradients for non-smooth convex functions. Consider the function $q(\lambda)$ in Eq. (3). A vector $d$ is called a subgradient of $q(\lambda)$ if it supports the epigraph of $q(\lambda)$ at $\lambda$, i.e.,

$$\forall\hat\lambda\quad q(\hat\lambda) - d^\top\hat\lambda \;\ge\; q(\lambda) - d^\top\lambda. \tag{4}$$

The supporting hyperplane at $(\lambda, q(\lambda))$ with slope $d$ takes the form $d^\top\hat\lambda - q^*(d)$, when defining the conjugate dual as $q^*(d) = \max_{\hat\lambda}\{d^\top\hat\lambda - q(\hat\lambda)\}$. From the definition of $q^*(d)$ one can derive the Fenchel-Young duality theorem: $q(\lambda) + q^*(d) \ge d^\top\lambda$, where equality holds if and only if $d$ is a supporting hyperplane at $(\lambda, q(\lambda))$. The set of all subgradients is called the subdifferential, denoted by $\partial q(\lambda)$, which can be characterized using the Fenchel-Young theorem as $\partial q(\lambda) = \{d : q(\lambda) + q^*(d) = \lambda^\top d\}$. The subdifferential provides a way to reason about the optimal solutions of $q(\lambda)$. Using Eq. (4) we can verify that $\lambda$ is dual optimal if and only if $0\in\partial q(\lambda)$. In the following claim we characterize the subdifferential of the dual function $q(\lambda)$ using the Fenchel-Young duality theorem:

Claim 1. Consider the dual function $q(\lambda)$ given in Eq. (3). Let $X_i^* = \operatorname{argmax}_{x_i}\{\theta_i(x_i) - \sum_{\alpha\in N(i)}\lambda_{i\to\alpha}(x_i)\}$ and $X_\alpha^* = \operatorname{argmax}_{x_\alpha}\{\theta_\alpha(x_\alpha) + \sum_{i\in N(\alpha)}\lambda_{i\to\alpha}(x_i)\}$. Then $d\in\partial q(\lambda)$ if and only if $d_{i\to\alpha}(x_i) = \sum_{x_\alpha\setminus x_i} b_\alpha(x_\alpha) - b_i(x_i)$ for probability distributions $b_i(x_i), b_\alpha(x_\alpha)$ whose nonzero entries belong to $X_i^*, X_\alpha^*$ respectively.

Proof: Using the Fenchel-Young characterization of Eq. (4) for the max-function we obtain the sets of maximizing elements $X_i^*, X_\alpha^*$. Summing over all regions $r\in\{i,\alpha\}$ while noticing the change of sign, we obtain the marginalization disagreements $d_{i\to\alpha}(x_i)$.
The convex max-product algorithm performs block coordinate descent updates. Thus it iterates over vertices i and computes optimal solutions $\lambda_{i\to\alpha}(x_i)$ for every $x_i, \alpha\in N(i)$ analytically, while holding the rest of the variables fixed. The claim above implies that the convex max-product iterates over i and generates beliefs $b_i(x_i), b_\alpha(x_\alpha)$ for every $x_i, \alpha\in N(i)$ that agree on their marginal probabilities. This interpretation provides an insight into the non-optimal stationary points of the convex max-product, i.e., points for which it is not able to generate consistent beliefs $b_\alpha(x_\alpha)$ when it iterates over $i = 1,\dots,n$. The representation of the subdifferential as the amount of disagreement between the marginalization constraints provides a simple procedure to verify dual optimality, as well as to construct primal optimal solutions. This is summarized in the corollary below.
Corollary 1. Given a point $\lambda$, and sets $X_i^*, X_\alpha^*$ as defined in Claim 1, let $x_i^*, x_\alpha^*$ be elements in $X_i^*, X_\alpha^*$ respectively. Consider the quadratic program

$$\min_{b_i,b_\alpha}\; \sum_{i, x_i^*, \alpha\in N(i)}\Big(\sum_{x_\alpha^*\setminus x_i^*} b_\alpha(x_\alpha^*) - b_i(x_i^*)\Big)^2 \quad\text{s.t.}\;\; b_i(x_i^*), b_\alpha(x_\alpha^*)\ge 0,\quad \sum_{x_\alpha^*} b_\alpha(x_\alpha^*)=1,\quad \sum_{x_i^*} b_i(x_i^*)=1.$$

$\lambda$ is a dual optimal solution if and only if the value of the above program equals zero. Moreover, if $\lambda$ is a dual optimal solution, then the optimal beliefs $b_\alpha^*(x_\alpha), b_i^*(x_i)$ are also the optimal solution of the primal program in Eq. (2). However, if $\lambda$ is not dual optimal, then the vector $d^*_{i\to\alpha}(x_i) = \sum_{x_\alpha^*\setminus x_i^*} b_\alpha^*(x_\alpha^*) - b_i^*(x_i^*)$ points towards the steepest descent direction of the dual function, i.e.,

$$d^* = \operatorname{argmin}_{\|d\|\le 1}\;\lim_{\gamma\to 0}\frac{q(\lambda+\gamma d)-q(\lambda)}{\gamma}.$$

Proof: The steepest descent direction $d$ of $q$ is given by minimizing the directional derivative $q'_d$,

$$\min_{\|d\|\le 1} q'_d(\lambda) = \min_{\|d\|\le 1}\max_{y\in\partial q} d^\top y = \max_{y\in\partial q}\min_{\|d\|\le 1} d^\top y = \max_{y\in\partial q} -\|y\|_2,$$

which yields the above program (cf. [2], Chapter 4). If the zero vector is part of the subdifferential, we are dual optimal. Primal optimality follows from Claim 1.
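For illustration, here is one way (our own sketch, assuming the cvxpy package is available) to pose the corner-detection quadratic program of Corollary 1 once the maximizing sets have been enumerated from the current λ:

```python
import numpy as np
import cvxpy as cp

def corner_check(vert_support, edge_support):
    """Corollary 1's QP: beliefs restricted to the maximizing sets X_i*, X_a*.

    vert_support[v] -- list of maximizing states of vertex v (the set X_v*)
    edge_support[e] -- list of maximizing pairs (x_i, x_j) of edge e (the set X_e*)
    Returns the optimal value: zero iff the current lambda is dual optimal.
    """
    b_v = {v: cp.Variable(len(s), nonneg=True) for v, s in vert_support.items()}
    b_e = {e: cp.Variable(len(s), nonneg=True) for e, s in edge_support.items()}
    cons = [cp.sum(b) == 1 for b in list(b_v.values()) + list(b_e.values())]
    obj = 0
    for e, pairs in edge_support.items():
        for end, v in enumerate(e):
            for si, x in enumerate(vert_support[v]):
                # indicator row selecting the pairs of X_e* consistent with x
                sel = np.array([1.0 if p[end] == x else 0.0 for p in pairs])
                obj += cp.square(sel @ b_e[e] - b_v[v][si])
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value
```

A zero optimum certifies dual optimality; otherwise the optimal beliefs assemble the disagreement vector d* above.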
One can monotonically decrease the dual objective by minimizing it along the steepest descent direction. Unfortunately, following the steepest descent direction does not guarantee convergence to the global minimum of the dual function [28]. Performing steepest descent might keep minimizing the dual objective with smaller and smaller increments, thus converging to a suboptimal solution. The main drawback of steepest descent as well as block coordinate descent when applied to the dual objective in Eq. (3) is that both procedures only consider the support of $X_i^*, X_\alpha^*$ defined in Claim 1. In the following we show that by considering the ε-margin of these supports we can guarantee that at every iteration we decrease the dual value by at least ε. This procedure results in an efficient algorithm that reaches both dual and primal optimal solutions.
4 The ε-Subgradients of the Dual Objective and Steepest ε-Descent
To monotonically decrease the dual value while converging to the optimum, we suggest to explore the ε-neighborhood of the dual objective in Eq. (3) around the current iterate λ. For this purpose, we explore its family of ε-subgradients. Given our convex dual function q(λ) and a positive scalar ε, we say that a vector d is an ε-subgradient at λ if it supports the epigraph of q(λ) with an ε-margin:

$$\forall\hat\lambda\quad q(\hat\lambda) - d^\top\hat\lambda \;\ge\; q(\lambda) - d^\top\lambda - \epsilon. \tag{5}$$

The subgradients of a convex function are also ε-subgradients. The family of ε-subgradients is called the ε-subdifferential and is denoted by $\partial_\epsilon q(\lambda)$. Using the conjugate dual $q^*(d)$, we can characterize the ε-subdifferential by employing the ε-margin Fenchel-Young duality theorem:

$$(\epsilon\text{-margin Fenchel-Young duality})\qquad \partial_\epsilon q(\lambda) = \big\{d : 0 \le q(\lambda) + q^*(d) - d^\top\lambda \le \epsilon\big\}. \tag{6}$$
The ε-subdifferential augments the subdifferential of q(λ) with additional directions d which control the ε-neighborhood of the function. Whenever one finds a steepest descent direction within $\partial_\epsilon q(\lambda)$, it is guaranteed to improve the dual objective by at least ε. Moreover, if one cannot find such a direction within the ε-subdifferential, then q(λ) is guaranteed to be ε-close to the dual optimum. This is summarized in the following claim.

Claim 2. Let q(λ) be a convex function and let ε be a positive scalar. The ε-subdifferential $\partial_\epsilon q(\lambda)$ is a convex and compact set. If $0\notin\partial_\epsilon q(\lambda)$ then the direction $d^* = \operatorname{argmin}\|d\|$ subject to $d\in\partial_\epsilon q(\lambda)$ is a descent direction and $\inf_{\gamma>0} q(\lambda-\gamma d^*) < q(\lambda)-\epsilon$. On the other hand, if $0\in\partial_\epsilon q(\lambda)$ then $q(\lambda)\le\inf_{\hat\lambda} q(\hat\lambda)+\epsilon$.

Proof: [2] Proposition 4.3.1.
Although $\partial_\epsilon q(\lambda)$ is a convex and compact set, finding its direction of descent is computationally challenging. Fortunately, it can be approximated whenever the convex function is a sum of simple convex functions, i.e., $q(\lambda) = \sum_{r=1}^m q_r(\lambda)$. The approximation $\bar\partial_\epsilon q(\lambda) = \sum_r \partial_\epsilon q_r(\lambda)$ satisfies $\partial_\epsilon q(\lambda) \subseteq \bar\partial_\epsilon q(\lambda) \subseteq \partial_{m\epsilon} q(\lambda)$ (see, e.g., [2]). On the one hand, if $0\notin\bar\partial_\epsilon q(\lambda)$ then the direction of steepest descent taken from $\bar\partial_\epsilon q(\lambda)$ reduces the dual objective by at least ε. If $0\in\bar\partial_\epsilon q(\lambda)$ then q(λ) is mε-close to the dual optimum. In the following claim we use the ε-margin Fenchel-Young duality in Eq. (6) to characterize the approximated ε-subdifferential of the dual function.
Claim 3. Consider the dual function q(λ) in Eq. (3). Then the approximated ε-subdifferential consists of vectors d whose entries correspond to marginalization disagreements, i.e., $d\in\bar\partial_\epsilon q(\lambda)$ if and only if $d_{i\to\alpha}(x_i) = \sum_{x_\alpha\setminus x_i} b_\alpha(x_\alpha) - b_i(x_i)$ for probability distributions $b_i(x_i), b_\alpha(x_\alpha)$ that satisfy

$$\forall i:\quad \sum_{x_i} b_i(x_i)\Big(\theta_i(x_i) - \sum_{\alpha\in N(i)}\lambda_{i\to\alpha}(x_i)\Big) \;\ge\; \max_{x_i}\Big\{\theta_i(x_i) - \sum_{\alpha\in N(i)}\lambda_{i\to\alpha}(x_i)\Big\} - \epsilon$$
$$\forall\alpha:\quad \sum_{x_\alpha} b_\alpha(x_\alpha)\Big(\theta_\alpha(x_\alpha) + \sum_{i\in N(\alpha)}\lambda_{i\to\alpha}(x_i)\Big) \;\ge\; \max_{x_\alpha}\Big\{\theta_\alpha(x_\alpha) + \sum_{i\in N(\alpha)}\lambda_{i\to\alpha}(x_i)\Big\} - \epsilon.$$

Proof: Eq. (6) implies $b\in\partial_\epsilon q_r(\hat\lambda)$ if and only if $q_r(\hat\lambda) + q_r^*(b) - b^\top\hat\lambda \le \epsilon$, with $q_r^*(b)$ denoting the conjugate dual of $q_r(\hat\lambda)$. Plugging in $q_r, q_r^*$ we obtain not only the maximizing beliefs but all beliefs with an ε-margin. Summing over $r\in\{i,\alpha\}$ while noticing that $\lambda_{i\to\alpha}(x_i)$ change signs between $q_\alpha$ and $q_i$, we obtain the marginalization disagreements $d_{i\to\alpha}(x_i) = \sum_{x_\alpha\setminus x_i} b_\alpha(x_\alpha) - b_i(x_i)$.
$\bar\partial_\epsilon q(\lambda)$ is described using beliefs $b_i(x_i), b_\alpha(x_\alpha)$ that satisfy linear constraints, therefore finding a direction of ε-descent can be done efficiently. Claim 2 ensures that minimizing the dual objective along a direction of ε-descent decreases its value by at least ε. Moreover, we are guaranteed to be ε(|V|+|E|)-close to a dual optimal solution if no direction of descent is found in $\bar\partial_\epsilon q(\lambda)$. Therefore, we are able to get out of corners and efficiently reach an approximated dual optimal solution. The interpretation of the Fenchel-Young margin as the amount of disagreement between the marginalization constraints also provides a simple way to reconstruct an approximately optimal primal solution. This is summarized in the following corollary.
Corollary 2. Given a point $\lambda$, set $\hat\theta_i(x_i) = \theta_i(x_i) - \sum_{\alpha\in N(i)}\lambda_{i\to\alpha}(x_i)$ and $\hat\theta_\alpha(x_\alpha) = \theta_\alpha(x_\alpha) + \sum_{i\in N(\alpha)}\lambda_{i\to\alpha}(x_i)$. Consider the quadratic program

$$\min_{b_i,b_\alpha}\; \sum_{i,x_i,\alpha\in N(i)}\Big(\sum_{x_\alpha\setminus x_i} b_\alpha(x_\alpha)-b_i(x_i)\Big)^2$$
$$\text{s.t.}\;\; b_i(x_i), b_\alpha(x_\alpha)\ge 0,\qquad \sum_{x_\alpha} b_\alpha(x_\alpha)=1,\qquad \sum_{x_i} b_i(x_i)=1,$$
$$\sum_{x_i} b_i(x_i)\hat\theta_i(x_i) \ge \max_{x_i}\{\hat\theta_i(x_i)\}-\epsilon,\qquad \sum_{x_\alpha} b_\alpha(x_\alpha)\hat\theta_\alpha(x_\alpha) \ge \max_{x_\alpha}\{\hat\theta_\alpha(x_\alpha)\}-\epsilon.$$

q(λ) is ε(|V|+|E|)-close to the dual optimal value if and only if the value of the above program equals zero. Moreover, the primal value of the optimal beliefs $b^*_\alpha(x_\alpha), b^*_i(x_i)$ is ε(|V|+|E|)-close to the optimal primal value in Eq. (2). However, if q(λ) is not ε(|V|+|E|)-close to the dual optimal value then the vector $d^*_{i\to\alpha}(x_i)=\sum_{x_\alpha\setminus x_i} b^*_\alpha(x_\alpha)-b^*_i(x_i)$ points towards the steepest ε-descent direction of the function, namely

$$d^* = \operatorname{argmin}_{\|d\|\le 1}\;\lim_{\gamma\to 0}\frac{q(\lambda+\gamma d)-q(\lambda)+\epsilon}{\gamma}.$$

Proof: The steepest ε-descent direction is given by the minimum norm element of the ε-subdifferential, described in Claim 3. ε(|V|+|E|)-closeness to the dual optimum is given by ([2], Proposition 4.3.1) once we find the value of the quadratic program to be zero. Note that the superset $\bar\partial_\epsilon$ is composed of |V|+|E| ε-subdifferentials. If the value of the above program equals zero, the beliefs fulfill the marginalization constraints and they denote a probability distribution. Summing both ε-margin inequalities w.r.t. $i, \alpha$, we obtain

$$\sum_{i,x_i} b_i(x_i)\hat\theta_i(x_i) + \sum_{\alpha,x_\alpha} b_\alpha(x_\alpha)\hat\theta_\alpha(x_\alpha) \;\ge\; \sum_i \max_{x_i}\hat\theta_i(x_i) + \sum_\alpha \max_{x_\alpha}\hat\theta_\alpha(x_\alpha) - \epsilon(|V|+|E|),$$

where the primal on the left hand side of the resulting inequality is larger than the dual subtracted by ε(|V|+|E|). With the dual itself upper bounding the primal, the corollary follows.
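Relative to the earlier sketch for Corollary 1, Corollary 2 only changes the feasible set: the beliefs now range over all assignments, and the support restriction is replaced by the two ε-margin constraints on the reparametrized potentials θ̂. A minimal one-vertex/one-edge version (ours again, assuming cvxpy):

```python
import numpy as np
import cvxpy as cp

def eps_descent_direction(theta_hat_i, theta_hat_e, eps):
    """Corollary 2's QP for a single vertex i with one incident edge e = (i, j).

    theta_hat_i -- reparametrized vertex potential, shape (k,)
    theta_hat_e -- reparametrized edge potential, shape (k, k)
    Returns (value, d): value == 0 means eps-close to dual optimal; otherwise
    d is the marginalization disagreement pointing along steepest eps-descent.
    """
    k = theta_hat_i.shape[0]
    b_i = cp.Variable(k, nonneg=True)
    b_e = cp.Variable((k, k), nonneg=True)
    cons = [cp.sum(b_i) == 1, cp.sum(b_e) == 1,
            b_i @ theta_hat_i >= theta_hat_i.max() - eps,
            cp.sum(cp.multiply(b_e, theta_hat_e)) >= theta_hat_e.max() - eps]
    d = cp.sum(b_e, axis=1) - b_i          # marginalization disagreement
    prob = cp.Problem(cp.Minimize(cp.sum_squares(d)), cons)
    prob.solve()
    return prob.value, d.value
```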
Thus, we can construct an algorithm that performs improvements over the dual function in each
iteration. We can either perform block-coordinate dual descent (i.e., convex max-product updates)
or steepest ε-descent steps. Since both methods monotonically improve the same dual function, our
approach is guaranteed to reach the optimal dual solution and to recover the primal optimal solution.
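Schematically, the overall solver interleaves the two kinds of steps. The sketch below is a paraphrase of the procedure just described, not the authors' released code; `dual`, `convex_max_product_sweep` and `eps_descent_step` are hypothetical stand-ins for Eq. (3), the message-passing updates, and the QP of Corollary 2 from the earlier sketches.

```python
def solve_dual(lam, eps=1e-2, max_iter=1000):
    """Monotone dual solver: fast coordinate descent plus eps-descent fallback."""
    q_old = dual(lam)
    for _ in range(max_iter):
        lam = convex_max_product_sweep(lam)        # cheap, monotone, may corner
        q_new = dual(lam)
        if q_old - q_new < eps:
            # little progress: either certify eps*(|V|+|E|)-optimality or take
            # a steepest eps-descent step, which gains at least eps (Claim 2)
            lam, improved = eps_descent_step(lam, eps)
            if not improved:
                break
            q_new = dual(lam)
        q_old = q_new
    return lam
```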
Figure 1: (a) Difference between the minimal dual value attained by convex max-product, $q(\lambda_{CMP})$, and our approach, $q(\lambda_\epsilon)$. Convex max-product gets stuck in about 20% of all cases. (b) Dual value achieved after a certain amount of time for cases where convex max-product gets stuck.
5 High-Order Region Graphs
Graphical models naturally describe probability distributions with different types of regions $r\subset\{1,\dots,n\}$. However, the linear program relaxation described in Eq. (2) considers interactions between regions which correspond to variables i and regions that correspond to cliques α. In the following we extend the ε-descent framework when considering linear programming relaxations without constraining the region interactions. Since we allow any regions to interact, we denote these interactions through a region graph [29]. A region graph is a directed graph whose nodes represent the regions and its direct edges correspond to the inclusion relation, i.e., a directed edge from node r to s is possible only if $s\subset r$. We adopt the terminology where P(r) and C(r) stand for all nodes that are parents and children of the node r, respectively. Thus we consider the linear programming relaxation of a general high-order graphical model as follows:

$$\max_{b}\; \sum_{r,x_r} b_r(x_r)\theta_r(x_r) \quad\text{s.t.}\;\; b_r(x_r)\ge 0,\quad \sum_{x_r} b_r(x_r)=1,\quad \forall r, s\in P(r):\;\sum_{x_s\setminus x_r} b_s(x_s)=b_r(x_r). \tag{7}$$
Following [5, 22, 27] we consider the re-parametrized dual program

$$q(\lambda) = \sum_r \max_{x_r}\Big\{\theta_r(x_r) + \sum_{c\in C(r)}\lambda_{c\to r}(x_c) - \sum_{p\in P(r)}\lambda_{r\to p}(x_r)\Big\},$$

which is a sum of max-functions. Its approximated ε-subdifferential is described with respect to their Fenchel-Young margins. Using the same reasoning as in Sec. 4 we present a simple way to recover an ε-steepest descent direction, as well as to reconstruct an approximated optimal primal solution.
Corollary 3. Given a point $\lambda$, set $\hat\theta_r(x_r) = \theta_r(x_r) + \sum_{c\in C(r)}\lambda_{c\to r}(x_c) - \sum_{p\in P(r)}\lambda_{r\to p}(x_r)$. Consider the quadratic program

$$\min_{b}\; \sum_{r,x_r,p\in P(r)}\Big(\sum_{x_p\setminus x_r} b_p(x_p) - b_r(x_r)\Big)^2 \quad\text{s.t.}\;\; b_r(x_r)\ge 0,\quad \sum_{x_r} b_r(x_r)=1,\quad \sum_{x_r} b_r(x_r)\hat\theta_r(x_r) \ge \max_{x_r}\{\hat\theta_r(x_r)\}-\epsilon.$$

Let |R| be the total number of regions in the graph, then λ is ε|R|-close to the dual optimal solution if and only if the value of the above program equals zero. Moreover, the optimal beliefs $b_r^*(x_r)$ are also ε|R|-close to the optimal solution of the primal program in Eq. (7). However, if q(λ) is not ε|R|-close to the dual optimal solution then the vector $d^*_{r\to p}(x_r) = \sum_{x_p\setminus x_r} b_p^*(x_p) - b_r^*(x_r)$ points towards the steepest ε-descent direction of the dual function.

Proof: It is a straightforward generalization of Corollary 2.
When dealing with high-order region graphs, one can choose a region graph, e.g., the Hasse diagram, that has significantly fewer edges than a region graph that connects variables i to cliques α. Therefore, when considering many high-order regions, the formulation in the above corollary is more efficient than the one in Corollary 2.
Figure 2: Average time required for different solvers to achieve a specified accuracy on 30 spin glass models, (a) when solvers are applied to 'hard' problems only, i.e., those where CMP gets stuck far from the optimum. Average results over 30 models are shown in (b); (c) decrease of the dual value over time for ADLP and our ε-descent approach.
6 Experimental Evaluation
To benefit from the efficiency of convex max-product, our implementation starts by employing block-coordinate descent iterations before switching to the globally convergent ε-descent approach once the dual value decreases by less than ε = 0.01. As we always optimize the same cost function, switching the gradient computation is possible. We employ a backtracking line search in our ε-descent approach. In the following we demonstrate the effectiveness of our approach on synthetic 10x10 spin glass models as well as protein interactions from the probabilistic inference challenge (PIC 2011). We consider spin glass models that consist of local factors, each having 3 states with values randomly chosen according to N(0, 1). We use three states as convex max-product is optimal for pairwise spin glass models with only two states per random variable. The pairwise factors of the regular grid are weighted potentials with +1 on the diagonal and off-diagonal entries being −1. The weights are again independently drawn from N(0, 1). In the first experiment we are interested in estimating how often convex max-product gets stuck in corners. We generate a set of 1000 spin glass models and estimate the distribution of the dual value difference comparing the ε-descent approach with the convex max-product result after 10,000 iterations. We observe in Fig. 1(a) that about 20%
of the spin glass models have a dual value difference larger than zero.
Having observed that convex max-product does not achieve optimality for 20% of the models, we now turn our attention to evaluating the run-time of different algorithms. We compare our implementation of the ε-steepest descent algorithm with the alternating direction method for dual MAP-LP relaxations (ADLP) [18]. In addition, we illustrate the performance of convex max-product (CMP) [6] and compare against the dual-decomposition work of [12] provided in a generic (DDG) and a re-weighted (DDR) version in the STAIR library [4]. Note that ADLP is also implemented in this library. All algorithms are restricted to at most 20,000 iterations. We draw the reader's attention to Fig. 1(b), where we evaluate a single spin glass model and illustrate the dual value obtained after a certain amount of time. As given by the derivations, CMP is a monotonically decreasing algorithm that can get stuck in corners. It is important to note that our ε-descent approach is monotonically decreasing as well, which contrasts with all the other investigated algorithms (ADLP, DDG, DDR).

We evaluate the time it takes the different algorithms to achieve a given accuracy. We first focus on 'hard' problems, where we defined 'hard' as those spin glass models whose difference between convex max-product and the ε-descent method is larger than 0.2. To obtain statistically meaningful results we average over 30 hard problems and report the time to achieve a given accuracy in Fig. 2(a). We used the minimum across all dual values found by all algorithms as the optimum. If an algorithm does not achieve ε-close accuracy within 20,000 iterations we set its time to the arbitrarily chosen value of $10^5$. We note that CMP is very fast for low accuracies (high ε) but gets stuck in corners, not achieving high accuracies (low ε). This is also the case for DDG and DDR. ADLP achieves significantly lower ε-closeness but the 20,000 iteration limit stops it from reaching $10^{-3}$. The previous experiment focused on hard problems. In order to evaluate the average case, we randomly generate 30 spin glass models. The results are provided in Fig. 2(b). As expected the ε-descent approach performs similarly well; ADLP achieves lower accuracies on more samples. The step apparent for CMP, DDG and DDR is not as sharp, but still very significant.

Protein interactions: We rely on the data provided by the PIC 2011 and compare the ε-descent approach to ADLP as it is the most competitive method in the previous experiments. The dual energy obtained after a given amount of time is illustrated in Fig. 2(c).
7 Related Work
We explore methods to solve LP relaxations by monotonically decreasing the value of its dual objective and reconstructing a primal optimal solution. For this purpose we investigate approximated
subgradients of the dual program using the Fenchel-Young margins, and provide a method to reduce
the dual objective in every step by a constant value until convergence to the optimum. Efficient dual
solvers were extensively studied in the context of LP relaxations for the MAP problem [14, 20, 25].
The dual program is non-smooth, thus subgradient descent algorithms are guaranteed to reach the
dual optimum [12], as well as recover the primal optimum [12]. Despite their theoretical guarantees,
subgradient methods are typically slow. Dual block coordinate descent methods, typically referred
to as convex max-product algorithms, are monotonically decreasing, and were shown to be faster
than subgradient methods [3, 6, 11, 17, 22, 24, 27]. Since the dual program is non-smooth, these
algorithms can get stuck in non-optimal stationary points and cannot in general recover a primal
optimal solution [26]. Our work specifically addresses these drawbacks.
Recently, several methods were devised to overcome the sub-optimality of convex max-product
algorithms. Unlike our approach, all these algorithms optimize a perturbed program. Some methods
use the soft-max with low temperature to smooth the dual objective in order to avoid corners as well
as to recover primal optimal solutions [6, 7, 8]. However, these methods are typically slower, as
computation of the low-temperature soft-max is more expensive than max-computation. [19] applied
the proximal method, employing a primal strictly concave perturbation, which results in a smooth
dual approximation that is temperature independent. This approach converges to the dual optimum
and recovers the primal optimal solution. However, it uses a double loop scheme where every
update involves executing a convex sum-product algorithm. Alternative methods applied augmented
Lagrangian techniques to the primal [16] and the dual programs [18]. The augmented Lagrangian
method guarantees to reach the global optimum and recover the dual and primal solutions. Unlike
our approach, this method is not monotonically decreasing and works on a perturbed objective, thus
cannot be efficiently integrated with convex max-product updates that perform block coordinate
descent on the dual of the LP relaxation.
Our approach is based on the ε-descent algorithm for convex functions [2]. We use the ε-margin of the Fenchel-Young duality theorem to adjust the ε-subdifferential to the dual objective of the LP relaxation, thus augmenting the convex max-product with the ability to get out of corners. We also construct an efficient method to recover a primal optimal solution. Our approach is related to the Bundle method [15, 9], which performs an ε-subgradient descent in cases where efficient search in the ε-subdifferential is impossible. The graphical model structure in our setting makes searching in the ε-subdifferential easy, thus our approach is significantly faster. Our algorithm satisfies ε-complementary slackness while performing ε-descent steps, similarly to the auction algorithm. However, our algorithm is monotonically decreasing and can be used for general graphical models, while the auction algorithm might increase its dual and its convergence properties hold only for network flow problems.
network flow problems.
8 Conclusions and Discussion
Evaluating the MAP assignment and solving its LP relaxations are key problems in approximate
inference. Some of the existing solvers, such as convex max-product, have limitations. Mainly, these
solvers can get stuck in a non-optimal stationary point, thus they cannot recover the primal optimal
solution. We explore the properties of subgradients of the dual objective and construct a simple
algorithm that determines if the dual stationary point is optimal and recovers the primal optimal
solution in this case (Corollary 1). Moreover, we investigate the family of -subgradients using
Fenchel-Young margins and construct a monotonically decreasing algorithm that is guaranteed to
achieve optimal dual and primal solutions (Corollary 2), including general region graphs (Corollary
3). We show that our algorithm compares favorably with pervious methods on spin glass models
and protein interactions. The approximated steepest descent direction is recovered by solving a
quadratic program subject to linear constraints. We used the Gurobi solver2 , which ignores the
graphical structure of the linear constraints. We believe that constructing a message-passing solver
for this sub-problem will significantly speed-up our approach. Further extensions, e.g., enforcing
constraints over messages such as those arising from cloud computing are also applicable to our
setting [1, 21].
² http://www.gurobi.com
References
[1] A. Auslender and M. Teboulle. Interior gradient and epsilon-subgradient descent methods for constrained
convex minimization. Mathematics of Operations Research, 2004.
[2] D. P. Bertsekas, A. Nedić, and A. E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific,
2003.
[3] A. Globerson and T. S. Jaakkola. Fixing max-product: convergent message passing algorithms for MAP
relaxations. In Proc. NIPS, 2007.
[4] S. Gould, O. Russakovsky, I. Goodfellow, P. Baumstarck, A. Y. Ng, and D. Koller. The STAIR Vision
Library (v2.4), 2011. http://ai.stanford.edu/~sgould/svl.
[5] T. Hazan, J. Peng, and A. Shashua. Tightening fractional covering upper bounds on the partition function
for high-order region graphs. In Proc. UAI, 2012.
[6] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. Trans. on Information Theory, 2010.
[7] J. K. Johnson. Convex relaxation methods for graphical models: Lagrangian and maximum entropy
approaches. PhD thesis, Massachusetts Institute of Technology, 2008.
[8] V. Jojic, S. Gould, and D. Koller. Accelerated dual decomposition for MAP inference. In Proc. ICML,
2010.
[9] J. H. Kappes, B. Savchynskyy, and C. Schnörr. A Bundle Approach To Efficient MAP-Inference by
Lagrangian Relaxation. In Proc. CVPR, 2012.
[10] D. Koller and N. Friedman. Probabilistic graphical models. MIT Press, 2009.
[11] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PAMI, 2006.
[12] N. Komodakis, N. Paragios, and G. Tziritas. MRF Energy Minimization & Beyond via Dual Decomposition. PAMI, 2010.
[13] T. Koo, A.M. Rush, M. Collins, T. Jaakkola, and D. Sontag. Dual decomposition for parsing with nonprojective head automata. In Proc. EMNLP, 2010.
[14] A.M.C.A. Koster, S.P.M. van Hoesel, and A.W.J. Kolen. The partial constraint satisfaction problem:
Facets and lifting theorems. Operations Research Letters, 1998.
[15] C. Lemaréchal. An algorithm for minimizing convex functions. Information Processing, 1974.
[16] A.F.T. Martins, M.A.T. Figueiredo, P.M.Q. Aguiar, N.A. Smith, and E.P. Xing. An Augmented Lagrangian Approach to Constrained MAP Inference. In Proc. ICML, 2011.
[17] T. Meltzer, A. Globerson, and Y. Weiss. Convergent Message Passing Algorithms ? A Unifying View. In
Proc. UAI, 2009.
[18] O. Meshi and A. Globerson. An Alternating Direction Method for Dual MAP LP Relaxation. In Proc.
ECML PKDD, 2011.
[19] P. Ravikumar, A. Agarwal, and M. J. Wainwright. Message-passing for graph-structured linear programs:
Proximal methods and rounding schemes. JMLR, 2010.
[20] M. Schlesinger. Syntactic analysis of two-dimensional visual signals in noisy conditions. Kibernetika, 1976.
[21] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed message passing for large scale
graphical models. In Proc. CVPR, 2011.
[22] D. Sontag and T. S. Jaakkola. Tree block coordinate descent for MAP in graphical models. In Proc.
AISTATS, 2009.
[23] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using
message passing. In Proc. UAI, 2008.
[24] D. Tarlow, D. Batra, P. Kohli, and V. Kolmogorov. Dynamic tree block coordinate ascent. In Proc. ICML,
2011.
[25] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: messagepassing and linear programming. Trans. on Information Theory, 2005.
[26] Y. Weiss, C. Yanover, and T. Meltzer. MAP Estimation, Linear Programming and Belief Propagation with
Convex Free Energies. In Proc. UAI, 2007.
[27] T. Werner. Revisiting the linear programming relaxation approach to gibbs energy minimization and
weighted constraint satisfaction. PAMI, 2010.
[28] P. Wolfe. A method of conjugate subgradients for minimizing nondifferentiable functions. Nondifferentiable Optimization, 1975.
[29] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized
belief propagation algorithms. Trans. on Information Theory, 2005.
4,160 | 4,766 |
Fast Variational Inference in the
Conjugate Exponential Family
James Hensman*
Department of Computer Science
The University of Sheffield
[email protected]
Magnus Rattray
Faculty of Life Science
The University of Manchester
[email protected]
Neil D. Lawrence*
Department of Computer Science
The University of Sheffield
[email protected]
Abstract
We present a general method for deriving collapsed variational inference algorithms for probabilistic models in the conjugate exponential family. Our method
unifies many existing approaches to collapsed variational inference. Our collapsed
variational inference leads to a new lower bound on the marginal likelihood. We
exploit the information geometry of the bound to derive much faster optimization
methods based on conjugate gradients for these models. Our approach is very
general and is easily applied to any model where the mean field update equations
have been derived. Empirically we show significant speed-ups for probabilistic
inference using our bound.
1 Introduction
Variational bounds provide a convenient approach to approximate inference in a range of intractable
models [Ghahramani and Beal, 2001]. Classical variational optimization is achieved through coordinate ascent which can be slow to converge. A popular solution [King and Lawrence, 2006, Teh et al.,
2007, Kurihara et al., 2007, Sung et al., 2008, Lázaro-Gredilla and Titsias, 2011, Lázaro-Gredilla
et al., 2011] is to marginalize analytically a portion of the variational approximating distribution,
removing this from the optimization. In this paper we provide a unifying framework for collapsed
inference in the general class of models composed of conjugate-exponential graphs (CEGs).
First we review the body of earlier work with a succinct and unifying derivation of the collapsed
bounds. We describe how the applicability of the collapsed bound to any particular CEG can be
determined with a simple d-separation test. Standard variational inference via coordinate ascent
turns out to be steepest ascent with a unit step length on our unifying bound. This motivates us
to consider natural gradients and conjugate gradients for fast optimization of these models. We
apply our unifying approach to a range of models from the literature obtaining, often, an order of
magnitude or more increase in convergence speed. Our unifying view allows collapsed variational
methods to be integrated into general inference tools like infer.net [Minka et al., 2010].
* Also at Sheffield Institute for Translational Neuroscience, SITraN
2 The Marginalised Variational Bound
The advantages to marginalising analytically a subset of variables in variational bounds seem to
be well understood: several different approaches have been suggested in the context of specific
models. In Dirichlet process mixture models Kurihara et al. [2007] proposed a collapsed approach
using both truncated stick-breaking and symmetric priors. Sung et al. [2008] proposed 'latent space variational Bayes' where both the cluster-parameters and mixing weights were marginalised, again with some approximations. Teh et al. [2007] proposed a collapsed inference procedure for latent Dirichlet allocation (LDA). In this paper we unify all these results from the perspective of the 'KL corrected bound' [King and Lawrence, 2006]. This lower bound on the model evidence is also an upper bound on the original variational bound; the difference between the two bounds is given by a Kullback Leibler divergence. The approach has also been referred to as the marginalised variational bound by Lázaro-Gredilla et al. [2011], Lázaro-Gredilla and Titsias [2011]. The connection between
the KL corrected bound and the collapsed bounds is not immediately obvious. The key difference
between the frameworks is the order in which the marginalisation and variational approximation are
applied. However, for CEGs this order turns out to be irrelevant. Our framework leads to a more
succinct derivation of the collapsed approximations. The resulting bound can then be optimised
without recourse to approximations in either the bound's evaluation or its optimization.
2.1 Variational Inference
Assume we have a probabilistic model for data, D, given parameters (and/or latent variables), X, Z, of the form $p(D, X, Z) = p(D\,|\,Z, X)\,p(Z\,|\,X)\,p(X)$. In variational Bayes (see e.g. Bishop [2006]) we approximate the posterior $p(Z, X|D)$ by a distribution $q(Z, X)$. We use Jensen's inequality to derive a lower bound, L, on the model evidence, which serves as an objective function in the variational optimisation:

$$\ln p(D) \;\ge\; L = \int q(Z, X)\ln\frac{p(D, Z, X)}{q(Z, X)}\,dZ\,dX. \tag{1}$$
For tractability the mean field (MF) approach assumes q factorises across its variables, $q(Z, X) = q(Z)q(X)$. It is then possible to implement an optimisation scheme which analytically optimises each factor alternately, with the optimal distribution given by

$$q^*(X) \propto \exp\Big(\int q(Z)\ln p(D, X|Z)\,dZ\Big), \tag{2}$$
and similarly for Z: these are often referred to as VBE and VBM steps. King and Lawrence [2006] substituted the expression for the optimal distribution (for example $q^*(X)$) back into the bound (1), eliminating one set of parameters from the optimisation, an approach that has been reused by Lázaro-Gredilla et al. [2011], Lázaro-Gredilla and Titsias [2011]. The resulting bound is not dependent on q(X). King and Lawrence [2006] referred to this new bound as 'the KL corrected bound'. The difference between the bound, which we denote $L_{KL}$, and a standard mean field approximation $L_{MF}$, is the Kullback Leibler divergence between the optimal form of $q^*(X)$ and the current q(X).
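As a concrete instance of these alternating updates, here is a minimal sketch (ours, following the standard textbook treatment of a univariate Gaussian with unknown mean and precision, not a model from this paper) of the analytic factor updates of Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=50)             # observed data D
N, xbar = x.size, x.mean()

# Conjugate priors: mu | tau ~ N(mu0, (beta0 * tau)^-1),  tau ~ Gamma(a0, b0)
mu0, beta0, a0, b0 = 0.0, 1.0, 1.0, 1.0

E_tau = 1.0                                   # initialize q(tau)'s mean
for _ in range(100):
    # q*(mu) = N(m, beta^-1), optimal given the current q(tau)
    beta = (beta0 + N) * E_tau
    m = (beta0 * mu0 + N * xbar) / (beta0 + N)
    E_mu, E_mu2 = m, m**2 + 1.0 / beta
    # q*(tau) = Gamma(a, b), optimal given the current q(mu)
    a = a0 + (N + 1) / 2
    b = b0 + 0.5 * (np.sum(x**2) - 2 * E_mu * (np.sum(x) + beta0 * mu0)
                    + (N + beta0) * E_mu2 + beta0 * mu0**2)
    E_tau = a / b
print("q*(mu) mean:", m, "  E[tau]:", E_tau)
```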
We re-derive their bound by first using Jensen's inequality to construct the variational lower bound on the conditional distribution,

$$\ln p(D|X) \;\ge\; \int q(Z)\ln\frac{p(D,Z|X)}{q(Z)}\,dZ \,\triangleq\, L_1. \tag{3}$$

This object turns out to be of central importance in computing the final KL-corrected bound and also in computing gradients, curvatures and the distribution of the collapsed variables $q^*(X)$. It is easy to see that it is a function of X which lower-bounds the log likelihood $p(D\,|\,X)$, and indeed our derivation treats it as such. We now marginalize the conditioned variable from this expression,

$$\ln p(D) \;\ge\; \ln\int p(X)\exp\{L_1\}\,dX \,\triangleq\, L_{KL}, \tag{4}$$
giving us the bound of King and Lawrence [2006] & Lázaro-Gredilla et al. [2011]. Note that one set of parameters was marginalised after the variational approximation was made.

Using (2), this expression also provides the approximate posterior for the marginalised variables X:

$$q^*(X) = p(X)\,e^{L_1 - L_{KL}}, \tag{5}$$

and $e^{L_{KL}}$ appears as the constant of proportionality in the mean-field update equation (2).
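Continuing the toy example of the previous sketch (reusing N, x, mu0, E_tau and the Gamma parameters a, b defined there), the integral in Eq. (4) is available in closed form whenever $L_1$ is conjugate to the prior. Below we assume, for simplicity, a fixed-variance prior $\mu\sim\mathcal{N}(\mu_0, s_0^2)$, decoupled from τ unlike the example above, so the collapse is a one-dimensional Gaussian integral; the algebra is ours, not the paper's.

```python
import numpy as np
from scipy.special import digamma

# Simplified prior for the collapse: mu ~ N(mu0, s0^2), fixed variance.
s0 = 1.0
E_lntau = digamma(a) - np.log(b)       # E[ln tau] under q(tau) = Gamma(a, b)

# The mu-dependent part of L1 is E_q(tau)[ln p(x | mu, tau)], quadratic in mu,
# so p(mu) exp{L1(mu)} = exp(-A mu^2 + B mu + C) and Eq. (4) is Gaussian.
# Terms of L1 not involving mu only shift the bound by a constant.
A = 0.5 * N * E_tau + 0.5 / s0**2
B = E_tau * x.sum() + mu0 / s0**2
C = (0.5 * N * (E_lntau - np.log(2 * np.pi)) - 0.5 * E_tau * np.sum(x**2)
     - 0.5 * mu0**2 / s0**2 - 0.5 * np.log(2 * np.pi * s0**2))

L_KL = 0.5 * np.log(np.pi / A) + B**2 / (4 * A) + C   # collapsed bound (up to const)
print("L_KL (up to const):", L_KL)
print("implicit q*(mu):", B / (2 * A), "+/-", (1 / (2 * A)) ** 0.5)
```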
3 Partial Equivalence of the Bounds
We can recover $L_{MF}$ from $L_{KL}$ by again applying Jensen's inequality,

$$L_{KL} = \ln\int q(X)\,\frac{p(X)}{q(X)}\exp\{L_1\}\,dX \;\ge\; \int q(X)\ln\Big[\frac{p(X)}{q(X)}\exp\{L_1\}\Big]dX, \tag{6}$$

which can be re-arranged to give the mean-field bound,

$$L_{KL} \;\ge\; \int q(X)q(Z)\ln\frac{p(D|Z,X)\,p(Z)\,p(X)}{q(Z)\,q(X)}\,dX\,dZ, \tag{7}$$

and it follows that $L_{KL} = L_{MF} + \mathrm{KL}(q^*(X)\,||\,q(X))$ and¹ $L_{KL} \ge L_{MF}$. For a given q(Z), the bounds are equal after q(X) is updated via the mean field method: the approximations are ultimately the same. The advantage of the new bound is to reduce the number of parameters in the optimisation. It is particularly useful when variational parameters are optimised by gradient methods. Since VBEM is equivalent to a steepest descent gradient method with a fixed step size, there appears to be a lot to gain by combining the KLC bound with more sophisticated optimization techniques.
3.1 Gradients
Consider the gradient of the KL corrected bound with respect to the parameters of q(Z):

$$\frac{\partial L_{KL}}{\partial\theta_z} = \exp\{-L_{KL}\}\int \frac{\partial L_1}{\partial\theta_z}\,\exp\{L_1\}\,p(X)\,dX = \mathbb{E}_{q^*(X)}\Big[\frac{\partial L_1}{\partial\theta_z}\Big], \tag{8}$$

where we have used the relation (5). To find the gradient of the mean-field bound we note that it can be written in terms of our conditional bound (3) as $L_{MF} = \mathbb{E}_{q(X)}\big[L_1 + \ln p(X) - \ln q(X)\big]$, giving

$$\frac{\partial L_{MF}}{\partial\theta_z} = \mathbb{E}_{q(X)}\Big[\frac{\partial L_1}{\partial\theta_z}\Big], \tag{9}$$

thus setting $q(X) = q^*(X)$ not only makes the bounds equal, $L_{MF} = L_{KL}$, but also their gradients with respect to $\theta_Z$.
Sato [2001] has shown that the variational update equation can be interpreted as a gradient method,
where each update is also a step in the steepest direction in the canonical parameters of q(Z). We
can combine this important insight with the above result to realize that we have a simple method for
computing the gradients of the KL corrected bound: we only need to look at the update expressions
for the mean-field method. This result also reveals the weakness of standard variational Bayesian
expectation maximization (VBEM): it is a steepest ascent algorithm. Honkela et al. [2010] looked to
rectify this weakness by applying a conjugate gradient algorithm to the mean field bound. However,
they didn?t obtain a significant improvement in convergence speed. Our suggestion is to apply
conjugate gradients to the KLC bound. Whilst the value and gradient of the MF bound matches
that of the KLC bound after an update of the collapsed variables, the curvature is always greater. In
practise this means that much larger steps (which we compute using conjugate gradient methods) can
be taken when optimizing the KLC bound than for the MF bound leading to more rapid convergence.
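This suggests handing the KLC bound directly to an off-the-shelf nonlinear conjugate gradient routine. A schematic wrapper (ours; `model_bound_and_grad` is a hypothetical model-specific callable, not an API from the paper):

```python
from scipy.optimize import minimize

def fit_with_cg(model_bound_and_grad, theta_init):
    """Optimize the KLC bound with nonlinear conjugate gradients.

    model_bound_and_grad(theta) -- hypothetical callable returning
    (L_KL, dL_KL/dtheta); by Eq. (8) the gradient is the mean-field gradient
    of L1 taken under the implicit q*(X), i.e. exactly the quantity the
    standard VBE update already computes.
    """
    def neg(theta):
        L, g = model_bound_and_grad(theta)
        return -L, -g                          # scipy minimizes
    return minimize(neg, theta_init, jac=True, method="CG")
```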
3.2 Curvature of the Bounds
King and Lawrence [2006] showed empirically that the KLC bound could lead to faster convergence because the bounds differ in their curvature: the curvature of the KLC bound enables larger steps to be taken by an optimizer. We now derive analytical expressions for the curvature of both bounds. For the mean field bound we have

$$\frac{\partial^2 L_{MF}}{\partial\theta_z^2} = \mathbb{E}_{q(X)}\Big[\frac{\partial^2 L_1}{\partial\theta_z^2}\Big], \tag{10}$$

¹ We use KL(·||·) to denote the Kullback Leibler divergence between two distributions.
and for the KLC bound, with some manipulation of (4) and using (5):

$$\frac{\partial^2 L_{KL}}{\partial\theta_z^{[i]}\partial\theta_z^{[j]}} = e^{-L_{KL}}\frac{\partial^2 e^{L_{KL}}}{\partial\theta_z^{[i]}\partial\theta_z^{[j]}} - \Big(e^{-L_{KL}}\frac{\partial e^{L_{KL}}}{\partial\theta_z^{[i]}}\Big)\Big(e^{-L_{KL}}\frac{\partial e^{L_{KL}}}{\partial\theta_z^{[j]}}\Big)$$
$$= \mathbb{E}_{q^*(X)}\Big[\frac{\partial^2 L_1}{\partial\theta_z^{[i]}\partial\theta_z^{[j]}}\Big] + \mathbb{E}_{q^*(X)}\Big[\frac{\partial L_1}{\partial\theta_z^{[i]}}\frac{\partial L_1}{\partial\theta_z^{[j]}}\Big] - \mathbb{E}_{q^*(X)}\Big[\frac{\partial L_1}{\partial\theta_z^{[i]}}\Big]\,\mathbb{E}_{q^*(X)}\Big[\frac{\partial L_1}{\partial\theta_z^{[j]}}\Big]. \tag{11}$$

In this result the first term is equal to (10), and the second two terms combine to be always positive semi-definite, proving King and Lawrence [2006]'s intuition about the curvature of the bound. When curvature is negative definite (e.g. near a maximum), the KLC bound's curvature is less negative definite, enabling larger steps to be taken in optimization. Figure 1(b) illustrates the effect of this as well as the bounds' similarities.
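A quick way to observe this numerically is a finite-difference curvature check; the helper below (ours) assumes scalar-parameter implementations of the two bounds exist for some model of interest:

```python
def curvature(f, theta, h=1e-4):
    """Central finite-difference estimate of f''(theta) for scalar theta."""
    return (f(theta + h) - 2.0 * f(theta) + f(theta - h)) / h**2

# With hypothetical scalar implementations L_MF and L_KL of the two bounds,
# evaluated at a point where q(X) = q*(X), one should observe
#     curvature(L_KL, t) >= curvature(L_MF, t),
# i.e. the KLC bound is less negatively curved, licensing larger steps.
```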
3.3 Relationship to Collapsed VB
In collapsed inference some parameters are marginalized before applying the variational bound. For
example, Sung et al. [2008] proposed a latent variable model where the model parameters were
marginalised, and Teh et al. [2007] proposed a non-parametric topic model where the document
proportions were collapsed. These procedures lead to improved inference, or faster convergence.
The KLC bound derivation we have provided also marginalises parameters, but after a variational
approximation is made. The difference between the two approaches is distilled in these expressions:
$$\ln \mathbb{E}_{p(X)}\left[\exp\left(\mathbb{E}_{q(Z)} \ln p(D|X,Z)\right)\right] \qquad\qquad \mathbb{E}_{q(Z)} \ln \mathbb{E}_{p(X)}\left[p(D|X,Z)\right] \qquad (12)$$
where the left expression appears in the KLC bound, and the right expression appears in the bound
for collapsed variational Bayes, with the remainder of the bounds being equal. Whilst an appropriately conjugate formulation of the model will always ensure that the KLC expression is analytically tractable, the expectation in the collapsed VB expression is not. Sung et al. [2008] propose a first order approximation to the expectation of the form $\mathbb{E}_{q(Z)}[f(Z)] \approx f(\mathbb{E}_{q(Z)}[Z])$, which reduces the right expression to that on the left. Under this approximation² the KL corrected approach is equivalent to the collapsed variational approach.
3.4 Applicability
To apply the KLC bound we need to specify a subset, X, of variables to marginalize. We select
the variables that break the dependency structure of the graph to enable the analytic computation
of the integral in (4). Assuming the appropriate conjugate exponential structure for the model we
are left with the requirement to select a sub-set that induces the appropriate factorisation. These
induced factorisations are discussed in some detail in Bishop [2006]. They are factorisations in
the approximate posterior which arise from the form of the variational approximation and from the
structure of the model. These factorisations allow application of the KLC bound, and can be identified
using a simple d-separation test as Bishop discusses.
The d-separation test involves checking for independence amongst the marginalised variables (X in
the above) conditioned on the observed data D and the approximated variables (Z in the above). The
requirement is to select a sufficient set of variables, Z, such that the effective likelihood for X, given
by (3) becomes conjugate to the prior. Figure 1(a) illustrates the d-separation test with application
to the KLC bound.
For latent variable models, it is often sufficient to select the latent variables for X whilst collapsing
the model variables. For example, in the specific case of mixture models and topic models, approximating the component labels allows for the marginalisation of the cluster parameters (topics
² Kurihara et al. [2007] and Teh et al. [2007] suggest a further second order correction and assume that q(Z) is Gaussian to obtain tractability. This leads to additional correction terms that augment the KLC bound. The form of these corrections would need to be determined on a case by case basis, and they have in fact been shown to be less effective than the methods unified here [Asuncion et al., 2012].
[Figure 1 appears here: (a) a directed graphical model with nodes A-F; (b) a sketch of the KLC and MF bounds. See the caption below.]
Figure 1: (a) An example directed graphical model on which we could use the KLC bound. Given
the observed node C, the nodes A,F d-separate given nodes B,D,E. Thus we could make an explicit
variational approximation for A,F, whilst marginalising B,D,E. Alternatively, we could select B,D,E
for a parameterised approximate distribution, whilst marginalising A,F. (b) A sketch of the KLC
and MF bounds. At the point where the mean field method has $q(X) = q^\ast(X)$, the bounds are equal in value as well as in gradient. Away from this point, the difference between the bounds is the Kullback-Leibler divergence between the current MF approximation for X and the implicit distribution $q^\ast(X)$ of the KLC bound.
allocations) and mixing proportions. This allowed Sung et al. [2008] to derive a general form for
latent variable models, though our formulation is general to any conjugate exponential graph.
4 Riemannian Gradient Based Optimisation
Sato [2001] and Hoffman et al. [2012] showed that the VBEM procedure performs gradient ascent in
the space of the natural parameters. Using the KLC bound to collapse the problem, gradient methods
seem a natural choice for optimisation, since there are fewer parameters to deal with, and we have
shown that computation of the gradients is straightforward (the variational update equations contain
the model gradients). It turns out that the KLC bound is particularly amenable to Riemannian or
natural gradient methods, because the information geometry of the exponential family distribution(s), over which we are optimising, leads to a simple expression for the natural gradient. Previous
investigations of natural gradients for variational Bayes [Honkela et al., 2010, Kuusela et al., 2009]
required the inversion of the Fisher information at every step (ours does not), and also used VBEM
steps for some parameters and Riemannian optimisation for other variables. The collapsed nature
of the KLC bound means that these VBEM steps are unnecessary: the bound can be computed by
parameterizing the distribution of only one set of variables (q(Z)) whilst the implicit distribution of
the other variables is given in terms of the first distribution and the data by equation (5).
We optimize the lower bound LKL with respect to the parameters of the approximating distribution
of the non-collapsed variables. We showed in section 2 that the gradient of the KLC bound is given
by the gradient of the standard MF variational bound, after an update of the collapsed variables. It
is clear from their definition that the same is true of the natural gradients.
4.1 Variable Transformations
We can compute the natural gradient of our collapsed bound by considering the update equations of
the non-collapsed problem as described above. However, if we wish to make use of more powerful
optimisation methods like conjugate gradient ascent, it is helpful to re-parameterize the natural parameters in an unconstrained fashion. The natural gradient is given by [Amari and Nagaoka, 2007]:
$$\tilde{g}(\theta) = G(\theta)^{-1} \frac{\partial \mathcal{L}_{KL}}{\partial \theta}, \qquad (13)$$
where $G(\theta)$ is the Fisher information matrix, whose $(i,j)$th element is given by
$$G(\theta)_{[i,j]} = -\mathbb{E}_{q(X|\theta)}\left[\frac{\partial^2 \ln q(X\,|\,\theta)}{\partial \theta^{[i]} \partial \theta^{[j]}}\right]. \qquad (14)$$
For exponential family distributions, this reduces to $\nabla^2_\theta \psi(\theta)$, where $\psi$ is the log-normaliser. Further, for exponential family distributions, the Fisher information in the canonical parameters ($\theta$) and that in the expectation parameters ($\eta$) are reciprocal, and we also have $G(\theta) = \partial\eta/\partial\theta$. This means that the natural gradient in $\theta$ is given by
$$\tilde{g}(\theta) = G(\theta)^{-1} \frac{\partial \mathcal{L}_{KL}}{\partial \theta} = G(\theta)^{-1} \frac{\partial \eta}{\partial \theta} \frac{\partial \mathcal{L}_{KL}}{\partial \eta} = \frac{\partial \mathcal{L}_{KL}}{\partial \eta} \quad\text{and}\quad \tilde{g}(\eta) = \frac{\partial \mathcal{L}_{KL}}{\partial \theta}. \qquad (15)$$
The gradient in one set of parameters provides the natural gradient in the other. Thus when our
approximating distribution q is exponential family, we can compute the natural gradient without the
expensive matrix inverse.
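To make this reciprocity concrete, the following is a minimal numeric check in Python/NumPy (our own sketch, not from the paper), using a Dirichlet as the exponential-family example. It verifies that the expectation parameters are the gradient of the log-normaliser and that the Fisher information G(θ) equals the Jacobian ∂η/∂θ, which is what allows Eq. (15) to trade a matrix inverse for a change of parameterisation.

    import numpy as np
    from scipy.special import gammaln, digamma, polygamma

    def log_normaliser(theta):
        # psi(theta) for a Dirichlet with parameters theta
        return np.sum(gammaln(theta)) - gammaln(np.sum(theta))

    def expectation_params(theta):
        # eta = grad psi(theta); for a Dirichlet, eta_i = E[ln x_i]
        return digamma(theta) - digamma(np.sum(theta))

    def fisher(theta):
        # G(theta) = Hessian of psi(theta)
        return np.diag(polygamma(1, theta)) - polygamma(1, np.sum(theta))

    def num_grad(f, x, eps=1e-5):
        # central finite differences, one row per perturbed coordinate
        return np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])

    theta = np.array([2.0, 3.0, 1.5])
    # eta is the gradient of the log-normaliser ...
    assert np.allclose(num_grad(log_normaliser, theta),
                       expectation_params(theta), atol=1e-6)
    # ... and the Fisher information is the (symmetric) Jacobian d eta / d theta.
    assert np.allclose(num_grad(expectation_params, theta), fisher(theta), atol=1e-6)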
4.2 Steepest Ascent is Coordinate Ascent
Sato [2001] showed that the VBEM algorithm was a gradient based algorithm. In fact, VBEM
consists of taking unit steps in the direction of the natural gradient of the canonical parameters.
From equation (9) and the work of Sato [2001], we see that the gradient of the KLC bound can
be obtained by considering the standard mean-field update for the non-collapsed parameter Z. We
confirm these relationships for the models studied in the next section in the supplementary material.
Having confirmed that the VB-E step is equivalent to steepest-gradient ascent we now explore
whether the procedure could be improved by the use of conjugate gradients.
4.3 Conjugate Gradient Optimization
One idea for solving some of the problems associated with steepest ascent is to ensure each gradient
step is conjugate (geometrically) to the previous. Honkela et al. [2010] applied conjugate gradients
to the standard mean field bound; we expect much faster convergence for the KLC bound due to
its differing curvature. Since VBEM uses a step length of 1 to optimize,3 we also used this step
length in conjugate gradients. In the natural conjugate gradient method, the search direction at the
$i$th iteration is given by $s_i = -\tilde{g}_i + \beta s_{i-1}$. Empirically the Fletcher-Reeves method for estimating $\beta$ worked well for us:
$$\beta_{FR} = \frac{\langle \tilde{g}_i, \tilde{g}_i \rangle_i}{\langle \tilde{g}_{i-1}, \tilde{g}_{i-1} \rangle_{i-1}}, \qquad (16)$$
where $\langle \cdot, \cdot \rangle_i$ denotes the inner product in Riemannian geometry, which is given by $\tilde{g}^\top G(\theta)\, \tilde{g}$. We note from Kuusela et al. [2009] that this can be simplified, since $\tilde{g}^\top G\, \tilde{g} = \tilde{g}^\top G G^{-1} g = \tilde{g}^\top g$, and other conjugate methods, defined in the supplementary material, can be applied similarly.
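For illustration, a minimal sketch of the resulting optimisation loop (our own Python; the grads callback and the ascent-direction sign convention are ours, not the paper's) using the Fletcher-Reeves estimate and the simplified inner product:

    import numpy as np

    def natural_conjugate_ascent(grads, theta0, n_iters=100, tol=1e-10):
        # grads(theta) must return (g, g_nat): the ordinary gradient of the
        # KLC bound and its natural gradient, which by Eq. (15) is simply the
        # gradient taken in the other (expectation) parameterisation.
        theta = np.asarray(theta0, dtype=float)
        g, g_nat = grads(theta)
        s = g_nat.copy()              # initial ascent direction
        inner = g_nat @ g             # <g~, g~>_i simplifies to g~^T g
        for _ in range(n_iters):
            theta = theta + s         # unit step length, as in footnote 3
            g, g_nat = grads(theta)
            inner_new = g_nat @ g
            if abs(inner_new) < tol:
                break
            beta = inner_new / inner  # Fletcher-Reeves estimate, Eq. (16)
            s = g_nat + beta * s      # conjugate ascent direction
            inner = inner_new
        return theta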
5 Experiments
For empirical investigation of the potential speed ups we selected a range of probabilistic models.
We provide derivations of the bound and fuller explanations of the models in the supplementary
material. In each experiment, the algorithm was considered to have converged when the change
in the bound or the Riemannian gradient fell below $10^{-6}$. Comparisons between optimisation
procedures always used the same initial conditions (or set of initial conditions) for each method.
First we recreate the mixture of Gaussians example described by Honkela et al. [2010].
5.1 Mixtures of Gaussians
For a mixture of Gaussians, using the d-separation rule, we select for X the cluster allocation (latent)
variables. These are parameterised through the softmax function for unconstrained optimisation.
Our model includes a fully Bayesian treatment of the cluster parameters and the mixing proportions,
whose approximate posterior distributions appear as (5). Full details of the algorithm derivation are
given in the supplementary material. A neat feature is that we can make use of the discussion above
to derive an expression for the natural gradient without a matrix inverse.
³ We empirically evaluated a line-search procedure, but found that in most cases the Wolfe-Powell conditions were met after a single step of unit length.
Table 1: Iterations to convergence for the mixture of Gaussians problem, with varying overlap (R). This table reports the average number of iterations taken to reach (within 10 nats of) the best known solution. For the more difficult scenarios (with more overlap in the clusters) the VBEM method failed to reach the optimum solution within 500 restarts.

CG method          R=1        R=2        R=3       R=4       R=5
Polack-Ribière     3,100.37   15,698.57  5,767.12  1,613.09  3,046.25
Hestenes-Stiefel   1,371.55   5,501.25   5,922.4   358.03    172.39
Fletcher-Reeves    416.18     1,161.35   5,091.0   792.10    494.24
VBEM               -          -          -         992.07    429.57
Table 2: Time and iterations taken to run LDA on the NIPS 2011 corpus, ± one standard deviation, for two conjugate methods and VBEM. The Fletcher-Reeves conjugate algorithm is almost ten times as fast as VBEM. The value of the bound at the optimum was largely the same: deviations are likely just due to the choice of initialisations, of which we used 12.

Method             Time (minutes)  Iterations      Bound
Hestenes-Stiefel   56.4 ± 18.5     644.3 ± 214.5   -1,998,780 ± 201
Fletcher-Reeves    38.5 ± 8.7      447.8 ± 100.5   -1,998,743 ± 194
VBEM               370 ± 105       4,459 ± 1,296   -1,998,732 ± 241
In Honkela et al. [2010] data are drawn from a mixture of five two-dimensional Gaussians with
equal weights, each with unit spherical covariance. The centers of the components are at (0, 0) and
(±R, ±R). R is varied from 1 (almost completely overlapping) to 5 (completely separate). The
model is initialised with eight components with an uninformative prior over the mixing proportions:
the optimisation procedure is left to select an appropriate number of components.
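The data-generating process is easy to reproduce; a sketch (our own Python) of the construction:

    import numpy as np

    def mixture_data(R, n=500, seed=0):
        # Five 2D unit-covariance Gaussians with equal weights, centred at
        # (0, 0) and (+/-R, +/-R); R controls the amount of overlap.
        rng = np.random.RandomState(seed)
        centres = np.array([[0, 0], [R, R], [R, -R], [-R, R], [-R, -R]], float)
        labels = rng.randint(len(centres), size=n)
        return centres[labels] + rng.randn(n, 2), labels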
Sung et al. [2008] reported that their collapsed method led to improved convergence over VBEM.
Since our objective is identical, though our optimisation procedure different, we devised a metric for
measuring the efficacy of our algorithms which also accounts for their propensity to fall into local
minima. Using many randomised restarts, we measured the average number of iterations taken to
reach the best-known optimum. If the algorithm converged at a lesser optimum, those iterations were
included in the denomiator, but we didn?t increment the numerator when computing the average. We
compared three different conjugate gradient approaches and standard VBEM (which is also steepest
ascent on the KLC bound) using 500 restarts.
Table 1 shows the number of iterations required (on average) to come within 10 nats of the best
known solution for three different conjugate-gradient methods and VBEM. VBEM sometimes failed
to find the optimum in any of the 500 restarts. Even relaxing the stringency of our selection to 100
nats, the VBEM method was always at least twice as slow as the best conjugate method.
5.2 Topic Models
Latent Dirichlet allocation (LDA) [Blei et al., 2003] is a popular approach for extracting topics
from documents. To demonstrate the KLC bound we applied it to 200 papers from the 2011 NIPS
conference. The PDFs were preprocessed with pdftotext, removing non-alphabetical characters
and coarsely filtering words by popularity to form a vocabulary of size 2000.⁴ We selected the latent
topic-assignment variables for parameterisation, collapsing the topics and the document proportions.
Conjugate gradient optimization was compared to the standard VBEM approach.
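A rough sketch of the preprocessing (our own Python approximating the description above; the exact filtering used in the experiment may differ):

    import re
    from collections import Counter

    def build_corpus(docs, vocab_size=2000):
        # Strip non-alphabetic characters, then keep the most common words.
        tokenised = [re.sub(r'[^a-z]+', ' ', d.lower()).split() for d in docs]
        counts = Counter(w for doc in tokenised for w in doc)
        vocab = {w: i for i, (w, _) in enumerate(counts.most_common(vocab_size))}
        return [[vocab[w] for w in doc if w in vocab] for doc in tokenised], vocab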
We used twelve random initializations, starting each algorithm from each initial condition. Topic and
document distributions were treated with fixed, uninformative priors. On average, the Hestenes-Stiefel algorithm was almost ten times as fast as standard VB, as shown in Table 2, whilst the final
bound varied little between approaches.
⁴ Some extracted topics are presented in the supplementary material.
5.3 RNA-seq alignment
An emerging problem in computational biology is inference of transcript structure and expression
levels using next-generation sequencing technology (RNA-Seq). Several models have been proposed. The BitSeq method [Glaus et al., 2012] is based on a probabilistic model and uses Gibbs
sampling for approximate inference. The sampler can suffer from particularly slow convergence
due to the large size of the problem, which has six million latent variables for the data considered
here. We implemented a variational version of their model and optimised it using VBEM and our
collapsed Riemannian method. We applied the model to data described in Xu et al. [2010], a study
of human microRNA. The model was initialised using four random initial conditions, and optimised
using standard VBEM and the conjugate gradient versions of the algorithm. The Polack-Ribière
conjugate method performed very poorly for this problem, often giving negative conjugation: we
omit it here. The solutions found for the other algorithms were all fairly close, with bounds coming within 60 nats. The VBEM method was dramatically outperformed by the Fletcher-Reeves and
Hestenes-Stiefel methods: it took 4,600 ± 20 iterations to converge, whilst the conjugate methods took only 268 ± 4 and 265 ± 1 iterations to converge. At about 8 seconds per iteration, our collapsed
Riemannian method requires around forty minutes, whilst VBEM takes almost eleven hours. All the
variational approaches represent an improvement over a Gibbs sampler, which takes approximately
one week to run for this data [Glaus et al., 2012].
6 Discussion
Under very general conditions (conjugate exponential family) we have shown the equivalence of
collapsed variational bounds and marginalized variational bounds using the KL corrected perspective of King and Lawrence [2006]. We have provided a succinct derivation of these bounds, unifying
several strands of work and laying the foundations for much wider application of this approach.
When the collapsed variables are updated in the standard MF bound the KLC bound is identical to
the MF bound in value and gradient. Sato [2001] has shown that coordinate ascent of the MF bound
(as proscribed by VBEM updates) is equivalent to steepest ascent of the MF bound using natural
gradients. This implies that standard variational inference is also performing steepest ascent on the
KLC bound. This equivalence between natural gradients and the VBEM update equations means
our method is quickly implementable for any model where the mean field update equations have
been computed. It is only necessary to determine which variables to collapse using a d-separation
test. Importantly this implies our approach can readily be incorporated in automated inference engines such as that provided by infer.net [Minka et al., 2010]. We?d like to emphasise the ease with
which the method can be applied: we have provided derivations of equivalencies of the bounds and
gradients which should enable collapsed conjugate optimisation of any existing mean field algorithm, with minimal changes to the software. Indeed our own implementations (see supplementary
material) use just a few lines of code to switch between the VBEM and conjugate methods.
The improved performance arises from the curvature of the KLC bound. We have shown that it is
always less negative than that of the original variational bound allowing much larger steps in the
variational parameters as King and Lawrence [2006] suggested. This also provides a gateway to
second-order optimisation, which could prove even faster.
We provided empirical evidence of the performance increases that are possible using our method in
three models. In a thorough exploration of the convergence properties of a mixture of Gaussians
model, we concluded that a conjugate Riemannian algorithm can find solutions that are not found
with standard VBEM. In a large LDA model, we found that performance can be improved many
times over that of the VBEM method. In the BitSeq model for differential expression of genes
transcripts we showed that very large improvements in performance are possible for models with
huge numbers of latent variables.
Acknowledgements
The authors would like to thank Michalis Titsias for helpful commentary on a previous draft and
Peter Glaus for help with a C++ implementation of the RNAseq alignment algorithm. This work
was funded by EU FP7-KBBE Project Ref 289434 and BBSRC grant number BB/1004769/1.
8
References
S. Amari and H. Nagaoka. Methods of Information Geometry. AMS, 2007.
A. Asuncion, M. Welling, P. Smyth, and Y. Teh. On smoothing and inference for topic models. arXiv preprint arXiv:1205.2662, 2012.
C. M. Bishop. Pattern Recognition and Machine Learning. Springer New York, 2006.
D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
Z. Ghahramani and M. Beal. Propagation algorithms for variational Bayesian learning. Advances in Neural Information Processing Systems, pages 507–513, 2001.
P. Glaus, A. Honkela, and M. Rattray. Identifying differentially expressed transcripts from RNA-seq data with biological variation. Bioinformatics, 2012. doi: 10.1093/bioinformatics/bts260. Advance Access.
M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. arXiv preprint arXiv:1206.7051, 2012.
A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. The Journal of Machine Learning Research, 9999:3235–3268, 2010.
N. King and N. D. Lawrence. Fast variational inference for Gaussian process models through KL-correction. Machine Learning: ECML 2006, pages 270–281, 2006.
K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture models. In Proceedings of the International Joint Conference on Artificial Intelligence, volume 20, page 19, 2007.
M. Kuusela, T. Raiko, A. Honkela, and J. Karhunen. A gradient-based algorithm competitive with variational Bayesian EM for mixture of Gaussians. In Neural Networks, 2009. IJCNN 2009. International Joint Conference on, pages 1688–1695. IEEE, 2009.
M. Lázaro-Gredilla and M. K. Titsias. Variational heteroscedastic Gaussian process regression. In Proceedings of the International Conference on Machine Learning (ICML), 2011.
M. Lázaro-Gredilla, S. Van Vaerenbergh, and N. Lawrence. Overlapping mixtures of Gaussian processes for the data association problem. Pattern Recognition, 2011.
T. P. Minka, J. M. Winn, J. P. Guiver, and D. A. Knowles. Infer.NET 2.4. Microsoft Research Cambridge, 2010.
M. A. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
J. Sung, Z. Ghahramani, and S. Bang. Latent-space variational Bayes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(12):2236–2242, 2008.
Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. Advances in Neural Information Processing Systems, 19:1353, 2007.
G. Xu et al. Transcriptome and targetome analysis in MIR155 expressing cells using RNA-seq. RNA, pages 1610–1622, June 2010. ISSN 1355-8382. doi: 10.1261/rna.2194910. URL http://rnajournal.cshlp.org/cgi/doi/10.1261/rna.2194910.
Efficient Bayes-Adaptive Reinforcement Learning
using Sample-Based Search
Arthur Guez
David Silver
Peter Dayan
[email protected]
[email protected]
[email protected]
Abstract
Bayesian model-based reinforcement learning is a formally elegant approach to
learning optimal behaviour under model uncertainty, trading off exploration and
exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal
policies is notoriously taxing, since the search space becomes enormous. In this
paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems, because it avoids expensive applications
of Bayes rule within the search tree by lazily sampling models from the current
beliefs. We illustrate the advantages of our approach by showing it working in
an infinite state space domain which is qualitatively out of reach of almost all
previous work in Bayesian exploration.
1 Introduction
A key objective in the theory of Markov Decision Processes (MDPs) is to maximize the expected
sum of discounted rewards when the dynamics of the MDP are (perhaps partially) unknown. The
discount factor pressures the agent to favor short-term rewards, but potentially costly exploration
may identify better rewards in the long-term. This conflict leads to the well-known exploration-exploitation trade-off. One way to solve this dilemma [3, 10] is to augment the regular state of the
agent with the information it has acquired about the dynamics. One formulation of this idea is the
augmented Bayes-Adaptive MDP (BAMDP) [18, 9], in which the extra information is the posterior
belief distribution over the dynamics, given the data so far observed. The agent starts in the belief
state corresponding to its prior and, by executing the greedy policy in the BAMDP whilst updating
its posterior, acts optimally (with respect to its beliefs) in the original MDP. In this framework, rich
prior knowledge about statistics of the environment can be naturally incorporated into the planning
process, potentially leading to more efficient exploration and exploitation of the uncertain world.
Unfortunately, exact Bayesian reinforcement learning is computationally intractable. Various algorithms have been devised to approximate optimal learning, but often at rather large cost. Here, we
present a tractable approach that exploits and extends recent advances in Monte-Carlo tree search
(MCTS) [16, 20], but avoiding problems associated with applying MCTS directly to the BAMDP.
At each iteration in our algorithm, a single MDP is sampled from the agent?s current beliefs. This
MDP is used to simulate a single episode whose outcome is used to update the value of each node of
the search tree traversed during the simulation. By integrating over many simulations, and therefore
many sample MDPs, the optimal value of each future sequence is obtained with respect to the agent's
beliefs. We prove that this process converges to the Bayes-optimal policy, given infinite samples. To
increase computational efficiency, we introduce a further innovation: a lazy sampling scheme that
considerably reduces the cost of sampling.
We applied our algorithm to a representative sample of benchmark problems and competitive algorithms from the literature. It consistently and significantly outperformed existing Bayesian RL
methods, and also recent non-Bayesian approaches, thus achieving state-of-the-art performance.
1
Our algorithm is more efficient than previous sparse sampling methods for Bayes-adaptive planning
[25, 6, 2], partly because it does not update the posterior belief state during the course of each
simulation. It thus avoids repeated applications of Bayes rule, which is expensive for all but the
simplest priors over the MDP. Consequently, our algorithm is particularly well suited to support
planning in domains with richly structured prior knowledge, a critical requirement for applications
of Bayesian reinforcement learning to large problems. We illustrate this benefit by showing that our
algorithm can tackle a domain with an infinite number of states and a structured prior over the
dynamics, a challenging, if not intractable, task for existing approaches.
2 Bayesian RL
We describe the generic Bayesian formulation of optimal decision-making in an unknown MDP,
following [18] and [9]. An MDP is described as a 5-tuple $M = \langle S, A, \mathcal{P}, \mathcal{R}, \gamma\rangle$, where $S$ is the set of states, $A$ is the set of actions, $\mathcal{P} : S \times A \times S \to \mathbb{R}$ is the state transition probability kernel, $\mathcal{R} : S \times A \to \mathbb{R}$ is a bounded reward function, and $\gamma$ is the discount factor [23]. When all the components of the MDP tuple are known, standard MDP planning algorithms can be used to estimate the optimal value function and policy off-line. In general, the dynamics are unknown, and we assume that $\mathcal{P}$ is a latent variable distributed according to a distribution $P(\mathcal{P})$. After observing a history of actions and states $h_t = s_1 a_1 s_2 a_2 \ldots a_{t-1} s_t$ from the MDP, the posterior belief on $\mathcal{P}$ is updated using Bayes' rule $P(\mathcal{P}|h_t) \propto P(h_t|\mathcal{P})P(\mathcal{P})$. The uncertainty about the dynamics of the model can be transformed into uncertainty about the current state inside an augmented state space $S^+ = S \times H$,
where S is the state space in the original problem and H is the set of possible histories. The dynamics
associated with this augmented state space are described by
$$\mathcal{P}^+(\langle s,h\rangle, a, \langle s',h'\rangle) = \mathbb{1}[h' = has'] \int_{\mathcal{P}} \mathcal{P}(s,a,s')\, P(\mathcal{P}|h)\, \mathrm{d}\mathcal{P}, \qquad \mathcal{R}^+(\langle s,h\rangle, a) = \mathcal{R}(s,a). \qquad (1)$$
Together, the 5-tuple $M^+ = \langle S^+, A, \mathcal{P}^+, \mathcal{R}^+, \gamma\rangle$ forms the Bayes-Adaptive MDP (BAMDP) for
the MDP problem M . Since the dynamics of the BAMDP are known, it can in principle be solved
to obtain the optimal value function associated with each action:
"?
#
X 0
?
t ?t
Q (hst , ht i, a) = max E?
?
rt0 |at = a
(2)
?
t0 =t
from which the optimal action for each state can be readily derived.¹ Optimal actions in the BAMDP
are executed greedily in the real MDP M and constitute the best course of action for a Bayesian
agent with respect to its prior belief over P. It is obvious that the expected performance of the
BAMDP policy in the MDP M is bounded above by that of the optimal policy obtained with a fully-observable model, with equality occurring, for example, in the degenerate case in which the prior
only has support on the true model.
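For the common special case of an independent Dirichlet-Multinomial prior over each transition row, the belief component of the BAMDP state is just a table of counts, and the Bayes update is an increment; a minimal sketch (our own Python, with illustrative sizes):

    import numpy as np

    nS, nA = 5, 2                              # illustrative sizes
    counts = np.full((nS, nA, nS), 1.0 / nS)   # prior Dirichlet pseudo-counts

    def update_belief(counts, s, a, s_next):
        # Posterior after observing (s, a, s'): the Dirichlet count is incremented.
        counts[s, a, s_next] += 1.0
        return counts

    def posterior_mean_model(counts):
        # Posterior mean of P(s, a, .) under the Dirichlet belief.
        return counts / counts.sum(axis=-1, keepdims=True)

For richer, structured priors no such conjugate shortcut exists; this is exactly the cost that the root sampling of Section 3 avoids paying inside the search tree.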
3 The BAMCP algorithm
3.1 Algorithm Description
The goal of a BAMDP planning method is to find, for each decision point $\langle s,h\rangle$ encountered, the action $a$ that maximizes Equation 2. Our algorithm, Bayes-Adaptive Monte-Carlo Planning (BAMCP),
does this by performing a forward-search in the space of possible future histories of the BAMDP
using a tailored Monte-Carlo tree search.
We employ the UCT algorithm [16] to allocate search effort to promising branches of the state-action
tree, and use sample-based rollouts to provide value estimates at each node. For clarity, let us denote
by Bayes-Adaptive UCT (BA-UCT) the algorithm that applies vanilla UCT to the BAMDP (i.e.,
the particular MDP with dynamics described in Equation 1). Sample-based search in the BAMDP
using BA-UCT requires the generation of samples from $\mathcal{P}^+$ at every single node. This operation
requires integration over all possible transition models, or at least a sample of a transition model $\mathcal{P}$, an expensive procedure for all but the simplest generative models $P(\mathcal{P})$. We avoid this cost by
only sampling a single transition model P i from the posterior at the root of the search tree at the
¹ The redundancy in the state-history tuple notation ($s_t$ is the suffix of $h_t$) is only present to ensure clarity of exposition.
start of each simulation $i$, and using $\mathcal{P}^i$ to generate all the necessary samples during this simulation. Sample-based tree search then acts as a filter, ensuring that the correct distribution of state successors is obtained at each of the tree nodes, as if it was sampled from $\mathcal{P}^+$. This root sampling method was
originally introduced in the POMCP algorithm [20], developed to solve Partially Observable MDPs.
3.2 BA-UCT with Root Sampling
The root node of the search tree at a decision point represents the current state of the BAMDP.
The tree is composed of state nodes representing belief states $\langle s,h\rangle$ and action nodes representing the effect of particular actions from their parent state node. The visit counts, $N(\langle s,h\rangle)$ for state nodes and $N(\langle s,h\rangle, a)$ for action nodes, are initialized to 0 and updated throughout search. A value $Q(\langle s,h\rangle, a)$, initialized to 0, is also maintained for each action node. Each simulation traverses the tree without backtracking by following the UCT policy at state nodes, defined by $\arg\max_a Q(\langle s,h\rangle, a) + c\sqrt{\log(N(\langle s,h\rangle))/N(\langle s,h\rangle, a)}$, where $c$ is an exploration constant that needs
to be set appropriately. Given an action, the transition distribution $\mathcal{P}^i$ corresponding to the current simulation $i$ is used to sample the next state. That is, at action node $(\langle s,h\rangle, a)$, $s'$ is sampled from $\mathcal{P}^i(s,a,\cdot)$, and the new state node is set to $\langle s', has'\rangle$. When a simulation reaches a leaf, the tree is expanded by attaching a new state node with its connected action nodes, and a rollout policy $\pi_{ro}$ is used to control the MDP defined by the current $\mathcal{P}^i$ to some fixed depth (determined using the discount factor). The rollout provides an estimate of the value $Q(\langle s,h\rangle, a)$ from the leaf action node. This estimate is then used to update the value of all action nodes traversed during the simulation: if $R$ is the sampled discounted return obtained from a traversed action node $(\langle s,h\rangle, a)$ in a given simulation, then we update the value of the action node to $Q(\langle s,h\rangle, a) + \big(R - Q(\langle s,h\rangle, a)\big)/N(\langle s,h\rangle, a)$ (i.e.,
the mean of the sampled returns obtained from that action node over the simulations). A detailed
description of the BAMCP algorithm is provided in Algorithm 1. A diagram example of BAMCP
simulations is presented in Figure S3.
The tree policy treats the forward search as a meta-exploration problem, preferring to exploit regions of the tree that currently appear better than others while continuing to explore unknown or
less known parts of the tree. This leads to good empirical results even for small number of simulations, because effort is expended where search seems fruitful. Nevertheless all parts of the tree
are eventually visited infinitely often, and therefore the algorithm will eventually converge on the
Bayes-optimal policy (see Section 3.5).
Finally, note that the history of transitions h is generally not the most compact sufficient statistic
of the belief in fully observable MDPs. Indeed, it can be replaced with unordered transition counts, considerably reducing the number of states of the BAMDP and, potentially, the complexity of
planning. Given an addressing scheme suitable to the resulting expanding lattice (rather than to a
tree), BAMCP can search in this reduced space. We found this version of BAMCP to offer only a
marginal improvement. This is a common finding for UCT, stemming from its tendency to concentrate search effort on one of several equivalent paths (up to transposition), implying a limited effect
on performance of reducing the number of those paths.
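To make the procedure concrete, the following is a compact runnable sketch (our own Python, not the authors' implementation) of BA-UCT with root sampling for the independent Dirichlet prior case, following Algorithm 1 below; tree nodes are keyed by history tuples and the rollout policy is uniform random:

    import numpy as np

    class BAMCPSketch:
        def __init__(self, R, counts, gamma=0.95, c=3.0, eps=1e-3, rmax=1.0):
            self.R, self.counts = R, counts      # R[s, a]; positive Dirichlet counts[s, a, s']
            self.nS, self.nA = R.shape
            self.gamma, self.c, self.eps, self.rmax = gamma, c, eps, rmax
            self.N, self.Na, self.Q = {}, {}, {} # node visit counts and action values

        def search(self, s, n_sims=1000):
            for _ in range(n_sims):
                # Root sampling: one transition model per simulation, drawn by
                # normalising independent Gamma variates (row-wise Dirichlets).
                g = np.random.gamma(self.counts)
                P = g / g.sum(axis=-1, keepdims=True)
                self._simulate(s, (), P, 0)
            return max(range(self.nA), key=lambda a: self.Q.get(((), a), 0.0))

        def _simulate(self, s, h, P, d):
            if self.gamma ** d * self.rmax < self.eps:
                return 0.0
            if h not in self.N:                  # expand a new leaf node
                self.N[h] = 1
                for b in range(self.nA):
                    self.Na[(h, b)], self.Q[(h, b)] = 0, 0.0
                a = np.random.randint(self.nA)   # rollout policy: uniform random
                s2 = np.random.choice(self.nS, p=P[s, a])
                ret = self.R[s, a] + self.gamma * self._rollout(s2, P, d + 1)
                self.Na[(h, a)], self.Q[(h, a)] = 1, ret
                return ret
            def ucb(b):                          # UCT action selection
                if self.Na[(h, b)] == 0:
                    return np.inf
                return self.Q[(h, b)] + self.c * np.sqrt(np.log(self.N[h]) / self.Na[(h, b)])
            a = max(range(self.nA), key=ucb)
            s2 = np.random.choice(self.nS, p=P[s, a])
            ret = self.R[s, a] + self.gamma * self._simulate(s2, h + ((s, a, s2),), P, d + 1)
            self.N[h] += 1
            self.Na[(h, a)] += 1
            self.Q[(h, a)] += (ret - self.Q[(h, a)]) / self.Na[(h, a)]  # running mean
            return ret

        def _rollout(self, s, P, d):
            if self.gamma ** d * self.rmax < self.eps:
                return 0.0
            a = np.random.randint(self.nA)
            s2 = np.random.choice(self.nS, p=P[s, a])
            return self.R[s, a] + self.gamma * self._rollout(s2, P, d + 1)

The sampled model P is discarded after each simulation; only the statistics attached to the tree nodes accumulate across simulations, which is how the node values come to estimate expectations under the posterior.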
3.3 Lazy Sampling
In previous work on sample-based tree search, indeed including POMCP [20], a complete sample
state is drawn from the posterior at the root of the search tree. However, this can be computationally
very costly. Instead, we sample P lazily, creating only the particular transition probabilities that are
required as the simulation traverses the tree, and also during the rollout.
Consider $\mathcal{P}(s,a,\cdot)$ to be parametrized by a latent variable $\phi_{s,a}$ for each state and action pair. These may depend on each other, as well as on an additional set of latent variables $\theta$. The posterior over $\mathcal{P}$ can be written as $P(\Phi|h) = \int_\theta P(\Phi|\theta,h)P(\theta|h)$, where $\Phi = \{\phi_{s,a}\,|\,s \in S, a \in A\}$. Define $\Phi_t = \{\phi_{s_1,a_1}, \cdots, \phi_{s_t,a_t}\}$ as the (random) set of $\phi$ parameters required during the course of a BAMCP simulation that starts at time 1 and ends at time $t$. Using the chain rule, we can rewrite
$$P(\Phi|\theta,h) = P(\phi_{s_1,a_1}|\theta,h)\, P(\phi_{s_2,a_2}|\Phi_1,\theta,h) \cdots P(\phi_{s_T,a_T}|\Phi_{T-1},\theta,h)\, P(\Phi \setminus \Phi_T|\Phi_T,\theta,h),$$
where $T$ is the length of the simulation and $\Phi \setminus \Phi_T$ denotes the (random) set of parameters that
are not required for a simulation. For each simulation $i$, we sample $P(\theta|h_t)$ at the root and then lazily sample the $\phi_{s_t,a_t}$ parameters as required, conditioned on $\theta$ and all $\Phi_{t-1}$ parameters sampled
for the current simulation. This process is stopped at the end of the simulation, potentially before
3
Algorithm 1: BAMCP

procedure Search(⟨s, h⟩)
    repeat
        P ∼ P(P|h)
        Simulate(⟨s, h⟩, P, 0)
    until Timeout()
    return argmax_a Q(⟨s, h⟩, a)
end procedure

procedure Simulate(⟨s, h⟩, P, d)
    if γ^d R_max < ε then return 0
    if N(⟨s, h⟩) = 0 then
        for all a ∈ A do
            N(⟨s, h⟩, a) ← 0, Q(⟨s, h⟩, a) ← 0
        end
        a ∼ π_ro(⟨s, h⟩, ·)
        s′ ∼ P(s, a, ·)
        r ← R(s, a)
        R ← r + γ Rollout(⟨s′, has′⟩, P, d)
        N(⟨s, h⟩) ← 1, N(⟨s, h⟩, a) ← 1
        Q(⟨s, h⟩, a) ← R
        return R
    end
    a ← argmax_b Q(⟨s, h⟩, b) + c √( log(N(⟨s, h⟩)) / N(⟨s, h⟩, b) )
    s′ ∼ P(s, a, ·)
    r ← R(s, a)
    R ← r + γ Simulate(⟨s′, has′⟩, P, d + 1)
    N(⟨s, h⟩) ← N(⟨s, h⟩) + 1
    N(⟨s, h⟩, a) ← N(⟨s, h⟩, a) + 1
    Q(⟨s, h⟩, a) ← Q(⟨s, h⟩, a) + (R − Q(⟨s, h⟩, a)) / N(⟨s, h⟩, a)
    return R
end procedure

procedure Rollout(⟨s, h⟩, P, d)
    if γ^d R_max < ε then return 0
    a ∼ π_ro(⟨s, h⟩, ·)
    s′ ∼ P(s, a, ·)
    r ← R(s, a)
    return r + γ Rollout(⟨s′, has′⟩, P, d + 1)
end procedure
all $\phi$ parameters have been sampled. For example, if the transition parameters for different states
and actions are independent, we can completely forgo sampling a complete P, and instead draw any
necessary parameters individually for each state-action pair. This leads to substantial performance
improvement, especially in large MDPs where a single simulation only requires a small subset of
parameters (see for example the domain in Section 5.2).
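As a minimal illustration (our own Python, for the special case of independent Dirichlet rows, so that θ is empty and each φ_{s,a} is a single transition row), lazy sampling replaces the up-front draw of a complete model with on-demand draws:

    import numpy as np

    class LazyModel:
        # One instance per simulation i: a row phi_{s,a} is drawn only when the
        # simulation first visits (s, a), then cached for the rest of that simulation.
        def __init__(self, counts, rng=np.random):
            self.counts, self.rng, self.rows = counts, rng, {}

        def step(self, s, a):
            if (s, a) not in self.rows:
                self.rows[(s, a)] = self.rng.dirichlet(self.counts[s, a])
            return self.rng.choice(len(self.counts[s, a]), p=self.rows[(s, a)])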
3.4 Rollout Policy Learning
The choice of rollout policy $\pi_{ro}$ is important if simulations are few, especially if the domain does not display substantial locality or if rewards require a carefully selected sequence of actions to be obtained. Otherwise, a simple uniform random policy can be chosen to provide noisy estimates. In this work, we learn $Q_{ro}$, the optimal Q-value in the real MDP, in a model-free manner (e.g., using Q-learning) from samples $(s_t, a_t, r_t, s_{t+1})$ obtained off-policy as a result of the interaction of the Bayesian agent with the environment. Acting greedily according to $Q_{ro}$ translates to pure exploitation of gathered knowledge. A rollout policy in BAMCP following $Q_{ro}$ could therefore over-exploit. Instead, similar to [13], we select an $\epsilon$-greedy policy with respect to $Q_{ro}$ as our rollout policy $\pi_{ro}$. This biases rollouts towards observed regions of high rewards. This method provides valuable direction for the rollout policy at negligible computational cost. More complex rollout policies can be considered, for example rollout policies that depend on the sampled model $\mathcal{P}^i$. However, these usually incur computational overhead.
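A sketch of this learned rollout policy (our own Python): off-policy Q-learning on the real transitions experienced by the agent, with ε-greedy action selection used as π_ro inside simulations:

    import numpy as np

    class LearnedRolloutPolicy:
        def __init__(self, nS, nA, alpha=0.1, gamma=0.95, eps=0.5):
            self.Q = np.zeros((nS, nA))
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def observe(self, s, a, r, s_next):
            # Model-free Q-learning update on a real (s, a, r, s') sample.
            target = r + self.gamma * self.Q[s_next].max()
            self.Q[s, a] += self.alpha * (target - self.Q[s, a])

        def act(self, s, rng=np.random):
            # Epsilon-greedy: mostly exploit the learned values, sometimes explore.
            if rng.rand() < self.eps:
                return rng.randint(self.Q.shape[1])
            return int(self.Q[s].argmax())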
3.5 Theoretical properties
Define $V(\langle s,h\rangle) = \max_{a \in A} Q(\langle s,h\rangle, a)$ for all $\langle s,h\rangle \in S \times H$.
Theorem 1. For all $\epsilon > 0$ (the numerical precision, see Algorithm 1) and a suitably chosen $c$ (e.g. $c > \frac{R_{max}}{1-\gamma}$), from state $\langle s_t, h_t\rangle$, BAMCP constructs a value function at the root node that converges in probability to an $\epsilon'$-optimal value function, $V(\langle s_t, h_t\rangle) \xrightarrow{p} V^*_{\epsilon'}(\langle s_t, h_t\rangle)$, where $\epsilon' = \frac{\epsilon}{1-\gamma}$. Moreover, for large enough $N(\langle s_t, h_t\rangle)$, the bias of $V(\langle s_t, h_t\rangle)$ decreases as $O(\log(N(\langle s_t, h_t\rangle))/N(\langle s_t, h_t\rangle))$. (Proof available in the supplementary material.)
By definition, Theorem 1 implies that BAMCP converges to the Bayes-optimal solution asymptotically. We confirmed this result empirically using a variety of Bandit problems, for which the
Bayes-optimal solution can be computed efficiently using Gittins indices (see supplementary material).
4 Related Work
In Section 5, we compare BAMCP to a set of existing Bayesian RL algorithms. Given limited
space, we do not provide a comprehensive list of planning algorithms for MDP exploration, but
rather concentrate on related sample-based algorithms for Bayesian RL.
Bayesian DP [22] maintains a posterior distribution over transition models. At each step, a single
model is sampled, and the action that is optimal in that model is executed. The Best Of Sampled Set
(BOSS) algorithm generalizes this idea [1]. BOSS samples a number of models from the posterior
and combines them optimistically. This drives sufficient exploration to guarantee finite-sample performance guarantees. BOSS is quite sensitive to its parameter that governs the sampling criterion.
Unfortunately, this is difficult to select. Castro and Precup proposed an SBOSS variant, which provides a more effective adaptive sampling criterion [5]. BOSS algorithms are generally quite robust,
but suffer from over-exploration.
Sparse sampling [15] is a sample-based tree search algorithm. The key idea is to sample successor
nodes from each state, and apply a Bellman backup to update the value of the parent node from the
values of the child nodes. Wang et al. applied sparse sampling to search over belief-state MDPs[25].
The tree is expanded non-uniformly according to the sampled trajectories. At each decision node, a
promising action is selected using Thompson sampling ? i.e., sampling an MDP from that beliefstate, solving the MDP and taking the optimal action. At each chance node, a successor belief-state
is sampled from the transition dynamics of the belief-state MDP.
Asmuth and Littman further extended this idea in their BFS3 algorithm [2], an adaptation of Forward
Search Sparse Sampling [24] to belief-MDPs. Although they described their algorithm as Monte-Carlo tree search, it in fact uses a Bellman backup rather than Monte-Carlo evaluation. Each Bellman
backup updates both lower and upper bounds on the value of each node. Like Wang et al., the tree
is expanded non-uniformly according to the sampled trajectories, albeit using a different method for
action selection. At each decision node, a promising action is selected by maximising the upper
bound on value. At each chance node, observations are selected by maximising the uncertainty
(upper minus lower bound).
Bayesian Exploration Bonus (BEB) solves the posterior mean MDP, but with an additional reward
bonus that depends on visitation counts [17]. Similarly, Sorg et al. propose an algorithm with a
different form of exploration bonus [21]. These algorithms provide performance guarantees after
a polynomial number of steps in the environment. However, behavior in the early steps of exploration is very sensitive to the precise exploration bonuses; and it turns out to be hard to translate
sophisticated prior knowledge into the form of a bonus.
Table 1: Experiment results summary. For each algorithm, we report the mean sum of rewards and confidence interval for the best performing parameter within a reasonable planning time limit (0.25 s/step for Double-loop, 1 s/step for Grid5 and Grid10, 1.5 s/step for the Maze). For BAMCP, this simply corresponds to the number of simulations that achieve a planning time just under the imposed limit. * Results reported from [22] without timing information.

Method              Double-loop   Grid5      Grid10    Dearden's Maze
BAMCP               387.6 ± 1.5   72.9 ± 3   32.7 ± 3  965.2 ± 73
BFS3 [2]            382.2 ± 1.5   66 ± 5     10.4 ± 2  240.9 ± 46
SBOSS [5]           371.5 ± 3     59.3 ± 4   21.8 ± 2  671.3 ± 126
BEB [17]            386 ± 0       67.5 ± 3   10 ± 1    184.6 ± 35
Bayesian DP* [22]   377 ± 1       -          -         817.6 ± 29
Bayes VPI+MIX* [8]  326 ± 31      -          -         269.4 ± 1
IEQL+* [19]         264 ± 1       -          -         195.2 ± 20
QL Boltzmann*       186 ± 1       -          -         -
5 Experiments
We first present empirical results of BAMCP on a set of standard problems with comparisons to
other popular algorithms. Then we showcase BAMCP's advantages in a large-scale task: an infinite
2D grid with complex correlations between reward locations.
5.1 Standard Domains
Algorithms
The following algorithms were run. BAMCP: the algorithm presented in Section 3, implemented with lazy sampling. The algorithm was run for different numbers of simulations (10 to 10000) to span different planning times. In all experiments, we set $\pi_{ro}$ to be an $\epsilon$-greedy policy with $\epsilon = 0.5$. The UCT exploration constant was left unchanged for all experiments ($c = 3$); we experimented with other values of $c \in \{0.5, 1, 5\}$ with similar results. SBOSS [5]: for each domain, we varied the number of samples $K \in \{2, 4, 8, 16, 32\}$ and the resampling threshold parameter $\delta \in \{3, 5, 7\}$. BEB [17]: for each domain, we varied the bonus parameter $\beta \in \{0.5, 1, 1.5, 2, 2.5, 3, 5, 10, 15, 20\}$. BFS3 [2]: for each domain, we varied the branching factor $C \in \{2, 5, 10, 15\}$ and the number of simulations (10 to 2000). The depth of search was set to 15 in all domains except for the larger grid and maze domain, where it was set to 50. We also tuned the $V_{max}$ parameter for each domain ($V_{min}$ was always set to 0). In addition, we report results from [22] for several other prior algorithms.
Domains
For all domains, we fix $\gamma = 0.95$. The Double-loop domain is a 9-state deterministic MDP with 2 actions [8]; 1000 steps are executed in this domain. Grid5 is a $5 \times 5$ grid with no reward anywhere except for a reward state opposite to the reset state. Actions in the cardinal directions are executed with a small probability of failure for 1000 steps. Grid10 is a $10 \times 10$ grid designed like Grid5. We collect 2000 steps in this domain. Dearden's Maze is a 264-state maze with 3 flags to collect [8]. A special reward state gives the number of flags collected since the last visit as reward; 20000 steps are executed in this domain.²
To quantify the performance of each algorithm, we measured the total undiscounted reward over
many steps. We chose this measure of performance to enable fair comparisons to be drawn with
prior work. In fact, we are optimising a different criterion (the discounted reward from the start state), and so we might expect this evaluation to be unfavourable to our algorithm.
One major advantage of Bayesian RL is that one can specify priors about the dynamics. For the
Double-loop domain, the Bayesian RL algorithms were run with a simple Dirichlet-Multinomial
model with symmetric Dirichlet parameter $\alpha = \frac{1}{|S|}$. For the grids and the maze domain, the algorithms were run with a sparse Dirichlet-Multinomial model, as described in [11]. For both of these
models, efficient collapsed sampling schemes are available; they are employed for the BA-UCT and
BFS3 algorithms in our experiments to compress the posterior parameter sampling and the transition
sampling into a single transition sampling step. This considerably reduces the cost of belief updates
inside the search tree when using these simple probabilistic models. In general, efficient collapsed
sampling schemes are not available (see for example the model in Section 5.2).
Results
A summary of the results is presented in Table 1. Figure 1 reports the planning time/performance
trade-off for the different algorithms on the Grid5 and Maze domain.
On all the domains tested, BAMCP performed best. Other algorithms came close on some tasks,
but only when their parameters were tuned to that specific domain. This is particularly evident for
BEB, which required a different value of exploration bonus to achieve maximum performance in
each domain. BAMCP's performance is stable with respect to the choice of its exploration constant
c and it did not require tuning to obtain the results.
BAMCP's performance scales well as a function of planning time, as is evident in Figure 1. In contrast, SBOSS follows the opposite trend. If more samples are employed to build the merged model,
SBOSS actually becomes too optimistic and over-explores, degrading its performance. BEB cannot
take advantage of prolonged planning time at all. BFS3 generally scales up with more planning
time with an appropriate choice of parameters, but it is not obvious how to trade-off the branching
factor, depth, and number of simulations in each domain. BAMCP greatly benefited from our lazy
² The result reported for Dearden's maze with the Bayesian DP algorithm in [22] is for a different version of the task in which the maze layout is given to the agent.
[Figure 1 appears here: four panels plotting performance against average time per step (s). Panels (a) and (b) show the sum of rewards after 1000 steps (Grid5) and the undiscounted sum of rewards after 20000 steps (Maze) for BAMCP, BFS3, SBOSS, and BEB; panels (c) and (d) compare BA-UCT variants (BA-UCT, BA-UCT+RL, and combinations of root sampling (RS), lazy sampling (LS), and rollout learning (RL), including BAMCP = BA-UCT+RS+LS+RL) on the Maze domain. See the caption below.]
Figure 1: Performance of each algorithm on the Grid5 (a.) and Maze domain (b-d) as a function of planning
time. Each point corresponds to a single run of an algorithm with an associated setting of the parameters. Increasing brightness inside the points codes for an increasing value of a parameter (BAMCP and BFS3: number
of simulations, BEB: bonus parameter $\beta$, SBOSS: number of samples $K$). A second dimension of variation is coded as the size of the points (BFS3: branching factor $C$, SBOSS: resampling parameter $\delta$). The range of
parameters is specified in Section 5.1. a. Performance of each algorithm on the Grid5 domain. b. Performance
of each algorithm on the Maze domain. c. On the Maze domain, performance of vanilla BA-UCT with and
without rollout policy learning (RL). d. On the Maze domain, performance of BAMCP with and without the
lazy sampling (LS) and rollout policy learning (RL) presented in Sections 3.4, 3.3. Root sampling (RS) is
included.
sampling scheme in the experiments, providing a 35× speed improvement over the naive approach in
the maze domain for example; this is illustrated in Figure 1(c).
Dearden's maze aptly illustrates a major drawback of forward search sparse sampling algorithms
such as BFS3. Like many maze problems, all rewards are zero for at least k steps, where k is the
solution length. Without prior knowledge of the optimal solution length, all upper bounds will be
higher than the true optimal value until the tree has been fully expanded up to depth k, even if a
simulation happens to solve the maze. In contrast, once BAMCP discovers a successful simulation,
its Monte-Carlo evaluation will immediately bias the search tree towards the successful trajectory.
5.2 Infinite 2D grid task
We also applied BAMCP to a much larger problem. The generative model for this infinite-grid MDP is as follows: each column $i$ has an associated latent parameter $p_i \sim \mathrm{Beta}(\alpha_1, \beta_1)$ and each row $j$ has an associated latent parameter $q_j \sim \mathrm{Beta}(\alpha_2, \beta_2)$. The probability of grid cell $ij$ having a reward of 1 is $p_i q_j$; otherwise the reward is 0. The agent knows it is on a grid and is always free
to move in any of the four cardinal directions. Rewards are consumed when visited; returning to the
same location subsequently results in a reward of 0. As opposed to the independent Dirichlet priors
employed in standard domains, here, dynamics are tightly correlated across states (i.e., observing
a state transition provides information about other state transitions). Posterior inference (of the
[Figure 2 appears here: two panels plotting the undiscounted (left) and discounted (right) sum of rewards against planning time (s) for BAMCP, BAMCP with a wrong prior, and a uniform random policy. See the caption below.]
Figure 2: Performance of BAMCP as a function of planning time on the Infinite 2D grid task of Section 5.2,
for $\gamma = 0.97$, where the grids are generated with Beta parameters $\alpha_1 = 1, \beta_1 = 2, \alpha_2 = 2, \beta_2 = 1$ (see supp. Figure S4 for a visualization). The performance during the first 200 steps in the environment is averaged over 50 sampled environments (5 runs for each sample) and is reported both in terms of undiscounted (left) and discounted (right) sum of rewards. BAMCP is run either with the correct generative model as prior or with an incorrect prior (parameters for rows and columns are swapped); it is clear that BAMCP can take advantage of
correct prior information to gain more rewards. The performance of a uniform random policy is also reported.
dynamics P) in this model requires approximation because of the non-conjugate coupling of the
variables; the inference is done via MCMC (details in Supplementary). The domain is illustrated in
Figure S4.
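A lazy generative sampler for this domain is straightforward to write (our own Python sketch; the Beta hyperparameters are those of the Figure 2 experiment), and it shows why the lazy sampling of Section 3.3 is indispensable here: a full model draw would require infinitely many latent variables.

    import numpy as np

    class InfiniteGrid:
        def __init__(self, a1=1.0, b1=2.0, a2=2.0, b2=1.0, seed=0):
            self.rng = np.random.RandomState(seed)
            self.hyper = (a1, b1, a2, b2)
            self.p, self.q, self.cells = {}, {}, {}  # lazily drawn latents/rewards

        def reward(self, i, j):
            a1, b1, a2, b2 = self.hyper
            if i not in self.p:
                self.p[i] = self.rng.beta(a1, b1)    # column latent p_i
            if j not in self.q:
                self.q[j] = self.rng.beta(a2, b2)    # row latent q_j
            if (i, j) not in self.cells:
                self.cells[(i, j)] = float(self.rng.rand() < self.p[i] * self.q[j])
            r, self.cells[(i, j)] = self.cells[(i, j)], 0.0  # rewards are consumed
            return r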
Planning algorithms that attempt to solve an MDP based on sample(s) (or the mean) of the posterior
(e.g., BOSS, BEB, Bayesian DP) cannot directly handle the large state space. Prior forward-search
methods (e.g., BA-UCT, BFS3) can deal with the state space, but not the large belief space: at every
node of the search tree they must solve an approximate inference problem to estimate the posterior
beliefs. In contrast, BAMCP limits the posterior inference to the root of the search tree and is not
directly affected by the size of the state space or belief space, which allows the algorithm to perform
well even with a limited planning time. Note that lazy sampling is required in this setup since a full
sample of the dynamics involves infinitely many parameters.
Figure 2 (and Figure S5) demonstrates the planning performance of BAMCP in this complex domain. Performance improves with additional planning time, and the quality of the prior clearly affects the agent's performance. Supplementary videos contrast the behavior of the agent for different prior parameters.
6 Future Work
The UCT algorithm is known to have several drawbacks. First, there are no finite-time regret bounds.
It is possible to construct malicious environments, for example in which the optimal policy is hidden
in a generally low reward region of the tree, where UCT can be misled for long periods [7]. Second,
the UCT algorithm treats every action node as a multi-armed bandit problem. However, there is
no actual benefit to accruing reward during planning, and so it is in theory more appropriate to use
pure exploration bandits [4]. Nevertheless, the UCT algorithm has produced excellent empirical
performance in many domains [12].
BAMCP is able to exploit prior knowledge about the dynamics in a principled manner. In principle,
it is possible to encode many aspects of domain knowledge into the prior distribution. An important
avenue for future work is to explore rich, structured priors about the dynamics of the MDP. If this
prior knowledge matches the class of environments that the agent will encounter, then exploration
could be significantly accelerated.
7 Conclusion
We suggested a sample-based algorithm for Bayesian RL called BAMCP that significantly surpassed
the performance of existing algorithms on several standard tasks. We showed that BAMCP can
tackle larger and more complex tasks generated from a structured prior, where existing approaches
scale poorly. In addition, BAMCP provably converges to the Bayes-optimal solution.
The main idea is to employ Monte-Carlo tree search to explore the augmented Bayes-adaptive search
space efficiently. The naive implementation of that idea is the proposed BA-UCT algorithm, which
cannot scale for most priors due to expensive belief updates inside the search tree. We introduced
three modifications to obtain a computationally tractable sample-based algorithm: root sampling,
which only requires beliefs to be sampled at the start of each simulation (as in [20]); a model-free
RL algorithm that learns a rollout policy; and the use of a lazy sampling scheme to sample the
posterior beliefs cheaply.
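For reference, the overall structure of these modifications can be summarized in a compact, simplified sketch (ours, not the authors' implementation): it keeps root sampling and the UCT search, but replaces the learned rollout policy with uniform-random rollouts and leaves lazy materialization of the sampled dynamics to the caller's `belief_sampler`/`step_fn`.

```python
import math
import random
from collections import defaultdict

GAMMA = 0.95

def bamcp_plan(belief_sampler, actions, step_fn, root_state,
               n_sims=1000, depth=15, c_ucb=2.0, seed=0):
    """Simplified BAMCP-style planner: root sampling + UCT search.

    belief_sampler() returns one dynamics sample theta per simulation
    (root sampling); step_fn(theta, s, a, rng) -> (next_state, reward).
    States and actions must be hashable.
    """
    N = defaultdict(int)     # visits of (node, action)
    Q = defaultdict(float)   # value estimates of (node, action)
    Nn = defaultdict(int)    # visits of node (a history tuple)
    rng = random.Random(seed)

    def rollout(theta, s, d):
        ret, disc = 0.0, 1.0
        for _ in range(d):
            s, r = step_fn(theta, s, rng.choice(actions), rng)
            ret += disc * r
            disc *= GAMMA
        return ret

    def simulate(theta, node, s, d):
        if d == 0:
            return 0.0
        if Nn[node] == 0:            # expand leaf, then evaluate by rollout
            Nn[node] += 1
            return rollout(theta, s, d)
        a = max(actions, key=lambda a: Q[(node, a)] + c_ucb *
                math.sqrt(math.log(Nn[node] + 1) / (N[(node, a)] + 1)))
        s2, r = step_fn(theta, s, a, rng)
        ret = r + GAMMA * simulate(theta, node + (a, s2), s2, d - 1)
        Nn[node] += 1
        N[(node, a)] += 1
        Q[(node, a)] += (ret - Q[(node, a)]) / N[(node, a)]
        return ret

    root = (root_state,)
    for _ in range(n_sims):
        theta = belief_sampler()     # single posterior sample at the root
        simulate(theta, root, root_state, depth)
    return max(actions, key=lambda a: Q[(root, a)])
```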
References
[1] J. Asmuth, L. Li, M.L. Littman, A. Nouri, and D. Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 19-26, 2009.
[2] J. Asmuth and M. Littman. Approaching Bayes-optimality using Monte-Carlo tree search. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, pages 19-26, 2011.
[3] R. Bellman and R. Kalaba. On adaptive control processes. Automatic Control, IRE Transactions on, 4(2):1-9, 1959.
[4] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory, pages 23-37. Springer-Verlag, 2009.
[5] P. Castro and D. Precup. Smarter sampling in model-based Bayesian reinforcement learning. Machine Learning and Knowledge Discovery in Databases, pages 200-214, 2010.
[6] P.S. Castro. Bayesian exploration in Markov decision processes. PhD thesis, McGill University, 2007.
[7] P.A. Coquelin and R. Munos. Bandit algorithms for tree search. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, pages 67-74, 2007.
[8] R. Dearden, N. Friedman, and S. Russell. Bayesian Q-learning. In Proceedings of the National Conference on Artificial Intelligence, pages 761-768, 1998.
[9] M.O.G. Duff. Optimal Learning: Computational Procedures For Bayes-Adaptive Markov Decision Processes. PhD thesis, University of Massachusetts Amherst, 2002.
[10] A.A. Feldbaum. Dual control theory. Automation and Remote Control, 21(9):874-1039, 1960.
[11] N. Friedman and Y. Singer. Efficient Bayesian parameter estimation in large discrete domains. Advances in Neural Information Processing Systems (NIPS), pages 417-423, 1999.
[12] S. Gelly, L. Kocsis, M. Schoenauer, M. Sebag, D. Silver, C. Szepesvari, and O. Teytaud. The grand challenge of computer Go: Monte Carlo tree search and extensions. Communications of the ACM, 55(3):106-113, 2012.
[13] S. Gelly and D. Silver. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, pages 273-280, 2007.
[14] J.C. Gittins, R. Weber, and K.D. Glazebrook. Multi-armed bandit allocation indices. Wiley Online Library, 1989.
[15] M. Kearns, Y. Mansour, and A.Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2, pages 1324-1331, 1999.
[16] L. Kocsis and C. Szepesvari. Bandit based Monte-Carlo planning. Machine Learning: ECML 2006, pages 282-293, 2006.
[17] J.Z. Kolter and A.Y. Ng. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513-520, 2009.
[18] J.J. Martin. Bayesian decision problems and Markov chains. Wiley, 1967.
[19] N. Meuleau and P. Bourgine. Exploration of multi-state environments: Local measures and back-propagation of uncertainty. Machine Learning, 35(2):117-154, 1999.
[20] D. Silver and J. Veness. Monte-Carlo planning in large POMDPs. Advances in Neural Information Processing Systems (NIPS), pages 2164-2172, 2010.
[21] J. Sorg, S. Singh, and R.L. Lewis. Variance-based rewards for approximate Bayesian reinforcement learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2010.
[22] M. Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
[23] C. Szepesvari. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[24] T.J. Walsh, S. Goschin, and M.L. Littman. Integrating sample-based planning and model-based reinforcement learning. In Proceedings of the 24th Conference on Artificial Intelligence (AAAI), 2010.
[25] T. Wang, D. Lizotte, M. Bowling, and D. Schuurmans. Bayesian sparse sampling for on-line reward optimization. In Proceedings of the 22nd International Conference on Machine Learning, pages 956-963, 2005.
Exponential Concentration for Mutual Information Estimation with Application to Forests
John Lafferty
Department of Computer Science
Department of Statistics
University of Chicago, IL 60637
[email protected]
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University, NJ 08544
[email protected]
Larry Wasserman
Department of Statistics
Machine Learning Department
Carnegie Mellon University, PA 15231
[email protected]
Abstract
We prove a new exponential concentration inequality for a plug-in estimator of the
Shannon mutual information. Previous results on mutual information estimation
only bounded expected error. The advantage of having the exponential inequality
is that, combined with the union bound, we can guarantee accurate estimators of
the mutual information for many pairs of random variables simultaneously. As an
application, we show how to use such a result to optimally estimate the density
function and graph of a distribution which is Markov to a forest graph.
1 Introduction
We consider the problem of nonparametrically estimating the Shannon mutual information between two random variables. Let $X_1 \in \mathcal{X}_1$ and $X_2 \in \mathcal{X}_2$ be two random variables with domains $\mathcal{X}_1$ and $\mathcal{X}_2$ and joint density $p(x_1, x_2)$. The mutual information between $X_1$ and $X_2$ is
$$I(X_1; X_2) := \int_{\mathcal{X}_1}\!\int_{\mathcal{X}_2} p(x_1, x_2)\,\log\frac{p(x_1, x_2)}{p(x_1)\,p(x_2)}\,dx_1 dx_2 = H(X_1) + H(X_2) - H(X_1, X_2),$$
where $H(X_1, X_2) = -\int\!\!\int p(x_1, x_2)\log p(x_1, x_2)\,dx_1 dx_2$ (and similarly for $H(X_1)$ and $H(X_2)$) are the corresponding Shannon entropies [4]. The mutual information is a measure of dependence between $X_1$ and $X_2$. To estimate $I(X_1; X_2)$ well, it suffices to estimate $H(X_1, X_2) := H(p)$.
A simple way to estimate the Shannon entropy is to use a kernel density estimator (KDE) [22, 1, 9,
5, 20, 7], i.e., the densities p(x, y), p(x), and p(y) are separately estimated from samples and the
estimated densities are used to calculate the entropy. Alternative methods involve estimation of the
entropies using spacings [25, 26, 23], k-nearest neighbors [11, 12], the Edgeworth expansion [24],
and convex optimization [17]. More discussions can be found in the survey articles [2, 19]. There
have been many recent developments in the problem of estimating Shannon entropy and related
quantities, as well as applications of these results to machine learning problems [18, 21, 8, 6]. Under weak conditions, it has been shown that there are estimators that achieve the parametric $\sqrt{n}$-rate of convergence in mean squared error (MSE), where $n$ is the sample size.
In this paper, we construct an estimator with this rate, but we also prove an exponential concentration inequality for the estimator. More specifically, we show that our estimator $\widehat{H}$ of $H(p)$ satisfies
$$\sup_{p \in \Sigma} P\Big(\big|\widehat{H} - H(p)\big| > \epsilon\Big) \leq 2\exp\Big(-\frac{n\epsilon^2}{36\kappa^2}\Big), \qquad (1.1)$$
where $\Sigma$ is a nonparametric class of distributions defined in Section 2 and $\kappa$ is a constant. To the best of our knowledge, this is the first such exponential inequality for nonparametric Shannon entropy and mutual information estimation. The advantage of this result, over the usual results which state that $\mathbb{E}|\widehat{H} - H(p)|^2 = O(n^{-1})$, is that we can apply the union bound and thus guarantee accurate mutual information estimation for many pairs of random variables simultaneously. As an application, we consider forest density estimation [15], which, in a $d$-dimensional problem, requires estimating $d(d+1)/2$ mutual informations in order to apply the Chow-Liu algorithm. As long as $\frac{\log d}{n} \to 0$ as $n \to \infty$, we can estimate the forest graph well, even if $d = d(n)$ increases with $n$ exponentially fast.
The rest of this paper is organized as follows. The assumptions and estimator are given in Section 2.
The main theoretical analysis is in Section 3. In Section 4 we show how to apply the result to forest
density estimation. Some discussion and possible extensions are provided in the last section.
2 Estimator and Main Result
Let $X = (X_1, X_2) \in \mathbb{R}^2$ be a random vector with density $p(x) := p(x_1, x_2)$ and let $x^1, \ldots, x^n \in \mathcal{X} \subset \mathbb{R}^2$ be a random sample from $p$. In this paper, we only consider the case of the bounded domain $\mathcal{X} = [0, 1]^2$. We want to estimate the Shannon entropy
$$H(p) = -\int_{\mathcal{X}} p(x)\log p(x)\,dx. \qquad (2.1)$$
We start with some assumptions on the density function $p(x_1, x_2)$.

Assumption 2.1 (Density assumption). We assume the density $p(x_1, x_2)$ belongs to a 2nd-order Hölder class $\Sigma_\kappa(2, L)$ and is bounded away from zero and infinity. In particular, there exist constants $\kappa_1, \kappa_2$ such that
$$0 < \kappa_1 \leq \min_{x \in \mathcal{X}} p(x) \leq \max_{x \in \mathcal{X}} p(x) \leq \kappa_2 < \infty, \qquad (2.2)$$
and for any $(x_1, x_2)^T \in \mathcal{X}$, there exists a constant $L$ such that, for any $(u, v)^T \in \mathcal{X}$,
$$\Big|p(x_1 + u, x_2 + v) - p(x_1, x_2) - \frac{\partial p(x_1, x_2)}{\partial x_1}\,u - \frac{\partial p(x_1, x_2)}{\partial x_2}\,v\Big| \leq L(u^2 + v^2). \qquad (2.3)$$
Assumption 2.2 (Boundary assumption). If $\{x_n\} \subset \mathcal{X}$ is any sequence converging to a boundary point $x^*$, we require the density $p(x)$ to have vanishing first order partial derivatives:
$$\lim_{n\to\infty}\frac{\partial p(x_n)}{\partial x_1} = \lim_{n\to\infty}\frac{\partial p(x_n)}{\partial x_2} = 0. \qquad (2.4)$$
To efficiently estimate the entropy in (2.1), we use a KDE based "plug-in" estimator. Bias at the boundaries turns out to be very important in this problem; see [10] for a discussion of boundary bias. To correct the boundary effects, we use the following "mirror image" kernel density estimator:
$$\widetilde{p}_h(x_1, x_2) := \frac{1}{nh^2}\sum_{i=1}^{n}\Bigg\{ K\Big(\frac{x_1 - x_1^i}{h}\Big)K\Big(\frac{x_2 - x_2^i}{h}\Big) + K\Big(\frac{x_1 + x_1^i}{h}\Big)K\Big(\frac{x_2 - x_2^i}{h}\Big) + K\Big(\frac{x_1 - x_1^i}{h}\Big)K\Big(\frac{x_2 + x_2^i}{h}\Big)$$
$$\quad + K\Big(\frac{x_1 + x_1^i}{h}\Big)K\Big(\frac{x_2 + x_2^i}{h}\Big) + K\Big(\frac{x_1 - x_1^i}{h}\Big)K\Big(\frac{x_2 - 2 + x_2^i}{h}\Big) + K\Big(\frac{x_1 + x_1^i}{h}\Big)K\Big(\frac{x_2 - 2 + x_2^i}{h}\Big)$$
$$\quad + K\Big(\frac{x_1 - 2 + x_1^i}{h}\Big)K\Big(\frac{x_2 - x_2^i}{h}\Big) + K\Big(\frac{x_1 - 2 + x_1^i}{h}\Big)K\Big(\frac{x_2 + x_2^i}{h}\Big) + K\Big(\frac{x_1 - 2 + x_1^i}{h}\Big)K\Big(\frac{x_2 - 2 + x_2^i}{h}\Big)\Bigg\}. \qquad (2.5)$$
Here $h$ is the bandwidth and $K(\cdot)$ is a univariate kernel function. We denote by $K_2(u, v) := K(u)K(v)$ the bivariate product kernel. This estimator has nine terms; one corresponds to the original data in the unit square $[0, 1]^2$, and each of the remaining terms corresponds to reflecting the data across one of the four sides or four corners of the square.
Assumption 2.3 (Kernel assumption). The kernel $K(\cdot)$ is nonnegative and has bounded support $[-1, 1]$, with $\int_{-1}^{1} K(u)\,du = 1$ and $\int_{-1}^{1} u\,K(u)\,du = 0$.
By Assumption 2.1, the values of the true density lie in the interval $[\kappa_1, \kappa_2]$. We propose a clipped KDE estimator
$$\widehat{p}_h(x) = T_{\kappa_1,\kappa_2}\big(\widetilde{p}_h(x)\big), \qquad (2.6)$$
where $T_{\kappa_1,\kappa_2}(a) = \kappa_1 \cdot I(a < \kappa_1) + a \cdot I(\kappa_1 \leq a \leq \kappa_2) + \kappa_2 \cdot I(a > \kappa_2)$, so that the estimated density also has this property. Letting $g(u) = u\log u$, we propose the following plug-in entropy estimator:
$$H(\widehat{p}_h) := -\int_{\mathcal{X}} g\big(\widehat{p}_h(x)\big)\,dx = -\int_{\mathcal{X}} \widehat{p}_h(x)\log\widehat{p}_h(x)\,dx. \qquad (2.7)$$
Remark 2.1. The clipped estimator $\widehat{p}_h$ requires knowledge of $\kappa_1$ and $\kappa_2$. In applications, we do not need to know the exact values of $\kappa_1$ and $\kappa_2$; lower and upper bounds are sufficient.
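To make (2.5)-(2.7) concrete, here is a minimal sketch of the estimator. This is our own illustrative code, not from the paper: it uses the Epanechnikov kernel (which satisfies Assumption 2.3; any such kernel would do) and a Riemann sum on a grid for the integral in (2.7), and all function names are ours.

```python
import numpy as np

def kern(u):
    """Epanechnikov kernel: nonnegative, supported on [-1, 1],
    integrates to 1, and has zero first moment (Assumption 2.3)."""
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

def mirror_kde(grid1, grid2, data, h):
    """Mirror-image KDE (2.5) on [0,1]^2, evaluated on a product grid.

    data is an (n, 2) array; the nine terms reflect each sample across
    the four sides and four corners of the unit square.  The product
    kernel lets each term be assembled as an outer product.
    """
    n = data.shape[0]
    est = np.zeros((len(grid1), len(grid2)))
    for r1 in (lambda a: a, lambda a: -a, lambda a: 2.0 - a):
        for r2 in (lambda b: b, lambda b: -b, lambda b: 2.0 - b):
            K1 = kern((grid1[:, None] - r1(data[:, 0])[None, :]) / h)
            K2 = kern((grid2[:, None] - r2(data[:, 1])[None, :]) / h)
            est += K1 @ K2.T
    return est / (n * h**2)

def plugin_entropy(data, h, kappa1, kappa2, m=200):
    """Clipped plug-in estimator (2.6)-(2.7) on an m x m midpoint grid."""
    grid = (np.arange(m) + 0.5) / m
    p_hat = np.clip(mirror_kde(grid, grid, np.asarray(data), h), kappa1, kappa2)
    return -np.mean(p_hat * np.log(p_hat))   # Riemann sum over [0,1]^2
```

When the target is the entropy rather than the density itself, Theorem 2.1 below suggests taking the bandwidth on the order of $n^{-1/4}$, e.g. `h = len(data) ** -0.25`.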
Our main technical result is the following exponential concentration inequality on $H(\widehat{p}_h)$ around the population quantity $H(p)$. Our proof is given in Section 3.

Theorem 2.1. Under Assumptions 2.1, 2.2, and 2.3, if we choose the bandwidth according to $h \asymp n^{-1/4}$, then there exists a constant $N_0$ such that for all $n > N_0$,
$$\sup_{p \in \Sigma_\kappa(2,L)} P\Big(\big|H(\widehat{p}_h) - H(p)\big| > \epsilon\Big) \leq 2\exp\Big(-\frac{n\epsilon^2}{36\kappa^2}\Big), \qquad (2.8)$$
where $\kappa = \max\{|\log\kappa_1|, |\log\kappa_2|\} + 1$.
To the best of our knowledge, this is the first time an exponential inequality like (2.8) has been established for Shannon entropy estimation over the Hölder class. It is easy to see that (2.8) implies the parametric $\sqrt{n}$-rate of convergence in mean squared error, $\mathbb{E}|\widehat{H} - H(p)| = O(n^{-1/2})$. The bandwidth $h \asymp n^{-1/4}$ in the above theorem is different from the usual choice for optimal bivariate density estimation, which is $h \asymp n^{-1/6}$ for the 2nd-order Hölder class. By using $h \asymp n^{-1/4}$, we undersmooth the density estimate. As we show in the next section, such a bandwidth choice is important for achieving the optimal rate for entropy estimation.
Let $I(p) := I(X_1; X_2)$ be the Shannon mutual information, and define
$$I(\widehat{p}_h) := \int_{\mathcal{X}_1}\!\int_{\mathcal{X}_2} \widehat{p}_h(x_1, x_2)\,\log\frac{\widehat{p}_h(x_1, x_2)}{\widehat{p}_h(x_1)\,\widehat{p}_h(x_2)}\,dx_1 dx_2. \qquad (2.9)$$
The next corollary provides an exponential inequality for Shannon mutual information estimation.

Corollary 2.1. Under the same conditions as in Theorem 2.1, if we choose $h \asymp n^{-1/4}$, then there exists a constant $N_1$ such that for all $n > N_1$,
$$\sup_{p \in \Sigma_\kappa(2,L)} P\Big(\big|I(\widehat{p}_h) - I(p)\big| > \epsilon\Big) \leq 6\exp\Big(-\frac{n\epsilon^2}{324\kappa^2}\Big), \qquad (2.10)$$
where $\kappa = \max\{|\log\kappa_1|, |\log\kappa_2|\} + 1$.

Proof. Using the same proof as for Theorem 2.1, we can show that (2.8) also holds for estimating the univariate entropies $H(X_1)$ and $H(X_2)$. The desired result then follows from the union bound, since $I(p) := I(X_1; X_2) = H(X_1) + H(X_2) - H(X_1, X_2)$.
Remark 2.2. We use the same bandwidth $h \asymp n^{-1/4}$ to estimate the bivariate density $p(x_1, x_2)$ and the univariate densities $p(x_1), p(x_2)$. A related result is presented in [15]. They consider the same problem setting as ours and also use a KDE based plug-in estimator to estimate the mutual information. However, unlike our proposal, they advocate the use of different bandwidths for bivariate and univariate entropy estimation. For the bivariate case they use $h_2 \asymp n^{-1/6}$; for the univariate case they use $h_1 \asymp n^{-1/5}$. Such bandwidths $h_1$ and $h_2$ are useful for optimally estimating the density functions. However, such a choice achieves a suboptimal rate in terms of mutual information estimation: $\sup_{p \in \Sigma_\kappa(2,L)} P\big(|I(\widehat{p}_h) - I(p)| > \epsilon\big) \leq c_1\exp\big(-c_2 n^{2/3}\epsilon^2\big)$, where $c_1$ and $c_2$ are two constants. Our method achieves the faster parametric rate.
3 Theoretical Analysis
Here we present the detailed proof of Theorem 2.1. To analyze the error $|H(\widehat{p}_h) - H(p)|$, we first decompose it into a bias or approximation error term, and a "variance" or estimation error term:
$$\big|H(\widehat{p}_h) - H(p)\big| \leq \underbrace{\big|H(\widehat{p}_h) - \mathbb{E}H(\widehat{p}_h)\big|}_{\text{Variance}} + \underbrace{\big|\mathbb{E}H(\widehat{p}_h) - H(p)\big|}_{\text{Bias}}. \qquad (3.1)$$
We are going to show that
$$\sup_{p \in \Sigma_\kappa(2,L)} P\Big(\underbrace{\big|H(\widehat{p}_h) - \mathbb{E}H(\widehat{p}_h)\big|}_{\text{Variance}} > \epsilon\Big) \leq 2\exp\Big(-\frac{n\epsilon^2}{32\kappa^2}\Big), \qquad (3.2)$$
$$\sup_{p \in \Sigma_\kappa(2,L)} \underbrace{\big|\mathbb{E}H(\widehat{p}_h) - H(p)\big|}_{\text{Bias}} \leq c_1 h^2 + \frac{c_3}{nh^2}, \qquad (3.3)$$
where $c_1$ and $c_3$ are two constants. Since the bound on the variance in (3.2) does not depend on $h$, to optimize the rate, we only need to choose $h$ to minimize the right-hand side of (3.3). Therefore $h \asymp n^{-1/4}$ achieves the optimal rate. In the rest of this section, we bound the bias and variance terms separately.
3.1 Analyzing the Bias Term

Here we prove (3.3). Let $u$ be a vector. We denote the sup norm by $\|u\|_\infty$. The next lemma bounds the integrated squared bias of the kernel density estimator over the support $\mathcal{X} := [0, 1]^2$.

Lemma 3.1. Under Assumptions 2.1, 2.2, and 2.3, there exists a constant $c > 0$ such that
$$\sup_{p \in \Sigma_\kappa(2,L)} \int_{\mathcal{X}} \big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx \leq c\,h^4. \qquad (3.4)$$
Proof. We partition the support $\mathcal{X} := [0, 1]^2$ into three regions, $\mathcal{X} = \mathcal{B} \cup \mathcal{C} \cup \mathcal{I}$: the boundary area $\mathcal{B}$, the corner area $\mathcal{C}$, and the interior area $\mathcal{I}$:
$$\mathcal{C} = \{x : \|x - u\|_\infty \leq h \text{ for } u = (0,0)^T, (0,1)^T, (1,0)^T, \text{ or } (1,1)^T\}, \qquad (3.5)$$
$$\mathcal{B} = \{x : x \text{ is within distance } h \text{ of an edge of } \mathcal{X}, \text{ but does not belong to } \mathcal{C}\}, \qquad (3.6)$$
$$\mathcal{I} = \mathcal{X} \setminus (\mathcal{C} \cup \mathcal{B}). \qquad (3.7)$$
We have the following decomposition:
$$\int_{\mathcal{X}} \big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx = \Big(\int_{\mathcal{I}} + \int_{\mathcal{C}} + \int_{\mathcal{B}}\Big)\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx = T_{\mathcal{I}} + T_{\mathcal{C}} + T_{\mathcal{B}}.$$
From standard results on kernel density estimation, we know that $\sup_{p \in \Sigma_\kappa(2,L)} T_{\mathcal{I}} \leq c\,h^4$. In the next two subsections, we bound $T_{\mathcal{B}} := \int_{\mathcal{B}}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx$ and $T_{\mathcal{C}} := \int_{\mathcal{C}}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx$.

3.1.1 Analyzing $T_{\mathcal{B}}$
Let $A := \{x : 0 \leq x_1 \leq h \text{ and } h \leq x_2 \leq 1 - h\}$. We have
$$T_{\mathcal{B}} = \int_{\mathcal{B}}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx \leq c\int_{A}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx. \qquad (3.8)$$
For $x \in A$, only two of the nine terms in (2.5) are nonzero:
$$\widetilde{p}_h(x) = \frac{1}{nh^2}\sum_{i=1}^{n}\Big[K\Big(\frac{x_1 - x_1^i}{h}\Big)K\Big(\frac{x_2 - x_2^i}{h}\Big) + K\Big(\frac{x_1 + x_1^i}{h}\Big)K\Big(\frac{x_2 - x_2^i}{h}\Big)\Big]. \qquad (3.9)$$
Therefore, for $x \in A$ we have
$$\mathbb{E}\widetilde{p}_h(x) = \frac{1}{h^2}\int_0^1\!\!\int_0^1 K\Big(\frac{x_1 - t_1}{h}\Big)K\Big(\frac{x_2 - t_2}{h}\Big)p(t_1, t_2)\,dt_1 dt_2 + \frac{1}{h^2}\int_0^1\!\!\int_0^1 K\Big(\frac{x_1 + t_1}{h}\Big)K\Big(\frac{x_2 - t_2}{h}\Big)p(t_1, t_2)\,dt_1 dt_2$$
$$= \int_{-\frac{x_1}{h}}^{1}\int_{-1}^{1} K(u_1)K(u_2)\,p(x_1 + u_1 h,\, x_2 + u_2 h)\,du_1 du_2 + \int_{-1}^{-\frac{x_1}{h}}\int_{-1}^{1} K(u_1)K(u_2)\,p(-x_1 - u_1 h,\, x_2 - u_2 h)\,du_1 du_2. \qquad (3.10)$$
Since $p \in \Sigma_\kappa(2, L)$ and $0 \leq x_1 \leq h$, the Hölder condition (2.3) gives
$$\big|p(x_1 + u_1 h, x_2 + u_2 h) - p(x_1, x_2) - h\langle\nabla p(x), u\rangle\big| \leq L\|u\|_2^2\,h^2,$$
$$\Big|p(-x_1 - u_1 h, x_2 - u_2 h) - p(x_1, x_2) + \frac{\partial p(x)}{\partial x_1}(2x_1 + u_1 h) + \frac{\partial p(x)}{\partial x_2}(u_2 h)\Big| \leq L\big[(2 + u_1)^2 + u_2^2\big]h^2.$$
Since $|u_1|, |u_2| \leq 1$, we have $|p(x_1 + u_1 h, x_2 + u_2 h) - p(x_1, x_2)| \leq \big|\frac{\partial p(x)}{\partial x_1}\big|h + \big|\frac{\partial p(x)}{\partial x_2}\big|h + L\|u\|_2^2 h^2$, and similarly $|p(-x_1 - u_1 h, x_2 - u_2 h) - p(x_1, x_2)| \leq 9\big|\frac{\partial p(x)}{\partial x_1}\big|h + \big|\frac{\partial p(x)}{\partial x_2}\big|h + 10Lh^2$.

For any $x \in A$, we can bound the bias term:
$$\big|\mathbb{E}\widetilde{p}_h(x) - p(x)\big| = \Big|\mathbb{E}\widetilde{p}_h(x) - \int_{-1}^{1}\!\int_{-1}^{1} K(u_1)K(u_2)\,p(x_1, x_2)\,du_1 du_2\Big|$$
$$\leq \int_{-\frac{x_1}{h}}^{1}\int_{-1}^{1} K(u_1)K(u_2)\,\big|p(x_1 + u_1 h, x_2 + u_2 h) - p(x_1, x_2)\big|\,du_1 du_2$$
$$\quad + \int_{-1}^{-\frac{x_1}{h}}\int_{-1}^{1} K(u_1)K(u_2)\,\big|p(-x_1 - u_1 h, x_2 - u_2 h) - p(x_1, x_2)\big|\,du_1 du_2$$
$$\leq 10\Big|\frac{\partial p(x)}{\partial x_1}\Big|h + 2\Big|\frac{\partial p(x)}{\partial x_2}\Big|h + 12Lh^2 \leq 12Lh^2 + 12Lh^2 = 24Lh^2,$$
where the last inequality follows from the fact that $\big|\frac{\partial p(x)}{\partial x_1}\big|, \big|\frac{\partial p(x)}{\partial x_2}\big| \leq Lh$ on $A$, by the Hölder condition and the assumption that the density $p(x)$ has vanishing partial derivatives at the boundary points. Since the region $\mathcal{B}$ has Lebesgue measure $O(h)$, we conclude that $T_{\mathcal{B}} \leq c\,h^5$.

3.1.2 Analyzing $T_{\mathcal{C}}$
Let $A_1 := \{x : 0 \leq x_1, x_2 \leq h\}$. We now analyze the term $T_{\mathcal{C}}$:
$$T_{\mathcal{C}} = \int_{\mathcal{C}}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx \leq c\int_{A_1}\big(\mathbb{E}\widetilde{p}_h(x) - p(x)\big)^2\,dx. \qquad (3.15)$$
For notational simplicity, we write
$$U_{x,h}(a, b) = K\Big(\frac{x_1 - a}{h}\Big)K\Big(\frac{x_2 - b}{h}\Big). \qquad (3.16)$$
For $x \in A_1$, we have
$$\widetilde{p}_h(x) = \frac{1}{nh^2}\sum_{i=1}^{n}\big[U_{x,h}(x_1^i, x_2^i) + U_{x,h}(-x_1^i, x_2^i) + U_{x,h}(x_1^i, -x_2^i) + U_{x,h}(-x_1^i, -x_2^i)\big]. \qquad (3.17)$$
Therefore, for $x \in A_1$ we have
$$\mathbb{E}\widetilde{p}_h(x) = \frac{1}{h^2}\int_0^1\!\!\int_0^1\big[U_{x,h}(t_1, t_2) + U_{x,h}(-t_1, t_2) + U_{x,h}(t_1, -t_2) + U_{x,h}(-t_1, -t_2)\big]\,p(t_1, t_2)\,dt_1 dt_2 \qquad (3.18)$$
$$= \int_{-\frac{x_1}{h}}^{1}\int_{-\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,p(x_1 + u_1 h, x_2 + u_2 h)\,du_1 du_2 + \int_{\frac{x_1}{h}}^{1}\int_{-\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,p(u_1 h - x_1, x_2 + u_2 h)\,du_1 du_2 \qquad (3.19)$$
$$\quad + \int_{-\frac{x_1}{h}}^{1}\int_{\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,p(x_1 + u_1 h, u_2 h - x_2)\,du_1 du_2 \qquad (3.20)$$
$$\quad + \int_{\frac{x_1}{h}}^{1}\int_{\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,p(u_1 h - x_1, u_2 h - x_2)\,du_1 du_2. \qquad (3.21)$$
Since $K(\cdot)$ is a symmetric kernel on $[-1, 1]$, we have
$$\Big(\int_{-\frac{x_1}{h}}^{1}\!\int_{-\frac{x_2}{h}}^{1} + \int_{\frac{x_1}{h}}^{1}\!\int_{-\frac{x_2}{h}}^{1} + \int_{-\frac{x_1}{h}}^{1}\!\int_{\frac{x_2}{h}}^{1} + \int_{\frac{x_1}{h}}^{1}\!\int_{\frac{x_2}{h}}^{1}\Big) K(u_1)K(u_2)\,du_1 du_2 = \int_{-1}^{1}\!\int_{-1}^{1} K(u_1)K(u_2)\,du_1 du_2. \qquad (3.22)$$
Therefore, for $x = (x_1, x_2)^T \in A_1$,
$$p(x_1, x_2) = \Big(\int_{-\frac{x_1}{h}}^{1}\!\int_{-\frac{x_2}{h}}^{1} + \int_{\frac{x_1}{h}}^{1}\!\int_{-\frac{x_2}{h}}^{1} + \int_{-\frac{x_1}{h}}^{1}\!\int_{\frac{x_2}{h}}^{1} + \int_{\frac{x_1}{h}}^{1}\!\int_{\frac{x_2}{h}}^{1}\Big)\,p(x_1, x_2)\,K(u_1)K(u_2)\,du_1 du_2. \qquad (3.23)$$
Using the fact that $p \in \Sigma_\kappa(2, L)$, $0 \leq x_1, x_2 \leq h$, and $-1 \leq u_1, u_2 \leq 1$, we have
$$|p(x_1 + u_1 h, x_2 + u_2 h) - p(x_1, x_2)| \leq 4Lh^2, \qquad (3.24)$$
$$|p(u_1 h - x_1, x_2 + u_2 h) - p(x_1, x_2)| \leq 20Lh^2, \qquad (3.25)$$
$$|p(u_1 h + x_1, u_2 h - x_2) - p(x_1, x_2)| \leq 20Lh^2, \qquad (3.26)$$
$$|p(u_1 h - x_1, u_2 h - x_2) - p(x_1, x_2)| \leq 36Lh^2. \qquad (3.27)$$
For $x \in A_1$, we can then bound the bias term as
$$\big|\mathbb{E}\widetilde{p}_h(x) - p(x)\big| \leq \int_{-\frac{x_1}{h}}^{1}\int_{-\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,|p(x_1 + u_1 h, x_2 + u_2 h) - p(x_1, x_2)|\,du_1 du_2 \qquad (3.30)$$
$$\quad + \int_{\frac{x_1}{h}}^{1}\int_{-\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,|p(u_1 h - x_1, x_2 + u_2 h) - p(x_1, x_2)|\,du_1 du_2 \qquad (3.31)$$
$$\quad + \int_{-\frac{x_1}{h}}^{1}\int_{\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,|p(u_1 h + x_1, u_2 h - x_2) - p(x_1, x_2)|\,du_1 du_2 \qquad (3.32)$$
$$\quad + \int_{\frac{x_1}{h}}^{1}\int_{\frac{x_2}{h}}^{1} K(u_1)K(u_2)\,|p(u_1 h - x_1, u_2 h - x_2) - p(x_1, x_2)|\,du_1 du_2 \qquad (3.33)$$
$$\leq 80Lh^2. \qquad (3.34)$$
Since the corner region $\mathcal{C}$ has Lebesgue measure $O(h^2)$, we have $T_{\mathcal{C}} \leq c\,h^6$.

Combining the analysis of $T_{\mathcal{B}}$, $T_{\mathcal{C}}$, and $T_{\mathcal{I}}$, we see that the mirror image kernel density estimator is free of boundary bias. This establishes the desired result of Lemma 3.1.

3.1.3 Analyzing the Bias of the Entropy Estimator
Lemma 3.2. Under Assumptions 2.1, 2.2, and 2.3, there exists a universal constant $C^*$ that does not depend on the true density $p$, such that
$$\sup_{p \in \Sigma_\kappa(2,L)} \big|\mathbb{E}H(\widehat{p}_h) - H(p)\big| \leq \frac{C^*}{\sqrt{n}}. \qquad (3.35)$$
Proof. Recalling that $g(u) = u\log u$, by Taylor's theorem we have
$$g\big(\widehat{p}_h(x)\big) - g\big(p(x)\big) = \big[\log(p(x)) + 1\big]\big[\widehat{p}_h(x) - p(x)\big] + \frac{1}{2\xi(x)}\big[\widehat{p}_h(x) - p(x)\big]^2, \qquad (3.36)$$
where $\xi(x)$ lies between $\widehat{p}_h(x)$ and $p(x)$. It is obvious that $\kappa_1 \leq \xi(x) \leq \kappa_2$.

Let $\kappa$ be as defined in the statement of the theorem. Using Fubini's theorem, Hölder's inequality, and the fact that the Lebesgue measure of $\mathcal{X}$ is 1, we have
$$\big|\mathbb{E}H(\widehat{p}_h) - H(p)\big| = \Big|\mathbb{E}\int_{\mathcal{X}}\big[g(\widehat{p}_h(x)) - g(p(x))\big]\,dx\Big| = \Big|\int_{\mathcal{X}}\mathbb{E}\big[g(\widehat{p}_h(x)) - g(p(x))\big]\,dx\Big| \qquad (3.37)\text{-}(3.39)$$
$$\leq \int_{\mathcal{X}}\big|\log(p(x)) + 1\big|\cdot\big|\mathbb{E}\widehat{p}_h(x) - p(x)\big|\,dx + \int_{\mathcal{X}}\frac{1}{2\xi(x)}\,\mathbb{E}\big[\widehat{p}_h(x) - p(x)\big]^2\,dx$$
$$\leq \kappa\sqrt{\int_{\mathcal{X}}\big[\mathbb{E}\widehat{p}_h(x) - p(x)\big]^2\,dx} + \frac{1}{2\kappa_1}\int_{\mathcal{X}}\mathbb{E}\big[\widehat{p}_h(x) - p(x)\big]^2\,dx \qquad (3.40)$$
$$\leq \kappa\sqrt{\int_{\mathcal{X}}\big[\mathbb{E}\widetilde{p}_h(x) - p(x)\big]^2\,dx} + \frac{1}{2\kappa_1}\int_{\mathcal{X}}\mathbb{E}\big[\widetilde{p}_h(x) - p(x)\big]^2\,dx \qquad (3.41)$$
$$\leq c_1 h^2 + c_2 h^4 + \frac{c_3}{nh^2}. \qquad (3.42)$$
The last inequality follows from standard results on kernel density estimation and Lemma 3.1, where $c_1, c_2, c_3$ are three constants. We get the desired result by setting $h \asymp n^{-1/4}$.
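As an informal sanity check of this $n^{-1/2}$ behavior, the following simulation sketch (ours, under assumed constants, and not part of the paper) reuses the `plugin_entropy` sketch from Section 2 on a density that is bounded away from zero and infinity; note that its boundary derivatives are small but not exactly zero, so Assumption 2.2 holds only approximately.

```python
import numpy as np

rng = np.random.default_rng(0)

def shape(x):
    """Unnormalized density on [0,1]^2: Gaussian bump on a constant floor.
    Bounded in [0.5, 1.5]; boundary gradients are small but nonzero."""
    return 0.5 + np.exp(-8.0 * ((x - 0.5) ** 2).sum(axis=-1))

grid = (np.arange(400) + 0.5) / 400
G = np.stack(np.meshgrid(grid, grid, indexing="ij"), axis=-1)
Z = shape(G).mean()                            # numerical normalizing constant
H_true = -np.mean((shape(G) / Z) * np.log(shape(G) / Z))

def sample(n):
    pts = np.empty((0, 2))
    while len(pts) < n:
        x = rng.random((2 * n, 2))
        keep = rng.random(2 * n) < shape(x) / 1.5   # rejection sampling
        pts = np.vstack([pts, x[keep]])
    return pts[:n]

for n in (500, 2000, 8000):
    H_hat = plugin_entropy(sample(n), h=n ** -0.25, kappa1=0.4, kappa2=2.0)
    print(n, abs(H_hat - H_true))   # errors should shrink roughly like n**-0.5
```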
3.2 Analyzing the Variance Term

Lemma 3.3. Under Assumptions 2.1, 2.2, and 2.3, we have
$$\sup_{p \in \Sigma_\kappa(2,L)} P\Big(\big|H(\widehat{p}_h) - \mathbb{E}H(\widehat{p}_h)\big| > \epsilon\Big) \leq 2\exp\Big(-\frac{n\epsilon^2}{32\kappa^2}\Big). \qquad (3.43)$$
Proof. Let $\widehat{p}\,'_h(x)$ be the kernel density estimator defined as in (2.6) but with the $j$th data point $x^j$ replaced by an arbitrary value $(x^j)'$. Since $g'(u) = \log u + 1$, by Assumption 2.1 we have $\max\big\{|g'(\widehat{p}_h(x))|, |g'(\widehat{p}\,'_h(x))|\big\} \leq \kappa$.

For notational simplicity, we write the product kernel as $K_2 = K \otimes K$. Using the mean-value theorem and the fact that $T_{\kappa_1,\kappa_2}(\cdot)$ is a contraction, we have
$$\sup_{x^1,\ldots,x^n,(x^j)'}\big|H(\widehat{p}_h) - H(\widehat{p}\,'_h)\big| = \sup_{x^1,\ldots,x^n,(x^j)'}\Big|\int_{\mathcal{X}}\big[g(\widehat{p}_h(x)) - g(\widehat{p}\,'_h(x))\big]\,dx\Big| \qquad (3.44)\text{-}(3.45)$$
$$\leq \kappa\sup_{x^1,\ldots,x^n,(x^j)'}\int_{\mathcal{X}}\big|\widehat{p}_h(x) - \widehat{p}\,'_h(x)\big|\,dx = \kappa\sup_{x^1,\ldots,x^n,(x^j)'}\int_{\mathcal{X}}\big|T_{\kappa_1,\kappa_2}[\widetilde{p}_h(x)] - T_{\kappa_1,\kappa_2}[\widetilde{p}\,'_h(x)]\big|\,dx \qquad (3.46)\text{-}(3.47)$$
$$\leq 4\kappa\sup_{x^1,\ldots,x^n,(x^j)'}\int_{\mathcal{X}}\Big|\frac{1}{nh^2}K_2\Big(\frac{x^j - x}{h}\Big) - \frac{1}{nh^2}K_2\Big(\frac{(x^j)' - x}{h}\Big)\Big|\,dx \qquad (3.48)$$
$$\leq 8\kappa\sup_{y}\int_{\mathcal{X}}\frac{1}{nh^2}K_2\Big(\frac{y - x}{h}\Big)\,dx \leq \frac{8\kappa}{n}\int K_2(u)\,du = \frac{8\kappa}{n}. \qquad (3.49)\text{-}(3.50)$$
Therefore, using McDiarmid's inequality [16], we get the desired inequality (3.43). The uniformity result holds since the constant does not depend on the true density $p$.
4 Application to Forest Density Estimation

We apply the concentration inequality (2.10) to analyze an algorithm for learning high dimensional forest graph models [15]. In a forest density estimation problem, we observe $n$ data points $x^1, \ldots, x^n \in \mathbb{R}^d$ from a $d$-dimensional random vector $X$. We have two learning tasks: (i) we want to estimate an acyclic undirected graph $F = (V, E)$, where $V$ is the vertex set containing all the random variables and $E$ is the edge set such that an edge $(j, k) \in E$ if and only if the corresponding random variables $X_j$ and $X_k$ are conditionally dependent given the other variables $X_{\setminus\{j,k\}}$; (ii) once we have an estimated graph $\widehat{F}$, we want to estimate the density function $p(x)$.
Using the negative log-likelihood loss, Liu et al. [15] show that the graph estimation problem can be recast as the problem of finding a maximum weight spanning forest of a weighted graph, where the weight of the edge connecting nodes $j$ and $k$ is $I(X_j; X_k)$, the mutual information between these two variables. Empirically, we replace $I(X_j; X_k)$ by its estimate $\widehat{I}(X_j; X_k)$ from (2.9). The forest graph can be obtained by the Chow-Liu algorithm [3, 13], which is an iterative algorithm: at each iteration it adds the edge connecting the pair of variables with maximum mutual information among all pairs not yet visited, provided that doing so does not form a cycle. When stopped early, after $s < d - 1$ edges have been added, it yields the best $s$-edge weighted forest (a code sketch follows below). Once a forest graph $\widehat{F} = (V, \widehat{E})$ has been estimated, we propose to estimate the forest density as
$$\widehat{p}_{\widehat{F}}(x) = \prod_{(j,k)\in\widehat{E}} \frac{\widetilde{p}_{h_2}(x_j, x_k)}{\widetilde{p}_{h_2}(x_j)\,\widetilde{p}_{h_2}(x_k)} \;\cdot \prod_{\ell \in V\setminus\widehat{U}} \widetilde{p}_{h_2}(x_\ell) \;\cdot \prod_{u \in \widehat{U}} \widetilde{p}_{h_1}(x_u), \qquad (4.1)$$
where $\widehat{U}$ is the set of isolated vertices in the estimated forest $\widehat{F}$: the marginals of the connected vertices share the bivariate bandwidth $h_2$, while the isolated vertices use the univariate-optimal bandwidth $h_1$ (cf. Remark 2.2). Our estimator is different from the estimator proposed by [15]: once the graph $\widehat{F}$ is given, we treat the isolated variables differently than the connected variables. As will be shown in Theorem 4.2, such a choice leads to minimax optimal forest density estimation, while the rate obtained in [15] is suboptimal.
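For concreteness, here is a sketch (our own code) of the early-stopped Chow-Liu step as Kruskal's maximum-weight spanning-forest algorithm with union-find; `mi_matrix` is assumed to hold the symmetric pairwise estimates $\widehat{I}(X_j; X_k)$ from (2.9).

```python
def chow_liu_forest(mi_matrix, s):
    """Best s-edge forest by Kruskal's algorithm on estimated mutual
    informations.  mi_matrix[j, k] holds I_hat(X_j; X_k) from (2.9)."""
    d = mi_matrix.shape[0]
    parent = list(range(d))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    pairs = [(mi_matrix[j, k], j, k)
             for j in range(d) for k in range(j + 1, d)]
    pairs.sort(reverse=True)          # heaviest edges first
    edges = []
    for w, j, k in pairs:
        if len(edges) == s:
            break
        rj, rk = find(j), find(k)
        if rj != rk:                  # adding (j, k) creates no cycle
            parent[rj] = rk
            edges.append((j, k))
    return edges
```

Because all edge weights are nonnegative, stopping Kruskal's algorithm after the first $s$ accepted edges yields exactly the maximum-weight forest with at most $s$ edges.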
Let $\mathcal{F}_d^s$ denote the set of forest graphs with $d$ nodes and no more than $s$ edges. Let $D(\cdot\|\cdot)$ be the Kullback-Leibler divergence. We define the $s$-oracle forest $F_s^* := (V, E^*)$ and its corresponding oracle density estimator $p_{F^*}$ to be
$$F_s^* = \arg\min_{F \in \mathcal{F}_d^s} D(p\,\|\,p_F) \quad \text{and} \quad p_{F^*} := \prod_{(j,k)\in E^*}\frac{p(x_j, x_k)}{p(x_j)\,p(x_k)}\prod_{\ell \in V} p(x_\ell). \qquad (4.2)$$
Let $\Sigma_\kappa(2, L)$ be defined as in Assumption 2.1. We define a density class $\mathcal{P}_\kappa$ as
$$\mathcal{P}_\kappa := \big\{p : p \text{ is a } d\text{-dimensional density with } p(x_j, x_k) \in \Sigma_\kappa(2, L) \text{ for any } j \neq k\big\}. \qquad (4.3)$$
The next two theorems show that the above forest density estimation procedure is minimax optimal for both graph recovery and density estimation. Their proofs are provided in a technical report [14].

Theorem 4.1 (Graph Recovery). Let $\widehat{F}$ be the estimated $s$-edge forest graph using the Chow-Liu algorithm. Under the same conditions as Theorem 12 in [15], if we choose $h \asymp n^{-1/4}$ for the mutual information estimator in (2.9), then
$$\sup_{p \in \mathcal{P}_\kappa} P\big(\widehat{F} \neq F_s^*\big) = O\Big(s\sqrt{\tfrac{\log d}{n}}\,\Big) \quad \text{whenever} \quad \frac{\log d}{n} \to 0. \qquad (4.4)$$
Theorem 4.2 (Density Estimation). Once the $s$-edge forest graph $\widehat{F}$ from Theorem 4.1 has been obtained, we calculate the density estimator (4.1) by choosing $h_1 \asymp n^{-1/5}$ and $h_2 \asymp n^{-1/6}$. Then
$$\sup_{p \in \mathcal{P}_\kappa} \mathbb{E}\int_{\mathcal{X}}\big|\widehat{p}_{\widehat{F}}(x) - p_{F^*}(x)\big|\,dx \leq C\sqrt{\frac{s}{n^{2/3}} + \frac{d - s}{n^{4/5}}}. \qquad (4.5)$$
5 Discussions and Conclusions

Theorem 4.1 allows $d$ to increase exponentially fast as $n$ increases while still guaranteeing graph recovery consistency. Theorem 4.2 provides the rate of convergence for the $L_1$-risk. The obtained rate is minimax optimal over the class $\mathcal{P}_\kappa$. The term $s\,n^{-2/3}$ corresponds to the price paid to estimate bivariate densities, while the term $(d - s)\,n^{-4/5}$ corresponds to the price paid to estimate univariate densities. In this way, we see that the exponential concentration inequality for Shannon mutual information leads to significantly improved theoretical analysis of forest density estimation, in terms of both graph estimation and density estimation. This research was supported by NSF grant IIS-1116730 and AFOSR contract FA9550-09-1-0373.
References
[1] Ibrahim A. Ahmad and Pi-Erh Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.). IEEE Transactions on Information Theory, 22(3):372-375, 1976.
[2] J. Beirlant, E. J. Dudewicz, L. Györfi, and E. C. Van Der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6(1):17-39, 1997.
[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. Information Theory, IEEE Transactions on, 14(3):462-467, 1968.
[4] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 1991.
[5] Paul P. B. Eggermont and Vincent N. LaRiccia. Best asymptotic normality of the kernel density entropy estimator for smooth densities. IEEE Transactions on Information Theory, 45(4):1321-1326, 1999.
[6] A. Gretton, R. Herbrich, and A. J. Smola. The kernel mutual information. In Acoustics, Speech, and Signal Processing, 2003. Proceedings (ICASSP '03). 2003 IEEE International Conference on, volume 4, pages IV-880. IEEE, 2003.
[7] Peter Hall and Sally Morton. On the estimation of entropy. Annals of the Institute of Statistical Mathematics, 45(1):69-88, 1993.
[8] A. O. Hero III, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. Signal Processing Magazine, IEEE, 19(5):85-95, 2002.
[9] Harry Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683-697, December 1989.
[10] M. C. Jones, M. C. Linton, and J. P. Nielsen. A simple bias reduction method for density estimation. Biometrika, 82(2):327-338, 1995.
[11] Shiraj Khan, Sharba Bandyopadhyay, Auroop R. Ganguly, Sunil Saigal, David J. Erickson, Vladimir Protopopescu, and George Ostrouchov. Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data. Phys. Rev. E, 76:026209, Aug 2007.
[12] Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69(6 Pt 2), June 2004.
[13] Joseph B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7(1):48-50, 1956.
[14] Han Liu, John Lafferty, and Larry Wasserman. Optimal forest density estimation. Technical report, 2012.
[15] Han Liu, Min Xu, Haijie Gu, Anupam Gupta, John D. Lafferty, and Larry A. Wasserman. Forest density estimation. Journal of Machine Learning Research, 12:907-951, 2011.
[16] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, number 141 in London Mathematical Society Lecture Note Series, pages 148-188. Cambridge University Press, August 1989.
[17] XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, 2010.
[18] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. arXiv preprint arXiv:1003.1954, 2010.
[19] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191-1253, 2003.
[20] Liam Paninski and Masanao Yajima. Undersmoothed kernel entropy estimators. IEEE Transactions on Information Theory, 54(9):4384-4388, 2008.
[21] Barnabás Póczos and Jeff G. Schneider. Nonparametric estimation of conditional information and divergences. Journal of Machine Learning Research - Proceedings Track, 22:914-923, 2012.
[22] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, NY, 1986.
[23] A. B. Tsybakov and E. C. van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support, volume 23. Université catholique de Louvain, Institut de statistique, 1994.
[24] Marc M. Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation, 17(9):1903-1910, September 2005.
[25] O. Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society Series B, 38(1):54-59, 1976.
[26] B. van Es. Estimating functionals related to a density by a class of statistics based on spacings. Scandinavian Journal of Statistics, 19(1):61-72, 1992.
Learning to Align from Scratch
Gary B. Huang1
Marwan A. Mattar1 Honglak Lee2 Erik Learned-Miller1
1
University of Massachusetts, Amherst, MA
{gbhuang,mmattar,elm}@cs.umass.edu
2
University of Michigan, Ann Arbor, MI
[email protected]
Abstract
Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification. Such alignment reduces
undesired variability due to factors such as pose, while only requiring weak supervision in the form of poorly aligned examples. However, prior work on unsupervised alignment of complex, real-world images has required the careful selection of feature representation based on hand-crafted image descriptors, in order to
achieve an appropriate, smooth optimization landscape. In this paper, we instead
propose a novel combination of unsupervised joint alignment with unsupervised
feature learning. Specifically, we incorporate deep learning into the congealing
alignment framework. Through deep learning, we obtain features that can represent the image at differing resolutions based on network depth, and that are tuned
to the statistics of the specific data being aligned. In addition, we modify the
learning algorithm for the restricted Boltzmann machine by incorporating a group
sparsity penalty, leading to a topographic organization of the learned filters and
improving subsequent alignment results. We apply our method to the Labeled
Faces in the Wild database (LFW). Using the aligned images produced by our
proposed unsupervised algorithm, we achieve higher accuracy in face verification
compared to prior work in both unsupervised and supervised alignment. We also
match the accuracy for the best available commercial method.
1 Introduction
One of the most challenging aspects of image recognition is the large amount of intra-class variability, due to factors such as lighting, background, pose, and perspective transformation. For tasks
involving a specific object category, such as face verification, this intra-class variability can often be
much larger than inter-class differences. This variability can be seen in Figure 1, which shows sample images from Labeled Faces in the Wild (LFW), a data set used for benchmarking unconstrained
face verification performance. The task in LFW is, given a pair of face images, determine if both
faces are of the same person (matched pair), or if each shows a different person (mismatched pair).
Figure 1: Sample images from LFW: matched pairs (top row) and mismatched pairs (bottom row)
Recognition performance can be significantly improved by removing undesired intra-class variability, by first aligning the images to some canonical pose or configuration. For instance, face verification accuracy can be dramatically increased through image alignment, by detecting facial feature
points on the image and then warping these points to a canonical configuration. This alignment
process can lead to significant gains in recognition accuracy on real-world face verification, even
for algorithms that were explicitly designed to be robust to some misalignment [1]. Therefore,
the majority of face recognition systems evaluated on LFW currently make use of a preprocessed
version of the data set known as LFW-a,1 where the images have been aligned by a commercial fiducial point-based supervised alignment method [2]. Fiducial point (or landmark-based) alignment
algorithms [1, 3-5], however, require a large amount of supervision or manual effort. One must decide which fiducial points to use for the specific object class, and then obtain many example image
patches of these points. These methods are thus hard to apply to new object classes, since all of this
manual collection of data must be re-done, and the alignment results may be sensitive to the choice
of fiducial points and quality of training examples.
An alternative to this supervised approach is to take a set of poorly aligned images (e.g., images
drawn from approximately the same distribution as the inputs to the recognition system) and attempt
to make the images more similar to each other, using some measure of joint similarity such as
entropy. This framework of iteratively transforming images to reduce the entropy of the set is known
as congealing [6], and was originally applied to specific types of images such as binary handwritten
characters and magnetic resonance image volumes [7?9]. Congealing was extended to work on
complex, real-world object classes such as faces and cars [10]. However, this required a careful
selection of hand-crafted feature representation (SIFT [11]) and soft clustering, and does not achieve
as large of an improvement in verification accuracy as supervised alignment (LFW-a).
In this work, we propose a novel combination of unsupervised alignment and unsupervised feature learning, specifically by incorporating deep learning [12-14] into the congealing framework.
Through deep learning, we can obtain a feature representation tuned to the statistics of the specific
object class we wish to align, and capture the data at multiple scales by using multiple layers of a
deep learning architecture. Further, we incorporate a group sparsity constraint into the deep learning algorithm, leading to a topographic organization on the learned filters, and show that this leads
to improved alignment results. We apply our method to unconstrained face images and show that,
using the aligned images, we achieve a significantly higher face verification accuracy than obtained
both using the original face images and using the images produced by prior work in unsupervised
alignment [10]. In addition, the accuracy surpasses that achieved using supervised fiducial points
based alignment [3], and matches the accuracy using the LFW-a images produced by commercial
supervised alignment.
2 Related Work
We review relevant work in unsupervised joint alignment and deep learning.
2.1 Unsupervised Joint Alignment
Cox et al. presented a variation of congealing for unsupervised alignment, where the entropy similarity measure is replaced with a least-squares similarity measure [15, 16]. Liu et al. extended
congealing by modifying the objective function to allow for simultaneous alignment and clustering [17]. Mattar et al. developed a transformed variant of Bayesian infinite models that can also
simultaneously align and cluster complex data sets [18]. Zhu et al. developed a method for non-rigid
alignment using a model parameterized by mesh vertex coordinates in a deformable Lucas-Kanade
formulation [19]. However, this technique requires additional supervision in the form of object part
(e.g., eye) detectors specific to the data to be aligned.
In this work, we chose to extend the original congealing method, rather than other alignment frameworks, for several reasons. The algorithm uses entropy as a measure of similarity, rather than variance or least squares, thus allowing for the alignment of data with multiple modes. Unlike other joint
alignment procedures [15], the main loop scales linearly with the number of images to be aligned,
allowing for a greater number of images to be jointly aligned, smoothing the optimization landscape.
Finally, congealing requires only very weak supervision in the form of poorly aligned images. However, our proposed extensions, using features obtained from deep learning, could also be applied
to other alignment algorithms that have only been used with a pixel intensity representation, such
as [15, 16, 19].
2.2 Deep Learning
A deep belief network (DBN) is a generative graphical model consisting of a layer of visible units
and multiple layers of hidden units, where each layer encodes statistical dependencies in the units in
1 http://www.openu.ac.il/home/hassner/data/lfwa/
the layer below [12]. DBNs and related unsupervised learning algorithms such as auto-encoders [13]
and sparse coding [20, 21] have been used to learn higher-level feature representations from unlabeled data, suitable for use in tasks such as classification. These methods have been successfully
applied to computer vision tasks [22-26], as well as audio recognition [27], natural language processing [28], and information retrieval [29]. To the best of our knowledge, our proposed method is
the first to apply deep learning to the alignment problem.
DBNs are generally trained using images drawn from the same distribution as the test images, which
in our case corresponds to learning from faces in the LFW training set. In many machine learning
problems, however, we are given only a limited amount of labeled data, which can cause an overfitting problem. Thus, we also examine the strategy of self-taught learning [30] (related to semisupervised learning [31]). The idea of self-taught learning is to use a large amount of unlabeled
data from a distribution different from the labeled data, and transfer low-level structures that can be
shared between unlabeled and labeled data. For generic object categorization, Raina et al. [30] and
Lee et al. [23] have shown successful applications of self-taught learning, using sparse coding and
deep belief networks to learn feature representations from natural images. In this paper, we examine
whether self-taught learning can be successful for alignment tasks.
In addition, we augment the training procedure of DBNs by adding a group sparsity regularization
term, leading to a set of learned filters with a linear topographic organization. This idea is closely
related to the Group Lasso for regression [32] and Topographic ICA [33], and has been applied to
sparse coding with basis functions that form a generally two-dimensional topological map [34]. We
extend this method to basis functions that are learned in a convolutional manner, and to higher-order
features obtained from a multi-layer convolutional DBN.
3 Methodology
We begin with a review of the congealing framework. We then show how deep learning can be
incorporated into this framework using convolutional DBNs, and how the learning algorithm can be
modified through group sparsity regularization to improve congealing performance.
3.1 Congealing
We first define two terms used in congealing, the distribution field (DF) and the location stack. Let
X = {1, 2, . . . , M } be the set of all feature values. For example, letting the feature space be intensity
values, M = 2 for binary images and M = 256 for 8-bit grayscale images. A distribution field is
a distribution over X at each location in the image representation; e.g., for binary images, a DF
would be a distribution over {0, 1} at each pixel in the image. One can view the DF as a generative
independent pixel model of images, by placing a random variable Xi at each pixel location i. An
image then consists of a draw from the alphabet X for each Xi according to the distribution over X
at the ith pixel of the DF. Given a set of images, the location stack is defined as the set of values,
with domain X , at a specific location across a set of images. Thus, the empirical distribution at a
given location of a DF is determined by the corresponding location stack.
Congealing proceeds by iteratively computing the empirical distribution defined by a set of images,
then for each image, choosing a transformation (we use the set of similarity transformations) that
reduces the entropy of the distribution field. Figure 2 illustrates congealing on one dimensional
binary images. Under an independent pixel model and uniform distribution over transformations,
minimizing the entropy of the distribution field is equivalent to maximizing the likelihood according
to the distribution field [6].
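To make the procedure concrete, the following is a minimal sketch of congealing on one dimensional binary images, restricted to left-right translations as in Figure 2. The function names and the greedy one-shift-per-image search are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def df_entropy(images):
    # Entropy of the distribution field: images is an (N, L) binary array,
    # and images.mean(0) is the empirical P(X_i = 1) at each location i.
    p = np.clip(images.mean(axis=0), 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum()

def congeal_1d(images, shifts=(-1, 0, 1), n_iters=10):
    # Greedy congealing: each image picks the translation that most
    # reduces the total entropy of the current distribution field.
    images = images.copy()
    for _ in range(n_iters):
        for n in range(len(images)):
            def entropy_after(s, n=n):
                trial = images.copy()
                trial[n] = np.roll(images[n], s)
                return df_entropy(trial)
            images[n] = np.roll(images[n], min(shifts, key=entropy_after))
    return images
```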
Once congealing has been performed on a set of images (e.g., a training set), funneling [6,10] can be
used to quickly align additional images, such as from a new test set. This is done by maintaining the
Figure 2: Schematic illustration of congealing of one dimensional binary images, where the transformation space is left-right translation.
Figure 3: Illustration of convolutional RBM with probabilistic max-pooling. For illustration, we used pooling ratio C = 2 and number of filters K = 4. See text for details. (Panel labels: visible nodes $v_{ij}$ of width $N_V$; hidden detection nodes $h_{ij}^k$ of width $N_H = N_V - N_W$, where $N_W$ is the filter width; hidden pooling nodes $p_\alpha^k$ of width $N_P = N_H / C$; $k = 1, \ldots, K$.)
sequence of DFs from each iteration of congealing. A new image is then aligned by transforming it
iteratively according to the sequence of saved DFs, thereby approximating the results of congealing
on the original set of images as well as the new test image. As mentioned earlier, congealing was extended to work on complex object classes, such as faces, by using soft clustering of SIFT descriptors
as the feature representation [10]. We will refer to this congealing algorithm as SIFT congealing.
We now describe our proposed extension, which we refer to as deep congealing.
3.2
Deep Congealing
To incorporate deep learning within congealing, we use the convolutional restricted Boltzmann machine (CRBM) [23,35] and convolutional deep belief network (CDBN) [23]. The CRBM is an extension of the restricted Boltzmann machine, which is a Markov random field with a hidden layer and
a visible layer (corresponding to image pixels in computer vision problems), where the connection
between layers is bipartite. In the CRBM, rather than fully connecting the hidden layer and visible
layer, the weights between the hidden units and the visible units are local (i.e., 10 × 10 pixels instead of full image) and shared among all hidden units. An illustration of CRBM can be found in Figure 3. The CRBM has three sets of parameters: (1) K convolution filter weights between the hidden nodes and the visible nodes, where each filter is $N_W \times N_W$ pixels (i.e., $W^k \in \mathbb{R}^{N_W \times N_W}$, $k = 1, ..., K$); (2) hidden biases $b_k \in \mathbb{R}$ that are shared among hidden nodes; and (3) visible bias $c \in \mathbb{R}$ that is shared among visible nodes.
To make CRBMs more scalable, Lee et al. developed probabilistic max-pooling, a technique for
incorporating local translation invariance. Max-pooling refers to operations where a local neighborhood (e.g., 2 × 2 grid) of feature detection outputs is shrunk to a pooling node by computing the
maximum of the local neighbors. Max-pooling makes the feature representation more invariant to
local translations in the input data, and has been shown to be useful in computer vision [23, 25, 36].
Letting $P(v, h) = \frac{1}{Z} \exp(-E(v, h))$, we define the energy function of the probabilistic max-pooling CRBM as follows (see Footnote 2):

$$E(v, h) = -\sum_{k=1}^{K} \sum_{i,j=1}^{N_H} h_{ij}^{k} (\tilde{W}^{k} \ast v)_{ij} + \frac{1}{2}\sum_{r,s=1}^{N_V} v_{rs}^{2} - \sum_{k=1}^{K} b_{k} \sum_{i,j=1}^{N_H} h_{ij}^{k} - c \sum_{r,s=1}^{N_V} v_{rs}$$

$$\text{s.t.} \quad \sum_{(i,j) \in B_{\alpha}} h_{ij}^{k} \le 1, \quad \forall k, \alpha$$
Here, $\tilde{W}^k$ refers to flipping the original filter $W^k$ in both upside-down and left-right directions, and $\ast$ denotes convolution. $B_\alpha$ refers to a $C \times C$ block of locally neighboring hidden units (i.e., pooling region) $h_{i,j}^k$ that are pooled to a pooling node $p_\alpha^k$. The CRBM can be trained by approximately
maximizing the log-likelihood of the unlabeled data via contrastive divergence [37]. For details on
learning and inference in CRBMs, see [23].
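For concreteness, the following is a small sketch of evaluating the energy above with NumPy/SciPy. It assumes real-valued visible units and uses the fact that convolving v with the flipped filter $\tilde{W}^k$ equals cross-correlating v with $W^k$; it only evaluates E(v, h) and does not implement training.

```python
import numpy as np
from scipy.signal import correlate2d

def crbm_energy(v, h, W, b, c):
    # v: (NV, NV) image; h: (K, NH, NH) binary detection units;
    # W: (K, NW, NW) filters; b: (K,) hidden biases; c: visible bias.
    E = 0.5 * (v ** 2).sum() - c * v.sum()
    for k in range(W.shape[0]):
        act = correlate2d(v, W[k], mode='valid')   # (W~^k * v)_{ij}
        E -= (h[k] * act).sum() + b[k] * h[k].sum()
    return E
```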
After training a CRBM, we can use it to compute the posterior of the pooling units given the input
data. These pooling unit activations can be used as input to further train the next layer CRBM. By
stacking the CRBMs, the algorithm can capture high-level features, such as hierarchical object-part
decompositions. After constructing a convolutional deep belief network, we perform (approximate)
inference of the whole network in a feedforward (bottom-up) manner. Specifically, letting $I(h_{ij}^k) \triangleq b_k + (\tilde{W}^k \ast v)_{ij}$, we can infer the pooling unit activations as a softmax function:

$$P(p_{\alpha}^{k} = 1 \mid v) = \frac{\sum_{(i',j') \in B_{\alpha}} \exp(I(h_{i'j'}^{k}))}{1 + \sum_{(i',j') \in B_{\alpha}} \exp(I(h_{i'j'}^{k}))}$$

Footnote 2: We use real-valued visible units in the first-layer CRBM; however, we use binary-valued visible units when constructing the second-layer CRBM. See [23] for details.
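A sketch of this feedforward inference step follows (our own illustrative code; the naive exp ignores numerical overflow):

```python
import numpy as np
from scipy.signal import correlate2d

def pooling_probabilities(v, W, b, C):
    # Returns P(p_alpha^k = 1 | v) for every C x C pooling block B_alpha.
    pools = []
    for k in range(W.shape[0]):
        I = b[k] + correlate2d(v, W[k], mode='valid')   # I(h^k_ij)
        NP = I.shape[0] // C
        e = np.exp(I[:NP * C, :NP * C])
        # sum exp(I) over each C x C pooling block
        block_sum = e.reshape(NP, C, NP, C).sum(axis=(1, 3))
        pools.append(block_sum / (1.0 + block_sum))
    return np.stack(pools)                              # (K, NP, NP)
```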
Given a set of poorly aligned face images, our goal is to iteratively transform each image to reduce
the total entropy over the pooling layer outputs of a CDBN applied to each of the images. For a
CDBN with K pooling layer groups, we now have K location stacks at each image location (after
max-pooling), over a binary distribution for each location stack. Given N unaligned face images,
let P be the number of pooling units in each group in the top-most layer of the CDBN. We use
the pooling unit probabilities, with the interpretation that the pooling unit can be considered as a
mixture of sub-units that are on and off [6]. Letting $p_{\alpha}^{k,(n)}$ be the pooling unit $\alpha$ in group k for image n under some transformation $T^n$, we define $D_{\alpha}^{k}(1) = \frac{1}{N}\sum_{n=1}^{N} p_{\alpha}^{k,(n)}$ and $D_{\alpha}^{k}(0) = 1 - D_{\alpha}^{k}(1)$. Then, the entropy for a specific pooling unit is $H(D_{\alpha}^{k}) = -\sum_{s \in \{0,1\}} D_{\alpha}^{k}(s) \log(D_{\alpha}^{k}(s))$. At each iteration of congealing, we find a transformation for each image that decreases the total entropy $\sum_{k=1}^{K}\sum_{\alpha=1}^{P} H(D_{\alpha}^{k})$. Note that if K = 1, this reduces to the traditional congealing formulation on the binary output of the single pooling layer.
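This objective can be written compactly; in the sketch below, pool_probs stacks the pooling probabilities of all N images under their current transformations (our own data layout).

```python
import numpy as np

def total_pooling_entropy(pool_probs):
    # pool_probs: (N, K, P) array of p_alpha^{k,(n)} values.
    D1 = np.clip(pool_probs.mean(axis=0), 1e-12, 1 - 1e-12)  # D_alpha^k(1)
    return -(D1 * np.log(D1) + (1 - D1) * np.log(1 - D1)).sum()
```

At each congealing iteration, each image would then try small similarity-transform perturbations and keep the one that lowers this total.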
3.3 Learning a Topology
As congealing reduces entropy by performing local hill-climbing in the transformation parameters,
a key factor in the success of congealing is the smoothness of this optimization landscape. In SIFT
congealing, smoothness is achieved through soft clustering and the properties of the SIFT descriptor.
Specifically, to compute the descriptor, the gradient is computed at each pixel location and added
to a weighted histogram over a fixed number of angles. The histogram bins have a natural circular
topology. Therefore, the gradient at each location contributes to two neighboring histogram bins,
weighted using linear interpolation. This leads to a smoother optimization landscape when congealing. For instance, if a face is rotated a fraction of the correct angle to put it into a good alignment,
there will be a corresponding partial decrease in entropy due to this interpolated weighting.
In contrast, there is no topology on the filters produced using standard learning of a CRBM. This
may lead to plateaus or local minima in the optimization landscape with congealing, for instance,
if one filter is a small rotation of another filter, and a rotation of the image causes a section of the
face to be between these two filters. This problem may be particularly severe for filters learned at
deeper layers of a CDBN. For instance, a second-layer CDBN trained on face images would likely
learn multiple filters that resemble eye detectors, capturing slightly different types and scales of
eyes. If these filters are activating independently, then the resulting entropy of a set of images may
not decrease even if eyes in different images are brought into closer alignment.
A CRBM is generally trained with sparsity regularization [38], such that each filter responds to
a sparse set of input stimuli. A smooth optimization for congealing requires that, as an image
patch is transformed from one such sparse set to another, the change in pooling unit activations is
also gradual rather than abrupt. Therefore, we would like to learn filters with a linear topological
ordering, such that when a particular pooling unit $p_\alpha^k$ at location $\alpha$, associated with filter k, is activated, the pooling units at the same location, associated with nearby filters, i.e., $p_\alpha^{k'}$ for $k'$ close to k, will also have partial activation. To learn a topology on the learned filters, we add the
following group sparsity penalty to the learning objective function (i.e., negative log-likelihood):

$$L_{\text{sparsity}} = \lambda \sum_{k,\alpha} \sqrt{\sum_{k'} \omega_{k'-k} \left(p_{\alpha}^{k'}\right)^{2}}$$

where $\omega_d$ is a Gaussian weighting, $\omega_d \propto \exp(-\frac{d^2}{2\sigma^2})$. Let
the term array be used to refer to the set of pooling units associated with a particular filter, i.e., $p_\alpha^k$ for all locations $\alpha$. This regularization penalty is a sum (L1 norm) of L2 norms, each of which is a
Gaussian weighting, centered at a particular array, of the pooling units across each array at a specific
location. In practice, rather than weighting every array in each summand, we use a fixed kernel
covering five consecutive filters, i.e., $\omega_d = 0$ for $|d| > 2$.
The rationale behind such a regularization term is that, unlike an L2 norm, an L1 norm encourages
sparsity. This sum of L2 norms thus encourages sparsity at the group level, where a group is a set
of Gaussian weighted activations centered at a particular array. Therefore, if two filters are similar
and tend to both activate for the same visible data, a smaller penalty will be incurred if these filters
Figure 4: Visualization of second layer filters learned from face images, (a) without topology and (b) with topology. By learning with a linear topology, nearby filters (in row major order) have correlated activations. This leads to filters for particular facial features to be grouped together, such as eye detectors at the end of the row third from the bottom.
are nearby in the topological ordering, as this will lead to a more sparse representation at the group
L2 level. To account for this penalty term, we augment the learning algorithm by taking a step in
the negative derivative with respect to the CRBM weights. We define $\alpha(i, j)$ as the pooling location associated with position $(i, j)$, and define

$$J_{ij}^{k,k'} = \frac{p_{\alpha(i,j)}^{k}\,(1 - p_{\alpha(i,j)}^{k})\,h_{ij}^{k}}{\sqrt{\sum_{k''} \omega_{k''-k'}\,(p_{\alpha(i,j)}^{k''})^{2}}}.$$

We can write the full gradient as $\nabla_{W^k} L_{\text{sparsity}} = -\sum_{k'} \omega_{k-k'}\,(v \ast \tilde{J}^{k,k'})$, where $\ast$ denotes convolution and $\tilde{J}^{k,k'}$ means $J^{k,k'}$ flipped horizontally and vertically. Thus we can efficiently compute the gradient as a sum of convolutions.
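As a sanity check of the penalty itself (not of the convolutional gradient), the following sketch evaluates $L_{\text{sparsity}}$ for pooling activations p of shape (K, P), with the fixed five-filter kernel described above; the names and the small epsilon inside the square root are our own additions.

```python
import numpy as np

def group_sparsity_penalty(p, lam=1.0, sigma=1.0, radius=2):
    # p: (K, P) pooling activations; omega_d ~ exp(-d^2 / (2 sigma^2)),
    # truncated to zero for |d| > radius (five consecutive filters).
    K, _ = p.shape
    d = np.arange(-radius, radius + 1).astype(float)
    omega = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    total = 0.0
    for k in range(K):
        ks = np.arange(k - radius, k + radius + 1)
        valid = (ks >= 0) & (ks < K)
        # weighted group norm^2 at every location, centered on filter k
        group = (omega[valid][:, None] * p[ks[valid]] ** 2).sum(axis=0)
        total += np.sqrt(group + 1e-12).sum()
    return lam * total
```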
Following the procedure given by Sohn et al. [39], we initialize the filters using expectation-maximization under a mixture of Gaussians/Bernoullis, before proceeding with CRBM learning.
Therefore, when learning with the group sparsity penalty, we periodically reorder the filters using
the following greedy strategy. Taking the first filter, we iteratively add filters one by one to the end
of the filter set, picking the filter that minimizes the group sparsity penalty.
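A sketch of that greedy reordering follows; penalty_fn would be a function like group_sparsity_penalty above, evaluated on the activations in the candidate order (our own framing of the procedure).

```python
def greedy_reorder(p, penalty_fn):
    # p: (K, P) pooling activations, one row per filter.
    remaining = list(range(p.shape[0]))
    order = [remaining.pop(0)]                 # keep the first filter fixed
    while remaining:
        best = min(remaining, key=lambda k: penalty_fn(p[order + [k]]))
        remaining.remove(best)
        order.append(best)
    return order
```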
4 Experiments
We learn three different convolutional DBN models to use as the feature representation for deep
congealing. First, we learn a one-layer CRBM from the Kyoto images,3 a standard natural image
data set, to evaluate the performance of congealing with self-taught CRBM features. Next, we learn
a one-layer CRBM from LFW face images, to compare performance when learning the features
directly on images of the object class to be aligned. Finally, we learn a two-layer CDBN from LFW
face images, to evaluate performance using higher-order features. For all three models, we also
compare learning the weights using the standard sparse CDBN learning, as well as learning with
group sparsity regularization. Visualizations of the top layer weights of the two-layer CDBN are
given in Figure 4, demonstrating the effect of adding the sparsity regularization term.
We used K = 32 filters for the one-layer models and K = 96 in the top layer of the two-layer
models. During learning, we used a pooling size of 5x5 for the one-layer models and 3x3 in both
layers of the two-layer model. We used ? 2 = 1 in the Gaussian weighting for group sparsity
regularization. For computing the pooling layer representation to use in congealing, we modified
the pooling size to 3x3 for the one-layer models and 2x2 for the second layer in the two-layer
model, and adjusted the hidden biases to give an expected activation of 0.025 for the hidden units.
In Figure 5, we show a selection of images under several alignment methods. Each image is shown
in its original form, and aligned using SIFT Congealing, Deep Congealing with topology, using a
one-layer and two-layer CDBN trained on faces, and the LFW-a alignment.
We evaluate the effect of alignment on verification accuracy using View 1 of LFW. For the congealing methods, 400 images from the training set were congealed and used to form a funnel to
subsequently align all of the images in both the training and test sets. To obtain verification accuracy, we use a variation on the method of Cosine Similarity Metric Learning (CSML) [40], one of
the top-performing methods on LFW. As in CSML, we first apply whitening PCA and reduce the
representation to 500 dimensions. We then normalize each image feature vector, and apply a linear
SVM to an image pair by combining the image feature vectors using element-wise multiplication.
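The pair-scoring step can be summarized as below; whiten (the 500-dimensional whitening PCA learned on training data) is an assumed helper, and the resulting pair feature is fed to a linear SVM. With unit weights and zero bias the score reduces to cosine similarity, as noted next.

```python
import numpy as np

def pair_feature(x1, x2, whiten):
    # x1, x2: square-root LBP feature vectors of the two face images;
    # whiten(x) projects to 500 whitened PCA dimensions (learned beforehand).
    z1, z2 = whiten(x1), whiten(x2)
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return z1 * z2        # element-wise product; input to the linear SVM
```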
Footnote 3: http://www.cnbc.cmu.edu/cplab/data_kyoto.html
Figure 5: Sample images from LFW produced by different alignment algorithms. For each set of five
images, the alignments are, from left to right: original images; SIFT Congealing; Deep Congealing,
Faces, layer 1, with topology; Deep Congealing, Faces, layer 2, with topology; Supervised (LFW-a).
Note that if the weights of the SVM are 1 and the bias is 0, then this is equivalent to cosine similarity. We find that this procedure yields comparable accuracy to CSML but is much faster and less
sensitive to the regularization parameters.4 As our goal is to improve verification accuracy through
better alignment, we focus on performance using a single feature representation, and only use the
square root LBP features [40, 41] on 150x80 croppings of the full LFW images.
Table 1 gives the verification accuracy for this verification system using images produced by a number of alignment algorithms. Deep congealing gives a significant improvement over SIFT congealing. Using a CDBN representation learned with a group sparsity penalty, leading to learned filters
with topographic organization, consistently gives a higher accuracy of one to two percentage points.
We compare with two supervised alignment systems, the fiducial points based system of [3],5 and
LFW-a. Note that LFW-a was produced by a commercial alignment system, in the spirit of [3], but
with important differences that have not been published [2]. Congealing with a one-layer CDBN6
trained on faces, with topology, gives verification accuracy significantly higher than using images
produced by [3], and comparable to the accuracy using LFW-a images.
Moreover, we can combine the verification scores using images from the one-layer and two-layer
CDBN trained on faces, learning a second SVM on these scores. By doing so, we achieve a further
gain in verification performance, achieving an accuracy of 0.831, exceeding the accuracy using
LFW-a. This suggests that the two-layer CDBN alignment is somewhat complementary to the one-layer alignment. That is, although the two-layer CDBN alignment produces a lower verification
accuracy, it is not strictly worse than the one-layer CDBN alignment for all images, but rather
is aligning according to a different set of statistics, and achieves success on a different subset of
images than the one-layer CDBN model. As a control, we performed the same score combination
using the scores produced from images from the one-layer CDBN alignment trained on faces, with
topology, and the original images. This gave a verification accuracy of 0.817, indicating that the
improvement from combining two-layer scores is not merely obtained from using two different sets
of alignments.
Footnote 4: We note that the accuracy published in [40] was higher than we were able to obtain in our own implementation. After communicating with the authors, we found that they used a different training procedure than described in the paper, which we believe inadvertently uses some test data as training, due to View 1 and View 2 of LFW not being mutually exclusive. Following the training procedure detailed in the paper, which we view to be correct, we find the accuracy to be about 3% lower than the published results.
Footnote 5: Using code available at http://www.robots.ox.ac.uk/~vgg/research/nface/
Footnote 6: Technically speaking, the term "one-layer CDBN" denotes a CRBM.
Table 1: Unconstrained face verification accuracy on View 1 of LFW using images produced by
different alignment algorithms. By combining the classifier scores produced by layer 1 and 2 using
a linear SVM, we achieve higher accuracy using unsupervised alignment than obtained using the
widely-used LFW-a images, generated using a commercial supervised fiducial-points algorithm.
Alignment                                                     Accuracy
Original                                                      0.742
SIFT Congealing                                               0.758
Deep Congealing, Kyoto, layer 1                               0.807
Deep Congealing, Kyoto, layer 1, with topology                0.815
Deep Congealing, Faces, layer 1                               0.802
Deep Congealing, Faces, layer 1, with topology                0.820
Deep Congealing, Faces, layer 2                               0.780
Deep Congealing, Faces, layer 2, with topology                0.797
Combining Scores of Faces, layers 1 and 2, with topology      0.831
Fiducial Points-based Alignment [3] (supervised)              0.805
LFW-a (commercial)                                            0.823

5 Conclusion
We have shown how to combine unsupervised joint alignment with unsupervised feature learning.
By congealing on the pooling layer representation of a CDBN, we are able to achieve significant
gains in verification accuracy over existing methods for unsupervised alignment. By adding a group
sparsity penalty to the CDBN learning algorithm, we can learn filters with a linear topology, providing a smoother optimization landscape for congealing. Using face images aligned by this method,
we obtain higher verification accuracy than the supervised fiducial points based method of [3]. Further, despite being unsupervised, our method is still able to achieve comparable accuracy to the
widely used LFW-a images, obtained by a commercial fiducial point-based alignment system whose
detailed procedure is unpublished. We believe that our proposed method is an important contribution
in developing generic alignment systems that do not require domain-specific fiducial points.
References
[1] L. Wolf, T. Hassner, and Y. Taigman. Similarity scores based on background samples. In ACCV, 2009.
[2] Y. Taigman, L. Wolf, and T. Hassner. Multiple one-shots for utilizing class label information. In BMVC,
2009.
[3] M. Everingham, J. Sivic, and A. Zisserman. "Hello! My name is... Buffy" - automatic naming of characters in TV video. In BMVC, 2006.
[4] T. L. Berg, A. C. Berg, M. Maire, R. White, Y. W. Teh, E. Learned-Miller, and D. A. Forsyth. Names and
faces in the news. In CVPR, 2004.
[5] Y. Zhou, L. Gu, and H.-J. Zhang. Bayesian tangent shape model: Estimating shape and pose parameters
via Bayesian inference. In CVPR, 2003.
[6] E. Learned-Miller. Data driven image models through continuous joint alignment. PAMI, 2005.
[7] E. Miller, N. Matsakis, and P. Viola. Learning from one example through shared densities on transforms.
In CVPR, 2000.
[8] L. Zollei, E. Learned-Miller, E. Grimson, and W. Wells. Efficient population registration of 3d data.
In Workshop on Computer Vision for Biomedical Image Applications: Current Techniques and Future
Trends, at ICCV, 2005.
[9] E. Learned-Miller and V. Jain. Many heads are better than one: Jointly removing bias from multiple
MRIs using nonparametric maximum likelihood. In Proceedings of Information Processing in Medical
Imaging, pages 615-626, 2005.
[10] G. B. Huang, V. Jain, and E. Learned-Miller. Unsupervised joint alignment of complex images. In ICCV,
2007.
[11] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[12] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[13] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In
NIPS, 2007.
[14] M. Ranzato, Y.-L. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. In NIPS,
2007.
[15] M. Cox, S. Lucey, S. Sridharan, and J. Cohn. Least squares congealing for unsupervised alignment of
images. In CVPR, 2008.
[16] M. Cox, S. Sridharan, S. Lucey, and J. Cohn. Least squares congealing for large numbers of images. In
ICCV, 2009.
[17] X. Liu, Y. Tong, and F. W. Wheeler. Simultaneous alignment and clustering for an image ensemble. In
ICCV, 2009.
[18] M. A. Mattar, A. R. Hanson, and E. G. Learned-Miller. Unsupervised joint alignment and clustering using
Bayesian nonparametrics. In UAI, 2012.
[19] J. Zhu, L. V. Gool, and S. C. Hoi. Unsupervised face alignment by nonrigid mapping. In ICCV, 2009.
[20] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse
code for natural images. Nature, 381:607-609, 1996.
[21] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS, 2007.
[22] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010.
[23] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Unsupervised learning of hierarchical representations
with convolutional deep belief networks. Communications of the ACM, 54(10):95-103, 2011.
[24] J. Yang, K. Yu, Y. Gong, and T. S. Huang. Linear spatial pyramid matching using sparse coding for image
classification. In CVPR, pages 1794-1801, 2009.
[25] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for
object recognition? In ICCV, 2009.
[26] G. B. Huang, H. Lee, and E. Learned-Miller. Learning hierarchical representations for face verification
with convolutional deep belief networks. In CVPR, 2012.
[27] H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using
convolutional deep belief networks. In NIPS, 2009.
[28] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks
with multitask learning. In ICML, 2008.
[29] R. Salakhutdinov and G. E. Hinton. Semantic hashing. International Journal of Approximate Reasoning,
50:969-978, 2009.
[30] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from
unlabeled data. In ICML, 2007.
[31] O. Chapelle, B. Schölkopf, and A. Zien. Semi-supervised learning. MIT Press, 2006.
[32] M. Yuan and L. Yin. Model selection and estimation in regression with grouped variables. Technical
report, University of Wisconsin, 2004.
[33] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527-1558, 2001.
[34] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic
filter maps. In CVPR, 2009.
[35] M. Norouzi, M. Ranjbar, and G. Mori. Stacks of convolutional restricted Boltzmann machines for shift-invariant feature learning. In CVPR, pages 2735-2742, 2009.
[36] Y.-L. Boureau, F. R. Bach, Y. LeCun, and J. Ponce. Learning mid-level features for recognition. In CVPR,
2010.
[37] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1771-1800, 2002.
[38] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.
[39] K. Sohn, D. Y. Jung, H. Lee, and A. H. III. Efficient learning of sparse, distributed, convolutional feature
representations for object recognition. In ICCV, 2011.
[40] H. V. Nguyen and L. Bai. Cosine similarity metric learning for face verification. In ACCV, 2010.
[41] T. Ojala, M. Pietikinen, and D. Harwood. A comparative study of texture measures with classification
based on feature distributions. Pattern Recognition, 19(3):51-59, 1996.
4,164 | 477 |
A Parallel Analog CCD/CMOS Signal Processor
Charles F. Neugebauer
Amnon Yariv
Department of Applied Physics
California Institute of Technology
Pasadena, CA 91125
Abstract
A CCD-based signal processing IC that computes a fully parallel single
quadrant vector-matrix multiplication has been designed and fabricated with a
2μm CCD/CMOS process. The device incorporates an array of Charge
Coupled Devices (CCD) which hold an analog matrix of charge encoding the
matrix elements. Input vectors are digital with 1 - 8 bit accuracy.
1 INTRODUCTION
Vector-matrix multiplication (VMM) is often used in neural network theories to describe
the aggregation of signals by neurons. An input vector encoding the activation levels of
input neurons is multiplied by a matrix encoding the synaptic connection strengths to
create an output vector. The analog VLSI architecture presented here has been devised to
perform the vector-matrix multiplication using CCD technology. The architecture
calculates a VMM in one clock cycle, an improvement over previous semiparallel devices
(Agranat et al., 1988), (Chiang, 1990). This architecture is also useful for general signal
processing applications where moderate resolution is required, such as image processing.
As most neural models have robust behavior in the presence of noise and inaccuracies,
analog VLSI offers the potential for highly compact neural circuitry. Analog
multiplication circuitry can be made much smaller than its digital equivalent, offering
substantial savings in power and IC size at the expense of limited accuracy and
programmability. Oigitall/O, however, is desirable as it allows the use of standard
memory and control circuits at the system level. The device presented here has digital
input and analog output and elucidates all relevant perfonnance characteristics including
accuracy, speed, power dissipation and charge retention of the VMM. In practice, on-chip
charge-domain A/D converters are used for converting analog output signals to facilitate
digital communication with off-chip devices.
Figure 1: Simplified Schematic of CID Vector Matrix Multiplier (showing the matrix charge array, column gates, and input vector).
2 ARCHITECTURE DESCRIPTION
The vector-matrix multiplier consists of a matrix of CCD cells that resemble Charge
Injection Device (CID) imager pixels in that one of the cell's gates is connected vertically
from cell to cell forming a column electrode while another gate is connected horizontally
forming a row electrode. The charge stored beneath the row and column gates encodes
the matrix. A simplified schematic in Figure 1 shows the array organization.
2.1 BINARY VECTOR MATRIX MULTIPLICATION
In its most basic configuration, the VMM circuit computes the product of a binary input
vector, $u_j$, and an analog matrix of charge. The computation done by each CID cell in
the matrix is a multiply-accumulate in which the charge, $Q_{ij}$, is multiplied by a binary
input vector element, $u_j$, encoded on the column line and this product is summed with
other products in the same row to form the vector product, $I_i$, on the row lines.
Multiplication by a binary number is equivalent to adding or not adding the charge at a
particular matrix element to its associated row line.
The matrix element operation is shown in Figure 2 which displays a cross-section of one
of the rows with the associated potential wells at different times in the computation.
Figure 2: CID Cell Operation. (Panels show the matrix charge under the column and row gates at successive steps; row voltages move between $V_{row}$ and $V_{row} + Q/C$, with the row line floating during sensing.)
In the initial state, prior to the VMM computation, the matrix of charges Qij is moved
beneath the column electrodes by placing a positive voltage on all column lines, shown
in Figure 2(a). A positive voltage creates a deep potential well for electrons. At this
point, the row lines are reset to a reference voltage, $V_{row}$, by FETs Q1 and then
disconnected from the voltage source, shown in Figure 2(b). The computation occurs
when the column lines are pulsed to a negative voltage corresponding to the input vector
$u_j$, shown in Figure 2(c). The binary $u_j$ is represented by a negative pulse on the jth
column line if the element $u_j$ is a binary 1, otherwise the column line is kept at the
positive voltage. This causes the charges in the columns that correspond to binary 1's in
the input vector to be transferred to their respective row electrodes which thus experience a
voltage change given by
$$\Delta V_i = \sum_{j=0}^{N-1} \frac{Q_{ij}\,u_j}{C_{row}}$$

where N is the number of elements in the input vector and $C_{row}$ is the total capacitance
of the row electrode. Once the charge has been transferred, the column lines are reset to
their original positive voltages (see Footnote 1), resulting in the potential diagram in Figure 2(d). The
voltage changes on the row lines are then sampled and the matrix of charges are returned
to the column electrodes in preparation for the next VMM by pulsing the row electrodes
negative as in Figure 2(e). In this manner, a complete binary vector is multiplied by an
analog matrix of charge in one CCD clock cycle.
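Functionally, one clock cycle therefore implements the following ideal computation (a behavioral sketch, ignoring charge-transfer loss and dark current):

```python
import numpy as np

def binary_vmm(Q, u, C_row):
    # Q: (N, N) stored charge matrix; u: (N,) binary input vector.
    return (Q @ u) / C_row          # Delta V_i = sum_j Q_ij u_j / C_row
```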
3 DESIGN AND OPERATION
The implementation of this architecture contains facilities for electronic loading of the
matrix. Originally proposed as an optically loaded device (Agranat et al., 1988), the
electronically loaded version has proven more reliable and consistent.
3.1 LOADING THE CCD ARRAY WITH MATRIX ELEMENTS
The CCD matrix elements described above can be modified to operate as standard four
phase CCD shift registers by simply adding another gate. The matrix cell is shown in
Figure 3. The fabricated single quadrant cell size is 24μm by 24μm using a 2μm
minimum feature size CCD/CMOS process. More aggressive design rules in the same
process can reduce this to 20μm by 20μm. These cells, when abutted with each other in
a row, form a horizontal shift register which is used to load the matrix. Electronic
loading of the matrix is accomplished in a fashion similar to CCD imagers. A fast CCD
shift register running vertically is added along one side of the matrix which is loaded with
one column of matrix charges from a single external analog data source. Once the fast
shift register is loaded, it is transferred into the array by clocking the matrix electrodes to
act as an array of horizontal shift registers, shown in Figure 3(a). This process is repeated
until the entire matrix has been filled with charge.
Footnote 1: Returning the column lines to their original voltage levels has the effect of canceling the
effect of stray capacitive coupling between the row and column lines, since the net column
voltage change is zero.
Figure 3: CID Cell Used to Load Matrix. (Panel labels: Phase 1-4 clock gates along the row, with DC column gates.)
When the matrix has been loaded, the charge can be used for computation with two of the
four gates at each matrix cell kept at constant potentials, shown in Figure 3(b). The
computation process moves the charge repeatedly between two electrodes. Incomplete
charge transfer, a problem with our previous architecture (Agranat et al., 1990), does not
degrade performance since any charge left behind under the column gates during
computation is picked up on the next cycle, shown in Figure 2(e). Only dark current
generation degrades the matrix charges during VMM, causing them to increase
nonuniformly. In order to limit the effects of dark current generation on the matrix
precision, the matrix charge must be refreshed periodically.
3.2 FLOATING GATE ROW AMPLIFIERS
In order to achieve better linearity when sensing charge, a floating gate amplifier is often
used in CCD circuits. In the scheme described above, the induced voltage change of the
row electrode significantly modifies its parasitic capacitance, resulting in a nonlinear
voltage versus charge characteristic. To alleviate this problem, an operational amplifier
with a capacitor in the feedback loop is added to each row line, shown in Figure 4. When
charge is moved underneath the row line in the course of a VMM operation, the row
voltage is kept constant by the action of the op-amp with an output voltage given by
$$\Delta V_i = \sum_{j=0}^{N-1} \frac{Q_{ij}\,u_j}{C_f}$$

where $C_f$ is the feedback capacitance.
Figure 4: Linear Charge Sensing (row line into an op-amp with feedback capacitor $C_f$ and reset switch).
The feedback capacitor is a poly-poly structure with vastly improved linearity compared to
the row capacitance. This enhancement also has the effect of speeding the row line
summation due to the well known benefits of current mode transmission. In addition,
the possibility of digitally selecting a feedback capacitor value by switching power-of-two
sized capacitors into the feedback loops creates a practical means of controlling the gain of
the output amplifiers, with the potential for significantly extending the dynamic range of
the device.
3.3 DIGITAL INPUT BUFFER AND DIVIDE-BY-TWO CIRCUITRY
Many applications such as image processing require multilevel input capability. This can
easily be implemented by using the VMM circuitry in a bit-serial mode. The operation
of the device is identical to the structure described above except that processing n-bit input
precision requires n cycles of the device. Digital shift registers are added to each input
column line that sequentially present the column lines with successively more significant
bits of the input vector, shown in Figure 5. Using the notation $u_j^{(n-1)}$, which represents
the binary vector formed by taking the nth bits of all the input elements, the first VMM
done by the circuit is given by

$$\Delta V_i^{(0)} = \sum_{j=0}^{N-1} \frac{Q_{ij}\,u_j^{(0)}}{C_f}$$

where $\Delta V_i^{(0)}$ is the output vector represented as voltage changes on the row lines. The
row voltages are stored on large capacitors, CI, which are allowed to share charge with
another set of equally sized capacitors, C2, effectively dividing the output vector by two.
Figure 5: Switched Capacitor Divide-By-Two Circuit.
The next most significant bit input vector, $u_j^{(1)}$, is then multiplied and creates another
set of row voltage changes which are stored and shared to add another charge to the
previously divided charge giving
$$V_i^{out}(1) = \sum_{j=0}^{N-1} \frac{Q_{ij}\,u_j^{(1)}}{C_f} + \frac{1}{2}\sum_{j=0}^{N-1} \frac{Q_{ij}\,u_j^{(0)}}{C_f}$$
where $V_i^{out}(1)$ is the voltage on C2 after two clock cycles. The process is repeated n
times, effectively weighting each successive bit's data by the proper power of two factor
giving a total output voltage of

$$V_i^{out}(n-1) = \frac{1}{C_f}\sum_{k=1}^{n}\left(\sum_{j=0}^{N-1} Q_{ij}\,2^{k-n}\,u_j^{(k-1)}\right) = \frac{1}{C_f}\sum_{j=0}^{N-1} Q_{ij}\,D_j$$
after n clock cycles where Dj now represents the multivalued digital input vector. In this
manner, multivalued input of n-bit precision can be processed where n is only limited by
the analog accuracy of the components (see Footnote 2).
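Behaviorally, the bit-serial scheme computes the following (an idealized sketch; bits are applied least significant first, with the charge-sharing halving folded into the recurrence, so the final output equals $(1/C_f)\,Q \cdot D$ up to the $2^{1-n}$ scaling):

```python
import numpy as np

def bit_serial_vmm(Q, D, n_bits, C_f):
    # Q: (N, N) charge matrix; D: (N,) unsigned n-bit digital inputs.
    V = np.zeros(Q.shape[0])
    for k in range(n_bits):             # bit k of every input element
        u_k = (D >> k) & 1              # binary vector u^(k), LSB first
        V = V / 2.0 + (Q @ u_k) / C_f   # share charge, then accumulate
    return V
```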
4 EXPERIMENTAL RESULTS
A number of VMM circuits have been fabricated implementing the architecture described
above in a 2μm double-poly CCD/CMOS process. The largest circuit contains a
128x128 array of matrix elements. The matrix is loaded electronically through a single
pin using the CCD shift register mode of the CID cell, shown in Figure 3. Matrix
element mismatches due to threshold variations are avoided since all matrix elements are
created by the same set of electrodes.
A list of relevant system characteristics is given in Table 1. The matrix of charge is
loaded in 4 ms and needs to be refreshed every 20 ms to retain acceptable weight accuracy at
room temperature, giving a refresh overhead of 20%. A simple linear filter bank was
loaded with a sinusoidal matrix and multiplied with a slowly chirped input signal to
determine the linearity and noise limits.
Footnote 2: If 4-bit input is required, the device is simply clocked four times. Since the power-of-two scaling is divisive, the most significant bit is always given the same weighting regardless of the input word length.
Table 1: Experimental Results
Charge Transfer Efficiency                       0.99995
Cell Size                                        24 μm × 24 μm
Bit Rate                                         4 MHz
Refresh Time                                     4 ms
Noise Limits                                     7 bits
Linearity                                        5 bits
Power Consumption (excluding output drivers)     <100 mW
Connections Per Second (binary input vectors)    6.4 × 10^10
5 SUMMARY
A CCD based vector matrix multiplication scheme has been developed that offers high
speed and low power in addition to provisions for digital I/O. Intended for neural network
and image processing applications. the architecture is intended to integrate well into
digital environments.
Acknowledgements
This work was supported by a grant from the U.S. Army Center for Signals Warfare.
References
A. Agranat. C. F. Neugebauer and A. Yariv. (1988) Parallel Optoelectronic Realization of
Neural Network Models Using CID Technology. Applied Optics 27 :4354-4355.
A. Agranat. C. F. Neugebauer. R.D. Nelson and A. Yariv. (1990) The CCD Neural
Processor: A Neural Integrated Circuit with 65,536 Programmable Analog Synapses.
IEEE Trans. on Circuits and Systems 37 :1073-1075.
A. M. Chiang. (1990) A CCD Programmable Signal Processor. IEEE Journal of Solid
State Circuits 25 :1510-1517.
4,165 | 4,770 |
Dynamical And-Or Graph Learning for Object Shape
Modeling and Detection
Liang Lin*
Sun Yat-Sen University
Guangzhou, P.R. China 510006
[email protected]
Xiaolong Wang
Sun Yat-Sen University
Guangzhou, P.R. China 510006
[email protected]
Abstract
This paper studies a novel discriminative part-based model to represent and recognize object shapes with an "And-Or graph". We define this model consisting of three layers: the leaf-nodes with collaborative edges for localizing local
parts, the or-nodes specifying the switch of leaf-nodes, and the root-node encoding the global verification. A discriminative learning algorithm, extended from
the CCCP [23], is proposed to train the model in a dynamical manner: the model
structure (e.g., the configuration of the leaf-nodes associated with the or-nodes) is
automatically determined with optimizing the multi-layer parameters during the
iteration. The advantages of our method are two-fold. (i) The And-Or graph
model enables us to handle well large intra-class variance and background clutters
for object shape detection from images. (ii) The proposed learning algorithm is
able to obtain the And-Or graph representation without requiring elaborate supervision and initialization. We validate the proposed method on several challenging
databases (e.g., INRIA-Horse, ETHZ-Shape, and UIUC-People), and it outperforms the state-of-the-art approaches.
1 Introduction
Part-based and hierarchical representations have been widely studied in computer vision, and lead
to some elegant frameworks for complex object detection and recognition. However, most of the
methods address only the hierarchical decomposition by tree-structure models [5, 25], and oversimplify the reconfigurability (i.e. structural switch) in hierarchy, which is the key to handle the large
intra-class variance in object detection. In addition, the interactions of parts are often omitted in
learning and detection. And-Or graph models are recently explored in [26, 27] to hierarchically
model object categories via "and-nodes" and "or-nodes" that represent, respectively, compositions
of parts and structural variation of parts. Their main limitation is that the learning process is strongly
supervised and the model structure needs to be manually annotated.
The key contribution of this work is a novel And-Or graph model, whose parameters and structure
can be jointly learned in a weakly supervised manner. We achieve the superior performance on the
task of detecting and localizing shapes from cluttered backgrounds, compared to the state-of-the-art approaches. As Fig. 3(a) illustrates, the proposed And-Or graph model consists of three layers
described as follows.
The leaf-nodes in the bottom layer represent a batch of local classifiers of contour fragments. We
provide a partial matching scheme that can recognize the accurate part of the contour, to deal with
*Corresponding author is Liang Lin. This work was supported by National Natural Science Foundation of
China (no. 61173082), Fundamental Research Funds for the Central Universities (no. 2010620003162041),
and the Guangdong Natural Science Foundation (no. S2011010001378). This work was also partially funded by
SYSU-Sugon high performance computing typical application project.
the problem that the true contours of objects are often connected to background clutters due to
unreliable edge extraction.
The or-nodes in the middle layer are "switch" variables specifying the activation of their children
leaf-nodes. We utilize the or-nodes accounting for alternate ways of composition, rather than just
defining multi-layer compositional detectors, which is shown to better handle the intra-class variance
and inconsistency caused by unreliable edge detection. Each or-node is used to select one contour
from the candidates detected via the associated leaf-nodes in the bottom layer. Moreover, during
detection, location displacement is allowed for each or-node to tackle the part deformation.
The root-node (i.e. the and-node) in the top layer is a global classifier capturing the holistic deformation of the object. The contours selected via the or-nodes are further verified as a whole, in order
to make the detection robust against the background clutters.
The collaborative edges between leaf-nodes are defined by the probabilistic co-occurrence of local
classifiers, which relax the conditional independence assumption commonly used in previous tree
structure models. Concretely, our model allows nearby contours to interact with each other.
The key problem of training our And-Or graph model is automatic structure determination. We
propose a novel learning algorithm, namely dynamic CCCP, extended from the concave-convex
procedure (CCCP) [23, 22] by embedding the structural reconfiguration. It iterates to dynamically
determine the production of leaf-nodes associated with the or-nodes, which is often simplified by
manually fixing in previous methods [25, 16]. The other structure attributes (e.g., the layout of
or-nodes and the activation of leaf-nodes) are implicitly inferred with the latent variables.
2 Related Work
Remarkable progress has been made in shape-based object detection [6, 10, 9, 11, 19]. By employing some shape descriptors and matching schemes, many works represent and recognize object
shapes as a loose collection of local contours. For example, Ferrari et al. [6] used a codebook of
PAS (pairwise adjacent segments) to localize object of interest; Maji et al. [11] proposed a maximum
margin hough voting for hypothesis regions combining with intersection kernel SVM(IKSVM) for
verification; Yang and Latecki [19] constructed shape models in a fully connected graph form with
partially-supervised learning, and detected objects via a Particle Filters (PF) framework.
Recently, the tree structure latent models [25, 5] have provided significant improvements on object
detection. Based on these methods, Srinivasan et al. [16] trained the descriptive contour-based detector by using the latent-SVM learning; Song et al. [15] integrated the context information with the
learning, namely Context-SVM. Schnitzspan et al. [14] further combined the latent discriminative
learning with conditional random fields using multiple features.
Knowledge representation with And-Or graph was first introduced for modeling visual patterns by
Zhu and Mumford [27]. Its general idea, i.e. using configurable graph structures with And, Or
nodes, has been applied in object and scene parsing [26, 18, 24] and action classification [20].
3 And-Or Graph Representation for Object Shape
The And-Or Graph model is defined as G = (V, E), where V represents three types of nodes and
E the graph edges. As Fig. 3(a) illustrates, the square on the top is the root-node representing
the complete object instances. The dashed circles derived from the root are z or-nodes arranged
in a layout of $b_1 \times b_2$ blocks, representing the object parts. Each or-node comprises an unfixed
number of leaf-nodes (denoted by the solid circles on the bottom); the leaf-nodes are allowed to be
dynamically created and removed during the learning. For simplicity, we set the maximum number
m of leaf-nodes affiliated to one or-node, and the parameters of non-existing leaf-nodes to zero.
Then the maximum number of all nodes in the model is 1 + n = 1 + z + z ? m. We use i = 0
indexing the root node, i = 1, ..., z the or-nodes and j = z + 1, ..., n the leaf-nodes. We also define
that j ? ch(i) indexes the child nodes of node i. The horizontal graph edges (i.e., collaborative
edges) are defined between the leaf-nodes that are associated with different or-nodes, in order to
encode the compatibility of object parts. The definitions of G are presented as follows.
Leaf-node: Each leaf-node L_j, j = z + 1, ..., n, is a local classifier of contours, whose placement is
decided by its parent or-node (the localized block). Suppose a contour fragment c on the edge map
X is captured by the block located at p_i = (p_i^x, p_i^y) and serves as the input to the classifier. We denote φ^l(p_i, c) as
the feature vector given by the Shape Context descriptor [3]. For any classifier, only the part of c falling
into the block is taken into account, and we set φ^l(p_i, c) = 0 if c lies entirely outside. The response
of classifier L_j at location p_i of the edge map X is defined as:
$$R_{L_j}(X, p_i) = \max_{c \subset X} \omega_j^l \cdot \phi^l(p_i, c), \qquad (1)$$
where ω_j^l is a parameter vector, which is set to zero if the corresponding leaf-node L_j is non-existent.
We can then detect the contour from the edge map X via the classifier, c_j = argmax_{c⊂X} ω_j^l · φ^l(p_i, c).
Or-node: Each or-node U_i, i = 1, ..., z, specifies a proper contour from a set of candidates detected via its child leaf-nodes; equivalently, the or-node activates one
leaf-node. The or-nodes are allowed to perturb slightly with respect to the root. For each or-node
U_i, we define the deformation feature as φ^s(p_0, p_i) = (dx, dy, dx², dy²), where (dx, dy) is the displacement of the or-node position p_i from the expected position p_0 determined by the root-node. The
cost of locating U_i at p_i is then:
$$Cost_i(p_0, p_i) = -\omega_i^s \cdot \phi^s(p_0, p_i), \qquad (2)$$
where ω_i^s is a 4-dimensional parameter vector corresponding to φ^s(p_0, p_i). In our method, each or-node contains at most m leaf-nodes, among which one is to be activated during inference. For each
leaf-node L_j associated with U_i, we introduce an indicator variable v_j ∈ {0, 1} representing whether
it is activated or not. We then derive the auxiliary "switch" vector for U_i, v_i = (v_{j_1}, v_{j_2}, ..., v_{j_m}),
where ||v_i|| = 1. Thus, the response of the or-node U_i is defined as:
$$R_{U_i}(X, p_0, p_i, v_i) = \sum_{j \in ch(i)} R_{L_j}(X, p_i) \cdot v_j + Cost_i(p_0, p_i). \qquad (3)$$
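To make Eqs.(1)-(3) concrete, the following is a minimal Python sketch of the leaf-node and or-node responses. The function and variable names (leaf_response, or_node_response, omega_l, etc.) are illustrative stand-ins and not part of the paper; we write the classifier parameters as omega and the features as phi, matching the notation above.

```python
import numpy as np

def leaf_response(omega_l, contour_feats):
    """Eq.(1): the response of a leaf-node is the best dot-product score over
    all candidate contours c in the edge map; contour_feats stacks one
    Shape-Context feature vector phi^l(p_i, c) per candidate contour."""
    scores = contour_feats @ omega_l        # one linear score per contour
    best = int(np.argmax(scores))
    return scores[best], best               # R_{L_j} and the winning contour c_j

def or_node_response(omega_ls, omega_s, contour_feats, p0, p_i):
    """Eqs.(2)-(3): activate the single best child leaf-node (||v_i|| = 1)
    and add the deformation cost of placing the block at p_i."""
    dx, dy = p_i[0] - p0[0], p_i[1] - p0[1]
    phi_s = np.array([dx, dy, dx ** 2, dy ** 2])   # phi^s(p0, p_i)
    cost = -omega_s @ phi_s                        # Cost_i(p0, p_i), Eq.(2)
    leaf_scores = [leaf_response(w, contour_feats)[0] for w in omega_ls]
    j_star = int(np.argmax(leaf_scores))           # the activated leaf (v_j = 1)
    return leaf_scores[j_star] + cost, j_star      # R_{U_i}, Eq.(3)
```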
Collaborative Edge: For any pair of leaf-nodes (L_j, L_{j'}) associated with two different or-nodes, we define the collaborative edge between them according to their contextual co-occurrence, that is, how likely it is that the object contains contours detected via the two leaf-nodes.
The response of the pairwise potentials is parameterized as:
$$R_E(V) = \sum_{j=z+1}^{n} \sum_{j' \in neigh(j)} \omega^e_{(j,j')} \cdot v_j \cdot v_{j'}, \qquad (4)$$
where neigh(j) denotes the neighbor leaf-nodes belonging to the or-node spatially adjacent to L_j, and V is the joint vector of all v_i: V = (v_1, ..., v_z) = (v_{z+1}, ..., v_n). The weight ω^e_{(j,j')}
indicates the compatibility between L_j and L_{j'}.
Root-node: The root-node represents a global classifier to verify the ensemble of contour fragments
C^r = {c_1, ..., c_z} proposed by the or-nodes. The response of the root-node is parameterized as:
$$R_T(C^r) = \omega^r \cdot \phi^r(C^r), \qquad (5)$$
where φ^r(C^r) is the feature vector of C^r and ω^r the corresponding parameter vector.
Therefore, the overall response of the And-Or graph is:
$$R_G(X, P, V) = \sum_{i=1}^{z} R_{U_i}(X, p_0, p_i, v_i) + R_E(V) + R_T(C^r)$$
$$= \sum_{i=1}^{z} \Big[ \sum_{j \in ch(i)} \omega_j^l \cdot \phi^l(p_i, c_j) \cdot v_j - \omega_i^s \cdot \phi^s(p_0, p_i) \Big] + \sum_{j=z+1}^{n} \sum_{j' \in neigh(j)} \omega^e_{(j,j')} \cdot v_j \cdot v_{j'} + \omega^r \cdot \phi^r(C^r), \qquad (6)$$
where P = (p_0, p_1, ..., p_z) is the vector of or-node positions. For better understanding, we
refer to H = (P, V) as the latent variables during inference, where P captures the deformation of the
parts represented by the or-nodes and V captures the discrete distribution of leaf-nodes (i.e., which
leaf-nodes are activated for detection). Eq.(6) can be further simplified as:
$$R_G(X, H) = \omega \cdot \phi(X, H), \qquad (7)$$
where ω collects the complete parameters of the And-Or graph, and φ(X, H) is the feature vector:
$$\omega = (\omega^l_{z+1}, ..., \omega^l_{n},\; -\omega^s_{1}, ..., -\omega^s_{z},\; \omega^e_{(z+1,\,z+1+m)}, ..., \omega^e_{(n-m,\,n)},\; \omega^r), \qquad (8)$$
$$\phi(X, H) = (\phi^l(p_1, c_{z+1}) \cdot v_{z+1}, \cdots, \phi^l(p_z, c_n) \cdot v_n,\; \phi^s(p_0, p_1), \cdots, \phi^s(p_0, p_z),\; v_{z+1} \cdot v_{z+1+m}, ..., v_{n-m} \cdot v_n,\; \phi^r(C^r)). \qquad (9)$$
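Given the per-or-node responses and the indices of the activated leaf-nodes, the overall response of Eq.(6) combines three terms; the sketch below assembles them in Python. The representation of omega_e as a dictionary keyed by leaf-node pairs, and the names used here, are our illustrative assumptions.

```python
import numpy as np

def and_or_score(or_responses, active_leaves, omega_e, omega_r, root_feat):
    """Eq.(6): sum of or-node responses, plus collaborative-edge potentials
    between pairs of activated leaf-nodes, plus the root verification term."""
    score = float(sum(or_responses))               # sum_i R_{U_i}
    active = set(active_leaves)
    for (j, jp), w in omega_e.items():             # R_E(V): a pair contributes
        if j in active and jp in active:           # only if both leaves fire
            score += w
    return score + omega_r @ root_feat             # + R_T(C^r), Eq.(5)
```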
Figure 1: Illustration of dynamical structure learning. Two or-nodes of the model (U_1, U_6) are
visualized at three intermediate steps. (a) The initial structure, i.e., the regular layout of an object.
Two new structures are dynamically generated during iteration: (b) a leaf-node associated with U_1
is removed; (c) a new leaf-node is created and assigned to U_6.
4 Inference
The inference task is to localize the optimal contour fragments within the detection window, which
slides over all scales and positions of the edge map X. Assuming the root-node is located at p_0,
the object shape is localized by maximizing R_G(X, H) defined in (6):
$$S(p_0, X) = \max_{H} R_G(X, H). \qquad (10)$$
The inference procedure integrates the bottom-up testing and top-down verification:
Bottom-up testing: For each or-node U_i, its child leaf-nodes (i.e., the local classifiers) are utilized to detect contour fragments within the edge map X. Assume that leaf-node L_j, j ∈ ch(i),
associated with U_i is activated, v_j = 1; the optimal contour fragment c_j is localized by maximizing the response in Eq.(3), which also determines the optimal location p*_{i,j}. We thus generate
a set of candidates {c_j, p*_{i,j}} for each or-node, each of which is a contour fragment detected via
one of the leaf-nodes. These sets of candidates are passed to the top-down step where the leaf-node
activation v_i for U_i can be further validated. The response of the bottom-up step is:
$$R_{bot}(V) = \sum_{i=1}^{z} R_{U_i}(X, p_0, p_i^{*}, v_i), \qquad (11)$$
where V = {v_i} denotes a hypothesis of leaf-node activations for all or-nodes. In practice, we can
further prune the candidate contours by setting a threshold on R_{bot}(V). Thus, given V = {v_i},
we can select an ensemble of contours C^r = {c_1, ..., c_z}, each of which is detected by an activated
leaf-node L_j with v_j = 1.
Top-down verification: Given the ensemble of contours C^r, we apply the global classifier
at the root-node to verify C^r by Eq.(5), together with the accumulated pairwise potentials on the
collaborative edges defined in Eq.(4).
By incorporating the bottom-up and top-down steps, we obtain the response of the And-Or graph model
by Eq.(6). The final detection is acquired by selecting the maximum score in Eq.(10).
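The full inference of Eq.(10) can be sketched as an exhaustive search over root placements, reusing and_or_score from the sketch above. The model interface assumed here (or_node.respond, collect_contours, root_feature) is hypothetical and only stands in for the actual implementation.

```python
import numpy as np

def detect(edge_map, model, root_positions, scales):
    """Eq.(10): slide the detection window over scales and positions; at each
    root placement run the bottom-up step per or-node, then score the selected
    contour ensemble top-down and keep the best-scoring hypothesis."""
    best_score, best_hyp = -np.inf, None
    for s in scales:
        for p0 in root_positions:
            responses, active = [], []
            for or_node in model.or_nodes:            # bottom-up, Eq.(3)
                r, j = or_node.respond(edge_map, p0, scale=s)
                responses.append(r)
                active.append(j)
            c_r = model.collect_contours(active)      # ensemble C^r
            score = and_or_score(responses, active, model.omega_e,
                                 model.omega_r, model.root_feature(c_r))
            if score > best_score:                    # S(p0, X) = max_H R_G
                best_score, best_hyp = score, (p0, s, active)
    return best_score, best_hyp
```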
5 Discriminative Learning for And-Or Graph
We formulate the learning of the And-Or graph model as a joint optimization task over model structure and parameters, which can be solved by an iterative method extended from the CCCP framework [22]. This algorithm iterates to determine the And-Or graph structure in a dynamical manner:
given the inferred latent variables H = (P, V) in each step, leaf-nodes can be automatically
created or removed to generate a new structural configuration. Specifically, a new leaf-node is
encouraged to be created as the local detector for contours that cannot be handled by the current
model (Fig. 1(c)); a leaf-node is encouraged to be removed if its discriminative ability is similar to that of
other leaf-nodes (Fig. 1(b)). We thus call this procedure dynamical CCCP (dCCCP).
5.1 Optimization Formulation
Suppose a set of positive and negative training samples (X_1, y_1), ..., (X_N, y_N) is given, where X is
the edge map and y = ±1 is the label indicating positive and negative samples. We assume the samples
indexed from 1 to K are the positive samples, and define the feature vector for each sample (X, y) as:
$$\phi(X, y, H) = \begin{cases} \phi(X, H) & \text{if } y = +1 \\ 0 & \text{if } y = -1, \end{cases} \qquad (12)$$
where H are the latent variables. Thus, Eq.(10) can be rewritten as a discriminative function:
$$S_{\omega}(X) = \arg\max_{y, H}\,\big(\omega \cdot \phi(X, y, H)\big). \qquad (13)$$
The optimization of this function can be solved using a structural SVM with latent variables:
$$\min_{\omega}\; \frac{1}{2}\|\omega\|^2 + D \sum_{k=1}^{N} \Big[\max_{y, H}\big(\omega \cdot \phi(X_k, y, H) + L(y_k, y, H)\big) - \max_{H}\big(\omega \cdot \phi(X_k, y_k, H)\big)\Big], \qquad (14)$$
where D is a penalty parameter (set to 0.005 empirically) and L(y_k, y, H) is the loss function. In
our method we define L(y_k, y, H) = 0 if y_k = y, and 1 if y_k ≠ y.
The optimization target in Eq.(14) is non-convex. The CCCP framework [23] was recently
utilized in [22, 25] to provide a locally optimal solution by iteratively solving for the latent variables
H and the model parameters ω. However, CCCP does not address the or-nodes in the hierarchy,
i.e., it assumes the structure configuration is fixed. In the following, we propose dCCCP, which
embeds a structural reconfiguration step.
5.2 Optimization with dynamic CCCP
Following the original CCCP framework, we convert the function in Eq.(14) into a convex and
concave form as:
$$\min_{\omega}\Big[\frac{1}{2}\|\omega\|^2 + D \sum_{k=1}^{N} \max_{y, H}\big(\omega \cdot \phi(X_k, y, H) + L(y_k, y, H)\big)\Big] - \Big[D \sum_{k=1}^{N} \max_{H}\big(\omega \cdot \phi(X_k, y_k, H)\big)\Big] \qquad (15)$$
$$= \min_{\omega}\,[f(\omega) - g(\omega)], \qquad (16)$$
where f(ω) represents the first two terms, and g(ω) represents the last term in (15).
The original CCCP includes two iterative steps: (I) fixing the model parameters, estimate the latent variables H* for each positive sample; (II) compute the model parameters by the traditional
structural SVM method. In our method, besides the inferred H*, we need to further determine
the graph configuration, i.e., the production of leaf-nodes associated with the or-nodes, to obtain the
complete structure. Thus, we insert one step between the two original ones to perform the structure
reconfiguration. The three iterative steps are presented as follows.
(I) For optimization, we first find a hyperplane q_t to upper-bound the concave part −g(ω) in Eq.(16):
$$-g(\omega) \le -g(\omega_t) + (\omega - \omega_t) \cdot q_t, \quad \forall \omega, \qquad (17)$$
where ω_t denotes the model parameters obtained in the previous iteration. We construct q_t by
calculating the optimal latent variables H_k^* = argmax_H (ω_t · φ(X_k, y_k, H)). Since φ(X_k, y_k, H) =
0 when y_k = −1, only the positive training samples enter the computation. The hyperplane is then
constructed as q_t = −D Σ_{k=1}^{N} φ(X_k, y_k, H_k^*).
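Step (I) amounts to a single pass over the positive samples. A minimal sketch, assuming infer_latent and feat are callables that return argmax_H ω_t·φ(X_k, y_k, H) and the feature vector φ respectively (both names are ours):

```python
import numpy as np

def cccp_step_one(omega_t, pos_samples, infer_latent, feat, D=0.005):
    """Linearize the concave part -g(omega) at omega_t, Eq.(17): infer the
    latent variables H_k* for each positive sample and accumulate the
    hyperplane q_t = -D * sum_k phi(X_k, y_k, H_k*)."""
    q_t = np.zeros_like(omega_t)
    latents = []
    for X_k, y_k in pos_samples:        # negatives contribute phi(X, -1, H) = 0
        H_k = infer_latent(omega_t, X_k, y_k)
        q_t -= D * feat(X_k, y_k, H_k)
        latents.append(H_k)
    return q_t, latents
```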
(II) In this step, we adjust the model structure by reconfiguring the leaf-nodes. In our model, each
leaf-node is mapped to several feature dimensions of the vector φ(X, y, H*); the process
of reconfiguration is thus equivalent to reorganizing the feature vector φ(X, y, H*). A naive
reorganization would change the hyperplane q_t along with φ(X, y, H*) and could prevent the learning from converging.
Therefore, we operate on φ(X, y, H*) guided by Principal Component Analysis (PCA): we allow
adjustments only within the non-principal components (dimensions) of φ(X, y, H*), thereby
preserving its significant information [8]. As a result, q_t can be assumed to remain
unaltered. This model reconfiguration step is then divided into two sub-steps.
(i) Feature refactoring guided by PCA. Given φ(X_k, y_k, H_k^*) for all positive samples, we apply
PCA to them:
$$\phi(X_k, y_k, H_k^*) \approx u + \sum_{i=1}^{K} \alpha_{k,i}\, e_i, \qquad (18)$$
where K is the number of eigenvectors, e_i is an eigenvector, and α_{k,i} its coefficient. We set K to a
large number so that ||φ(X_k, y_k, H_k^*) − (u + Σ_{i=1}^{K} α_{k,i} e_i)||² < ε, ∀k. For the jth bin of the feature
Figure 2: A toy example of structural clustering. We consider 4 samples, X_1, ..., X_4, for training the structure of U_i. (a) shows the feature vectors φ of the samples associated with U_i; the
intensity of a feature bin indicates the feature value. The red and green bounding boxes on the
vectors indicate the non-principal features representing the contour fragments detected via two different leaf-nodes. (b) illustrates the clustering performed with φ'. The vector ⟨φ_6, φ_8, φ_9⟩ of X_2 is
grouped from the right cluster to the left one. (c) shows the feature vectors adjusted according to the
clustering. Note that clustering may result in structural reconfiguration, as discussed in the text.
This figure is best viewed in the electronic version.
vector, we consider it non-principal only if e_{i,j} < τ and u_j < ε for all e_i and u (τ = 2.0, ε = 0.001
in our experiments).
For each or-node U_i, a set of detected contour fragments {c_i^1, c_i^2, ..., c_i^K} is obtained with the
given H_k^* of all positive samples. The feature vectors of these contours generated by
the leaf-nodes, {φ^l(p_i^1, c_i^1), ..., φ^l(p_i^K, c_i^K)}, are mapped to different parts of the complete feature
vectors {φ(X_1, y_1, H_1^*), ..., φ(X_K, y_K, H_K^*)}. More specifically, once we select the jth bin for
all feature vectors φ^l, it can be either principal or not in different vectors φ. For each feature vector φ^l,
we select the non-principal bins to form a new vector. We thus refactor the feature vectors of these
contours as {φ'(p_i^1, c_i^1), ..., φ'(p_i^K, c_i^K)}.
(ii) Structural reconfiguration by clustering. To trigger the structural reconfiguration, for each or-node U_i we cluster the detected contour fragments represented by the newly formed
feature vectors. We first group the contours detected by the same leaf-node into the same cluster
as a temporary partition. Re-clustering is then performed by applying the ISODATA algorithm
with the Euclidean distance, so that close contours are grouped into the same cluster. According
to the new partition, we re-organize the feature vectors, i.e., we represent similar contours with
the same bins in the complete feature vector φ (recall that the vector of one contour is part
of φ). A toy example is illustrated in Fig. 2: the selected (non-principal) feature vector
φ'(p_i^2, c_i^2) = ⟨φ_6, φ_8, φ_9⟩ of X_2 is grouped from one cluster to another; comparing (a) with (c),
we observe that ⟨φ_6, φ_8, φ_9⟩ is moved to ⟨φ_1, φ_3, φ_4⟩.
With the re-organization of the feature vectors, we can accordingly reconfigure the leaf-nodes corresponding to the clusters of contours. There are two typical cases:
• New leaf-nodes are created once more clusters are generated than before; their parameters can be learned from the feature vectors of the contours within the clusters.
• A leaf-node is removed when the feature bins related to it are all zero, which implies the
contours detected by that leaf-node have been grouped into another cluster.
In practice, we constrain the extent of the structural reconfiguration: only a few leaf-nodes can be
created or removed for each or-node per iteration. After the structural reconfiguration, we denote
the adjusted feature vectors φ(X_k, y_k, H_k^*) by φ^d(X_k, y_k, H_k^*). The new hyperplane is then
generated as q_t^d = −D Σ_{k=1}^{N} φ^d(X_k, y_k, H_k^*).
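The re-clustering in step (II)(ii) is what drives leaf creation and removal. The sketch below uses agglomerative clustering with a distance threshold as a simple stand-in for the ISODATA algorithm used in the paper, so the number of clusters remains data-driven; all names and the threshold value are illustrative.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # stand-in for ISODATA

def recluster_or_node(refactored_feats, n_current_leaves, dist_thresh=1.0):
    """Re-cluster the refactored (non-principal) contour features phi' of one
    or-node; a positive return value means new leaf-nodes should be created,
    a negative one means some should be removed."""
    clu = AgglomerativeClustering(n_clusters=None, linkage="average",
                                  distance_threshold=dist_thresh)
    labels = clu.fit_predict(np.asarray(refactored_feats))
    n_clusters = int(labels.max()) + 1
    return labels, n_clusters - n_current_leaves
```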
(III) Given the newly generated model structure represented by the feature vectors φ^d(X_k, y_k, H_k^*),
we learn the model parameters by solving ω_{t+1} = argmin_ω [f(ω) + ω · q_t^d]. Substituting
−g(ω) with the upper-bound hyperplane q_t^d, the optimization task in Eq.(15) can be rewritten as:
$$\min_{\omega}\; \frac{1}{2}\|\omega\|^2 + D \sum_{k=1}^{N} \Big[\max_{y, H}\big(\omega \cdot \phi(X_k, y, H) + L(y_k, y, H)\big) - \omega \cdot \phi^d(X_k, y_k, H_k^*)\Big]. \qquad (19)$$
This is a standard structural SVM problem, whose solution is given by:
Figure 3: The And-Or graph model trained on the UIUC-People dataset. (a) visualizes the three-layer model, where the images on top illustrate the verification via the root-node. (b) exhibits the
leaf-nodes associated with the or-nodes U_1, ..., U_8; a practical detection with the activated leaf-nodes is highlighted in red. (c) shows the average precision (AP) over training iterations for the And-Or
tree (AOT) model and the And-Or graph (AOG) model.
$$\omega^{*} = D \sum_{k, y, H} \alpha_{k, y, H}\, \Delta\phi(X_k, y, H), \qquad (20)$$
where Δφ(X_k, y, H) = φ^d(X_k, y_k, H_k^*) − φ(X_k, y, H). We calculate the coefficients α by maximizing the dual
function:
$$\max_{\alpha}\; \sum_{k, y, H} \alpha_{k, y, H}\, L(y_k, y, H) - \frac{D}{2} \sum_{k, k'} \sum_{y, H, y', H'} \alpha_{k, y, H}\, \alpha_{k', y', H'}\, \Delta\phi(X_k, y, H) \cdot \Delta\phi(X_{k'}, y', H'). \qquad (21)$$
This is a standard SVM dual problem, which can be solved by applying the cutting-plane method [1]
and Sequential Minimal Optimization [13]. We thus obtain the updated parameters ω_{t+1} and
continue the 3-step iteration until the objective in Eq.(16) converges.
5.3 Initialization
At the beginning of learning, the And-Or graph model is initialized as follows. Each training
sample (whose contours have been extracted) is partitioned into a regular layout of several blocks,
each of which corresponds to one or-node. The contours falling into a block are treated as the
input for learning; if there are more than two contours in one block, we select the one with the
largest length. The leaf-nodes are then generated by clustering the selected contours without any
constraints, and we thus obtain the initial feature vector φ^d for each sample.
6 Experiments
We evaluate our method for object shape detection using three benchmark datasets: UIUC-People [17], ETHZ-Shape [7] and INRIA-Horse [7].
Implementation setting. We fix the number of or-nodes in the And-Or model to 8 for the UIUC-People dataset and to 6 in the other experiments. The initial layout is a regular partition (e.g., 4×2 blocks
for the UIUC-People dataset and 2×3 for the others). There are at most m = 4 leaf-nodes for each
or-node. For positive samples, we extract clutter-free object contours; for negative samples,
we compute edge maps using the Pb edge detector [12] with an edge-link method. Our
learning algorithm converges within 6 to 9 iterations. During detection, the edge maps of
test images are extracted as for the negative training samples, and the object is searched at 6
different scales, 2 per octave. For each contour input to a leaf-node, we sample 20 points
and compute the Shape Context descriptor at each point; the descriptor is quantized with 6 polar
angles and 2 radial bins. We adopt the testing criterion defined in the PASCAL VOC challenge: a
detection is counted as correct if its intersection-over-union with the groundtruth is at least 50%.
Experiment I. The UIUC-People dataset contains 593 images (346 for training, 247 for testing).
Most of the images contain one person playing badminton. Fig. 3(b) shows the trained And-Or
graph model (AOG), in which each of the 8 or-nodes is associated with 2 to 4 leaf-nodes. To evaluate the benefit
of the collaborative edges, we degenerate our model to an And-Or Tree (AOT) by removing the
collaborative edges. As Fig. 3(c) illustrates, the average precisions (AP) of detection using
AOG and AOT are 56.20% and 53.84%, respectively. We then compare our model with the state-of-the-art detectors in [18, 2, 4, 5], some of which use manually labeled models. Following the
(a)
Method                  Accuracy
Our AOG                 0.680
Our AOT                 0.660
Wang et al. [18]        0.668
Andriluka et al. [2]    0.506
Felz et al. [5]         0.486
Bourdev et al. [4]      0.458

(b)
Method                  Applelogos  Bottles  Giraffes  Mugs   Swans  Average
Our method              0.910       0.926    0.803     0.885  0.968  0.898
Ma et al. [10]          0.881       0.920    0.756     0.868  0.959  0.877
Srinivasan et al. [16]  0.845       0.916    0.787     0.888  0.922  0.872
Maji et al. [11]        0.869       0.724    0.742     0.806  0.716  0.771
Felz et al. [5]         0.891       0.950    0.608     0.721  0.391  0.712
Lu et al. [9]           0.844       0.641    0.617     0.643  0.798  0.709

Table 1: (a) Comparisons of detection accuracies on the UIUC-People dataset. (b) Comparisons of
average precision (AP) on the ETHZ-Shape dataset.
metric mentioned in [18], to calculate the detection accuracy we only consider the detection with
the highest score in an image, for all methods. As Table 1a reports, our method outperforms the
other approaches.
Figure 4: (a) Experimental results with the recall-FPPI measurement on the INRIA-Horse database,
comparing IKSVM, M2HT+IKSVM [11], KAS [7], TPS-RPM [6], voting with groups + verification [21],
our AOG, and our AOT. (b), (c) and (d) show a few object shape detections obtained by applying our
method on the three datasets; false positives are annotated by blue frames.
Experiment II. The INRIA-Horse dataset consists of 170 horse images and 170 images without
horses. Among them, 50 positive examples and 80 negative examples are used for training, and the
remaining 210 images for testing. Fig. 4(a) reports the plots of false positives per image (FPPI) vs.
recall.
recall. It is shown that our system substantially outperforms the recent methods: the AOG and AOT
models achieve detection rates of 89.6% and 88.0% at 1.0 FPPI, respectively; in contrast, the results
of competing methods are: 87.3% in [21], 85.27% in [11], 80.77% in [7], and 73.75% in [6].
Experiment III. We test our method on more object categories using the ETHZ-Shape dataset: Applelogos, Bottles, Giraffes, Mugs and Swans. For each category (comprising 32 to 87 images), half of
the images are randomly selected as positive examples, and 70 to 90 negative examples are obtained
from the other categories as well as backgrounds. The trained model for each category is tested
on the remaining images. Table 1b reports the results evaluated by mean average precision.
Compared with current methods [11, 16, 5, 9, 10], our model achieves very competitive results.
A few results are visualized in Fig. 4(b), (c) and (d) for Experiments I, II, and III respectively.
7 Conclusion
This paper proposes a discriminative contour-based object model with an And-Or graph representation. The model can be trained in a dynamical manner in which the model structure, as well as the
parameters, is automatically determined across iterations. Our method achieves state-of-the-art
object shape detection on challenging datasets.
References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann, Hidden Markov support vector machines, In ICML, 2003.
[2] M. Andriluka, S. Roth, and B. Schiele, Pictorial structures revisited: People detection and articulated pose estimation, In CVPR, 2009.
[3] S. Belongie, J. Malik, and J. Puzicha, Shape Matching and Object Recognition using Shape Contexts, IEEE TPAMI, 24(4): 509-522, 2002.
[4] L. Bourdev, S. Maji, T. Brox, and J. Malik, Detecting people using mutually consistent poselet activations, In ECCV, 2010.
[5] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, Object Detection with Discriminatively Trained Part-based Models, IEEE TPAMI, 2010.
[6] V. Ferrari, F. Jurie, and C. Schmid, From Images to Shape Models for Object Detection, Int'l J. of Computer Vision, 2009.
[7] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid, Groups of Adjacent Contour Segments for Object Detection, IEEE TPAMI, 30(1): 36-51, 2008.
[8] N. Kambhatla and T. K. Leen, Dimension Reduction by Local Principal Component Analysis, Neural Computation, 9: 1493-1516, 1997.
[9] C. Lu, L. J. Latecki, N. Adluru, X. Yang, and H. Ling, Shape Guided Contour Grouping with Particle Filters, In ICCV, 2009.
[10] T. Ma and L. J. Latecki, From Partial Shape Matching through Local Deformation to Robust Global Shape Similarity for Object Detection, In CVPR, 2011.
[11] S. Maji and J. Malik, Object Detection using a Max-Margin Hough Transform, In CVPR, 2009.
[12] D. R. Martin, C. C. Fowlkes, and J. Malik, Learning to detect natural image boundaries using local brightness, color, and texture cues, IEEE TPAMI, 26(5): 530-549, 2004.
[13] J. C. Platt, Using analytic QP and sparseness to speed training of support vector machines, In Advances in Neural Information Processing Systems, pages 557-563, 1998.
[14] P. Schnitzspan, M. Fritz, S. Roth, and B. Schiele, Discriminative structure learning of hierarchical representations for object detection, In CVPR, 2009.
[15] Z. Song, Q. Chen, Z. Huang, Y. Hua, and S. Yan, Contextualizing Object Detection and Classification, In CVPR, 2010.
[16] P. Srinivasan, Q. Zhu, and J. Shi, Many-to-one Contour Matching for Describing and Discriminating Object Shape, In CVPR, 2010.
[17] D. Tran and D. Forsyth, Improved human parsing with a full relational model, In ECCV, 2010.
[18] Y. Wang, D. Tran, and Z. Liao, Learning Hierarchical Poselets for Human Parsing, In CVPR, 2011.
[19] X. Yang and L. J. Latecki, Weakly Supervised Shape Based Object Detection with Particle Filter, In ECCV, 2010.
[20] B. Yao, A. Khosla, and L. Fei-Fei, Classifying Actions and Measuring Action Similarity by Modeling the Mutual Context of Objects and Human Poses, In ICML, 2011.
[21] P. Yarlagadda, A. Monroy, and B. Ommer, Voting by Grouping Dependent Parts, In ECCV, 2010.
[22] C.-N. J. Yu and T. Joachims, Learning structural SVMs with latent variables, In ICML, 2009.
[23] A. Yuille and A. Rangarajan, The concave-convex procedure (CCCP), In NIPS, pages 1033-1040, 2001.
[24] Y. B. Zhao and S. C. Zhu, Image Parsing via Stochastic Scene Grammar, In NIPS, 2011.
[25] L. Zhu, Y. Chen, A. Yuille, and W. Freeman, Latent Hierarchical Structural Learning for Object Detection, In CVPR, 2010.
[26] L. Zhu, Y. Chen, Y. Lu, C. Lin, and A. Yuille, Max Margin AND/OR Graph Learning for Parsing the Human Body, In CVPR, 2008.
[27] S. C. Zhu and D. Mumford, A stochastic grammar of images, Foundations and Trends in Computer Graphics and Vision, 2(4): 259-362, 2006.
4,166 | 4,771 |
Identification of Recurrent Patterns in the Activation
of Brain Networks
Firdaus Janoos*    Weichang Li    Niranjan Subrahmanya
ExxonMobil Corporate Strategic Research
Annandale, NJ 08801
István Á. Mórocz    William M. Wells III
Harvard Medical School
Boston, MA 02115
Abstract
Identifying patterns from the neuroimaging recordings of brain activity related
to the unobservable psychological or mental state of an individual can be treated
as an unsupervised pattern recognition problem. The main challenges, however,
for such an analysis of fMRI data are: a) defining a physiologically meaningful
feature-space for representing the spatial patterns across time; b) dealing with
the high-dimensionality of the data; and c) robustness to the various artifacts and
confounds in the fMRI time-series.
In this paper, we present a network-aware feature-space to represent the states
of a general network, which enables comparing and clustering such states in a
manner that is a) meaningful in terms of the network connectivity structure;
b) computationally efficient; c) low-dimensional; and d) relatively robust to structured and random noise artifacts. This feature-space is obtained from a spherical
relaxation of the transportation distance metric, which measures the cost of transporting "mass" over the network to transform one function into another. Through
theoretical and empirical assessments, we demonstrate the accuracy and efficiency
of the approximation, especially for large problems.
1 Introduction
In addition to functional localization and integration, mapping the neural correlates of "mental
states" or "brain states" (i.e., the distinct cognitive, affective or perceptive states of the human mind)
is an important research topic for understanding the connection between mind and brain [2]. In
functional neuroimaging, this problem is equivalent to identifying recurrent spatial patterns from
the recorded activation of neural circuits and relating them with the mental state of the subject.
Although clustering the data across time to identify the intrinsic state of an individual from EEG
and MEG measurements is an established procedure in electrophysiology [19], analyses of temporal patterns in functional MRI data have generally used supervised techniques such as multivariate
regression and classification [18, 11, 9], which restrict analysis to observed behavioral correlates of
mental state, ignoring any information about the intrinsic mental state that might be present in the
data.
In contrast to clustering voxels based on the similarity of their functional activity (i.e. along the
spatial dimension) [15], the problem of clustering fMRI data along the temporal dimension has
not been widely explored in literature, primarily because of the following challenges: a) Lack of
* Corresponding author: [email protected]
a physiologically meaningful metric to compare the difference between the spatial distribution of
recorded brain activity (i.e. brain states) at two different time-points; b) Problems that arise because
the number of voxels (i.e., dimensions) is orders of magnitude larger (N ~ O(10^5) vs. T ~ O(10^2))
than the number of scans (i.e., samples); and c) Structured and systematic noise due to factors such
as magnetic baseline drift, respiratory and cardiac activity, and head motion. The dimensionality
problem in fMRI has been typically addressed through PCA [16], ICA [3], or by selection of a subset
of voxels either manually or via regression against the stimulus [18, 11]. PCA has generally been
found to be problematic in fMRI [18, 11, 13], since the largest variance principal components usually
correspond to motion and physiological noise such as respiration and pulsatile activity, while ICA
does not provide an automated way of selecting components. On the other hand, supervised featurespaces are inherently biased towards the experimental variables against which they were selected or
by the investigator?s expectations, and may not capture unexpected patterns in the data.
In the first contribution of this paper, we address these problems by using a network-aware metric
that captures the difference between the states z_{t1}, z_{t2} at two different time-points t_1, t_2 of a
temporally evolving function z_G : V × [0, T] → R defined on the vertices V of a network (i.e., a
weighted undirected graph) G = (V, E), in a manner that is aware of the connectivity structure
E of the underlying network. Intuitively, this network-aware metric assesses the distance between
two states z_{t1}, z_{t2} that differ mainly on proximally connected nodes to be less than the distance
between states z_{t1}, z_{t2} that differ on unconnected nodes. This concept is illustrated in Fig. 1.
In the context of neuroimaging, where the network measures the functional connectivity [4]
between brain regions, this implies that two
brain activation patterns that differ mainly on
functionally similar regions are functionally
closer than two that differ on functionally unrelated regions. For example, z_{t1} and z_{t2} that
activated mainly in the cingulo-opercular network would be functionally more similar with
each other than with z_{t3}, which exhibited activity mainly in the fronto-parietal network.

Figure 1: Shown are z_{t1}, z_{t2} and z_{t3}, three states of the function z_G on the network G. Here, z_{t1} and z_{t2} activate on more proximal regions of the graph and are hence assessed to be more similar than z_{t1} and z_{t3}; similarly for z_{t2} and z_{t3}.

Such network awareness is provided by the Kantorovich metric [20], also called the transportation
distance (TD), which measures the minimum flow of "mass" over the network required to make
z_{t1} at time t_1 match z_{t2} at t_2. The cost of this flow is encoded by the weights of the edges of
the graph. The Earth Movers Distance (EMD), closely related to the transportation distance, is widely used for clustering and
retrieval in computer vision, medical imaging,
bio-informatics and data-mining [21, 22, 7]. One major strength of this family of metrics for neuroimaging applications, over voxel-wise image matching, is that it allows for partial matches, thereby
mitigating the effect of small differences between the measurements that arise due to spatial displacement such as head-motion or from random noise [21].
The TD, however, has the following limitations. Firstly, it is computationally expensive, with worst-case complexity of O(N_V^3 log N_V), where N_V is the number of nodes in the graph [17]. If the
number of time-series observations is T, clustering requires O(T^2) comparisons, making computation prohibitively expensive for large data-sets. Secondly, and more importantly, the metric is the
solution to an optimization problem and therefore does not have a tractable geometric structure. For
example, there is no closed-form expression for the centroid of a cluster under this metric. As a result,
determining the statistical properties of clusters obtained under this metric, let alone developing
more sophisticated models, is not straightforward. Although linear embedding (i.e. Euclidean) approximations have been proposed for the EMD [12, 22], they are typically defined for comparing
probability distributions over regular grids and extension to functions over arbitrary networks is an
open problem.
The second contribution of this paper is to address these issues through the development of a linear
feature-space that provides a good approximation of the transportation distance. This feature-space
is motivated by a spherical relaxation [14] of the dual polytope of the transportation problem, as
described in Section 2. The network function z_G is then embedded into a Euclidean space via a
similarity transformation such that the transportation distance is well-approximated by the ℓ2
distance in this space, as elucidated in Section 3. In contrast to existing linear approximations, the
feature-space developed here has a very simple form closely related to the graph Laplacian [6].
Theoretical bounds on the approximation error are developed, and the accuracy of the method
is validated empirically in Section 4.1. There, we show that the feature-space does not deteriorate,
but on the contrary may improve, as the size of the graph increases, making it highly suitable for
dealing with large networks like the brain. Its application to extracting intrinsic mental states,
in an unsupervised manner, from an fMRI study of a visuo-spatial motor task is demonstrated in
Section 4.2. Detailed proofs and descriptions are provided in the Supplemental to the manuscript.
2 Transportation Distance and Spherical Relaxation
Let z_{t1} and z_{t2} denote the states of z_G at time-points t_1, t_2 on the graph G = (V, E), with nodes
V = {1, ..., N_V} and edges E = {(i, j) | i, j ∈ V}. The symmetric distance matrix W_G[i, j] ∈ R_+
encodes the cost of transport between nodes i and j. Also, define the difference between two states
as dz = z_{t1} − z_{t2}, and assume Σ_{i∈V} dz[i] = 0 without loss of generality (this can always be
ensured by adding a dummy node with index N_V + 1 where dz[N_V + 1] = −Σ_{i∈V} dz[i] and
W_G[i, N_V + 1] = 0, ∀i ∈ V). The minimal cost TD(z_{t1}, z_{t2}) of a transport f : E → R_+ of
"mass" over the network to convert z_{t1} into z_{t2} is posed as the following linear program (LP):
$$TD(z_{t1}, z_{t2}) = \min_{f} \sum_{i \in V} \sum_{j \in V} f[i, j]\, W_G[i, j], \quad \text{subject to} \quad \sum_{j} f[i, j] - \sum_{j} f[j, i] = dz[i]. \qquad (1)$$
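For small graphs, Eq.(1) can be solved directly as a generic LP; a minimal sketch with scipy follows. This is only for illustration (the specialized network-simplex solvers used later in the paper are far faster); all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def transport_distance(dz, W):
    """Solve the transportation LP of Eq.(1): minimize sum_ij f[i,j] W[i,j]
    subject to the net outflow at node i equaling dz[i], with f >= 0.
    dz must sum to zero; W is the symmetric nonnegative cost matrix."""
    n = len(dz)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    c = np.array([W[i, j] for i, j in pairs])       # transport costs
    A_eq = np.zeros((n, len(pairs)))
    for col, (i, j) in enumerate(pairs):
        A_eq[i, col] += 1.0                         # mass leaving node i
        A_eq[j, col] -= 1.0                         # mass arriving at node j
    res = linprog(c, A_eq=A_eq, b_eq=np.asarray(dz),
                  bounds=(0, None), method="highs")
    return res.fun                                  # TD(z_t1, z_t2)
```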
The corresponding TP dual, formulated in the unrestricted dual variables g : V → R, is:
$$TD(z_{t1}, z_{t2}) = \max_{g} \langle g, dz \rangle \quad \text{subject to} \quad A g \le w_G, \qquad (2)$$
where each row a_{i,j} of A contains a +1 entry in the i-th position and a −1 entry in the j-th position
(one row for every ordered pair i ≠ j), and w_G = (W_G[1, 2], W_G[1, 3], ..., W_G[1, N], W_G[2, 1],
W_G[2, 3], ..., W_G[2, N], ...)^⊤ stacks the corresponding transport costs.
The feasible set of the dual is a convex polytope formed by the intersection of the half-spaces
specified by the constraints {a_{i,j}, i = 1, ..., N_V, j = 1, ..., N_V, i ≠ j} corresponding to the rows of A.
These constraints, which form normals to the hyper-planes bounding this polytope, are symmetrically distributed in
the +i / −j quadrant of R^{N_V} for each combination of i and j. Moreover, A is totally uni-modular
[5] and has rank N_V − 1, with the LP polytope lying in an (N_V − 1)-dimensional space orthogonal
to 1_{N_V}, the all-ones vector in R^{N_V}. In the discussion below, we operate in the original R^{N_V} notation
by considering its restriction to the (N_V − 1)-dimensional sub-space {g ∈ R^{N_V} | ⟨g, 1_{N_V}⟩ = 0},
i.e., Σ_{i∈V} g[i] = 0. The optimal solution to this problem will lie on the (N_V − 1)-simplicial complex
formed by intersections of the (N_V − 1)-dimensional hyper-planes, each at a distance of W_G[i, j]/√2
from the origin, and in the non-degenerate case will coincide with the extreme points of the polytope
Ag ≤ w_G.
Consider the special case of the fully-connected graph with W_G[i, j] = 1, ∀i, j ∈ V. Here,
$$TD(z_{t1}, z_{t2}) = \max_{g} \langle g, dz \rangle \quad \text{subject to} \quad A g \le 1_{N_V (N_V - 1)}. \qquad (3)$$
Each hyper-plane of the LP polytope is at distance 1/√2 from the origin, and the maximum inscribed
hyper-sphere, with center at the origin and radius 1/√2, touches all the polytope's hyper-planes. The
main idea of the embedding is to use the regularity of this polytope, with 2^{N_V} − 2 extreme points
symmetrically distributed in R^{N_V − 1} (see Proposition 2 in the Supplemental), and approximate it by this
hyper-sphere. Relaxing the feasible set of the TP dual from the convex polytope to this hyper-sphere,
eqn. (2) becomes:
$$\widehat{TD}(z_{t1}, z_{t2}) = \max_{g} \langle g, dz \rangle \quad \text{such that} \quad \|g\|_2 = \frac{1}{\sqrt{2}}, \qquad (4)$$
which has a direct solution (the maximum of a linear function over a sphere is attained along the
direction of its gradient, by the Cauchy-Schwarz inequality):
$$\widehat{TD}(z_{t1}, z_{t2}) = \frac{1}{\sqrt{2}}\|dz\| = \frac{1}{\sqrt{2}}\|z_{t1} - z_{t2}\| \quad \text{with} \quad \hat{g}^{*} = \frac{1}{\sqrt{2}}\frac{dz}{\|dz\|}. \qquad (5)$$
The worst-case error of this approximation is O(‖dz‖) (see Theorem 1 of the Supplemental), proving
that the quality of the linear approximation for a graph where all nodes are equidistant neighbors of each
other does not deteriorate as the size of the graph increases.
3 Linear Feature Space Embedding
In the case of an arbitrary distance matrix W_G, however, the polytope loses its regular structure and
has a variable number of extreme points. Also, in general, the maximal inscribed hyper-sphere does
not touch all the bounding hyper-planes, resulting in a very poor approximation [14]. Therefore, to
use the spherical relaxation for the general problem, we apply a similarity transformation M, with
M positive semi-definite, such that A·M = diag{w_G}^{−1} A. Expressing eqn. (2) in terms of a new
variable γ ≜ Mg, we see that the general problem
$$TD(z_{t1}, z_{t2}) = \max_{g} \langle g, dz \rangle \quad \text{subject to} \quad A g \le w_G \qquad (6)$$
is equivalent to the special case given by eqn. (3) in a transformed space, as per:
$$TD(z_{t1}, z_{t2}) = \max_{\gamma} \langle M^{\dagger}\gamma, dz \rangle \quad \text{such that} \quad A \gamma \le 1_{N_V (N_V - 1)}, \qquad (7)$$
where M^† is the (pseudo-)inverse of M. The approximation of eqn. (4) then yields
$\widehat{TD}(z_{t1}, z_{t2}) = \frac{1}{\sqrt{2}}\|M^{\dagger}(z_{t1} - z_{t2})\|$.
As shown in Supplemental Section A, the transformation matrix is M = (1/N_V) L_G, where L_G = D_Δ − Δ_G
is the un-normalized Laplacian matrix of the graph. Here, Δ_G is the adjacency matrix such that
Δ_G[i, j] = W_G[i, j]^{−1}, ∀i ≠ j, and D_Δ is the diagonal degree matrix with D_Δ[i, i] = Σ_{j∈V} Δ_G[i, j]
and D_Δ[i, j] = 0 for i ≠ j. Defining VΛV^⊤ = L_G as the eigen-system of the graph Laplacian, and
the projection of z_t onto the feature space VΛ^† as ẑ_t = Λ^† V^⊤ z_t, yields:
$$\widehat{TD}(z_{t1}, z_{t2}) = \frac{1}{\sqrt{2}}\|\Lambda^{\dagger} V^{\top} dz\| = \frac{1}{\sqrt{2}}\|\hat{z}_{t1} - \hat{z}_{t2}\|. \qquad (8)$$
Consequently, the transportation distance can be approximated by an ℓ2 metric through a similarity
transformation of the original space. In this case the error of the approximation is O(λ_min^{−1}‖dz‖_2)
(see Theorem 1 of the Supplemental), which implies that the approximation improves as the smallest
eigenvalue of the graph Laplacian increases. Also, notice that the eigenvector v_{N_V} of L_G corresponding to the smallest eigenvalue λ_{N_V} = 0 is a constant vector, and therefore ⟨v_{N_V}, dz⟩ = 0 by
the requirement that Σ_{i∈V} dz[i] = 0, thereby automatically reducing the dimension of the projected
space to N_V − 1.
Dimensionality reduction of the feature-space can be achieved by discarding those eigenvectors of L_G
with the P largest eigenvalues whose inverse sum contributes less than a certain percentage of the
total inverse spectral energy. If eigenvectors with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_P are discarded, the
additional error in $\widehat{TD}(z_{t1}, z_{t2})$ is equal to $\sqrt{\textstyle\sum_{k=1}^{P} \lambda_k^{-2}} \,\big/\, \sqrt{\textstyle\sum_{k=P+1}^{N_V} \lambda_k^{-2}}$.
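The whole construction of Eq.(8), including the inverse-spectral-energy truncation, fits in a few lines of numpy. A minimal sketch, assuming strictly positive off-diagonal costs; td_feature_space and td_hat are illustrative names:

```python
import numpy as np

def td_feature_space(W, keep_frac=0.5):
    """Build the projection B = Lambda^+ V^T of Eq.(8) from the Laplacian of
    Delta_G[i, j] = 1 / W_G[i, j], keeping the basis vectors that carry
    keep_frac of the total inverse spectral energy (smallest eigenvalues)."""
    n = W.shape[0]
    Delta = np.zeros_like(W, dtype=float)
    off = ~np.eye(n, dtype=bool)
    Delta[off] = 1.0 / W[off]                      # adjacency Delta_G
    L = np.diag(Delta.sum(axis=1)) - Delta         # un-normalized Laplacian L_G
    lam, V = np.linalg.eigh(L)                     # ascending eigenvalues
    keep = lam > 1e-10                             # drop the constant eigenvector
    lam, V = lam[keep], V[:, keep]
    inv2 = lam ** -2.0
    order = np.argsort(-inv2)                      # most important modes first
    cum = np.cumsum(inv2[order]) / inv2.sum()
    k = int(np.searchsorted(cum, keep_frac)) + 1
    sel = order[:k]
    return (V[:, sel] / lam[sel]).T                # rows of Lambda^+ V^T

def td_hat(B, z1, z2):
    """Eq.(8): the approximate transportation distance as an l2 distance."""
    return np.linalg.norm(B @ (z1 - z2)) / np.sqrt(2.0)
```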
4 Results
First, we provide an empirical validation of the approximation to the transportation distance in Section 4.1. Then, in Section 4.2, the feature-space is used to find representative patterns (i.e., brain
states) in the dynamically changing activations of the brain during a visuo-motor task.
4.1 Validation
To validate the linear approximation to the transportation distance on networks that, like the brain,
exhibit a scale-free property [1], we simulated random graphs of N_V vertices using the following
procedure: a) create an edge between nodes i and j with probability ∝ β(d_i + d_j + ε), where d_i
is the degree of node i, and β, ε are constants that are varied across experiments; b) sample the
weight of the edge from a χ²_1 distribution scaled by a constant σ, also varied across experiments. For
each instance G^{(n)} of the graph, a set of T = 100 states z_t : V^{(n)} → R, t = 1, ..., T, was sampled
from a standard normal distribution such that Σ_i dz[i] = 0. The experiment was repeated 10 times
at graph sizes of N_V = 2^n, n = 4, ..., 12.
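A sketch of this graph-sampling protocol follows. The constant names beta/eps/sigma and the division by n (to obtain a valid probability) are our assumptions; the paper leaves the constants unnamed.

```python
import numpy as np

def sample_scalefree_graph(n, beta=0.5, eps=1.0, sigma=1.0, seed=None):
    """Create edge (i, j) with probability proportional to (d_i + d_j + eps),
    then draw its weight from a chi^2_1 distribution scaled by sigma."""
    rng = np.random.default_rng(seed)
    W, deg = np.zeros((n, n)), np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, beta * (deg[i] + deg[j] + eps) / n)
            if rng.random() < p:
                W[i, j] = W[j, i] = sigma * rng.chisquare(1)
                deg[i] += 1
                deg[j] += 1
    return W
```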
The transportation problem was solved using network simplex [17] in the IBM CPLEX optimization package, while the linear approximation was implemented in Matlab. All experiments were
run on a 2.6GHz Opteron cluster with 16 processors and 32GB RAM each. The amortized running
time for one pair-wise comparison is shown in Fig. 2(a). While an individual run of the network
simplex algorithm is much faster than the eigen-system computation of the linear feature-space, repeatedly solving TD(z_{t1}, z_{t2}) for all pairs of z_{t1}, z_{t2} is orders of magnitude slower than a simple
Euclidean distance, reducing its net efficiency.
The relative error, as shown in Fig. 2(b), reduces with an increasing number of vertices, approximately
as O(N_V^{−1}). This is because the approximation error for an arbitrary graph is O(λ_min^{−1}‖dz‖_2), while
for random graphs satisfying basic regularity conditions the eigenvalues of the graph Laplacian
increase as O(N_V) [8]. In comparison, the Euclidean metric ‖z_{t1} − z_{t2}‖_2 starts with a much higher
relative error with respect to the transportation distance, and although its error also reduces with
graph size, the trend is slower. Secondly, the variance of its error is much higher than that of the linear
embedding proposed here.
In the context of clustering, which is the motivation for this work, a more important property is that
the approximation preserves the relative configuration (i.e., homomorphism) between observations
rather than the numerical values of their distances (i.e., isomorphism), as characterized by its ability to preserve the relative ordering between points (i.e., a topological equivalence property). From
Fig. 2(c), we observe that for data-points that are relatively close to each other, the ordering relationships are preserved with very high accuracy, and accuracy reduces as the relative distance between the
points increases.
Another important property for an embedding scheme, especially for non-linear manifolds like that
induced by the TD, is its ability to preserve the relative distances between points that are in local
neighborhoods (i.e., a coordinate chart property). This is quantified by a normalized neighborhood
error defined by:
$$\text{NormErr}(z_{t1}, z_{t2}) = \frac{|a - b|}{|a|}, \quad \text{where} \quad a = \frac{TD(z_{t1}, z_{t2})}{\sum_{n \in N_{t1}} TD(z_{t1}, z_{tn})} \quad \text{and} \quad b = \frac{\widehat{TD}(z_{t1}, z_{t2})}{\sum_{n \in \widehat{N}_{t1}} \widehat{TD}(z_{t1}, z_{tn})}.$$
The neighborhoods N_{t1} and $\widehat{N}_{t1}$ contain the 10 nearest neighbors of z_{t1} under the TD and $\widehat{TD}$ metrics
respectively. The formulation has the effect of normalizing the distance between z_{t1}, z_{t2} with respect
to the local neighborhood of z_{t1}. It can be seen in Fig. 2(d) that the approximation error according
to this measure is extremely low and almost constant with respect to N_V for points that are close to
each other. These plots indicate that although $\widehat{TD}$ does not hold for distant points on the manifold
induced by TD, it provides a good approximation of its topology.
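Given full pairwise distance matrices under both metrics, the normalized neighborhood error above reduces to a few lines. A sketch (norm_err is an illustrative re-statement of the definition, not code from the paper):

```python
import numpy as np

def norm_err(D, D_hat, t1, t2, k=10):
    """Normalized neighborhood error: each distance from t1 is normalized by
    the summed distance to the k nearest neighbors of t1 under the
    respective metric (exact D vs. approximate D_hat) before comparing."""
    nbrs = np.argsort(D[t1])[1:k + 1]           # skip the point itself
    nbrs_hat = np.argsort(D_hat[t1])[1:k + 1]
    a = D[t1, t2] / D[t1, nbrs].sum()
    b = D_hat[t1, t2] / D_hat[t1, nbrs_hat].sum()
    return abs(a - b) / abs(a)
```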
4.2 Neuroimaging Data
Clustering using the feature-space described in this paper was applied to a data-set of fifteen subjects performing a visuo-motor task during functional MR imaging to discover salient patterns of
recurrent brain activation. The subjects were visually exposed to oriented wedges filled with highcontrast random noise patterns and displayed randomly in one of four quadrants. They were asked
to focus on a center dot and to perform a finger-tapping motion with the right or left hand when the
visual wedge was active in the upper right or lower left quadrants, respectively. Block length of each
visual wedge stimulation varied from 5 to 15s and noise patterns changed at a frequency of 5Hz. A
multi-shot 3D Gradient Echo Planar Imaging (EPI) sequence accelerated in the slice encoding direction with GRAPPA and UNFOLD was used on a GE 3T MRI scanner with a quadrature head
Figure 2: (a) shows the (amortized) per-comparison running time in seconds for the transportation
distance TD and its approximation $\widehat{TD}$ with respect to graph size N_V. (b) graphs the relative
approximation error $(TD - \widehat{TD})/TD$ (±1 std. dev.); the error for a Euclidean approximation
‖z_{t1} − z_{t2}‖_2 is also shown for comparison. (c) shows the quartile-wise ordering error (±1 std. dev.): for
each z_{t1}, the fraction of {z_{t2}, t_2 = 1, ..., T, t_2 ≠ t_1} that are misordered by $\widehat{TD}(z_{t1}, z_{t2})$ with respect to
the ordering induced by TD(z_{t1}, z_{t2}) is calculated; the set {z_{t2}} is divided into quartiles according to their
distance TD(z_{t1}, z_{t2}) from z_{t1}, where the 25th percentile is the set of the first 25% closest points to z_{t1} (similarly
for the 50th and 75th percentiles). Also shown is the ordering error of the Euclidean metric with respect to TD;
error bars are omitted for clarity. (d) shows the quartile-wise approximation error normalized by the average
distance to the 10 nearest neighbors; the dashed line shows the un-normalized approximation error (cf. (b))
for reference.
coil, and T = 171 volumes were acquired at TR = 1.05s with an isotropic resolution of 3mm and a total
imaging time of 3 min; the first five volumes were discarded from the analysis. High-resolution
anatomical scans were also acquired, bias-field corrected, normalized to an MNI atlas space and
segmented into gray and white matter regions. The fMRI scans were motion corrected using linear
registration and co-registered with the structural scans using SPM8 [16]. Next, the time-series data
were high-pass filtered (0.5Hz) to remove gross artifacts due to breathing, blood pressure changes
and scanner drift. The data were first analyzed for task related activity using a general linear model
(GLM) with SPM8 for reference. The design matrix included a regressor for the presentation of the
wedge in each quadrant, convolved with a canonical hemodynamic response function. These results
are shown in Fig. 3(a).
Note that the data for each subject were processed separately. The mean volume of the time-series
was then subtracted, white matter masked out and all further processing was performed on the gray
matter. The functional networks for a subject were computed by estimating the correlations between voxels using the method described in Supplemental Section C, which is sparse, consistent and
computationally efficient. The distance matrix of the functional connectivity graph was constructed
as W_G[i, j] = −log(|ρ[i, j]|/τ), where ρ[i, j] is the correlation between voxels i and j and τ is a
user-defined scale parameter (typically set to 10). This mapping has the effect that W_G[i, j] → 0 as
|ρ[i, j]| → 1 and W_G[i, j] → ∞ as |ρ[i, j]| → 0.
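The cost construction itself is a one-liner; a sketch (the eps guard against log of zero is an added safeguard, not part of the paper):

```python
import numpy as np

def connectivity_costs(rho, tau=10.0, eps=1e-12):
    """Map voxel-wise correlations rho[i, j] to transport costs
    W_G[i, j] = -log(|rho[i, j]| / tau): strongly (anti-)correlated voxel
    pairs become cheap to move mass between, weakly correlated pairs
    become expensive."""
    W = -np.log((np.abs(rho) + eps) / tau)
    np.fill_diagonal(W, 0.0)
    return W
```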
The linear feature-space (eqn. (8)) was computed from the graph Laplacian of Δ_G, where Δ_G[i, j] =
W_G[i, j]^{−1}, retaining only those basis vectors corresponding to the top 80 eigenvalues (≈ 50% of
the spectral energy), and the fMRI volumes were embedded into this low-dimensional space. For
clustering, the state-space method (SSM) of Janoos et al. [13] was used, which is a modified hidden
Markov model with Gaussian emission probabilities that assigns a state (i.e., cluster) label to each
scan while accounting for the temporal blurring caused by the hemodynamic response. This method
associates each time-point t of the fMRI time-series with a vector π_t = {π_t[1], ..., π_t[K] | π_t[k] ∈
[0, 1], Σ_k π_t[k] = 1} giving the probability of belonging to states 1, ..., K. A multinomial logistic
classifier (MLC) was then trained to predict the wedge position at time t from π_t. The number
of clusters was determined by selecting the value of 5 ≤ K ≤ 15 that minimized the generalization
error of the MLC, which acts as a statistic to assess the quality of the model fit and perform model
selection.
It should be noted here that identification of patterns of recorded brain activity was performed in
a purely unsupervised manner. Only model selection and model interpretation was done, post hoc,
using observable correlates of the unobservable mental state of the subject. Spatial maps for each
wedge orientation were computed as an average of cluster centroids weighted by the MLC weights
for that orientation. The z-statistic spatial maps for the group from this analysis are shown in
Fig. 3(b), and exhibit the classic contra-lateral retinotopic organization of the primary visual cortex with the motor representation areas in both hemispheres. Fig. 3(c) shows the distribution of state
probabilities for one subject corresponding to a sequence of wedges oriented in each quadrant for
4 TRs each. Here, we see that the probability of a particular state is highly structured with respect
to the orientation of the wedge. For example, at the start of the presentation with the wedge in the
lower-right quadrant, state 1 is most probable. But by the second interval, state 2 becomes more
dominant and this distribution remains stable for the rest of this presentation. Then, as the display
transitions to the lower-left quadrant, states 3 and 4 become equiprobable. However, as this orientation is maintained, the probability distribution peaks about state 4 and remains stable. A similar
pattern is observed in the probability distributions for the other orientations.
For comparison, we also performed the same clustering using a low-dimensional PCA basis explaining ≈ 50% of the variance of the data (d = 60), and the low-dimensional basis (CorrEig) proposed
by [13] derived from the eigen-decomposition of the voxel-wise correlation matrix (d ≈ 110).
Multinomial logistic classifiers (MLC) were trained for each case and the number of states was tuned
using the same procedure as above. The spatial maps reconstructed from these two feature-spaces
(not shown here) exhibited task-specific activation patterns, although the foci were much weaker
and much more diffused as compared to those of the TD-hat feature-space. The error of the MLC in
predicting the stimulus at time t from the state probability vector π_t, which reflects the model's
ability to capture patterns in the data related to the mental state of the subject, for these three feature
spaces is listed in Table 1.
             Lower right      Lower left       Upper left       Upper right      Overall
TD-hat       0.17 (± 0.05)    0.13 (± 0.02)    0.21 (± 0.04)    0.12 (± 0.03)    0.16 (± 0.07)
PCA          0.41 (± 0.08)    0.37 (± 0.10)    0.39 (± 0.09)    0.36 (± 0.08)    0.38 (± 0.18)
CorrEig      0.29 (± 0.05)    0.22 (± 0.04)    0.30 (± 0.06)    0.23 (± 0.05)    0.26 (± 0.10)
Table 1: The generalization error of the multinomial logistic classifier to predict the orientation of the wedge
from the distribution of state labels estimated by the SSM trained on three low-dimensional representations of
the fMRI data: a) the approximate transportation distance TD-hat; b) the PCA basis; and c) the eigen basis of the
voxel-wise correlation matrix (CorrEig). Due to the random presentation of wedge orientations, the chance
level prediction error varied between 68% and 81% for each subject.
We see that the prediction error, and therefore the ability of the state-space model to identify
mental-state related patterns in the data, is significantly better for the TD-hat feature-space as compared
to that of [13] (p < 10^{-6}, 1-sided 2-sample t-test), while PCA performs significantly worse
than both the other feature-spaces, as expected. Moreover, the TD-hat representation provides a significant
difference (p < 0.001, 1-sided 2-sample t-test) between the prediction rates for the wedge
orientations with and without the finger-tapping task, implying that the model is able to better detect
brain patterns when both visual and motor regions are involved as compared to those involving only
the visual regions, probably because of the more distinct functional signature of the former.
[Figure 3 panels: (a) GLM regression; (b) SSM based clustering; (c) cluster membership probability vs. experimental stimulus, showing state labels 1-8 over TRs 1-16.]
Figure 3: Fig. (a): Group-level maximum intensity projections of significantly activated voxels (p < 0.05,
FWE corrected) at the four orientations of the wedge and the hand motor actions, computed using SPM8.
Fig. (b): Group-level z-maps showing the activity for each orientation of the wedge, computed as an average of
cluster centroids weighted by the MLC weights. Displayed are the posterio-lateral and posterio-medial views of
the left and right hemispheres, respectively. Values |z| ≤ 1 have been masked out for visual clarity. Fig. (c): The
SSM state probability vector π_t for one subject. The size of the circles corresponds to the marginal probability
π_t[k] of state k = 1 . . . 8 during the display of the wedge in the lower right, lower left, upper left and upper right
quadrants for 4 TRs each. States have been relabeled for expository purposes.
5 Conclusion
In this paper, we have presented an approach to compare and identify patterns of brain activation
during a mental process using a distance metric that is aware of the connectivity structure of the
underlying brain networks. This distance metric is obtained by an Euclidean approximation of the
transportation distance between patterns via a spherical relaxation of the linear-programming dual
polytope. The embedding is achieved by a transformation of the original space of the function
with the graph Laplacian of the network. Intuitively, the eigen-system of the graph Laplacian indicates
max-flow / min-cut partitions of the graph [10], and therefore projecting onto these bases increases
the cost if the difference between two states of the function is concentrated on relatively distant or
disconnected regions of the graph.
We provided theoretical bounds on the quality of the approximation and through empirical validation
demonstrated low error that, importantly, decreases as the size of the problem increases. We also
showed the superior ability of this distance metric to identify salient patterns of brain activity, related
to the internal mental state of the subject, from an fMRI study of visuo-motor tasks.
The framework presented here is applicable to the more general problem of identifying patterns in
time-varying measurements distributed over a network that has an intrinsic notion of distance and
proximity, such as social, sensor, communication, transportation, energy and other similar networks.
Future work would include assessing the quality of the approximation for sparse, restricted topology,
small-world and scale-free networks that arise in many real world cases, and applying the method
for detecting patterns and outliers in these types of networks.
References
[1] Achard, S., Salvador, R., Whitcher, B., Suckling, J., Bullmore, E.: A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. Neurosci 26(1), 63–72 (Jan 2006)
[2] Barrett, L.F.: The future of psychology: Connecting mind to brain. Perspect Psychol Sci 4(4), 326–339 (Jul 2009)
[3] Calhoun, V.D., Adali, T., Pearlson, G.D., Pekar, J.J.: Spatial and temporal independent component analysis of functional MRI data containing a pair of task-related waveforms. Hum Brain Map 13(1), 43–53 (May 2001)
[4] Cecchi, G., Rish, I., Thyreau, B., Thirion, B., Plaze, M., Paillere-Martinot, M.L., Martelli, C., Martinot, J.L., Poline, J.B.: Discriminative network models of schizophrenia. In: Adv Neural Info Proc Sys (NIPS) 22, pp. 252–260 (2009)
[5] Chandrasekaran, R.: Total unimodularity of matrices. SIAM Journal on Applied Mathematics 17(6), pp. 1032–1034 (1969)
[6] Chung, F.: Lectures on Spectral Graph Theory. CBMS Reg Conf Series Math, Am Math Soc (1997)
[7] Deng, Y., Du, W.: The Kantorovich metric in computer science: A brief survey. Electronic Notes in Theoretical Computer Science 253(3), 73–82 (2009)
[8] Ding, X., Jiang, T.: Spectral distributions of adjacency and Laplacian matrices of random graphs. The Annals of Applied Probability 20(6), 2086–2117 (2010)
[9] Friston, K., Chu, C., Mourão-Miranda, J., Hulme, O., Rees, G., Penny, W., Ashburner, J.: Bayesian decoding of brain images. Neuroimage 39(1), 181–205 (Jan 2008)
[10] Grieser, D.: The first eigenvalue of the Laplacian, isoperimetric constants, and the max flow min cut theorem. Archiv der Mathematik 87, 75–85 (2006)
[11] Haynes, J.D., Rees, G.: Decoding mental states from brain activity in humans. Nature Rev: Neurosci 7(7), 523–534 (Jul 2006)
[12] Indyk, P., Thaper, N.: Fast color image retrieval via embeddings. ICCV (2003)
[13] Janoos, F., Singh, S., Wells III, W., Mórocz, I.Á., Machiraju, R.: State-space models of mental processes from fMRI (2011)
[14] Khachiyan, L.G., Todd, M.J.: On the complexity of approximating the maximal inscribed ellipsoid for a polytope. Mathematical Programming 61, 137–159 (1993)
[15] Lashkari, D., Sridharan, R., Golland, P.: Categories and functional units: An infinite hierarchical model for brain activations. In: Advances in Neural Information Processing Systems, vol. 23, pp. 1252–1260 (2010)
[16] Multiple: Statistical Parametric Mapping: The Analysis of Functional Brain Images. Acad Press (2007)
[17] Orlin, J.B.: On the simplex algorithm for networks and generalized networks. In: Mathematical Programming Essays in Honor of George B. Dantzig Part I, Mathematical Programming Studies, vol. 24, pp. 166–178. Springer Berlin Heidelberg (1985)
[18] O'Toole, A.J., Jiang, F., Abdi, H., Pénard, N., Dunlop, J.P., Parent, M.A.: Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data. J Cog Neurosci 19(11), 1735–1752 (Nov 2007)
[19] Pascual-Marqui, R.D., Michel, C.M., Lehmann, D.: Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Trans Biomed Eng 42(7), 658–665 (Jul 1995)
[20] Rachev, S.T., Rüschendorf, L.: Mass transportation problems: Volume I: Theory (probability and its applications) (March 1998)
[21] Rubner, Y., Tomasi, C., Guibas, L.J.: The earth mover's distance as a metric for image retrieval (1998)
[22] Shirdhonkar, S., Jacobs, D.: Approximate earth mover's distance in linear time. In: Comp Vis Pat Recog., IEEE Conf., pp. 1–8 (2008)
4,167 | 4,772 |
Scaling MPE Inference for Constrained Continuous
Markov Random Fields with Consensus Optimization
Stephen H. Bach
University of Maryland, College Park
College Park, MD 20742
[email protected]
Matthias Broecheler
Aurelius LLC
[email protected]
Lise Getoor
University of Maryland, College Park
College Park, MD 20742
[email protected]
Dianne P. O'Leary
University of Maryland, College Park
College Park, MD 20742
[email protected]
Abstract
Probabilistic graphical models are powerful tools for analyzing constrained, continuous domains. However, finding most-probable explanations (MPEs) in these
models can be computationally expensive. In this paper, we improve the scalability of MPE inference in a class of graphical models with piecewise-linear and
piecewise-quadratic dependencies and linear constraints over continuous domains.
We derive algorithms based on a consensus-optimization framework and demonstrate their superior performance over state of the art. We show empirically that in
a large-scale voter-preference modeling problem our algorithms scale linearly in
the number of dependencies and constraints.
1 Introduction
There is a growing need for statistical models which can capture rich dependencies in structured
data. Link prediction, collective classification, modeling information diffusion, entity resolution,
and viral marketing are all important tasks where incorporating structural dependencies is crucial
for good predictive performance. Graphical models [1] are an expressive class of statistical models
to address such problems, but their applicability to large datasets is often limited by impractically
expensive inference and learning algorithms.
In this paper, we focus on scaling up most-probable-explanation (MPE) inference for a particular
class of graphical models called constrained continuous Markov random fields (CCMRFs) [2]. Like
other Markov random fields (MRFs), CCMRFs define a joint distribution over a collection of random variables and capture local dependencies through potential functions. However, unlike many
popular discrete MRFs which are defined over binary random variables, CCMRFs are defined over
continuous random variables. They also allow their domains to be constrained. This makes CCMRFs ideally suited to reason over continuous quantities, such as similarity, affinity, or probability,
without making assumptions about the variables' marginal distributions.¹
MPE inference for CCMRFs is tractable under mild convexity assumptions because it can be cast as a
convex numeric optimization problem, which can be solved by interior-point methods [3]. However,
for large problems, interior-point methods are impractically slow because each step takes time up to
cubic in the size of the problem.
¹ In contrast with Gaussian random fields, where the random variables are assumed to be Gaussian.
We show how hinge-loss potential functions that are often used to model real world problems in
CCMRFs (see, e.g., [3, 2, 4, 5, 6, 7]) can be exploited to significantly speed up the numeric optimization and therefore MPE inference. To do so, we rely on a consensus optimization framework [8].
Consensus optimization has recently been shown to perform well on relaxations of discrete optimization problems, like MRF MPE inference [8, 9, 10].
The contributions of this paper are as follows: First, we derive algorithms for the MPE problem
in CCMRFs with piecewise-linear and piecewise-quadratic dependencies in Section 3. Next, we
improve the performance of consensus optimization by deriving an algorithm that exploits opportunities for closed-form solutions to subproblems, based on the current optimization iterate, before
resorting to an iterative solver when the closed-form solution is not applicable. Then, we present an
experimental evaluation (Section 4) that demonstrates superior performance of our approach over
a commercial interior-point method, the current state-of-the-art for CCMRF MPE inference. In a
voter-preference modeling problem, our algorithms scaled linearly in the number of dependencies
and constraints. In addition, compared to an exact solver, our method achieves at least 99.6% of the
optimal solution. Finally, we show that our improved consensus-optimization algorithm more than
doubles the speed of a less sophisticated approach. To the best of our knowledge, we are the first
to show results on MPE inference for any MRF variant using consensus optimization with iterative
methods to solve subproblems.
2 Background
In this section we formally introduce the class of probabilistic graphical models for which we derive
inference algorithms and present a simple running example (this is the same example used in our
experiments in Section 4). We also give an overview of consensus optimization [8], the abstract
framework we will use to derive our algorithms in Section 3.
2.1 Constrained continuous Markov random fields and the MPE problem
A constrained continuous Markov random field (CCMRF) is a probabilistic graphical model defined
over continuous random variables with a constrained domain [2]. In this paper, we focus on a
common subclass in which dependencies among continuous random variables are defined in terms
of hinge-loss functions and linear constraints:
Definition 1. A hinge-loss constrained continuous Markov random field f is a probability density
over a finite set of n random variables X = {X_1, . . . , X_n} with domain D = [0, 1]^n. Let Φ =
{φ_1, . . . , φ_m} be a finite set of m continuous potential functions of the form

φ_j(X) = [max{ℓ_j(X), 0}]^{p_j}

where ℓ_j is a linear function of X and p_j ∈ {1, 2}. Let C = {C_1, . . . , C_r} be a finite set of r linear
constraint functions associated with two index sets denoting equality and inequality constraints, E
and I, which define the feasible set D̃ = {X ∈ D | C_k(X) = 0, ∀k ∈ E and C_k(X) ≤ 0, ∀k ∈ I}.
If X ∉ D̃, then f(X) = 0. If X ∈ D̃, then, for a set of non-negative free parameters λ =
{λ_1, . . . , λ_m},

f(X) = (1/Z(λ)) exp(−Σ_{j=1}^{m} λ_j φ_j(X));   Z(λ) = ∫_{D̃} exp(−Σ_{j=1}^{m} λ_j φ_j(X)) dX.
Definition 1 is a special case of the definition of CCMRFs of Broecheler and Getoor [2]. It says that
hinge-loss CCMRFs are models in which densities of assignments to variables are defined by an
exponential of the negated, weighted sum of functions over those assignments, unless any constraint
is violated, in which case the density is zero.
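To make the definition concrete, here is a minimal sketch of the weighted hinge-loss sum (the negative log of the unnormalized density), assuming each linear function ℓ_j is supplied as a coefficient vector c and offset c0; this encoding is ours, not part of the definition.

```python
import numpy as np

def ccmrf_energy(X, potentials, lam):
    """Sum_j lam[j] * max(l_j(X), 0)**p_j for a hinge-loss CCMRF, with
    each potential encoded as (c, c0, p) so that l_j(X) = c @ X + c0.
    On the feasible set, f(X) is proportional to exp(-energy)."""
    energy = 0.0
    for (c, c0, p), w in zip(potentials, lam):
        energy += w * max(np.dot(c, X) + c0, 0.0) ** p
    return energy
```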
The MPE problem is to maximize f(X) such that X ∈ D̃. In a hinge-loss CCMRF, the normalizing
function Z(λ) is constant over X for fixed parameters and the exponential is maximized by
minimizing its negated argument, so the MPE problem is

arg max_X f(X) ≡ arg min_{X ∈ [0,1]^n} Σ_{j=1}^{m} λ_j φ_j(X)  s.t.  C_k(X) = 0, ∀k ∈ E and C_k(X) ≤ 0, ∀k ∈ I.   (1)
Hinge-loss CCMRFs have two main desirable properties. First, the MPE problem is convex. Second,
they are expressive. Hinge-loss functions are useful for many domains. Instances of hinge-loss
CCMRFs have been used previously to model many problems, including link prediction, collective
classification [3, 2], prediction of opinion diffusion [4], medical decision making [5], trust analysis
in social networks [6], and group detection in social networks [7].
For ease of presentation, in the rest of this paper, when we refer to CCMRFs we mean hinge-loss
CCMRFs. Next, we present a motivating CCMRF, using an example from Broecheler et al. [4].
Example 1 (Opinion diffusion). Consider a social network S ≡ (V, E) of voters in a set V with
relationships defined by annotated, unweighted, directed edges (v_a, v_b)_ρ ∈ E. Here, v_a, v_b ∈ V
and ρ is an annotation denoting the type of relationship: friend, boss, etc. To reason about
voters' opinions towards two hypothetical political parties, liberal (L) and conservative (C), we
introduce two nonnegative random variables X_{a,L} and X_{a,C}, summing to at most one, representing
the strength of voter v_a's preferences for each political party. We assume that v_a's preference results
from an intrinsic opinion and the influence of v_a's social group. We represent the intrinsic opinion
by opinion(v_a), ranging from −1 (strongly favoring L) to 1 (strongly favoring C).
The influence of the social group is modeled by potential functions that we generically denote
as φ. First we penalize deviations from intrinsic opinions. If opinion(v_a) < 0, then φ ≡
[max{|opinion(v_a)| − X_{a,L}, 0}]^p, which penalizes preferences that are weaker than intrinsic opinions. Similarly, φ ≡ [max{opinion(v_a) − X_{a,C}, 0}]^p when opinion(v_a) > 0. These hinge-loss
potential functions are weighted by a fixed parameter λ_opinion.
Next we penalize disagreements between voters in a social group. For each edge (v_a, v_b)_ρ we
introduce potential functions φ ≡ [max{X_{b,L} − X_{a,L}, 0}]^p and φ ≡ [max{X_{b,C} − X_{a,C}, 0}]^p,
penalizing preferences of v_a that are not as strong as those of v_b. These potential functions are
weighted by parameters λ_ρ defining the relative influence of the ρ relationship. For example, we
expect more influence from a close friend than from a co-worker.
We consider p = 1, meaning that the model has no preference between distributing the loss and
accumulating it on a single potential function, and p = 2, meaning that the model prefers to
distribute the loss among multiple hinge-loss functions. To illustrate the choice, consider a single
voter in a CCMRF with two equally-weighted potential functions φ_1 ≡ [max{0.9 − X_{a,L}, 0}]^p and
φ_2 ≡ [max{0.6 − X_{a,C}, 0}]^p. Let 0.9 and 0.6 represent the preferences of the voter's two friends.
If p = 1, then any assignment X_{a,L}, X_{a,C} with X_{a,L} ∈ [0.4, 0.9] and X_{a,C} = 1 − X_{a,L} is an
MPE. However, if p = 2, then only the assignment X_{a,L} = 0.65, X_{a,C} = 0.35 is an MPE. We see
that, all else being equal, squared potential functions "respect" the minima of individual potential
functions if they cannot all be minimized. However, this useful modeling feature generally increases
the computational cost. As we demonstrate in Section 4, scaling MPE inference for CCMRFs with
piecewise-quadratic potential functions is one of the contributions of our work.
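The claim about the two MPE sets can be checked numerically; the grid search below is only a sanity check of the worked example, not part of the inference algorithms.

```python
import numpy as np

# Two-voter example: minimize [max(0.9 - xL, 0)]^p + [max(0.6 - xC, 0)]^p
# with xC = 1 - xL. For p = 1 every xL in [0.4, 0.9] is optimal; for
# p = 2 only xL = 0.65 is.
for p in (1, 2):
    xL = np.linspace(0.0, 1.0, 100001)
    xC = 1.0 - xL
    obj = np.maximum(0.9 - xL, 0.0) ** p + np.maximum(0.6 - xC, 0.0) ** p
    opt = xL[np.isclose(obj, obj.min(), atol=1e-12)]
    print(f"p={p}: minimizers span [{opt.min():.2f}, {opt.max():.2f}]")
```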
2.2 Consensus optimization
Consensus optimization is a framework that optimizes an objective by dividing it into independent
subproblems and then iterating to reach a consensus on the optimum [8]. In this subsection we
present an abstract consensus optimization algorithm for Problem (1), the MPE problem for CCMRFs. In Section 3 we will derive specialized versions for different potential functions.
Given a CCMRF (X, Φ, C, E, I, λ) and parameter ρ > 0, the algorithm first constructs a modified
MPE problem in which each potential and constraint is a function of different variables. The variables are constrained to make the new and original MPE problems equivalent. We let x_j be a copy
of the variables in X that are used in the potential function φ_j, j = 1, . . . , m, and x_{k+m} be a copy
of those used in the constraint function C_k, k = 1, . . . , r. We also introduce an indicator function
I_k for each constraint function, where I_k[C_k(x_{k+m})] = 0 if the constraint is satisfied and ∞ if it is
not. Finally, let X_i be the variables in X that are copied in x_i, i = 1, . . . , m + r.
Consensus optimization solves the new MPE problem
arg min_{x_i ∈ [0,1]^{n_i}} Σ_{j=1}^{m} λ_j φ_j(x_j) + Σ_{k=1}^{r} I_k[C_k(x_{k+m})]  subject to  x_i = X_i   (2)
Algorithm Consensus optimization
Input: CCMRF (X, Φ, C, E, I, λ), ρ > 0
Initialize x_j as a copy of the variables in X that appear in φ_j, j = 1, . . . , m
Initialize x_{k+m} as a copy of the variables in X that appear in C_k, k = 1, . . . , r
Initialize y_i at 0, i = 1, . . . , m + r
while not converged do
  for i = 1, . . . , m + r do
    y_i ← y_i + ρ(x_i − X_i)
  end for
  for j = 1, . . . , m do
    x_j ← arg min_{x_j ∈ [0,1]^{n_j}} λ_j φ_j(x_j) + (ρ/2)‖x_j − X_j + (1/ρ)y_j‖₂²
  end for
  for k = 1, . . . , r do
    x_{k+m} ← arg min_{x_{k+m} ∈ [0,1]^{n_{k+m}}} I_k[C_k(x_{k+m})] + (ρ/2)‖x_{k+m} − X_{k+m} + (1/ρ)y_{k+m}‖₂²
  end for
  Set each variable in X to the average of its copies
end while
In Problem (2), i = 1, . . . , m + r and n_i is the number of components of x_i; inspection shows that Problems (1) and (2) are equivalent.
We use the alternating direction method of multipliers (ADMM) [11, 12, 8] to solve Problem (2).
ADMM can be viewed as an approach to combining the scalability of dual decomposition and the
convergence properties of augmented Lagrangian methods [8]. We outline the algorithm in the
above pseudocode. At each step in the iteration, it solves m + r independent optimization problems,
one for each ?j and each Ck . It then averages the copies of variables to get the consensus variables
X for the next iteration. Lagrange multipliers yi for each xi ensure convergence. The objective is
known to converge to its optimum and the iterates to approach feasibility under mild assumptions
[13, 14, 8]. See Boyd et al. [8] or this paper's supplementary material for more information. In the
next section we derive algorithms with specific methods for updating each x_j.
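A skeleton of the loop in NumPy may help fix ideas. The local solvers are passed in as black boxes (they are derived in Section 3), and the uniform initialization at 0.5 is our choice, not the paper's.

```python
import numpy as np

def consensus_admm(solvers, var_index, n, rho=1.0, iters=100):
    """Consensus optimization (ADMM) skeleton. solvers[i](d) must return
    argmin over x in [0,1]^k of f_i(x) + (rho/2)*||x - d||^2, where f_i
    is the i-th potential (or constraint indicator); var_index[i] lists
    which of the n consensus variables its local copy x_i refers to."""
    X = np.full(n, 0.5)                        # consensus variables
    x = [X[idx].copy() for idx in var_index]   # local copies
    y = [np.zeros(len(idx)) for idx in var_index]
    for _ in range(iters):
        for i, idx in enumerate(var_index):
            y[i] += rho * (x[i] - X[idx])            # multiplier update
            x[i] = solvers[i](X[idx] - y[i] / rho)   # independent subproblem
        num, den = np.zeros(n), np.zeros(n)
        for i, idx in enumerate(var_index):          # consensus step:
            num[idx] += x[i]                         # average the copies
            den[idx] += 1.0
        X = num / np.maximum(den, 1.0)
    return X
```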
3 Solving the MPE problem with consensus optimization
We now derive algorithms to update x_j for each potential function φ_j. At this point we drop the
more complex notation and view each update as an instance of the problem

arg min_{x ∈ [0,1]^n} λ[max{c^T x + c_0, 0}]^p + (ρ/2)‖x − d‖₂²   (3)

where c, d ∈ ℝ^n, c_0 ∈ ℝ, λ ≥ 0, p ∈ {1, 2}, and ρ > 0. To map an update to Problem (3) for a
potential function φ_j and parameter λ_j, let n = n_j, c^T x + c_0 = ℓ_j(x_j), d = X_j − (1/ρ)y_j, λ = λ_j,
p = p_j, and keep ρ the same.
Our first algorithm, CO-Linear, solves the MPE problem when p = 1 and n ≤ 2 in each instance
of Problem (3), i.e., each potential function has at most two unknowns and is piecewise-linear. We
present the update in terms of the intermediate optimization problems it solves. (We use variables
α with parenthetical superscripts to easily refer to the solutions of intermediate problems, but implementations should not treat them as separate variables.) It first finds α^(1), which is easy to do by
inspection. For each component α_j^(1) of α^(1),

α_j^(1) = 0 if d_j < 0;   α_j^(1) = d_j if 0 ≤ d_j ≤ 1;   α_j^(1) = 1 if d_j > 1,

where j = 1, . . . , n. We refer to this procedure as clipping the vector d to the interval [0, 1].
In this section, when we refer to clipping to [a, b], we mean an identical vector except that any
component outside a bound a or b is changed to that bound. α^(2) is also easy to find: clip the vector
d − (λ/ρ)c to [0, 1]. There are two cases when finding α^(3). If n = 1, clip the scalar −c_0/c_1 to
Algorithm Update for CO-Linear
Input: c, d ∈ ℝ^n where n ≤ 2, c_0 ∈ ℝ, λ ≥ 0, ρ > 0
Output: x* = arg min_{x ∈ [0,1]^n} λ[max{c^T x + c_0, 0}] + (ρ/2)‖x − d‖₂²
α^(1) ← arg min_{x ∈ [0,1]^n} (ρ/2)‖x − d‖₂² (by inspection)
if c^T α^(1) + c_0 ≤ 0 then
  x* ← α^(1)
else
  α^(2) ← arg min_{x ∈ [0,1]^n} λc^T x + (ρ/2)‖x − d‖₂² (by inspection)
  if c^T α^(2) + c_0 ≥ 0 then
    x* ← α^(2)
  else
    x* ← α^(3) ← arg min_{x ∈ [0,1]^n s.t. c^T x + c_0 = 0} (ρ/2)‖x − d‖₂² (by substitution and inspection)
  end if
end if
[0, 1]. If n = 2, solve c^T x = −c_0 for one of the components of x, substitute to eliminate that
component in the objective, and compute the interval [min, max] on which x ∈ [0, 1]² when the
remaining component is in [min, max] and c^T x = −c_0. Inspect the reduced objective and clip the
unconstrained minimizer to [min, max]. Substitute the result back into c^T x = −c_0 to find the other
component.
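To make the case analysis concrete, here is a minimal NumPy transcription of the CO-Linear update (our own sketch, not the authors' Java implementation); `b` stands in for c_0 to avoid clashing with the components of c, and the n = 2 branch uses the hyperplane projection of d as the unconstrained minimizer of the reduced objective.

```python
import numpy as np

def co_linear_update(c, b, d, lam, rho):
    """CO-Linear subproblem (p = 1, n <= 2): minimize
        lam * max(c @ x + b, 0) + (rho/2) * ||x - d||^2  over x in [0,1]^n.
    c, d are NumPy arrays; b plays the role of c_0 in the text."""
    a1 = np.clip(d, 0.0, 1.0)                    # alpha^(1)
    if c @ a1 + b <= 0:
        return a1
    a2 = np.clip(d - (lam / rho) * c, 0.0, 1.0)  # alpha^(2)
    if c @ a2 + b >= 0:
        return a2
    # alpha^(3): minimize ||x - d||^2 s.t. c @ x + b = 0, x in [0,1]^n
    if len(c) == 1:
        return np.array([np.clip(-b / c[0], 0.0, 1.0)])
    # n == 2: eliminate x_j (the coordinate with larger |c| entry), bound
    # the free coordinate x_i so that x_j stays in [0,1], and clip the
    # hyperplane projection of d to that interval.
    i, j = (0, 1) if abs(c[1]) >= abs(c[0]) else (1, 0)
    lo, hi = 0.0, 1.0
    if c[i] != 0.0:
        t0 = -b / c[i]                 # x_i when x_j = 0
        t1 = (-b - c[j]) / c[i]        # x_i when x_j = 1
        lo, hi = max(lo, min(t0, t1)), min(hi, max(t0, t1))
    proj = d - ((c @ d + b) / (c @ c)) * c   # projection onto c @ x + b = 0
    x = np.empty(2)
    x[i] = np.clip(proj[i], lo, hi)
    x[j] = (-b - c[i] * x[i]) / c[j]
    return x
```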
To verify that the CO-Linear update is correct, first consider the case when c^T α^(1) + c_0 ≤ 0. Since
α^(1) minimizes (ρ/2)‖x − d‖₂² and λ[max{c^T x + c_0, 0}] ≥ 0, each term of the update objective
is minimized at α^(1), so x* = α^(1). In the second case, if c^T α^(1) + c_0 > 0, but c^T α^(2) + c_0 ≥ 0,
then observe that α^(2) minimizes an objective which bounds the update objective below, but the two
objectives are equal at α^(2). Therefore, x* = α^(2). Finally, in the third case, c^T α^(1) + c_0 > 0 and
c^T α^(2) + c_0 < 0. We know ∃x ∈ [0, 1]^n such that c^T x + c_0 = 0, so the problem can be split into
two feasible problems:

τ^(1) = arg min_{x ∈ [0,1]^n s.t. c^T x + c_0 ≤ 0} (ρ/2)‖x − d‖₂²
τ^(2) = arg min_{x ∈ [0,1]^n s.t. c^T x + c_0 ≥ 0} λc^T x + (ρ/2)‖x − d‖₂².

Either x* = τ^(1) or x* = τ^(2) (or both). We use Lemma 4 of Martins et al. [9], which states that
given a convex, feasible optimization problem over a nonempty convex subset of ℝ^n with a convex
constraint, if that constraint is violated by the minimizer of a relaxed problem without that constraint
over the same set, then that constraint will be active at the minimizer of the original problem. Since
c^T α^(1) + c_0 > 0 and c^T α^(2) + c_0 < 0, we conclude that c^T τ^(1) + c_0 = 0 and c^T τ^(2) + c_0 = 0.
Therefore x* = τ^(1) = τ^(2) = α^(3).
CO-Linear is sufficient to solve many useful and interesting models. Unfortunately, the piecewise-quadratic case (p = 2) is more difficult. If n > 1 and it cannot be established that c^T x* + c_0 ≤ 0,
then the approach of CO-Linear is not applicable, because minimizing λ(c^T x)² + 2λc_0 c^T x +
(ρ/2)‖x − d‖₂² over [0, 1]^n does not have a (known) closed-form solution in general. That motivates
us to derive an algorithm for the piecewise-quadratic case that can resort to a sufficiently general
iterative solver if necessary. Obviously, a naive algorithm could use an iterative method immediately
if n > 1. However, CO-Linear still offers some insight into the problem. If clipping d to [0, 1] gives
a vector α^(1) such that c^T α^(1) + c_0 ≤ 0, then again it is the minimizer.
Our second algorithm, CO-Quad, first tries to find x* by clipping d to [0, 1] for any n. If it does
not succeed and n = 1, then α^(2) can be found by inspection. If n > 1, then an iterative method
is required. Note that after concluding that c^T x* + c_0 ≥ 0, we can just minimize λ(c^T x)² +
2λc_0 c^T x + (ρ/2)‖x − d‖₂² to find x*, since λ(c^T x)² + 2λc_0 c^T x is symmetric about the hyperplane
c^T x + c_0 = 0, (ρ/2)‖x − d‖₂² is minimized for some x such that c^T x + c_0 ≥ 0, and the objective is
the same as the subproblem on that region.
Algorithm Update for CO-Quad
Input: c, d ∈ ℝ^n, c_0 ∈ ℝ, λ ≥ 0, ρ > 0
Output: x* = arg min_{x ∈ [0,1]^n} λ[max{c^T x + c_0, 0}]² + (ρ/2)‖x − d‖₂²
α^(1) ← arg min_{x ∈ [0,1]^n} (ρ/2)‖x − d‖₂² (by inspection)
if c^T α^(1) + c_0 ≤ 0 then
  x* ← α^(1)
else
  if n = 1 then
    x* ← α^(2) ← arg min_{x ∈ [0,1]^n} λ(c^T x)² + 2λc_0 c^T x + (ρ/2)‖x − d‖₂² (by inspection)
  else
    x* ← α^(3) ← arg min_{x ∈ [0,1]^n} λ(c^T x)² + 2λc_0 c^T x + (ρ/2)‖x − d‖₂² (by iterative method)
  end if
end if
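A sketch of this update in NumPy/SciPy follows. The n = 1 closed form comes from setting the derivative of the smooth quadratic to zero; SciPy's L-BFGS-B stands in for the unspecified iterative solver (the authors used MOSEK's interior-point method for this case, as described in Section 4.2).

```python
import numpy as np
from scipy.optimize import minimize

def co_quad_update(c, b, d, lam, rho):
    """CO-Quad subproblem (p = 2): minimize
        lam * max(c @ x + b, 0)**2 + (rho/2) * ||x - d||^2  over [0,1]^n.
    b plays the role of c_0; L-BFGS-B is our stand-in iterative solver."""
    a1 = np.clip(d, 0.0, 1.0)            # alpha^(1): clip d to the box
    if c @ a1 + b <= 0:
        return a1
    if len(c) == 1:
        # closed form: minimize lam*(c0*x + b)^2 + (rho/2)*(x - d)^2, clip
        c0 = c[0]
        x = (rho * d[0] - 2.0 * lam * b * c0) / (rho + 2.0 * lam * c0 ** 2)
        return np.array([np.clip(x, 0.0, 1.0)])
    # hinge is active: minimize the smooth quadratic over the box
    obj = lambda x: lam * (c @ x + b) ** 2 + 0.5 * rho * np.sum((x - d) ** 2)
    grad = lambda x: 2.0 * lam * (c @ x + b) * c + rho * (x - d)
    res = minimize(obj, a1, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * len(c))
    return res.x
```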
To update x_{k+m} for each constraint C_k, both CO-Linear and CO-Quad use the method proposed by
Martins et al. [9], which handles the case when C_k(x_{k+m}) = 0 encodes a probability simplex. This is
sufficient for the purposes of this work.
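The constraint subproblem then amounts to a Euclidean projection of d onto the simplex. The standard sort-based projection is shown below as an illustration; it is the textbook algorithm, not necessarily the exact variant used in [9].

```python
import numpy as np

def project_simplex(d):
    """Project d onto {x : x >= 0, sum(x) = 1}, i.e., the constraint
    update argmin ||x - d||^2 over the probability simplex."""
    u = np.sort(d)[::-1]                 # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(d) + 1)
    k = ks[u + (1.0 - css) / ks > 0][-1] # largest k with positive gap
    tau = (1.0 - css[k - 1]) / k
    return np.maximum(d + tau, 0.0)
```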
4 Experiments
We evaluated the scalability of CO-Linear and CO-Quad by generating social networks of varying
sizes, constructing CCMRFs with them, and measuring the running time required to find an MPE.
We compared our approach to the previous state-of-the-art approach for finding MPEs in CCMRFs,
which uses an interior point method implemented in MOSEK, a commercial optimization package
(http://www.mosek.com). Next we describe the social-network and CCMRF generation procedure,
the implementations and setup, and then present the results.
4.1 Social-network and CCMRF generation
Our social-network generation process follows Example 1 and is based on the procedure described
by Broecheler et al. [4] to generate social networks using power-law degree distributions. Given
a desired number of vertices N (which the procedure matches approximately) and a list of edge
types, along with parameters α and β for each type, the procedure samples in- and out-degrees
for each node for each edge type from the power-law distribution D(k) ∝ βk^{−α}. Incoming and
outgoing edges of the same type are then matched randomly to create edges until no more matches
are possible. Vertices with no incoming or outgoing edges are removed from the network. We used
six edge types with various parameters to represent relationships in social networks with different
combinations of abundance and exclusivity, choosing α between 2 and 3, and β between 0 and 1, as
suggested by Broecheler et al. We then annotated each vertex with a value in [−1, 1] uniformly at
random to represent intrinsic opinions as described in Example 1.
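A rough sketch of the two generation steps follows. Since β cancels under normalization of D(k) ∝ βk^{−α}, only α matters in this simplified version, and the truncation point kmax is our assumption; the random stub-matching may also create self-loops, which a full implementation would filter.

```python
import numpy as np

def sample_powerlaw_degrees(n, alpha, kmax=100, rng=None):
    """Sample n degrees from D(k) proportional to k**(-alpha),
    truncated at kmax (our assumption)."""
    rng = rng or np.random.default_rng(0)
    ks = np.arange(1, kmax + 1)
    p = ks ** (-float(alpha))
    p /= p.sum()
    return rng.choice(ks, size=n, p=p)

def match_edges(out_deg, in_deg, rng=None):
    """Randomly match out-stubs to in-stubs of one edge type until no
    more matches are possible, as in the generation procedure."""
    rng = rng or np.random.default_rng(0)
    src = np.repeat(np.arange(len(out_deg)), out_deg)
    dst = np.repeat(np.arange(len(in_deg)), in_deg)
    rng.shuffle(src)
    rng.shuffle(dst)
    m = min(len(src), len(dst))
    return list(zip(src[:m], dst[:m]))
```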
We generated social networks with between 22,050 and 66,150 vertices, which induced CCMRFs
with between 130,082 and 397,494 total potential functions and constraints. In all the CCMRFs,
between 83% and 85% of those totals were potential functions and between 15% and 17% were
constraints. For each social network, we created both a CCMRF to test CO-Linear (p = 1 in
Definition 1) and one to test CO-Quad (p = 2). We chose λ_opinion = 0.5 and chose λ_{ρ_1}, . . . , λ_{ρ_6}
between 0 and 1 to model both more and less influential relationships.
4.2 Implementation
We implemented CO-Linear and CO-Quad in Java. We used the interior-point method in MOSEK
to find α^(3) in the update for CO-Quad when necessary, by passing the problem via MOSEK's Java
native interface wrapper. We also compared with MOSEK's interior-point method by encoding the
entire MPE problem as a linear program or a second-order cone program as appropriate, and passing
the encoded problem via the Java native interface wrapper.
6
600
?
60000
?
50000
?
CO-??Linear
?
Interior-??point
?method
?
400
?
Time
?in
?seconds
?
Time
?in
?seconds
?
500
?
300
?
200
?
100
?
CO-??Quad
?
Naive
?CO-??Quad
?
Interior-??point
?method
?
40000
?
30000
?
20000
?
10000
?
0
?
0
?
125000
?
175000
? 225000
? 275000
? 325000
? 375000
?
Number
?of
?poten2al
?func2ons
?and
?constraints
?
125000
?
(a) Piecewise-linear MPE problems
175000
? 225000
? 275000
? 325000
? 375000
?
Number
?of
?poten2al
?func2ons
?and
?constraints
?
(b) Piecewise-quadratic MPE problems
Figure 1: Average running times to find a most probable explanation (MPE) in CCMRFs.
All experiments were performed on a single machine with two 6-core 3.06 GHz Intel Xeon X5675
processors and 48 GB of RAM. Each optimizer used a single thread. All results are averaged over 3
runs. All differences between CO-Linear and the interior-point method are significant at p = 0.0005.
All differences between CO-Quad and the interior-point method are significant at p = 0.005 on
problems with more than 175,000 potential functions and constraints. (The interior-point method
exhibited much higher variance in running times on piecewise-quadratic problems.) All differences
between CO-Quad and Naive CO-Quad are significant at p = 0.0005.
4.3 Results
We first evaluated the scalability of CO-Linear and compared with MOSEK?s interior-point method.
Figure 1a shows the results. The running time of the interior-point method quickly exploded as
the problem size increased. Although we do not show it in the figure, the average running time
on the largest problem was about 4,900 seconds (over 1 hour, 20 minutes). This demonstrates the
limited scalability of the interior-point method. In contrast, CO-Linear displays excellent scalability.
The average running time on the largest problem was about 130 seconds (2 minutes, 10 seconds).
Further, the running time grows linearly in the number of potential functions and constraints in the
CCMRF, i.e., the number of subproblems that must be solved at each iteration. The line of best
fit has R2 = 0.99834. Combined with Figure 1a, this shows that CO-Linear scaled linearly with
increasing problem size. We emphasize that the implementation of CO-Linear is research code
written in Java and the interior-point method is a commercial package running as native code. The
dramatic differences in running times illustrate the superior utility of CO-Linear for these problems.
We then evaluated CO-Quad. Figure 1b shows the results (note the 2-orders-of-magnitude increase
on the vertical axis between CO-Linear and CO-Quad). Again, the running time of the interiorpoint method quickly exploded. We could only test it on the three smallest problems, the largest of
which took an average of about 56,500 seconds to solve (over 15 hours, 40 minutes). Consensus
optimization again scaled linearly to the problem. The line of best fit has R2 = 0.9842. To compare
with the interior-point method, on the third-smallest problem, CO-Quad took an average of about
5,250 seconds (under 1 hour, 28 minutes). We also evaluated a naive variant of CO-Quad which
immediately updates xj via the interior-point method when there are two unknowns. As Figure 1b
shows, the difference is significant. This demonstrates that CO-Quad is a further improvement on a
less sophisticated approach over the previous state-of-the-art.
One of the advantages of interior-point methods is great numerical stability and accuracy. Consensus
optimization, which treats both objective terms and constraints as subproblems, often returns points
that are only optimal and feasible to moderate precision for non-trivially constrained problems [8].
Although this is often acceptable, we quantified the mix of infeasibility and suboptimality by repairing the infeasibility and measuring the resulting total suboptimality. We first projected the solutions
returned by consensus optimization onto the feasible region, which took a negligible amount of computational time. Let p_C be the value of the objective in Problem (1) at such a point and let p_IPM be
the value of the objective at the point returned by the interior-point method. Then the relative error
on that problem is (p_C − p_IPM)/p_IPM. The relative error was consistently small. For CO-Linear,
it varied between 0.2% and 0.4%, and did not trend upward as the problem size increased. For
CO-Quad, when the interior-point method also returned a solution, the relative error was always less
than 0.05% and also did not trend upward. This shows that consensus optimization was accurate,
in addition to being dramatically faster (lower absolute time) and more scalable (smaller growth in
time with problem size).
5 Discussion and conclusion
In this paper we advanced the state-of-the-art in solving the MPE problem for CCMRFs. With specialized algorithms, consensus optimization offers far superior scalability. In our experiments the
computational cost grew linearly with the number of potential functions and constraints. This is crucially important if models are to scale to the sizes of data now available. As we build bigger models,
it will be important to understand the trade-off between speed and accuracy. The well-understood
theory of consensus optimization can help here. It is a major difference between our work and that
of Broecheler et al. [4], which used heuristics to solve the MPE problem by partitioning CCMRFs,
fixing values of variables at the boundaries, solving relatively large subproblems with interior-point
methods, and repeating with different partitions. A direction for future work is studying how to
enforce desired combinations of speed and accuracy when solving MPE problems.
Such work could have a broader impact for research on solving the MPE problem for MRFs using
decomposition-based approaches, which is an active area of research. Much work has studied dual
decomposition for solving relaxations of discrete MPE problems [15]. Martins et al. [9], and Meshi
and Globerson [10] recently studied using consensus optimization to solve convex relaxations of the
MPE problem for discrete MRFs. They solved the problem for MRFs which induced subproblems
with closed-form solutions. Meshi and Globerson [10] also showed advantages of solving the dual
of the relaxation and decoding the values of the discrete primal variables, but such an approach
is not applicable to our work. Other recent approaches include that of Ravikumar et al. [16], an
algorithm for solving a relaxed MPE problem by solving a sequence of subproblems in a process
called proximal minimization.
There are a number of remaining research problems. The first is to expand the number of unknowns
in subproblems that can be solved in closed form. Another is analyzing the Karush-Kuhn-Tucker
optimality conditions for the subproblems to eliminate variables when possible and solve them more
efficiently. While all (hinge-loss) CCMRF subproblems could be solved with a general-purpose
algorithm, such as an interior-point method, we showed that even in cases when an algorithm might
have to resort to an interior-point method, exploiting opportunities for closed-form solutions greatly
improved speed.
Acknowledgments
The authors would like to thank Neal Parikh and the anonymous reviewers for their helpful suggestions. This material is based upon work supported by the National Science Foundation under Grant
No. 0937094, the Department of Energy under Grant No. DESC0002218, and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center
(DoI/NBC) contract number D12PC00337. The U.S. Government is authorized to reproduce and
distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
Disclaimer: The views and conclusions contained herein are those of the authors and should not
be interpreted as necessarily representing the official policies or endorsements, either expressed or
implied, of IARPA, DoI/NBC, or the U.S. Government.
References
[1] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The
MIT Press, 2009.
[2] M. Broecheler and L. Getoor. Computing marginal distributions over continuous Markov networks for statistical relational learning. In Advances in Neural Information Processing Systems
(NIPS), 2010.
[3] M. Broecheler, L. Mihalkova, and L. Getoor. Probabilistic similarity logic. In Proceedings of
the 26th Conference on Uncertainty in Artificial Intelligence (UAI), 2010.
[4] M. Broecheler, P. Shakarian, and V. S. Subrahmanian. A scalable framework for modeling
competitive diffusion in social networks. In Proceedings of the Second International Conference on Social Computing (SocialCom), 2010.
[5] S. H. Bach, M. Broecheler, S. Kok, and L. Getoor. Decision-driven models with probabilistic
soft logic. In NIPS Workshop on Predictive Models in Personalized Medicine, 2010.
[6] B. Huang, A. Kimmig, L. Getoor, and J. Golbeck. Probabilistic soft logic for trust analysis
in social networks. In International Workshop on Statistical Relational Artificial Intelligence
(StaRAI), 2012.
[7] B. Huang, S. H. Bach, E. Norris, J. Pujara, and L. Getoor. Social group modeling with probabilistic soft logic. In NIPS Workshop on Social Network and Social Media Analysis: Methods,
Models, and Applications, 2012.
[8] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers. Now Publishers, 2011.
[9] A. Martins, M. Figueiredo, P. Aguiar, N. Smith, and E. Xing. An augmented Lagrangian
approach to constrained MAP inference. In Proceedings of the 28th International Conference
on Machine Learning (ICML), 2011.
[10] O. Meshi and A. Globerson. An alternating direction method for dual MAP LP relaxation. In
Proceedings of the 2011 European conference on machine learning and knowledge discovery
in databases (ECML), 2011.
[11] R. Glowinski and A. Marrocco. Sur l'approximation, par éléments finis d'ordre un, et la
résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue française d'automatique, informatique, recherche opérationnelle, 9(2):41–76, 1975.
[12] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems
via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40,
1976.
[13] D. Gabay. Applications of the method of multipliers to variational inequalities, volume 15,
chapter 9, pages 299–331. Elsevier, 1983.
[14] J. Eckstein and D. P. Bertsekas. On the Douglas-Rachford splitting method and the proximal
point algorithm for maximal monotone operators. Math. Program., 55(3):293–318, 1992.
[15] D. Sontag, A. Globerson, and T. Jaakkola. Introduction to dual decomposition for inference,
chapter 8, pages 219–254. MIT Press, 2011.
[16] P. Ravikumar, A. Agarwal, and M. J. Wainwright. Message-passing for graph-structured linear
programs: proximal methods and rounding schemes. Journal of Machine Learning Research,
11:1043–1080, 2010.
4,168 | 4,773 |
Convolutional-Recursive Deep Learning
for 3D Object Classification
Richard Socher, Brody Huval, Bharath Bhat, Christopher D. Manning, Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA 94305, USA
[email protected], {brodyh,bbhat,manning}@stanford.edu, [email protected]
Abstract
Recent advances in 3D sensing technologies make it possible to easily record color
and depth images which together can improve object recognition. Most current
methods rely on very well-designed features for this new 3D modality. We introduce a model based on a combination of convolutional and recursive neural
networks (CNN and RNN) for learning features and classifying RGB-D images.
The CNN layer learns low-level translationally invariant features which are then
given as inputs to multiple, fixed-tree RNNs in order to compose higher order features. RNNs can be seen as combining convolution and pooling into one efficient,
hierarchical operation. Our main result is that even RNNs with random weights
compose powerful features. Our model obtains state of the art performance on a
standard RGB-D object dataset while being more accurate and faster during training and testing than comparable architectures such as two-layer CNNs.
1 Introduction
Object recognition is one of the hardest problems in computer vision and important for making
robots useful in home environments. New sensing technology, such as the Kinect, that can record
high quality RGB and depth images (RGB-D) has now become affordable and could be combined
with standard vision systems in household robots. The depth modality provides useful extra information to the complex problem of general object detection [1] since depth information is invariant
to lighting or color variations, provides geometrical cues and allows better separation from the background. Most recent methods for object recognition with RGB-D images use hand-designed features
such as SIFT for 2d images [2], Spin Images [3] for 3D point clouds, or specific color, shape and
geometry features [4, 5].
In this paper, we introduce the first convolutional-recursive deep learning model for object recognition that can learn from raw RGB-D images. Compared to other recent 3D feature learning methods
[6, 7], our approach is fast, does not need additional input channels such as surface normals and obtains state of the art results on the task of detecting household objects. Fig. 1 outlines our approach.
Code for training and testing is available at www.socher.org.
Our model starts with raw RGB and depth images and first separately extracts features from them.
Each modality is first given to a single convolutional neural net layer (CNN, [8]) which provides
useful translational invariance of low level features such as edges and allows parts of an object
to be deformable to some extent. The pooled filter responses are then given to a recursive neural
network (RNN, [9]) which can learn compositional features and part interactions. RNNs hierarchically project inputs into a lower dimensional space through multiple layers with tied weights and
nonlinearities.
We also explore new deep learning architectures for computer vision. Our previous work on RNNs
in natural language processing and computer vision [9, 10] (i) used a different tree structure for each
input, (ii) employed a single RNN with one set of weights, (iii) restricted tree structures to be strictly
[Figure 1 diagram: RGB and depth images each pass through a convolution with K filters; the filter responses are pooled over 4 pooling regions, fed to multiple RNNs per modality, and the merged vectors feed a joint softmax classifier (example label: coffee mug).]
Figure 1: An overview of our model: A single CNN layer extracts low level features from RGB and
depth images. Both representations are given as input to a set of RNNs with random weights. Each
of the many RNNs (around 100 for each modality) then recursively maps the features into a lower
dimensional space. The concatenation of all the resulting vectors forms the final feature vector for a
softmax classifier.
binary, and (iv) trained the RNN with backpropagation through structure [11, 12]. In this paper, we
expand the space of possible RNN-based architectures in these four dimensions by using fixed tree
structures and multiple RNNs on the same input and allow n-ary trees. We show that because of
the CNN layer, fixing the tree structure does not hurt performance and it allows us to speed up
recognition. Similar to recent work [13, 14] we show that performance of RNN models can improve
with an increasing number of features. The hierarchically composed RNN features of each modality
are concatenated and given to a joint softmax classifier.
Most importantly, we demonstrate that RNNs with random weights can also produce high quality
features. So far random weights have only been shown to work for convolutional neural networks
[15, 16]. Because the supervised training reduces to optimizing the weights of the final softmax
classifier, a large set of RNN architectures can quickly be explored. By combining the above ideas
we obtain a state of the art system for classifying 3D objects which is very fast to train and highly
parallelizable at test time.
We first briefly describe the unsupervised learning of filter weights and their convolution to obtain
low level features. Next we give details of how multiple random RNNs can be used to obtain high
level features of the entire image. Then, we discuss related work. In our experiments we show
quantitative comparisons of different models, analyze model ablations and describe our state-of-the-art results on the RGB-D dataset of Lai et al. [2].
2 Convolutional-Recursive Neural Networks
In this section, we describe our new CNN-RNN model. We first learn the CNN filters in an unsupervised way by clustering random patches and then feed these patches into a CNN layer. The resulting
low-level, translationally invariant features are given to recursive neural networks. RNNs compose
higher order features that can then be used to classify the images.
2.1 Unsupervised Pre-training of CNN Filters
We follow the procedure described by Coates et al. [13] to learn filters which will be used in
the convolution. First, random patches are extracted into two sets, one for each modality (RGB
and depth). Each set of patches is then normalized and whitened. The pre-processed patches are
clustered by simply running k-means. Fig. 2 shows the resulting filters for both modalities. They
capture standard edge and color features. One interesting result when applying this method to the
depth channel is that the edges are much sharper. This is due to the large discontinuities between
object boundaries and the background. While the depth channel is often quite noisy most of the
features are still smooth.
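The pre-training recipe above (random patches, per-patch normalization, ZCA whitening, k-means) is simple to reproduce. Below is a minimal NumPy sketch of it; the spherical k-means variant, the epsilon in the whitening, and all names are illustrative assumptions rather than details taken from the authors' released code.

```python
import numpy as np

def learn_filters(images, num_filters=128, patch_size=9,
                  num_patches=500000, n_iters=20, eps=0.01, seed=0):
    """Unsupervised filter learning: sample patches, normalize, ZCA-whiten, k-means.

    images: (N, H, W, C) array; C = 3 for RGB, C = 1 for depth."""
    rng = np.random.RandomState(seed)
    C = images.shape[-1]
    # 1) extract random patches
    patches = []
    for _ in range(num_patches):
        img = images[rng.randint(len(images))]
        r = rng.randint(img.shape[0] - patch_size + 1)
        c = rng.randint(img.shape[1] - patch_size + 1)
        patches.append(img[r:r + patch_size, c:c + patch_size].ravel())
    X = np.asarray(patches, dtype=np.float64)
    # 2) per-patch normalization (subtract mean, divide by std)
    X = (X - X.mean(1, keepdims=True)) / (X.std(1, keepdims=True) + 1e-8)
    # 3) ZCA whitening to de-correlate pixels
    cov = np.cov(X, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    W_zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    X = X @ W_zca
    # 4) (spherical) k-means to obtain the filter bank
    D = X[rng.choice(len(X), num_filters, replace=False)].T   # (dim, K)
    for _ in range(n_iters):
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-8
        assign = np.argmax(X @ D, axis=1)
        for k in range(num_filters):
            members = X[assign == k]
            if len(members):
                D[:, k] = members.sum(0)
    filters = D.T.reshape(num_filters, patch_size, patch_size, C)
    return filters, W_zca   # W_zca is reused to whiten test-time patches
```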
Figure 2: Visualization of the k-means filters used in the CNN layer after unsupervised pre-training:
(left) Standard RGB filters (best viewed in color) capture edges and colors. When the method is
applied to depth images (center) the resulting filters have sharper edges which arise due to the strong
discontinuities at object boundaries. The same is true, though to a lesser extent, when compared to
filters trained on gray scale versions of the color images (right).
2.2 A Single CNN Layer
To generate features for the RNN layer, a CNN architecture is chosen for its translational invariance
properties. The main idea of CNNs is to convolve filters over the input image in order to extract
features. Our single-layer CNN is similar to the one proposed by Jarrett et al. [17] and consists of a
convolution, followed by rectification and local contrast normalization (LCN). LCN was inspired by
computational neuroscience and is used to contrast features within a feature map, as well as across
feature maps at the same spatial location [17, 18, 14].
We convolve each image of size (height and width) dI with K square filters of size dP, resulting in K filter responses, each of dimensionality dI − dP + 1. We then average pool them with square regions of size dℓ and a stride size of s, to obtain a pooled response with width and height equal to r = (dI − dℓ)/s + 1. So the output X of the CNN layer applied to one image is a K × r × r dimensional 3D matrix. We apply this same procedure to both color and depth images separately.
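A minimal sketch of this single CNN layer, assuming (height, width, channels) image arrays and a (K, dP, dP, channels) filter bank; the explicit loops are for clarity rather than speed. Note that pooling acts on the (dI − dP + 1)-sized valid-convolution response, so with the settings used later (dI = 148, dP = 9, dℓ = 10, s = 5) the pooled output is 27 × 27.

```python
import numpy as np

def cnn_layer(image, filters, pool_size=10, stride=5):
    """Valid convolution with K square filters followed by average pooling.

    image: (dI, dI, C); filters: (K, dP, dP, C). Returns a (K, r, r) array."""
    dI = image.shape[0]
    K, dP = filters.shape[0], filters.shape[1]
    dR = dI - dP + 1                       # valid-convolution output size
    flat = filters.reshape(K, -1)          # (K, dP*dP*C)
    resp = np.empty((K, dR, dR))
    for i in range(dR):
        for j in range(dR):
            resp[:, i, j] = flat @ image[i:i + dP, j:j + dP].ravel()
    r = (dR - pool_size) // stride + 1     # pooled output size
    pooled = np.empty((K, r, r))
    for i in range(r):
        for j in range(r):
            win = resp[:, i * stride:i * stride + pool_size,
                          j * stride:j * stride + pool_size]
            pooled[:, i, j] = win.mean(axis=(1, 2))
    return pooled
```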
2.3 Fixed-Tree Recursive Neural Networks
The idea of recursive neural networks [19, 9] is to learn hierarchical feature representations by
applying the same neural network recursively in a tree structure. In our case, the leaf nodes of the
tree are K-dimensional vectors (the result of the CNN pooling over an image patch repeated for all
K filters) and there are r² of them.
In our previous RNN work [9, 10, 20] the tree structure depended on the input. While this allows
for more flexibility, we found that for the task of object classification in conjunction with a CNN
layer it was not necessary for obtaining high performance. Furthermore, the search over optimal
trees slows down the method considerably as one can not easily parallelize the search or make use
of parallelization of large matrix products. The latter could benefit immensely from new multicore
hardware such as GPUs. In this work, we focus on fixed-trees which we can design to be balanced.
Previous work also only combined pairs of vectors. We generalize our RNN architecture to allow
each layer to merge blocks of adjacent vectors instead of only pairs.
We start with a 3D matrix X ∈ R^{K×r×r} for each image (the columns are K-dimensional). We define a block to be a list of adjacent column vectors which are merged into a parent vector p ∈ R^K. In the following we use only square blocks for convenience. Blocks are of size K × b × b. For instance, if we merge vectors in a block with b = 3, we get a total size 128 × 3 × 3 and a resulting list of vectors (x1, . . . , x9). In general, we have b² many vectors in each block. The neural network
for computing the parent vector is
p = f( W [x1; x2; . . . ; x_{b²}] ) ,   (1)

where the parameter matrix W ∈ R^{K×b²K} and f is a nonlinearity such as tanh. We omit the bias
term which turns out to have no effect in the experiments below. Eq. 1 will be applied to all blocks
of vectors in X with the same weights W. Generally, there will be (r/b)² many parent vectors p, forming a new matrix P1. The vectors in P1 will again be merged in blocks just as those in matrix X using Eq. 1 with the same tied weights, resulting in matrix P2. This procedure continues until only one parent vector remains. Fig. 3 shows an example of a pooled CNN output of size K × 4 × 4
and a RNN tree structure with blocks of 4 children.
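The fixed-tree forward pass implied by Eq. 1 can be sketched as follows; the tanh nonlinearity, the tied weights, and the merging of non-overlapping b × b blocks follow the text, while the array layout and child ordering within a block are our assumptions.

```python
import numpy as np

def rnn_forward(X, W, b=3):
    """One fixed-tree RNN: merge b x b blocks of K-dim vectors until one remains.

    X: (K, r, r) pooled CNN output; W: (K, b*b*K) tied weight matrix (Eq. 1)."""
    K, r, _ = X.shape
    while r > 1:
        assert r % b == 0, "grid size must be divisible by the block size"
        rn = r // b
        P = np.empty((K, rn, rn))
        for i in range(rn):
            for j in range(rn):
                block = X[:, i * b:(i + 1) * b, j * b:(j + 1) * b]  # (K, b, b)
                children = block.reshape(K, -1).T.ravel()           # [x1; ...; x_{b^2}]
                P[:, i, j] = np.tanh(W @ children)                  # Eq. 1, no bias term
        X, r = P, rn
    return X[:, 0, 0]   # the K-dimensional top feature vector
```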
The model so far has been unsupervised. However,
our original task is to classify each block into one of
many object categories. Therefore, we use the top
vector P_top as the feature vector to a softmax classifier. In order to minimize the cross entropy error of
the softmax, we could backpropagate through the recursive neural network [12] and convolutional layers
[8]. In practice, this is very slow and we will discuss
alternatives in the next section.
2.4 Multiple Random RNNs
Previous work used only a single RNN. We can actually use the 3D matrix X as input to a number of RNNs. Each of N RNNs will output a K-dimensional vector. After we forward propagate through all the RNNs, we concatenate their outputs to an N·K-dimensional vector which is then given to the softmax classifier.
Instead of taking derivatives of the W matrices of the
RNNs which would require backprop through structure [11], we found that even RNNs with random
weights produce high quality feature vectors. Similar results have been found for random weights in
the closely related CNNs [16]. Before comparing to
other approaches, we briefly review related work.
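A sketch of the multiple-random-RNN feature extractor described above, reusing the rnn_forward sketch from Sec. 2.3; the Gaussian initialization and its scale are illustrative assumptions (the paper only requires the weights to be random and left untrained), and only the softmax classifier on top of the concatenated features would be learned.

```python
import numpy as np

def random_rnn_features(X, num_rnns=128, b=3, seed=0):
    """Concatenate outputs of many fixed random-weight RNNs.

    X: (K, r, r) pooled CNN output. Returns a (num_rnns * K,) feature vector."""
    K = X.shape[0]
    rng = np.random.RandomState(seed)
    feats = []
    for _ in range(num_rnns):
        W = 0.1 * rng.randn(K, b * b * K)   # random weights, never trained
        feats.append(rnn_forward(X, W, b))
    return np.concatenate(feats)            # input to the softmax classifier
```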
[Figure 3 diagram: a 4 × 4 grid of pooled filter responses (children x1 . . . x4 per block) is merged by the tied weights W into parents p1 . . . p4, which are merged again into the top vector p.]
Figure 3: Recursive Neural Network applied to blocks: At each node, the same neural network is used to compute the parent vector of a set of child vectors. The original input matrix is the output of a pooled convolution.

3 Related Work
There has been great interest in object recognition and scene understanding using RGB-D data.
Silberman and Fergus have published a 3D dataset for full scene understanding [21]. Koppula et al.
also recently provided a new dataset for indoor scene segmentation [4].
The most common approach today for standard object recognition is to use well-designed features
based on orientation histograms such as SIFT, SURF [22] or textons and give them as input to a
classifier such as a random forest. Despite their success, they have several shortcomings such as
being only applicable to one modality (grey scale images in the case of SIFT), not adapting easily
to new modalities such as RGB-D or to varying image domains. There have been some attempts
to modify these features to colored images via color histograms [23] or simply extending SIFT
to the depth channel [2]. More advanced methods that generalize these ideas and can combine
several important RGB-D image characteristics such as size, 3D shape and depth edges are kernel
descriptors [5].
Another related line of work is about spatial pyramids in object classification, in particular the
pyramid matching kernel [24]. The similarity is mostly in that our model also learns a hierarchical
image representation that can be used to classify objects.
Another solution to the above mentioned problems is to employ unsupervised feature learning methods [25, 26, 27] (among many others) which have made large improvements in object recognition.
While many deep learning methods exist for learning features from rgb images, few deep learning
architectures have yet been investigated for 3D images. Very recently, Blum et al. [6] introduced
convolutional k-means descriptors (CKM) for RGB-D data. They use SURF interest points and
learn features using k-means similar to [28]. Their work is similar to ours in that they also learn
features in an unsupervised way.
Very recent work by Bo et al. [7] uses unsupervised feature learning based on sparse coding to learn
dictionaries from 8 different channels including grayscale intensity, RGB, depth scalars, and surface
normals. Features are then used in hierarchical matching pursuit which consists of two layers. Each
layer has three modules: batch orthogonal matching pursuit, pyramid max pooling, and contrast
normalization. This results in a very large feature vector size of 188,300 dimensions which is used
for classification.
Lastly, recursive autoencoders have been introduced by Pollack [19] and Socher et al. [10] to which
we compare quantitatively in our experiment section. Recursive neural networks have been applied
to full scene segmentation [9] but they used hand-designed features. Farabet et al. [29] also introduce
a model for scene segmentation that is based on multi-scale convolutional neural networks and learns
feature representations.
4 Experiments
All our experiments are carried out on the recent RGB-D dataset of Lai et al. [2]. There are 51
different classes of household objects and 300 instances of these classes. Each object instance is
imaged from 3 different angles resulting in roughly 600 images per instance. The dataset consists of
a total of 207,920 RGB-D images. We subsample every 5th frame of the 600 images resulting in a
total of 120 images per instance.
In this work we focus on the problem of category recognition and we use the same setup as [2] and
the 10 random splits they provide. All development is carried out on a separate split and model
ablations are run on one of the 10 splits. For each split's test set we sample one object from each
class resulting in 51 test objects, each with about 120 independently classified images. This leaves
about 34,000 images for training our model. Before the images are given to the CNN they are resized
to be dI = 148.
Unsupervised pre-training for CNN filters is performed for all experiments by using k-means on 500,000 image patches randomly sampled from each split's training set. Before unsupervised pre-training, the 9 × 9 × 3 patches for RGB and 9 × 9 patches for depth are individually normalized by subtracting the mean and dividing by the standard deviation of their elements. In addition, ZCA whitening is performed to de-correlate pixels and get rid of redundant features in raw images [30]. A valid convolution is performed with filter bank size K = 128 and filter width and height of 9. Average pooling is then performed with pooling regions of size dℓ = 10 and stride size s = 5 to produce a 3D matrix of size 128 × 27 × 27 for each image.
Each RNN has non-overlapping child block sizes of 3 × 3 applied spatially. This leads to the following matrices at each depth of the tree: X ∈ R^{128×27×27} to P1 ∈ R^{128×9×9} to P2 ∈ R^{128×3×3} to finally P3 ∈ R^{128}. We use 128 randomly initialized RNNs in both modalities. The combination of RGB and depth is done by concatenating the final features, which have 2 × 128² = 32,768 dimensions.
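As a quick sanity check of these dimensions (using the conventions of the sketches above):

```python
dI, dP = 148, 9
conv_out = dI - dP + 1                  # 140: valid-convolution response size
r = (conv_out - 10) // 5 + 1            # average pooling with d_l = 10, s = 5
assert r == 27
sizes = [r]
while sizes[-1] > 1:                    # RNN merges 3x3 blocks: 27 -> 9 -> 3 -> 1
    sizes.append(sizes[-1] // 3)
assert sizes == [27, 9, 3, 1]
feat_dim = 2 * 128 * 128                # (RGB + depth) x 128 RNNs x K = 128
assert feat_dim == 32768
```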
4.1 Comparison to Other Methods
In this section we compare our model to related models in the literature. Table 1 lists the main
accuracy numbers and compares to the published results [2, 5, 6, 7]. Recent work by Bo et al.
[5] investigates multiple kernel descriptors on top of various features, including 3D shape, physical
size of the object, depth edges, gradients, kernel PCA, local binary patterns, etc. In contrast, all our
features are learned in an unsupervised way from the raw color and depth images. Blum et al. [6]
Classifier        | Extra Features for 3D; RGB                                         | 3D       | RGB      | Both
Linear SVM [2]    | Spin Images, efficient match kernel (EMK), random Fourier sets,    | 53.1±1.7 | 74.3±3.3 | 81.9±2.8
                  | width, depth, height; SIFT, EMK, texton histogram, color histogram |          |          |
Kernel SVM [2]    | same as above                                                      | 64.7±2.2 | 74.5±3.1 | 83.9±3.5
Random Forest [2] | same as above                                                      | 66.8±2.5 | 74.7±3.6 | 79.6±4.0
SVM [5]           | 3D shape, physical size of the object, depth edges, gradients,    | 78.8±2.7 | 77.7±1.9 | 86.2±2.1
                  | kernel PCA, local binary patterns, multiple depth kernels         |          |          |
CKM [6]           | SURF interest points                                               | —        | —        | 86.4±2.3
SP+HMP [7]        | surface normals                                                    | 81.2±2.3 | 82.4±3.1 | 87.5±2.9
CNN-RNN           | —                                                                  | 78.9±3.8 | 80.8±4.2 | 86.8±3.3
Table 1: Comparison of our CNN-RNN to multiple related approaches. We outperform all approaches except that of Bo et al. which uses an extra input modality of surface normals.
also learn feature descriptors and apply them sparsely to interest points. We outperform all methods
except that of Bo et al. [7] who perform 0.7% better with a final feature vector that requires five
times the amount of memory compared to ours. They make additional use of surface normals and
gray scale images on top of RGB and depth channels and also learn features from these inputs with
unsupervised methods based on sparse coding. Sparse coding is known to not scale well in terms of
speed for large input dimensions [31].
4.2 Model Analysis
We analyze our model through several ablations and model variations. We picked one of the splits
as our development fold and focus on RGB images and RNNs with random weights only unless
otherwise noted.
Two layer CNN. Fig. 4 (left) shows a comparison between our CNN-RNN model and a two layer
CNN. We compare a previously recommended architecture for CNNs [17] and one which uses filters
trained with k-means. In both settings, the CNN-RNN outperforms the two-layer CNN. Because it also requires many fewer matrix multiplications, it is approximately 4× faster in our experiments
compared to a second CNN layer. However, the main bottleneck of our method is still the first CNN
layer. Both models could benefit from fast GPU implementations [32, 33].
Tree structured neural nets with untied weights. Fig. 4 (left) also gives results when the weights
of the random RNNs are untied across layers in the tree (TNN). In other words, different random
weights are used at each depth of the tree. Since weights are still tied inside each layer this setting
can be seen as a convolution where the stride size is equal to the filter size. We call this a tree neural
network (TNN) because it is technically not a recursive neural network. While this results in a large
increase in parameters, it actually hurts performance underlining the fact that tying the weights in
RNNs is beneficial.
Trained RNN. Another comparison shown in Fig. 4 (left) is between many random RNNs and a
single trained RNN. We carefully cross validated the RNN training procedure, objectives (adding
reconstruction costs at each layer as in [10], classifying each layer or only at the top node), regularization, layer size, etc. The best performance was still lacking compared to 128 random RNNs (about 2%
difference) and training time is much longer. With a more efficient GPU-based implementation it
might be possible to train many RNNs in the future.
Number of random RNNs: Fig. 4 (center) shows that increasing the number of random RNNs
improves performance, leveling off at around 64 on this dataset.
RGB & depth combinations and features: Fig. 4 (right) shows that combining RGB and depth
features from RNNs improves performance. The two modalities complement each other and produce
features that are independent enough so that the classifier can benefit from their combination.
Global autoencoder on pixels and depth. In this experiment we investigate whether CNN-RNNs
learn better features than simply using a single layer of features on raw pixels. Many methods such
as those of Coates and Ng [28] show remarkable results with a single very wide layer. The global
autoencoder achieves only 61.1% (it overfits, reaching 93.3% training accuracy). We cross-validated
[Figure 4, left panel, reconstructed as a table:]
Filters  | 2nd Layer | Acc.
See [17] | CNN       | 77.66
See [17] | RNN       | 77.04
k-means  | tRNN      | 78.10
k-means  | TNN       | 79.67
k-means  | CNN       | 78.65
k-means  | RNN★      | 80.15
[Figure 4, center and right panels: plots of accuracy (%) against the number of RNNs (1, 8, 16, 32, 64, 128), and accuracy for RGB, Depth, and RGB+Depth.]
Figure 4: Model analysis on the development split (left and center use RGB only). Left: Comparison of a two-layer CNN with the CNN-RNN under different pre-processing ([17] and [13]). TNN is a tree-structured neural net with untied weights across layers; tRNN is a single RNN trained with backpropagation (see text for details). The best performance is achieved with our model of random RNNs (marked with ★). Center: Increasing the number of random RNNs improves performance. Right: Combining both modalities improves performance to 88% on the development split.
over the number of hidden units and sparsity parameters. This shows that even random recursive
neural nets can clearly capture more of the underlying class structure in their feature representations
than a single layer autoencoder.
4.3 Error Analysis
Fig. 5 shows our confusion matrix across all 51 classes. Most model confusions are very reasonable, showing that recursive deep learning methods on raw pixels and depth can provide high-quality features. The only class that we consistently misclassify is mushrooms, which are very similar in appearance to garlic.
Fig. 6 shows 4 pairs of often confused classes. Both garlic and mushrooms have very similar
appearances and colors. Water bottles and shampoo bottles in particular are problematic because the
IR sensors do not properly reflect from see-through surfaces.
5 Conclusion
We introduced a new model based on a combination of convolutional and recursive neural networks.
Unlike previous RNN models, we fix the tree structure, allow multiple vectors to be combined,
use multiple RNN weights and keep parameters randomly initialized. This architecture allows for
parallelization and high speeds, outperforms two layer CNNs and obtains state of the art performance
without any external features. We also demonstrate the applicability of convolutional and recursive
feature learning to the new domain of depth images.
Acknowledgments
We thank Stephen Miller and Alex Teichman for tips on 3D images, Adam Coates for chats about image
pre-processing and Ilya Sutskever and Andrew Maas for comments on the paper. We thank the anonymous
reviewers for insightful comments. Richard is supported by the Microsoft Research PhD fellowship. The
authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA)
Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181, and the DARPA Deep Learning program under contract number FA8650-10-C-7020. Any opinions,
findings, and conclusions or recommendations expressed in this material are those of the authors and do not
necessarily reflect the view of DARPA, AFRL, or the US government.
References
[1] M. Quigley, S. Batra, S. Gould, E. Klingbeil, Q. Le, A. Wellman, and A.Y. Ng. High-accuracy 3D sensing
for mobile manipulation: improving object detection and door opening. In ICRA, 2009.
[Figure 5 axes: the 51 class names (apple, ball, banana, bell pepper, binder, bowl, calculator, camera, cap, cellphone, cereal box, coffee mug, comb, dry battery, flashlight, food bag, food box, food can, food cup, food jar, garlic, glue stick, greens, hand towel, instant noodles, keyboard, kleenex, lemon, lightbulb, lime, marker, mushroom, notebook, onion, orange, peach, pear, pitcher, plate, pliers, potato, rubber eraser, scissors, shampoo, soda can, sponge, stapler, tomato, toothbrush, toothpaste, water bottle) label both axes of the confusion matrix.]
Figure 5: Confusion Matrix of our CNN-RNN model. The ground truth labels are on the y-axis and
the predicted labels on the x-axis. Many misclassifications are between (a) garlic and mushroom (b)
food-box and kleenex.
Figure 6: Examples of confused classes: Shampoo bottle and water bottle, mushrooms labeled as
garlic, pitchers classified as caps due to shape and color similarity, white caps classified as kleenex
boxes at certain angles.
[2] K. Lai, L. Bo, X. Ren, and D. Fox. A Large-Scale Hierarchical Multi-View RGB-D Object Dataset. In
ICRA, 2011.
[3] A. Johnson. Spin-Images: A Representation for 3-D Surface Matching. PhD thesis, Robotics Institute,
Carnegie Mellon University, 1997.
[4] H.S. Koppula, A. Anand, T. Joachims, and A. Saxena. Semantic labeling of 3d point clouds for indoor
scenes. In NIPS, 2011.
[5] L. Bo, X. Ren, and D. Fox. Depth kernel descriptors for object recognition. In IROS, 2011.
[6] M. Blum, J. T. Springenberg, J. Wlfing, and M. Riedmiller. A Learned Feature Descriptor for Object
Recognition in RGB-D Data. In ICRA, 2012.
[7] L. Bo, X. Ren, and D. Fox. Unsupervised Feature Learning for RGB-D Based Object Recognition. In
ISER, June 2012.
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11), November 1998.
[9] R. Socher, C. Lin, A. Y. Ng, and C.D. Manning. Parsing Natural Scenes and Natural Language with
Recursive Neural Networks. In ICML, 2011.
[10] R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. Semi-Supervised Recursive
Autoencoders for Predicting Sentiment Distributions. In EMNLP, 2011.
[11] C. Goller and A. Küchler. Learning task-dependent distributed representations by backpropagation
through structure. In Proceedings of the International Conference on Neural Networks (ICNN-96), 1996.
[12] R. Socher, C. D. Manning, and A. Y. Ng. Learning continuous phrase representations and syntactic parsing
with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised
Feature Learning Workshop, 2010.
[13] A. Coates, A. Y. Ng, and H. Lee. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. Journal of Machine Learning Research - Proceedings Track: AISTATS, 2011.
[14] Q.V. Le, M.A. Ranzato, R. Monga, M. Devin, K. Chen, G.S. Corrado, J. Dean, and A.Y. Ng. Building
high-level features using large scale unsupervised learning. In ICML, 2012.
[15] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage
architecture for object recognition? In ICCV, 2009.
[16] A. Saxe, P.W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and unsupervised
feature learning. In ICML, 2011.
[17] K. Jarrett and K. Kavukcuoglu and M. Ranzato and Y. LeCun. What is the Best Multi-Stage Architecture
for Object Recognition? In ICCV. IEEE, 2009.
[18] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Comput
Biol, 2008.
[19] J. B. Pollack. Recursive distributed representations. Artificial Intelligence, 46, 1990.
[20] R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. Dynamic Pooling and Unfolding
Recursive Autoencoders for Paraphrase Detection. In NIPS. MIT Press, 2011.
[21] N. Silberman and R. Fergus. Indoor scene segmentation using a structured light sensor. In ICCV Workshop on 3D Representation and Recognition, 2011.
[22] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-Up Robust Features (SURF). Computer Vision
and Image Understanding, 110(3), 2008.
[23] A. E. Abdel-Hakim and A. A. Farag. CSIFT: A SIFT descriptor with color invariant characteristics. In
CVPR, 2006.
[24] K. Grauman and T. Darrell. The Pyramid Match Kernel: Discriminative Classification with Sets of Image
Features. ICCV, 2005.
[25] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
313(5786), 2006.
[26] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 2009.
[27] M. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies
with applications to object recognition. CVPR, 0:1?8, 2007.
[28] A. Coates and A. Ng. The Importance of Encoding Versus Training with Sparse Coding and Vector
Quantization . In ICML, 2011.
[29] Farabet C., Couprie C., Najman L., and LeCun Y. Scene parsing with multiscale feature learning, purity
trees, and optimal covers. In ICML, 2012.
[30] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Netw.,
13, 2000.
[31] J. Ngiam, P. Koh, Z. Chen, S. Bhaskar, and A.Y. Ng. Sparse filtering. In NIPS. 2011.
[32] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. Flexible, high performance
convolutional neural networks for image classification. In IJCAI, 2011.
[33] C. Farabet, B. Martini, P. Akselrod, S. Talay, Y. LeCun, and E. Culurciello. Hardware accelerated convolutional neural networks for synthetic vision systems. In Proc. International Symposium on Circuits and
Systems (ISCAS?10), 2010.
4,169 | 4,774 |
A Generative Model
for Parts-based Object Segmentation
S. M. Ali Eslami
School of Informatics
University of Edinburgh
[email protected]
Christopher K. I. Williams
School of Informatics
University of Edinburgh
[email protected]
Abstract
The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object's parts. Our new model, the Multinomial SBM
(MSBM), can capture both local and global statistics of part shapes accurately.
We combine the MSBM with an appearance model to form a fully generative
model of images of objects. Parts-based object segmentations are obtained simply
by performing probabilistic inference in the model. We apply the model to two
challenging datasets which exhibit significant shape and appearance variability,
and find that it obtains results that are comparable to the state-of-the-art.
There has been significant focus in computer vision on object recognition and detection e.g. [2], but
a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One
such description is a parts-based object segmentation, in which an image is partitioned into multiple
sets of pixels, each belonging to either a part of the object of interest, or its background.
The significance of parts in computer vision has been recognized since the earliest days of the
field (e.g. [3, 4, 5]), and there exists a rich history of work on probabilistic models for parts-based
segmentation e.g. [6, 7]. Many such models only consider local neighborhood statistics, however
several models have recently been proposed that aim to increase the accuracy of segmentations by
also incorporating prior knowledge about the foreground object?s shape [8, 9, 10, 11]. In such cases,
probabilistic techniques often mainly differ in how accurately they represent and learn about the
variability exhibited by the shapes of the object?s parts.
Accurate models of the shapes and appearances of parts can be necessary to perform inference in
datasets that exhibit large amounts of variability. In general, the stronger the models of these two
components, the more performance is improved. A generative model has the added benefit of being
able to generate samples, which allows us to visually inspect the quality of its understanding of the
data and the problem.
Recently, a generative probabilistic model known as the Shape Boltzmann Machine (SBM) has been
used to model binary object shapes [1]. The SBM has been shown to constitute the state-of-the-art
and it possesses several highly desirable characteristics: samples from the model look realistic, and
it generalizes to generate samples that differ from the limited number of examples it is trained on.
The main contributions of this paper are as follows: 1) In order to account for object parts we extend
the SBM to use multinomial visible units instead of binary ones, resulting in the Multinomial Shape
Boltzmann Machine (MSBM), and we demonstrate that the MSBM constitutes a strong model of
parts-based object shape. 2) We combine the MSBM with an appearance model to form a fully
generative model of images of objects (see Fig. 1). We show how parts-based object segmentations
can be obtained simply by performing probabilistic inference in the model. We apply our model
to two challenging datasets and find that in addition to being principled and fully generative, the
model's performance is comparable to the state-of-the-art.
[Figure 1 diagram: training labels and training images feed separate shape and appearance models, which are combined into a joint model; a test image is parsed by inference in the joint model.]
Figure 1: Overview. Using annotated images separate models of shape and appearance are trained.
Given an unseen test image, its parsing is obtained via inference in the proposed joint model.
In Secs. 1 and 2 we present the model and propose efficient inference and learning schemes. In
Sec. 3 we compare and contrast the resulting joint model with existing work in the literature. We
describe our experimental results in Sec. 4 and conclude with a discussion in Sec. 5.
1 Model
We consider datasets of cropped images of an object class. We assume that the images are constructed through some combination of a fixed number of parts. Given a dataset D = {Xd }, d = 1...n
of such images X, each consisting of P pixels {xi }, i = 1...P , we wish to infer a segmentation S for
the image. S consists of a labeling si for every pixel, where si is a 1-of-(L+1) encoded variable, and
L is the fixed number of parts that combine to generate the foreground. In other words, si = (s_li), l = 0...L, s_li ∈ {0, 1} and Σ_l s_li = 1. Note that the background is also treated as a 'part' (l = 0).
Accurate inference of S is driven by models for 1) part shapes and 2) part appearances.
Part shapes: Several types of models can be used to define probabilistic distributions over segmentations S. The simplest approach is to model each pixel si independently with categorical variables
whose parameters are specified by the object's mean shape (Fig. 2(a)). Markov Random Fields
(MRFs, Fig. 2(b)) additionally model interactions between nearby pixels using pairwise potential
functions that efficiently capture local properties of images like smoothness and continuity.
Restricted Boltzmann Machines (RBMs) and their multi-layered counterparts Deep Boltzmann Machines (DBMs, Fig. 2(c)) make heavy use of hidden variables to efficiently define higher-order
potentials that take into account the configuration of larger groups of image pixels. The introduction of such hidden variables provides a way to efficiently capture complex, global properties of
image pixels. RBMs and DBMs are powerful generative models, but they also have many parameters. Segmented images, however, are expensive to obtain and datasets are typically small (hundreds
of examples). In order to learn a model that accurately captures the properties of part shapes we
use DBMs but also impose carefully chosen connectivity and capacity constraints, following the
structure of the Shape Boltzmann Machine (SBM) [1]. We further extend the model to account for
multi-part shapes to obtain the Multinomial Shape Boltzmann Machine (MSBM).
The MSBM has two layers of latent variables, h^1 and h^2 (collectively H = {h^1, h^2}), and defines a Boltzmann distribution over segmentations p(S) = Σ_{h^1,h^2} exp{−E(S, h^1, h^2 | θ^s)} / Z(θ^s), where

E(S, h^1, h^2 | θ^s) = Σ_{i,l} b_li s_li + Σ_{i,j,l} w^1_lij s_li h^1_j + Σ_j c^1_j h^1_j + Σ_{j,k} w^2_jk h^1_j h^2_k + Σ_k c^2_k h^2_k ,   (1)

where j and k range over the first- and second-layer hidden variables, and θ^s = {W^1, W^2, b, c^1, c^2}
are the shape model parameters. In the first layer, local receptive fields are enforced by connecting
each hidden unit in h1 only to a subset of the visible units, corresponding to one of four patches, as
shown in Fig. 2(d,e). Each patch overlaps its neighbor by b pixels, which allows boundary continuity
to be learned at the lowest layer. We share weights between the four sets of first-layer hidden units
and patches, and purposely restrict the number of units in h2 . These modifications significantly
reduce the number of parameters whilst taking into account an important property of shapes, namely
that the strongest dependencies between pixels are typically local.
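A sketch of the energy in Eq. 1 with dense parameter matrices; in the actual MSBM the first-layer weights are sparse (local receptive fields over four overlapping patches) and shared, which we assume here has already been baked into W1. The dictionary-based parameter container and all shapes are illustrative.

```python
import numpy as np

def msbm_energy(S, h1, h2, theta):
    """Energy of Eq. 1, so that p(S) is proportional to sum_H exp(-E).

    S: (P, L+1) one-hot segmentation; h1: (J,); h2: (K2,) binary vectors.
    theta: dict with b (P, L+1), W1 (P*(L+1), J), c1 (J,), W2 (J, K2), c2 (K2,)."""
    s = S.ravel()                               # flattened s_li
    return (s @ theta['b'].ravel()              # sum_{i,l} b_li s_li
            + s @ theta['W1'] @ h1              # sum_{i,j,l} w1_lij s_li h1_j
            + theta['c1'] @ h1                  # sum_j c1_j h1_j
            + h1 @ theta['W2'] @ h2             # sum_{j,k} w2_jk h1_j h2_k
            + theta['c2'] @ h2)                 # sum_k c2_k h2_k
```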
[Figure 2 diagrams: (a) Mean, (b) MRF, (c) DBM, (d) SBM (1D slice, with patch overlap b), (e) 2D SBM; latent units h and visible units S as described in the caption.]
Figure 2: Models of shape. Object shape is modeled with undirected graphical models. (a) 1D slice
of a mean model. (b) Markov Random Field in 1D. (c) Deep Boltzmann Machine in 1D. (d) 1D slice
of a Shape Boltzmann Machine. (e) Shape Boltzmann Machine in 2D. In all models latent units h
are binary and visible units S are multinomial random variables. Based on Fig. 2 of [1].
[Figure 3 diagram: an exemplar dataset (left) and the corresponding appearance model (right), with appearance classes k = 1, 2, 3 for each part l = 0, 1, 2.]
Figure 3: A model of appearances. Left: An exemplar dataset. Here we assume one background
(l = 0) and two foreground (l = 1, non-body; l = 2, body) parts. Right: The corresponding
appearance model. In this example, L = 2, K = 3 and W = 6. Best viewed in color.
Part appearances: Pixels in a given image are assumed to have been generated by W fixed Gaussians in RGB space. During pre-training, the means {μ_w} and covariances {Σ_w} of these Gaussians are extracted by training a mixture model with W components on every pixel in the dataset, ignoring image and part structure. It is also assumed that each of the L parts can have different appearances in different images, and that these appearances can be clustered into K classes. The classes differ in how likely they are to use each of the W components when 'coloring in' the part.
The generative process is as follows. For part l in an image, one of the K classes is chosen (represented by a 1-of-K indicator variable a_l). Given a_l, the probability distribution defined on pixels associated with part l is given by a Gaussian mixture model with means {μ_w} and covariances {Σ_w} and mixing proportions {λ_lkw}. The prior on A = {a_l} specifies the probability π_lk of appearance class k being chosen for part l. The appearance parameters are therefore θ^a = {π_lk, λ_lkw} (see Fig. 3) and:
p(x_i | A, s_i, θ^a) = Π_l p(x_i | a_l, θ^a)^{s_li} = Π_l Π_k ( Σ_w λ_lkw N(x_i | μ_w, Σ_w) )^{a_lk s_li} ,   (2)

p(A | θ^a) = Π_l p(a_l | θ^a) = Π_l Π_k (π_lk)^{a_lk} .   (3)
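For a fixed class assignment A, the per-pixel appearance term of Eq. 2 is a Gaussian mixture likelihood, best evaluated in the log domain. The sketch below assumes precomputed inverse covariances and log-determinants; all names are illustrative.

```python
import numpy as np

def appearance_loglik(x, A, mu, Sigma_inv, Sigma_logdet, lam):
    """log p(x | a_l, s_li = 1) for every part l (inner term of Eq. 2).

    x: (3,) RGB pixel; A: (L+1,) chosen class index per part; mu: (W, 3);
    Sigma_inv: (W, 3, 3); Sigma_logdet: (W,); lam: (L+1, K, W) mixing weights."""
    d = x[None, :] - mu                                   # (W, 3)
    quad = np.einsum('wi,wij,wj->w', d, Sigma_inv, d)     # Mahalanobis distances
    log_N = -0.5 * (quad + Sigma_logdet + 3 * np.log(2 * np.pi))
    out = np.empty(len(A))
    for l, k in enumerate(A):
        a = np.log(lam[l, k] + 1e-12) + log_N             # log lam_lkw + log N_w
        m = a.max()
        out[l] = m + np.log(np.exp(a - m).sum())          # stable log-sum-exp
    return out                                            # (L+1,)
```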
Combining shapes and appearances: To summarize, the latent variables for X are A, S, H, and the model's active parameters θ include shape parameters θ^s and appearance parameters θ^a, so that
p(X, A, S, H | θ) = (1 / Z(β)) p(A | θ^a) p(S, H | θ^s) Π_i p(x_i | A, s_i, θ^a)^β ,   (4)
where the parameter β adjusts the relative contributions of the shape and appearance components. See Fig. 4 for an illustration of the complete graphical model. During learning, we find the values of θ that maximize the likelihood of the training data D, and segmentation is performed on a previously-unseen image by querying the marginal distribution p(S | X_test, θ). Note that Z(β) is constant throughout the execution of the algorithms. We set β via trial and error in our experiments.
[Figure 4 diagram: plate notation with appearance parameters θ^a over appearance variables a_l (plate of size L+1), segmentation variables s_i and pixels x_i (plate of size P), shape variables H with parameters θ^s, all repeated over the n images; plus a schematic of H, S, X and A for a single image X.]
Figure 4: A model of shape and appearance. Left: The joint model. Pixels xi are modeled via
appearance variables a_l. The model's belief about each layer's shape is captured by shape variables
H. Segmentation variables si assign each pixel to a layer. Right: Schematic for an image X.
2 Inference and learning
Inference: We approximate p(A, S, H | X, θ) by drawing samples of A, S and H using block-Gibbs Markov Chain Monte Carlo (MCMC). The desired distribution p(S | X, θ) can then be obtained by considering only the samples for S (see Algorithm 1). In order to sample p(A | S, H, X, θ) we consider the conditional distribution of appearance class k being chosen for part l, which is given by:
p(a_lk = 1 | S, X, θ) = [ π_lk Π_i ( Σ_w λ_lkw N(x_i | μ_w, Σ_w) )^{s_li} ] / [ Σ_{r=1}^{K} π_lr Π_i ( Σ_w λ_lrw N(x_i | μ_w, Σ_w) )^{s_li} ] .   (5)
Since the MSBM only has edges between each pair of adjacent layers, all hidden units within a layer
are conditionally independent given the units in the other two layers. This property can be exploited
to make inference in the shape model exact and efficient. The conditional probabilities are:
p(h^1_j = 1 | S, h^2, θ) = σ( Σ_{i,l} w^1_lij s_li + Σ_k w^2_jk h^2_k + c^1_j ),   (6)

p(h^2_k = 1 | h^1, θ) = σ( Σ_j w^2_jk h^1_j + c^2_k ),   (7)

where σ(y) = 1/(1 + exp(−y)) is the sigmoid function. To sample from p(H | S, X, θ) we iterate between Eqns. 6 and 7 multiple times and keep only the final values of h^1 and h^2. Finally, we draw samples for the pixels in p(S | A, H, X, θ) independently:
p(s_li = 1 | A, H, X, θ) = exp( Σ_j w^1_lij h^1_j + b_li ) p(x_i | A, s_li = 1, θ)^β / Σ_m exp( Σ_j w^1_mij h^1_j + b_mi ) p(x_i | A, s_mi = 1, θ)^β .   (8)
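Eqs. 6–8 translate directly into the three block-Gibbs steps used in Algorithm 1 below. A sketch, using the same dense-parameter conventions as the energy sketch above; app_loglik holds log p(x_i | A, s_li = 1, θ) for every pixel and part, and beta is the shape/appearance trade-off of Eq. 4.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def sample_h1(S, h2, theta, rng):
    """Eq. 6: first-layer units are conditionally independent given S and h2."""
    act = S.ravel() @ theta['W1'] + theta['W2'] @ h2 + theta['c1']
    return (rng.rand(act.size) < sigmoid(act)).astype(float)

def sample_h2(h1, theta, rng):
    """Eq. 7: second-layer units given the first layer."""
    act = h1 @ theta['W2'] + theta['c2']
    return (rng.rand(act.size) < sigmoid(act)).astype(float)

def sample_S(h1, app_loglik, theta, beta, rng):
    """Eq. 8: independent per-pixel multinomials over the L+1 parts."""
    P, L1 = app_loglik.shape
    logits = (theta['W1'] @ h1).reshape(P, L1) + theta['b'] + beta * app_loglik
    logits -= logits.max(1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(1, keepdims=True)
    labels = (rng.rand(P, 1) > probs.cumsum(1)).sum(1)   # inverse-CDF sampling
    S = np.zeros((P, L1))
    S[np.arange(P), labels] = 1.0
    return S
```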
Seeding: Since the latent-space is extremely high-dimensional, in practice we find it helpful to run
several inference chains, each initializing S^(1) to a different value. The 'best' inference is retained and the others are discarded. The computation of the likelihood p(X | θ) of image X is intractable,
so we approximate the quality of each inference using a scoring function:
Score(X | θ) = (1/T) Σ_t p(X, A^(t), S^(t), H^(t) | θ),   (9)

where {A^(t), S^(t), H^(t)}, t = 1...T are the samples obtained from the posterior p(A, S, H | X, θ).
If the samples were drawn from the prior p(A, S, H | θ) the scoring function would be an unbiased estimator of p(X | θ), but would be wildly inaccurate due to the high probability of missing the
important regions of latent space (see e.g. [12, p. 107-109] for further discussion of this issue).
Learning: Learning of the model involves maximizing the log likelihood log p(D | θ^a, θ^s) of the training dataset D with respect to the model parameters θ^a and θ^s. Since training is partially supervised, in that for each image X its corresponding segmentation S is also given, we can learn the
parameters of the shape and appearance components separately.
For appearances, the learning of the mixing coefficients and the histogram parameters decomposes
into standard mixture updates independently for each part. For shapes, we follow the standard deep
Algorithm 1 MCMC inference algorithm.
1: procedure INFER(X, θ)
2:   Initialize S^(1), H^(1)
3:   for t = 2 : chain length do
4:     A^(t) ∼ p(A | S^(t−1), H^(t−1), X, θ)
5:     S^(t) ∼ p(S | A^(t), H^(t−1), X, θ)
6:     H^(t) ∼ p(H | S^(t), θ)
7:   return {S^(t)}, t = burn-in : chain length
learning literature closely [13, 1]. In the pre-training phase we greedily train the model bottom up,
one layer at a time. We begin by training an RBM on the observed data using stochastic maximum
likelihood learning (SML; also referred to as ?persistent CD?; [14, 13]). Once this RBM is trained,
we infer the conditional mean of the hidden units for each training image. The resulting vectors
then serve as the training data for a second RBM which is again trained using SML. We use the
parameters of these two RBMs to initialize the parameters of the full MSBM model. In the second
phase we perform approximate stochastic gradient ascent in the likelihood of the full model to fine-tune the parameters in an EM-like scheme as described in [13].
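A sketch of a single stochastic maximum likelihood (persistent CD) update for a binary RBM, as used in the greedy layer-wise pre-training stage; for the MSBM's bottom layer the visible units are multinomial, so the visible-given-hidden step would use a per-pixel softmax instead of the sigmoid shown here. Learning rate, particle count, and names are illustrative.

```python
import numpy as np

def sml_update(V, W, b, c, fantasy, lr=1e-3, rng=None):
    """One SML/persistent-CD step for a binary RBM with visibles V and hiddens H.

    V: (n, P) data batch; fantasy: (m, P) persistent chain carried across updates."""
    rng = rng or np.random.RandomState(0)
    sig = lambda y: 1.0 / (1.0 + np.exp(-y))
    # positive phase: hidden means given the data
    H_data = sig(V @ W + c)
    # negative phase: one Gibbs sweep on the persistent fantasy particles
    H_f = (rng.rand(fantasy.shape[0], W.shape[1])
           < sig(fantasy @ W + c)).astype(float)
    fantasy = (rng.rand(*fantasy.shape) < sig(H_f @ W.T + b)).astype(float)
    H_model = sig(fantasy @ W + c)
    # gradient: data statistics minus model statistics
    W += lr * (V.T @ H_data / len(V) - fantasy.T @ H_model / len(fantasy))
    b += lr * (V.mean(0) - fantasy.mean(0))
    c += lr * (H_data.mean(0) - H_model.mean(0))
    return W, b, c, fantasy
```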
3 Related work
Existing probabilistic models of images can be categorized by the amount of variability they expect
to encounter in the data and by how they model this variability. A significant portion of the literature
models images using only two parts: a foreground object and its background e.g. [15, 16, 17, 18, 19].
Models that account for the parts within the foreground object mainly differ in how accurately they
learn about and represent the variability of the shapes of the object's parts.
In Probabilistic Index Maps (PIMs) [8] a mean partitioning is learned, and the deformable PIM [9]
additionally allows for local deformations of this mean partitioning. Stel Component Analysis [10]
accounts for larger amounts of shape variability by learning a number of different template means
for the object that are blended together on a pixel-by-pixel basis. Factored Shapes and Appearances
[11] models global properties of shape using a factor analysis-like model, and 'masked' RBMs have
been used to model more local properties of shape [20]. However, none of these models constitute
a strong model of shape in terms of realism of samples and generalization capabilities [1]. We
demonstrate in Sec. 4 that, like the SBM, the MSBM does in fact possess these properties.
The closest works to ours in terms of ability to deal with datasets that exhibit significant variability
in both shape and appearance are the works of Bo and Fowlkes [21] and Thomas et al. [22]. Bo and
Fowlkes [21] present an algorithm for pedestrian segmentation that models the shapes of the parts
using several template means. The different parts are composed using hand-coded geometric constraints, which means that the model cannot be automatically extended to other application domains.
The Implicit Shape Model (ISM) used in [22] is reliant on interest point detectors and defines distributions over segmentations only in the posterior, and therefore is not fully generative. The model
presented here is entirely learned from data and fully generative, therefore it can be applied to new
datasets and diagnosed with relative ease. Due to its modular structure, we also expect it to rapidly
absorb future developments in shape and appearance models.
4 Experiments
Penn-Fudan pedestrians: The first dataset that we considered is Penn-Fudan pedestrians [23],
consisting of 169 images of pedestrians (Fig. 6(a)). The images are annotated with ground-truth
segmentations for L = 7 different parts (hair, face, upper and lower clothes, shoes, legs, arms;
Fig. 6(d)). We compare the performance of the model with the algorithm of Bo and Fowlkes [21].
For the shape component, we trained an MSBM on the 684 images of a labeled version of the
HumanEva dataset [24] (at 48 × 24 pixels; also flipped horizontally) with overlap b = 4, and 400
and 50 hidden units in the first and second layers respectively. Each layer was pre-trained for 3000
epochs (iterations). After pre-training, joint training was performed for 1000 epochs.
[Figure 5 panels: (a) Sampling, (b) Diffs, (c) Completion.]
Figure 5: Learned shape model. (a) A chain of samples (1000 samples between frames). The
apparent 'blurriness' of samples is not due to averaging or resizing. We display the probability of
each pixel belonging to different parts. If, for example, there is a 50-50 chance that a pixel belongs
to the red or blue parts, we display that pixel in purple. (b) Differences between the samples and
their most similar counterparts in the training dataset. (c) Completion of occlusions (pink).
To assess the realism and generalization characteristics of the learned MSBM we sample from it.
In Fig. 5(a) we show a chain of unconstrained samples from an MSBM generated via block-Gibbs
MCMC (1000 samples between frames). The model captures highly non-linear correlations in the
data whilst preserving the object's details (e.g. face and arms). To demonstrate that the model has
not simply memorized the training data, in Fig. 5(b) we show the difference between the sampled
shapes in Fig. 5(a) and their closest images in the training set (based on per-pixel label agreement).
We see that the model generalizes in non-trivial ways to generate realistic shapes that it had not
encountered during training. In Fig. 5(c) we show how the MSBM completes rectangular occlusions.
The samples highlight the variability in possible completions captured by the model. Note how,
e.g. the length of the person's trousers on one leg affects the model's predictions for the other,
demonstrating the model's knowledge about long-range dependencies. An interactive MATLAB
GUI for sampling from this MSBM has been included in the supplementary material.
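The chain in Fig. 5(a) relies on the layered structure: given the middle layer h1, the visible units and the top layer h2 are conditionally independent, and given (v, h2) all of h1 can be sampled at once. Below is a minimal block-Gibbs sketch for binary units, reusing the weight shapes from the sketch above; biases and the MSBM's softmax pixel groups are omitted, and the weights here are random placeholders for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def block_gibbs_step(v, h2, W1, W2):
    p_h1 = sigmoid(v @ W1 + h2 @ W2.T)                 # h1 | v, h2
    h1 = (rng.random(p_h1.shape) < p_h1).astype(float)
    p_v = sigmoid(h1 @ W1.T)                           # v | h1
    v = (rng.random(p_v.shape) < p_v).astype(float)
    p_h2 = sigmoid(h1 @ W2)                            # h2 | h1
    h2 = (rng.random(p_h2.shape) < p_h2).astype(float)
    return v, h1, h2

v = (rng.random((1, 48 * 24)) < 0.5).astype(float)     # random initial state
h2 = np.zeros((1, 50))
W1 = 0.01 * rng.standard_normal((48 * 24, 400))        # placeholders for trained weights
W2 = 0.01 * rng.standard_normal((400, 50))
for _ in range(1000):                                  # 1000 samples between frames
    v, h1, h2 = block_gibbs_step(v, h2, W1, W2)
```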
The Penn-Fudan dataset (at 200 × 100 pixels) was then split into 10 train/test cross-validation splits
without replacement. We used the training images in each split to train the appearance component
with a vocabulary of size W = 50 and K = 100 mixture components1 . We additionally constrained
the model by sharing the appearance models for the arms and legs with that of the face.
We assess the quality of the appearance model by performing the following experiment: for each test
image, we used the scoring function described in Eq. 9 to evaluate a number of different proposal
segmentations for that image. We considered 10 randomly chosen segmentations from the training
dataset as well as the ground-truth segmentation for the test image, and found that the appearance
model correctly assigns the highest score to the ground-truth 95% of the time.
During inference, the shape and appearance models (which are defined on images of different sizes)
were combined at 200 × 100 pixels via MATLAB's imresize function, and we set the weighting
parameter of Eq. 8 to 0.8 via trial and error. Inference chains were seeded at 100 exemplar segmentations from the
HumanEva dataset (obtained using the K-medoids algorithm with K = 100), and were run for
20 Gibbs iterations each (with 5 iterations of Eqs. 6 and 7 per Gibbs iteration). Our unoptimized
MATLAB implementation completed inference for each chain in around 7 seconds.
We compute the conditional probability of each pixel belonging to different parts given the last set
of samples obtained from the highest scoring chain, assign each pixel independently to the most
likely part at that pixel, and report the percentage of correctly labeled pixels (see Table 1). We find
that accuracy can be improved using superpixels (SP) computed on X (pixels within a superpixel
are all assigned the most common label within it; as with [21] we use gPb-OWT-UCM [25]). We
also report the accuracy obtained, had the top scoring seed segmentation been returned as the final
segmentation for each image. Here the quality of the seed is determined solely by the appearance
model. We observe that the model has comparable performance to the state-of-the-art but pedestrian-specific algorithm of [21], and that inference in the model significantly improves the accuracy of the
segmentations over the baseline (top seed+SP). Qualitative results can be seen in Fig. 6(c).
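The read-out just described amounts to a per-pixel vote over the retained samples, followed by an optional majority vote within each superpixel. A sketch follows, where samples (an array of part-label maps of shape (n_samples, H, W)), superpixels (a map of superpixel ids) and n_parts are hypothetical placeholders rather than names from the authors' code.

```python
import numpy as np

def pixelwise_labels(samples, n_parts):
    """Most likely part per pixel from the empirical label frequencies."""
    freqs = np.stack([(samples == k).mean(axis=0) for k in range(n_parts)])
    return freqs.argmax(axis=0)

def superpixel_smoothing(labels, superpixels):
    """Assign every pixel in a superpixel the most common label within it."""
    out = labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        vals, counts = np.unique(labels[mask], return_counts=True)
        out[mask] = vals[counts.argmax()]
    return out

def accuracy(pred, ground_truth):
    return (pred == ground_truth).mean()   # percentage of correctly labeled pixels
```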
1 We obtained the best quantitative results with these settings. The appearances exhibited by the parts in the dataset are highly varied, and the complexity of the appearance model reflects this fact.
Table 1: Penn-Fudan pedestrians. We report the percentage of correctly labeled pixels. The final
column is an average of the background, upper and lower body scores (as reported in [21]).

                      FG      BG      Upper Body   Lower Body   Head    Average
Bo and Fowlkes [21]   73.3%   81.1%   73.6%        71.6%        51.8%   69.5%
MSBM                  70.7%   72.8%   68.6%        66.7%        53.0%   65.3%
MSBM + SP             71.6%   73.8%   69.9%        68.5%        54.1%   66.6%
Top seed              59.0%   61.8%   56.8%        49.8%        45.5%   53.5%
Top seed + SP         61.6%   67.3%   60.8%        54.1%        43.5%   56.4%
Table 2: ETHZ cars. We report the percentage of pixels belonging to each part that are labeled
correctly. The final column is an average weighted by the frequency of occurrence of each label.

           BG      Body    Wheel   Window   Bumper   License   Light   Average
ISM [22]   93.2%   72.2%   63.6%   80.5%    73.8%    56.2%     34.8%   86.8%
MSBM       94.6%   72.7%   36.8%   74.4%    64.9%    17.9%     19.9%   86.0%
Top seed   92.2%   68.4%   28.3%   63.8%    45.4%    11.2%     15.1%   81.8%
ETHZ cars: The second dataset that we considered is the ETHZ labeled cars dataset [22], which
itself is a subset of the LabelMe dataset [23], consisting of 139 images of cars, all in the same semi-profile view (Fig. 7(a)). The images are annotated with ground-truth segmentations for L = 6 parts
(body, wheel, window, bumper, license plate, headlight; Fig. 7(d)). We compare the performance of
the model with the ISM of Thomas et al. [22], who also report their results on this dataset.
The dataset was split into 10 train/test cross-validation splits without replacement. We used the
training images in each split to train both the shape and appearance components. For the shape
component, we trained an MSBM at 50 × 50 pixels with overlap b = 4, and 2000 and 100 hidden
units in the first and second layers respectively. Each layer was pre-trained for 3000 epochs and joint
training was performed for 1000 epochs. The appearance model was trained with a vocabulary of
size W = 50 and K = 100 mixture components, and we set the weighting parameter of Eq. 8 to 0.7. Inference chains were seeded
at 50 exemplar segmentations (obtained using K-medoids). We find that the use of superpixels does
not help with this dataset (due to the poor quality of superpixels obtained for these images).
Qualitative and quantitative results showing the performance of the model to be comparable to the
state-of-the-art ISM can be seen in Fig. 7(c) and Table 2. We believe the discrepancy in accuracy
between the MSBM and ISM on the "license" and "light" labels to mainly be due to ISM's use of
interest points, as they are able to locate such fine structures accurately. By incorporating better
models of part appearance into the generative model, we expect to see this discrepancy decrease.
5 Conclusions and future work
In this paper we have shown how the SBM can be extended to obtain the MSBM, and presented
a principled probabilistic model of images of objects that exploits the MSBM as its model for part
shapes. We demonstrated how object segmentations can be obtained simply by performing MCMC
inference in the model. The model can also be treated as a probabilistic evaluator of segmentations:
given a proposal segmentation it can be used to estimate its likelihood. This leads us to believe that
the combination of a generative model such as ours, with a discriminative, bottom-up segmentation
algorithm could be highly effective. We are currently investigating how textured appearance models,
which take into account the spatial structure of pixels, affect the learning and inference algorithms
and the performance of the model.
Acknowledgments
Thanks to Charless Fowlkes and Vittorio Ferrari for access to datasets, and to Pushmeet Kohli and
John Winn for valuable discussions. AE has received funding from the Carnegie Trust, the SORSAS
scheme, and the IST Programme under the PASCAL2 Network of Excellence (IST-2007-216886).
[Figure 6 panels: (a) Test, (b) Bo and Fowlkes, (c) MSBM, (d) Ground truth; part legend: Background, Hair, Face, Upper, Lower, Arms, Legs, Shoes.]
Figure 6: Penn-Fudan pedestrians. (a) Test images. (b) Results reported by Bo and Fowlkes [21].
(c) Output of the joint model. (d) Ground-truth images. Images shown are those selected by [21].
[Figure 7 part legend: Background, Body, Wheel, Window, Bumper, License, Headlight.]
Figure 7: ETHZ cars. (a) Test images. (b) Results reported by Thomas et al. [22]. (c) Output of
the joint model. (d) Ground-truth images. Images shown are those selected by [22].
References
[1] S. M. Ali Eslami, Nicolas Heess, and John Winn. The Shape Boltzmann Machine: a Strong Model of Object Shape. In IEEE CVPR, 2012.
[2] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88:303-338, 2010.
[3] Martin Fischler and Robert Elschlager. The Representation and Matching of Pictorial Structures. IEEE Transactions on Computers, 22(1):67-92, 1973.
[4] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman, 1982.
[5] Irving Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94:115-147, 1987.
[6] Ashish Kapoor and John Winn. Located Hidden Random Fields: Learning Discriminative Parts for Object Detection. In ECCV, pages 302-315, 2006.
[7] John Winn and Jamie Shotton. The Layout Consistent Random Field for Recognizing and Segmenting Partially Occluded Objects. In IEEE CVPR, pages 37-44, 2006.
[8] Nebojsa Jojic and Yaron Caspi. Capturing Image Structure with Probabilistic Index Maps. In IEEE CVPR, pages 212-219, 2004.
[9] John Winn and Nebojsa Jojic. LOCUS: Learning object classes with unsupervised segmentation. In ICCV, pages 756-763, 2005.
[10] Nebojsa Jojic, Alessandro Perina, Marco Cristani, Vittorio Murino, and Brendan Frey. Stel component analysis. In IEEE CVPR, pages 2044-2051, 2009.
[11] S. M. Ali Eslami and Christopher K. I. Williams. Factored Shapes and Appearances for Parts-based Object Understanding. In BMVC, pages 18.1-18.12, 2011.
[12] Nicolas Heess. Learning generative models of mid-level structure in natural images. PhD thesis, University of Edinburgh, 2011.
[13] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann Machines. In AISTATS, volume 5, pages 448-455, 2009.
[14] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, pages 1064-1071, 2008.
[15] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM SIGGRAPH, 23:309-314, 2004.
[16] Eran Borenstein, Eitan Sharon, and Shimon Ullman. Combining Top-Down and Bottom-Up Segmentation. In CVPR Workshop on Perceptual Organization in Computer Vision, 2004.
[17] Himanshu Arora, Nicolas Loeff, David Forsyth, and Narendra Ahuja. Unsupervised Segmentation of Objects using Efficient Learning. IEEE CVPR, pages 1-7, 2007.
[18] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. ClassCut for unsupervised class segmentation. In ECCV, pages 380-393, 2010.
[19] Nicolas Heess, Nicolas Le Roux, and John Winn. Weakly Supervised Learning of Foreground-Background Segmentation using Masked RBMs. In ICANN, 2011.
[20] Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a Generative Model of Images by Factoring Appearance and Shape. Neural Computation, 23(3):593-650, 2011.
[21] Yihang Bo and Charless Fowlkes. Shape-based Pedestrian Parsing. In IEEE CVPR, 2011.
[22] Alexander Thomas, Vittorio Ferrari, Bastian Leibe, Tinne Tuytelaars, and Luc Van Gool. Using Recognition and Annotation to Guide a Robot's Attention. IJRR, 28(8):976-998, 2009.
[23] Bryan Russell, Antonio Torralba, Kevin Murphy, and William Freeman. LabelMe: A Database and Tool for Image Annotation. International Journal of Computer Vision, 77:157-173, 2008.
[24] Leonid Sigal, Alexandru Balan, and Michael Black. HumanEva. International Journal of Computer Vision, 87(1-2):4-27, 2010.
[25] Pablo Arbelaez, Michael Maire, Charless C. Fowlkes, and Jitendra Malik. From Contours to Regions: An Empirical Evaluation. In IEEE CVPR, 2009.
A P300 BCI for the Masses: Prior Information
Enables Instant Unsupervised Spelling
Pieter-Jan Kindermans, Hannes Verschore, David Verstraeten and Benjamin Schrauwen
Ghent University, Electronics and Information Systems
Sint-Pietersnieuwstraat 41, 9000 Ghent, Belgium
[email protected]
Abstract
The usability of Brain Computer Interfaces (BCI) based on the P300 speller is
severely hindered by the need for long training times and many repetitions of the
same stimulus. In this contribution we introduce a set of unsupervised hierarchical probabilistic models that tackle both problems simultaneously by incorporating prior knowledge from two sources: information from other training subjects (through transfer learning) and information about the words being spelled
(through language models). We show, that due to this prior knowledge, the performance of the unsupervised models parallels and in some cases even surpasses
that of supervised models, while eliminating the tedious training session.
1 Introduction
Brain Computer Interfaces interpret brain signals to allow direct man-machine communication [17].
In this contribution, we study the so-called P300 paradigm [6]. The user is presented with a grid of
36 characters of which alternately rows and columns light up, and focuses on the character he wishes
to spell. The intensification of the focused letter can typically be detected through an event-related
potential around the parietal lobe occurring 300 ms after the stimulus. By correctly detecting this
so-called P300 Event Related Potential (ERP), the character intended by the user can be determined.
To increase the spelling accuracy, multiple epochs are used before a character gets predicted, where
a single epoch is defined as a sequence of stimuli where each row and each column is intensified
once. The main difficulty in the construction of a P300 speller thus lies in the construction of a
classifier for the P300 wave.
Previous work related to P300 has mainly focused on supervised training. These techniques were
evaluated during several BCI Competitions [2, 3]. A popular classification method, which we will
compare our proposed methods to, is Bayesian Linear Discriminant Analysis [7]. This is essentially
Bayesian Linear Regression where the hyperparameters are optimized using EM [1]. It has been
shown that these classifiers are among the best performing for P300 spelling [12]. A recent interesting improvement of P300 spelling is post-processing of the classifier outputs by a language model to
improve spelling results [15]. Other researchers have focused on adaptive classifiers which are first
trained supervisedly and then adapt to the test session while spelling [11, 13, 10]. The most flexible
of these methods can be found in [11], where they are able to adapt unsupervisedly from one subject
to another, however there is still need for some initial supervised training sessions.
Recent work has introduced unsupervised linear classifiers [9] that achieve accuracies comparable
to state of the art supervised methods. However, these still suffer from some limitations. When the
speller is used online without any prior training, it needs a warm-up period. During this warm-up
period the speller output will be more or less random as the classifier is still trying to determine the
underlying structure of the P300 ERP. Once the classifier has successfully learned the task, it rarely
makes new mistakes. The length of this warm-up period depends on both the individual subject and
the number of epochs to spell each character. A higher number of epochs will result in fewer letters
in the warm-up, but the total spelling time might increase. A second disadvantage is the fact that the
classifier is randomly initialized. The remedy for this - evaluating many random initializations and
selecting the best - is suboptimal and ideally one would like to choose a more intelligent initialization
based on prior knowledge.
The aim of this paper is thus to reduce the warm-up period and to limit the number of initializations
required to achieve acceptable performance without any subject-specific information. This will
yield instant subject specific spelling, with high accuracy and a low number of epochs. To achieve
this goal, we extend the graphical model of the unsupervised classifier by incorporating two types
of prior knowledge: inter-subject information and language information. The key idea is that the
incorporation of constraints and prior information can drastically improve a BCI?s performance.
The power of incorporating prior knowledge has previously been demonstrated in a BCI where
finger flexion is decoded from electrocorticographic signals [18].
What we propose is a fully integrated probabilistic model, unlike previous methods which are a
combination of different techniques. Furthermore, the prior work related to P300 classification
possesses only a subset of the capabilities of our model.
2 Methods
[Figure 1 shows three graphical models, panels (a) standard, (b) subject transfer, (c) subject transfer and language model, with nodes for the weight vectors w_s, EEG features x_{s,t,i} and characters c_{s,t}, plates over intensifications I, characters T and subjects, and, in (b) and (c), a shared prior mean μ_w.]
Figure 1: Graphical representation of the different classifiers. On the left we show the basic unsupervised classifier [9]. In the middle we present our first contribution: the incorporation of inter-subject
information through a shared hyperprior. On the right the most complex model: inter-subject
information and a trigram language model.
2.1 Unsupervised P300 Speller
The basic unsupervised speller which we extend in this paper, is the unsupervised P300 classifier
proposed in [9]. We will present a slightly generalized version of this model such that it does not
depend on the column/row intensification structure of the default P300 application. The model is
built around the following assumption: the EEG can be projected into one dimension where the
projection will have a Gaussian distribution with a class dependent mean (containing P300 response
versus not containing the response). From now on the distribution on the projected EEG will be
used as an approximation of the distribution on the EEG itself. This makes inference and reasoning
about the model simpler, but it remains an approximation. The full model, shown in Figure 1(a), is
as follows:
p(w_s) = N(w_s | 0, α_s^{-1} I),        p(c_{s,t}) = 1/C,
p(x_{s,t,i} | c_{s,t}, w_s, β_s) = N(x_{s,t,i} w_s | y_{s,t,i}(c_{s,t}), β_s^{-1}),
where w_s is the classifier's weight vector, C is the number of symbols in the character grid, s
indicates the subject, and c_{s,t} is the t-th character for subject s. The row vector x_{s,t,i} contains the EEG
recorded after intensification i during spelling of c_t by subject s, and a bias term. The EEG for a
character will be denoted as X_{s,t}, a matrix whose rows are the different x_{s,t,i}. Likewise,
X_s consists of all the features for a single subject. Both α_s and β_s are precisions of
the associated Gaussian distributions. The mean y_{s,t,i}(c_{s,t}) equals 1 when, during intensification i,
character c_{s,t} was highlighted; otherwise y_{s,t,i}(c_{s,t}) = −1. This class-dependent mean encompasses
the constraint on the labeling of the individual EEG segments posed by the application: during all of
the intensifications for a single character, the subject focuses on the same character. Thus, during
all epochs, each intensification of this character should yield the P300, and each intensification which
does not include this character should not elicit a P300 response.
The probability of a character given the EEG can be computed by application of Bayes's rule:

p(c_{s,t} | X_{s,t}, w_s, β_s) = p(c_{s,t}) p(X_{s,t} | c_{s,t}, w_s, β_s) / Σ_{c_{s,t}} p(c_{s,t}) p(X_{s,t} | c_{s,t}, w_s, β_s).

In this model, the EEG X_s contains the observed variables, the characters are the latent variables
which need to be inferred, and w_s, α_s, β_s are the parameters which we want to optimize. The
well-known Expectation Maximization algorithm [5] can be used to optimize for w_s, β_s and yields
the following update equations:
w_s = ( X_s^T X_s + (α_s^old / β_s^old) I )^{-1} X_s^T Σ_{c_s} p(c_s | X_s, w_s^old, β_s^old) y_s(c_s),

β_s^{-1} = Σ_{c_{s,t}} p(c_{s,t} | X_s, w_s^old, β_s^old) ⟨ ( x_{s,t,i} w_s^old − y_{s,t,i}(c_{s,t}) )^2 ⟩_{t,i},

where ⟨·⟩_{t,i} denotes the average over characters t and intensifications i.
The update for w_s is a weighted-sum ridge regression classifier trained with all possible labellings
for the EEG. The weights are the probabilities that the used labels are correct given the previous
weight vector w_s^old. Let y(c_s) be the labels which are assigned to the EEG given the character
prediction c_s when the application constraints described above are taken into account. The value for
β_s^{-1} is the expected mean squared error between the projection and the target mean given the old
weight vector. The hyper-parameter α_s can be optimized directly: α_s = D / ((w_s^old)^T w_s^old), where D is
the dimensionality of the weight vector. The combined optimization of α_s, β_s allows for automatic
tuning of the amount of regularization, but α_s will be bounded by 10^3 to prevent the weight vector
from collapsing onto the prior.
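The following is a minimal numerical sketch of these updates, assuming the EEG features are arranged as an array X of shape (T, I, D) (characters x intensifications x features) and Y[c] holds the ±1 target means implied by character hypothesis c; without a language model the characters are independent, so the sum over labellings factorizes per character. Names and shapes are our own choices, not the authors' code.

```python
import numpy as np

def e_step(X, Y, w, beta):
    """p(c_t | X, w, beta) for every character position t; Y has shape (C, T, I)."""
    proj = X @ w                                              # (T, I)
    ll = -0.5 * beta * ((proj[None] - Y) ** 2).sum(axis=2)    # log-likelihood, (C, T)
    ll -= ll.max(axis=0, keepdims=True)                       # numerical stability
    p = np.exp(ll)
    return p / p.sum(axis=0, keepdims=True)

def m_step(X, Y, p, alpha_old, beta_old):
    T, I, D = X.shape
    Xf = X.reshape(T * I, D)
    Ey = (p[:, :, None] * Y).sum(axis=0).reshape(T * I)       # posterior-weighted targets
    w = np.linalg.solve(Xf.T @ Xf + (alpha_old / beta_old) * np.eye(D), Xf.T @ Ey)
    se = (((X @ w)[None] - Y) ** 2).sum(axis=2)               # squared errors, (C, T)
    beta = 1.0 / ((p * se).sum() / (T * I))                   # expected mean squared error
    alpha = min(D / (w @ w), 1e3)                             # bounded, as in the text
    return w, alpha, beta

# toy usage: 3 EM iterations, as used per new character later in the paper
rng = np.random.default_rng(0)
T, I, D, C = 10, 12, 20, 36
X = rng.standard_normal((T, I, D))
Y = rng.choice([-1.0, 1.0], size=(C, T, I))
w, alpha, beta = 0.01 * rng.standard_normal(D), 1.0, 1.0
for _ in range(3):
    p = e_step(X, Y, w, beta)
    w, alpha, beta = m_step(X, Y, p, alpha, beta)
```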
From the graphical representation it is clear that, without making additional assumptions about the
data, there are only two ways to add additional constraints or information. First, we can incorporate
prior information about the characters (the bottom of the graphical model) through language models.
The second option is to incorporate prior information about the weight vector (the top of the model).
We will start with the latter. Both these access points for prior knowledge are given a brighter color
in Figure 1(a).
2.2 Inter-subject Transfer
For the transfer learning, we drew inspiration from the work by Kemp et al. [8]. We will use
hierarchical Bayesian models to share knowledge about the P300 response detection across different
subjects. Our proposed model is shown in Figure 1(b) and is defined as follows:
p(μ_w) = N(μ_w | 0, η_p^{-1} I) with η_p = 0,        p(w_s | μ_w) = N(w_s | μ_w, α_s^{-1} I),
p(c_{s,t}) = 1/C,        p(x_{s,t,i} | c_{s,t}, w_s, β_s) = N(x_{s,t,i} w_s | y_{s,t,i}(c_{s,t}), β_s^{-1}),
where we have placed a Gaussian prior with zero mean and precision η_p on the mean of the weight vector.
When doing inference, we will always assume that μ_w is given and set to its most likely value. The
advantage of working with the most likely value is that there is no time penalty for transfer learning
when used in an online setting. In the case that μ_w = 0, the model reduces to the original model.
On the other hand, if μ_w takes on a nonzero value, the update equations for w_s, α_s become:
w_s = ( X_s^T X_s + (α_s^old / β_s^old) I )^{-1} ( X_s^T Σ_{c_s} p(c_s | X_s, w_s^old, β_s^old) y_s(c_s) + (α_s^old / β_s^old) μ_w ),

α_s = D / ( (w_s^old − μ_w)^T (w_s^old − μ_w) ).
The update for β_s remains unaltered. When we train without transfer for an initial set of subjects
s = 1, . . . , S, we initialize all α_s = η_p = 0 and μ_w = 0. For this specific assignment of μ_w, η_p,
training is actually the same as integrating out μ_w. After the training has converged for all the
subjects, we have a subject-specific Maximum A Posteriori estimate w_s^new and an optimized value
α_s^new. Using these, we can compute the posterior distribution on μ_w:
p(μ_w | w_1^new, . . . , w_S^new) = N( μ_w | μ_p^new, (η_p^new)^{-1} I ),

μ_p^new = (1 / η_p^new) Σ_{s=1...S} α_s^new w_s^new,        η_p^new = Σ_{s=1...S} α_s^new.
To apply transfer learning for a new subject S + 1, we assign μ_w = μ_p^new and keep it fixed. The new
α_{S+1} is initialized with η_p^new. The role of the optimization of α_{S+1} is to let the model determine
whether we can build a proper model by staying close to the prior (α_{S+1} takes on large values) or
whether we have to build a very specific model (α_{S+1} becomes very small).
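A small sketch of this bookkeeping, assuming w_new (shape (S, D)) and alpha_new (shape (S,)) stack the converged weight vectors and precisions of the training subjects, with the current test subject left out as described:

```python
import numpy as np

def fit_hyperprior(w_new, alpha_new):
    """Posterior mean and precision of mu_w given the trained subjects."""
    eta_p = alpha_new.sum()                                   # eta_p^new = sum_s alpha_s^new
    mu_p = (alpha_new[:, None] * w_new).sum(axis=0) / eta_p   # precision-weighted mean
    return mu_p, eta_p

def init_new_subject(mu_p, eta_p):
    """mu_w is fixed to mu_p; alpha_{S+1} starts at eta_p and is then re-optimized."""
    return mu_p.copy(), eta_p

def update_alpha(w, mu_w, D):
    """alpha_s update when the prior mean is nonzero, with the same 10^3 bound."""
    return min(D / ((w - mu_w) @ (w - mu_w)), 1e3)
```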
2.3 Incorporation of language models
A second possibility is to incorporate language models. The only difference between working with
and without a language model lies in the computation of the probability of a character given the
EEG. Hence the E-step will change but the M-step will not. Please note that we have dropped the
subject-specific index, and we will continue to do so in this section to keep the notation uncluttered.
An n-gram language model takes the history into account: the probability of a character is defined
given the n − 1 previous characters: p(c_t | c_{t−1}, . . . , c_{t−n+1}). In this work, we limit ourselves to uni-,
bi- and trigram language models. The graphical model of the P300 speller with subject transfer and
a trigram language model is shown in Figure 1(c). For the unigram language model, which counts
character frequencies, we only have to change the prior on the characters p (ct ) to the probability of
each character occurring.
To compute the marginal probability of a character given the EEG, which is exactly what we need
in the E-step, we use the well-known forward-backward algorithm for HMMs [1]. For general
n-grams, this algorithm computes:

p(X_1, . . . , X_T, c_t, . . . , c_{t−n+2}) = f(c_t, . . . , c_{t−n+2}) b(c_t, . . . , c_{t−n+2}),
f(c_t, . . . , c_{t−n+2}) = p(X_1, . . . , X_t, c_t, . . . , c_{t−n+2}),
b(c_t, . . . , c_{t−n+2}) = p(X_{t+1}, . . . , X_T | c_t, . . . , c_{t−n+2}).
The forward and backward recursions are as follows:

f(c_t, . . . , c_{t−n+2}) = p(X_t | c_t) Σ_{c_{t−n+1}} p(c_t | c_{t−1}, . . . , c_{t−n+1}) f(c_{t−1}, . . . , c_{t−n+1}),
b(c_t, . . . , c_{t−n+2}) = Σ_{c_{t+1}} p(X_{t+1} | c_{t+1}) p(c_{t+1} | c_t, . . . , c_{t−n+2}) b(c_{t+1}, . . . , c_{t−n+3}).
The initialization of the forward and backward recursion is analogous to the initialization for the
default HMM [1]. The probability of a character can be computed as follows:
p(c_t | X) = Σ_{c_{t−1}, . . . , c_{t−n+2}} p(X_1, . . . , X_T, c_t, . . . , c_{t−n+2}) / p(X)
          = Σ_{c_{t−1}, . . . , c_{t−n+2}} f(c_t, . . . , c_{t−n+2}) b(c_t, . . . , c_{t−n+2}) / Σ_{c_t, . . . , c_{t−n+2}} f(c_t, . . . , c_{t−n+2}) b(c_t, . . . , c_{t−n+2}).
This can then be plugged directly into the EM-update equations from Section 2.1. Note that when
we cache the forward pass from previous character predictions, only a single step of both the forward
and backward pass has to be executed to spell a new character.
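For illustration, here is a sketch of the forward-backward computation for the simplest non-trivial case, a bigram model (n = 2), where the recursion state is a single character; for higher-order models the state becomes the tuple (c_t, . . . , c_{t−n+2}) exactly as in the equations above. The inputs emis[t, c] = p(X_t | c_t = c), trans[i, j] = p(c_t = j | c_{t−1} = i) and prior[c] = p(c_1) are assumed given.

```python
import numpy as np

def char_posteriors(emis, trans, prior):
    """p(c_t | X_1, ..., X_T) for all t under a bigram language model."""
    T, C = emis.shape
    f = np.zeros((T, C))
    b = np.ones((T, C))
    f[0] = prior * emis[0]
    for t in range(1, T):
        f[t] = emis[t] * (f[t - 1] @ trans)            # forward recursion
    for t in range(T - 2, -1, -1):
        b[t] = trans @ (emis[t + 1] * b[t + 1])        # backward recursion
    post = f * b
    return post / post.sum(axis=1, keepdims=True)
```

When spelling online, caching f means only one new forward step (plus a short backward pass) is needed per character, which is exactly the property exploited in the text above.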
3 Experiments and Discussion
3.1 The Akimpech Dataset
We performed our experiments on the public Akimpech P300 database [19]. This dataset covers 22
subjects1 who spelled Spanish words. The data was recorded with a 16 channel g.tec gUSBamp EEG
amplifier at 256 Hz but only 10 channels are available in the dataset. The recording was performed
with the BCI2000 P300 speller software [14] with the following settings: a 2 second pause before
and after each character, 62.5ms between the intensifications, these intensifications lasted 125ms
each and the spelling matrix contained the characters [a ? z1 ? 9 ]. The dataset comprises both a
train and a test set. The train set contains 16 characters with 15 epochs per character. This train
set will not be used by the unsupervised classifiers but only by the supervised classifier which we
will later use for comparison. The number of characters in the test is subject dependent and ranges
from 17 to 29 with an average of 22.18. This limited number of characters per sequence is very
challenging for our unsupervised classifier, since the spelling has to be as correct as possible right
from the start, in order to obtain high average accuracies.
As the pre-processing in [9] has been shown to lead to good spelling performance, we adhere to
their approach. The EEG is preprocessed one character at a time; as a consequence this approach is
valid in real online experiments2 . Pre-processing begins by applying a Common Average Reference
filter, followed by a bandpass filter (0.5 Hz - 15 Hz). Afterwards, each EEG channel is normalized
such that it has zero mean and unit variance. The final step is sub-sampling the EEG by a factor 6
and retaining 10 samples which are centered at 300 ms after stimulus presentation.
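A sketch of this per-character pipeline, assuming eeg holds one character's recording with shape (channels, samples) at 256 Hz; the Butterworth filter order below is our assumption, as the text only specifies the 0.5-15 Hz band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=256.0):
    eeg = eeg - eeg.mean(axis=0, keepdims=True)              # common average reference
    b, a = butter(4, [0.5 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    eeg = filtfilt(b, a, eeg, axis=1)                        # 0.5-15 Hz bandpass
    eeg = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
    return eeg[:, ::6]                                       # sub-sample by a factor 6

def features(eeg_ds, onset, fs_ds=256 // 6):
    """10 samples centered at 300 ms after a stimulus onset (in sub-sampled units)."""
    center = onset + int(0.3 * fs_ds)
    return eeg_ds[:, center - 5:center + 5].ravel()
```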
3.2 Training the Language models and Spelling Real Text
The Akimpech dataset was recorded using a limited number of Spanish words with an unrepresentative subset of characters. It is therefore not an accurate representation of how a realistic speller would
be used. To alleviate this, we constructed a dataset which contains words that would be spelled in a
realistic context. This is done by re-synthesizing a dataset using the EEG from the Akimpech dataset
and sentences from the English Wikipedia dataset from Sutskever et al. [16].
In a P300 speller, a look-up table assigns a specific character to each position in the on-screen matrix.
The actual task is to determine the position that, when intensified, evokes the P300 response. To
spell a symbol, we predict the desired position, then we look up the symbol assigned to it. Thus, in
a standard P300 setup the desired text can be modified by altering the look-up table. Furthermore,
this will not influence the performance as long as the desired symbol is assigned to a single position.
This approach remains valid when language models are integrated into the classifier, because neither
the EEG nor the intensification structure is modified.
The Wikipedia dataset was transformed to lowercase and we used the first 5 × 10^8 characters in the
dataset to select the 36 most frequently occurring characters, excluding numeric symbols. We argue
that using a subset of numbers is of no use, and since we add the space as a symbol, we have to drop
at least one numeric symbol. As such, it makes sense to replace all the numeric characters with other
symbols. The selected characters are the letters a−z, the underscore (which signifies whitespace),
and nine frequent punctuation symbols, including : % ( ) ? . and the comma. This set of characters
is then used to train unigram, bigram and trigram letter models. These language models were trained
on the first 5 × 10^8 characters and we applied Witten-Bell smoothing [4], which assigns small but
non-zero probabilities to n-grams not encountered in the train set.
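A compact sketch of training such a character model with Witten-Bell smoothing, in its usual interpolated form p(c | h) = (N(h, c) + T(h) p(c | h')) / (N(h) + T(h)), where N(h, c) is the count of character c after history h, T(h) is the number of distinct characters observed after h, and h' drops the oldest character; the uniform distribution over the vocabulary serves as the base case.

```python
from collections import Counter, defaultdict

def train_witten_bell(text, n=3):
    counts = defaultdict(Counter)                  # counts[h][c] = N(h, c)
    for k in range(n):                             # histories of length 0 .. n-1
        for i in range(k, len(text)):
            counts[text[i - k:i]][text[i]] += 1
    vocab = sorted(set(text))

    def prob(c, h):
        h = h[-(n - 1):] if n > 1 else ""
        while h and h not in counts:               # unseen history: back off
            h = h[1:]
        lower = prob(c, h[1:]) if h else 1.0 / len(vocab)   # recursive interpolation
        cnts = counts[h]
        N, T = sum(cnts.values()), len(cnts)
        return (cnts[c] + T * lower) / (N + T)

    return prob

p = train_witten_bell("the quick brown fox jumps over the lazy dog ", n=3)
print(p("h", "t"))                                 # p(next char = 'h' | history 't')
```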
The remaining part of the Wikipedia dataset was used to generate target texts for classifier evaluation.
This part was not used to train the language models. First we dropped the non-selected characters.
Then for each subject, we sampled new texts, with the same length as the originally spelled text,
1 There are more subjects listed on the website but some files are corrupt.
2 This claim was empirically verified; we omit the discussion of these experiments due to page constraints.
from the dataset. Additionally, we modified the contents of the character grid such that it contains
the 36 selected symbols. The look-up table for the individual spelling actions was changed such that
the correct solution is the newly sampled text. This is implemented by taking the base look-up table,
over the 36 selected symbols, with 'a' in the top left and the last symbol of the set in the bottom right
corner of the screen, and cyclically shifting it.
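A sketch of this re-targeting; the symbol ordering below is only a stand-in, since the paper's exact 36-symbol inventory is not fully recoverable from the extracted text.

```python
# stand-in for the 36-symbol grid, 'a' top left, last symbol bottom right (row-major)
BASE = [chr(ord("a") + i) for i in range(26)] + list(":%()'?.,_-")

def shifted_table(shift):
    """Cyclically shift the row-major look-up table by `shift` positions."""
    return BASE[shift:] + BASE[:shift]

def spelled_symbol(predicted_position, shift):
    """The symbol produced when the classifier predicts a grid position."""
    return shifted_table(shift)[predicted_position]
```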
3.3 Experimental setup
We tested 12 different classifiers, where we use the following code to name the classifiers. The first
letter indicates how the classifier is initialized, either randomly (R) or using subject Transfer (T).
The second letter indicates whether the classifiers adapts unsupervisedly during the spelling session
(A) or is static (S). We compared the standard unsupervised (and adaptive) algorithm (RA) which
is randomly initialized, our proposed transfer learning approach without online adaptation (TS) and
the transfer learning approach with adaptation (TA). These three different setups were tested without
a language model, and with a uni-, bi- and trigram language model. We will indicate the language
model by appending the subscript "∅" for the classifier without a language model, "uni" for unigram,
etcetera. For example, TAtri is the unsupervised classifier which uses transfer learning, learns on
the fly and includes a trigram language model. The classifier RA∅ is the baseline which we want to
improve on.
The influence of performance fluctuations caused by the initialization or desired text is minimized
as follows. We executed 20 experiments per subject, where in each experiment all the classifiers
are evaluated. The desired text and classifier initializations are experiment specific. This means
that for each subject we have 20 desired texts, 20 random initializations and 20 subject transfer
initializations. Each classifier was evaluated on all of these texts, where for each text we always
used the same initialization. Additionally, we repeated the experiments with 3, 4, 5, 10 and 15
epochs per character.
The randomly initialized adaptive procedures work as in [9]. In short, the classifier first receives the
EEG for the next character. The EEG is added to the unsupervised trainset and 3 EM iterations are
executed3 . Next, the desired symbol is predicted with the updated classifier.
In the case of transfer learning the initializations are computed as discussed in section 2.2. The initial
classifiers used in the transfer learning process itself were trained unsupervisedly and offline without
a language model. For each subject, we drew 5 samples for w and trained 2 classifiers per draw: one
with w and one with ?w such that at least one is above chance level for the binary P300 detection.
From the resulting 10 classifiers we selected the one which has the highest log likelihood, to be used
in transfer learning. Finally, the current test subject is omitted when computing the transfer learning
parameters. In short: the transfer learning parameters are computed without seeing labeled data and
more importantly without seeing any data from the current subject.
We conclude this section by discussing the time complexity of the methods. The use of transfer
learning does not increase the time needed to predict a character. However the time needed per EM
iteration scales linearly with the number of characters in the trainset. The addition of n-gram language models scales the time per E-step with (number of characters in grid)^(n−1). Therefore character prediction can become very time consuming. As this is a major issue in this real-time application,
we will also discuss the setting where the classifier is first used to spell the next character and the EM
updates are executed during the intensifications for the following symbol. As mentioned in Section
2.3, only a single step in the forward and backward pass is needed to spell the next character. Thus
we can state that this approach yields instantaneous spelling. This classifier will be named TA*.
3.4 Results
We will start the discussion of the results with the baseline method RA∅, followed by the evaluation
of our contributions. An overview of the averaged results of all online experiments is available in
Figure 2. In Figure 3 we show the performance on the test set after the classifiers have processed
the test set and adapted to it, if possible. When we retest the adapted classifier we will denote this
by appending "-R" to its name.
3 This is a trade-off between classifier update time and performance.
[Figure 2 panels: (a) 3 epochs, (b) 4 epochs, (c) 5 epochs, (d) 10 epochs, (e) 15 epochs; bars for RA, TS and TA with language models ∅, uni, bi, tri; vertical axis: spelling accuracy (0-100%).]
Figure 2: Overview of all spelling results from online experiments. Increasing the number of epochs
or adding complex language models improves accuracy. Furthermore, transfer learning without
adaptation (TS) outperforms learning from scratch (RA). Adding adaptation to the transfer learning
improves the results even further (TA).
[Figure 3 panels: (a) 3 epochs, (b) 4 epochs, (c) 5 epochs, (d) 10 epochs, (e) 15 epochs; bars for RA-R, TS and TA-R with language models ∅, uni, bi, tri; vertical axis: spelling accuracy (0-100%).]
Figure 3: Spelling accuracy when the test set is processed online and the classifiers are re-evaluated
afterwards. In Figure 2 we saw that the TS approach outperformed the RA range of classifiers. Here
we see that TA-R and RA-R outperform TS even with few epochs. It is also clear that the adaptive
classifiers are able to correct mistakes they made initially.
Application of the baseline method RA∅ and averaging the results over the different subjects results
in an online spelling accuracy starting at 24.6% for 3 epochs and up to 82.1% for 15 epochs. The
result with 15 epochs is usable in practice and predicts only 4 characters incorrectly. However,
the spelling time is about half a minute per character. Retesting the classifiers obtained after the
online experiment gives the following results: when 3 epochs are used the final classifier is able to
spell 60.5% correctly, for 15 epochs this becomes 94.6%. This corroborates the findings from the
original paper [9] that the classifiers need the warm-up period before they start to produce correct
predictions.
By evaluating the addition of a language model, RAuni,bi,tri , we see an improvement of the online
results. The longer the time dependency in the language model, the bigger the improvement. As
more repetitions are used per character, the performance gain of the language models diminishes.
For 3 repetitions, a tri-gram model produces an online spelling accuracy of 43.5% compared to
24.6% without a language model. The results for 15 repetitions show that on average 3 characters
are predicted incorrectly when a trigram is used. Analysis of the re-evaluation of the classifiers after
online processing shows a smaller improvement to the results, indicating that the language model
mainly helps to reduce the warm up period.
Next we consider the influence of transfer learning. We begin by evaluating the TS classifiers, which
do not use unsupervised adaptation. Overall, TS classifiers outperform the RA range, even when the
latter uses a trigram model. However, the post-test reevaluation shows that the RA methods are
able to outperform TS. In essence: given enough data, the adaptive method has the ability to learn
a better model than the transfer learning approach. Addition of the language models to the TS
classifier shows a secondary improvement, as is to be expected.
This brings us to the full model TA: adaptive unsupervised training which is initialized with transfer
learning and optionally makes use of language models. Figures 2 and 3 indeed confirm that these
Table 1: Comparison between different classifiers. The BLDA classifiers are subject-specific and
supervisedly trained. BLDA*tri was trained using 3 epochs. The basic RA∅ and the full model TAtri
are included. Furthermore we give results for an adapted version TA*tri, which spells the character
before the EM updates, and for TA-Rtri, which is the re-evaluation of TAtri after processing the test
set.

Epochs   RA∅    TA*tri   TAtri   TA-Rtri   BLDA   BLDAtri   BLDA*tri
3        24.6   73.8     74.8    83.5      74.5   89.4      78.9
4        42.2   82.1     83.0    91.0      82.2   93.0      83.8
5        58.6   87.0     87.8    94.4      84.9   94.6      86.5
10       78.4   95.0     95.5    98.5      93.0   97.4      92.5
15       82.1   97.9     98.4    99.5      96.7   98.1      94.3
models produce the best results both in the online test and in the re-evaluation afterwards, when we
consider unsupervised spelling. Also, the trigram classifier produces the best results, which is not
surprising given the incorporation of important prior language knowledge into the model.
Next, we give an overview of spelling accuracies in Table 1, where we compare the basic unsupervised method RA∅ to the full model TAtri. With nearly three times as accurate spelling for 3 epochs
(74.8% compared to 24.6%) and near perfect spelling for 15 epochs, we can conclude that the full
model is capable of instant spelling for a novel subject. The application of TA*tri results in a minute
performance drop, but as this classifier spells the character before performing the EM iterations, it
allows for real-time spelling when the EEG is received and is therefore of more use in an online
setting.
To conclude, we compare the unsupervised methods with BLDA, which is the supervised counterpart of the RA∅ classifier. The BLDA classifiers in this table are supervisedly trained using 15
epochs per character on 16 characters. This is slightly over 10 minutes of training before one can
start spelling. The BLDA*tri classifier used a limited training set with only 3 epochs per character
or almost three minutes of training. When the limited training set is used, we see that our proposed method produces results which are competitive for 3-5 epochs and better for 10 and 15. The
BLDAtri model outperforms our method when we consider a low number of repetitions per character but not for 10 or 15 epochs. From 4 epochs onwards we can see that the re-evaluated classifier
after online learning (TA-Rtri ) is able to learn models which are as good as supervisedly trained
models. Finally we would like to point out that even for just 3 epochs per character, our proposed
method spelled less characters wrongly (about 6 on average) than the number of characters used
during the supervised training (16 for each subject).
4 Conclusion
In this work we set out to build a P300 based BCI which is able to produce accurate spelling for
a novel subject without any form of training session. This is made possible by incorporating both
inter-subject information and language models directly into an unsupervised classifier. This yields
a coherent probabilistic model which quickly adapts to unseen subjects, by exploiting several forms
of prior information. This contrasts with all supervised methods, which need a time-consuming training session. There are only a few other unsupervised approaches for P300 spelling, but they need a
warm-up period during which the speller is unreliable or they need labeled data to initialize the adaptive spellers. We compared our method to the original unsupervised speller proposed in [9] and have
shown that unlike theirs, our approach works instantly. Furthermore, our final experiments demonstrated that the proposed method can compete with state of the art subject-specific and supervisedly
trained classifiers [7], even when incorporating a language model.
Acknowledgments
This work was partially funded by the Ghent University Special Research Fund under the BOF-GOA
project Home-MATE.
References
[1] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1 edition, 2007.
[2] B. Blankertz, K.-R. Muller, G. Curio, T.M. Vaughan, G. Schalk, J.R. Wolpaw, A. Schlogl, C. Neuper, G. Pfurtscheller, T. Hinterberger, M. Schroder, and N. Birbaumer. The BCI competition 2003: progress and perspectives in detection and discrimination of EEG single trials. IEEE Trans. on Biomedical Engineering, 51(6):1044-1051, June 2004.
[3] B. Blankertz, K.-R. Muller, D.J. Krusienski, G. Schalk, J.R. Wolpaw, A. Schlogl, G. Pfurtscheller, J.d.R. Millan, M. Schroder, and N. Birbaumer. The BCI competition III: validating alternative approaches to actual BCI problems. IEEE Trans. on Neural Systems and Rehabilitation Engineering, 14(2):153-159, June 2006.
[4] S.F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359-393, 1999.
[5] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38, 1977.
[6] L.A. Farwell and E. Donchin. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6):510-523, 1988.
[7] U. Hoffmann, J.-M. Vesin, T. Ebrahimi, and K. Diserens. An efficient P300-based brain-computer interface for disabled subjects. Journal of Neuroscience Methods, 167(1):115-125, 2008.
[8] C. Kemp, A. Perfors, and J.B. Tenenbaum. Learning overhypotheses with hierarchical bayesian models. Developmental Science, 10(3):307-321, 2007.
[9] P.-J. Kindermans, D. Verstraeten, and B. Schrauwen. A bayesian model for exploiting application constraints to enable unsupervised training of a P300-based BCI. PLoS ONE, 7(4):e33758, 04 2012.
[10] Y. Li, C. Guan, H. Li, and Z. Chin. A self-training semi-supervised SVM algorithm and its application in an EEG-based brain computer interface speller system. Pattern Recognition Letters, 29(9):1285-1294, 2008.
[11] S. Lu, C. Guan, and H. Zhang. Unsupervised brain computer interface based on intersubject information and online adaptation. IEEE Trans. on Neural Systems and Rehabilitation Engineering, 17(2):135-145, 2009.
[12] N.V. Manyakov, N. Chumerin, A. Combaz, and M.M. Van Hulle. Comparison of linear classification methods for P300 brain-computer interface on disabled subjects. BIOSIGNALS, Rome, Italy, pages 328-334, 2011.
[13] R.C. Panicker, S. Puthusserypady, and Ying S. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach. IEEE Trans. on Biomedical Engineering, 57(12):2927-2935, December 2010.
[14] G. Schalk, D. J. Mcfarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Trans. on Biomedical Engineering, 51:2004, 2004.
[15] W. Speier, C. Arnold, J. Lu, R. K. Taira, and N. Pouratian. Natural language processing with dynamic classification improves P300 speller accuracy and bit rate. Journal of Neural Engineering, 9(1):016004, 2012.
[16] I. Sutskever, J. Martens, and G. Hinton. Generating text with recurrent neural networks. In International Conference on Machine Learning (ICML), 2011.
[17] J. J. Vidal. Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering, 2(1):157-180, 1973.
[18] Z. Wang, G. Schalk, and Q. Ji. Anatomically constrained decoding of finger flexion from electrocorticographic signals. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2070-2078. 2011.
[19] O. Yanez-Suarez, L. Bougrain, C. Saavedra, E. Bojorges, and G. Gentiletti. P300-speller public-domain database, 5 2012.
Delay Compensation with Dynamical Synapses
C. C. Alan Fung, K. Y. Michael Wong
Hong Kong University of Science and Technology, Hong Kong, China
[email protected], [email protected]
Si Wu
State Key Laboratory of Cognitive Neuroscience and Learning,
Beijing Normal University, Beijing 100875, China
[email protected]
Abstract
Time delay is pervasive in neural information processing. To achieve real-time
tracking, it is critical to compensate the transmission and processing delays in a
neural system. In the present study we show that dynamical synapses with short-term depression can enhance the mobility of a continuous attractor network to the
extent that the system tracks time-varying stimuli in a timely manner. The state
of the network can either track the instantaneous position of a moving stimulus
perfectly (with zero-lag) or lead it with an effectively constant time, in agreement
with experiments on the head-direction systems in rodents. The parameter regions
for delayed, perfect and anticipative tracking correspond to network states that are
static, ready-to-move and spontaneously moving, respectively, demonstrating the
strong correlation between tracking performance and the intrinsic dynamics of the
network. We also find that when the speed of the stimulus coincides with the
natural speed of the network state, the delay becomes effectively independent of
the stimulus amplitude.
1 Introduction
Time delay is pervasive in neural information processing. Its occurrence is due to the time for signals
to transmit in the neural pathways, e.g., 50-80 ms for electrical signals to propagate from the retina
to the primary visual cortex [13], and the time for neurons responding to inputs, which is in the
order of 10-20 ms. Delay is also inevitable for neural information processing. For a neural system
carrying out computations in the temporal domain, such as speech recognition and motor control,
input information needs to be integrated over time, which necessarily incur delays.
To achieve real-time tracking of fast moving objects, it is critical for a neural system to compensate
for the delay; otherwise, the object position perceived by the neural system will lag behind the
true object position considerably. A natural way to compensate for delays is to predict the future
position of the moving stimulus. Experimental findings suggested that delay compensations are
widely adopted in neural systems. A remarkable example is the head-direction (HD) systems in
rodents, which encode the head direction of a rodent in the horizontal plane relative to a static
environment [14, 17]. It was found that when the head of a rodent is moving continuously in space,
the direction perceived by the HD neurons in the postsubicular cortex has nearly zero-lag with
respect to the instantaneous position of the rodent head [18]. More interestingly, in the anterior
dorsal thalamic nucleus, the HD neurons perceive the future direction of the rodent head, leading the
current position by a constant time [3]. A similar anticipative behavior is also observed in the eye-position neurons when animals make saccadic eye movements, the so-called saccadic remapping [16].
In human psychophysical experiments, the classic flash-lag effect also supports the notion of delay
[Figure 1 plots omitted: (a) profiles of u(x,t) and I^ext(x,t) versus x − z(t); (b) centers of mass z(t), z0(t) versus t/τ_s]
Figure 1: (a) Profiles of u(x, t) and I^ext(x, t) in the absence of STD, where the center of mass of the stimulus is moving with constant velocity v = 0.02a/τ_s. As shown, the profile of u(x, t) is almost Gaussian. (b) The centers of mass of u(x, t) and I^ext(x, t) as functions of time. Parameters: ρ = 128/(2π), a = 0.5, J0 = √(2π)a and ρJ0A = 1.0.
compensation [12]. In the experiment, a flash is perceived to lag behind a moving object, even
though they are physically aligned. The underlying cause is that the visual system predicts the
future position of the continuously moving object, but is unable to do so for the unpredictable flash.
Depending on the available information, the brain may employ different strategies for delay compensation. In the case of self-motion, such as an animal rotating its head actively or performing
saccadic eye movements, the motor command responsible for the motion can serve as a cue for
delay compensation. It was suggested that an efference copy of the motor command, called corollary discharge, is sent to the corresponding internal representation system prior to the motion [18].
For the head rotation, the advanced time can be up to 20 ms; for the saccadic eye movement, the
advanced time is about 70 ms. In the case of tracking an external moving stimulus, the neural system has to rely on the moving speed of the stimulus for prediction. Asymmetric neural interactions
have been proposed to drive the network states to catch up with changes in head directions [22] or
positions [4]. These may be achieved by the so-called conjunctive cells projecting neural signals
between successive modules in forward directions [10]. To explain the flash-lag effect, Nijhawan et
al. proposed a dynamical routing mechanism to compensate the transmission delay in the visual system, in which retinal neurons dynamically choose a pathway according to the speed of the stimulus,
and transmit the signal directly to the future position in the cortex [13].
In this study we propose a novel mechanism of how a neural system compensates for the processing
delay. By the processing delay, we mean the time consumed by a neural system in response to
external inputs. The proposed mechanism does not require corollary discharge, or efforts of choosing
signal pathways, or specific network structures such as asymmetric interactions or conjunctive cells.
It is based on the short-term depression (STD) of synapses, the inherent and ubiquitous nature that
the synaptic efficacy of a neuron is reduced after firing due to the depletion of neurotransmitters [11].
It has been found that STD enhances the mobility of the states of neural networks [21, 9, 6]. The
underlying mechanism is that neurotransmitters become depleted in the active region of the network
states compared with the neighboring regions, thus increasing the likelihood of the locally active
network state to shift to its neighboring positions when it is tracking a continuously shifting stimulus.
When STD is sufficiently strong, the tracking state of the network can even overtake the moving
stimulus, demonstrating its potential for generating predictions.
2 The Model
We consider continuous attractor neural networks (CANNs) as the internal representation models
for continuous stimuli [7, 2, 15]. A CANN holds a continuous family of bump-shaped stationary
states, which form a subspace in which the neural system is neutrally stable [20]. This property
endows the neural system the capacity of tracking time-varying stimuli smoothly.
Consider a continuous stimulus x being encoded by a neural ensemble. The variable x may represent
the orientation, the head direction, or the spatial location of an object. Neurons with preferred stimuli
x produce the maximum response when an external stimulus is present at x. Their preferred stimuli are uniformly distributed in the space −∞ < x < ∞. In the continuum limit, the dynamics of
the neural ensemble can be described by a CANN. We denote as u(x, t) the population-averaged
synaptic current to the neurons at position x and time t. The dynamics of u(x, t) is determined
by the external input, the lateral interactions among the neurons, and its relaxation towards zero
response. It is given by
τ_s ∂u(x, t)/∂t = I^ext(x, t) + ρ ∫ dx′ J(x, x′) p(x′, t) r(x′, t) − u(x, t),    (1)
where τ_s is the synaptic time constant, which is typically in the order of 1 to 5 ms, I^ext(x, t) the external input, ρ the density of neurons, J(x, x′) the coupling between neurons at x and x′, and
r(x, t) is the firing rate of the neurons. The variable p(x, t) represents the fraction of available
neurotransmitters, which evolves according to [6, 19]
τ_d ∂p(x, t)/∂t = 1 − p(x, t) − τ_d β p(x, t) r(x, t),    (2)
where τ_d is the STD time scale, which is typically of the order of 10² ms. In this work, we choose τ_d = 50τ_s. The STD effect is controlled by the parameter β, which can be considered as the fraction of total neurotransmitters consumed per spike.
The actual forms of J(x, x′) and r(x, t) depend on the details of the neural dynamics. Here, for the convenience of analysis, we choose them to be
J(x, x′) = (J0/(a√(2π))) exp[−(x − x′)²/(2a²)],    (3)
r(x, t) = Θ[u(x, t)] u(x, t)² / (1 + kρ ∫ dx′ u(x′, t)²),    (4)
where J0 and a control the magnitude and range of the neuronal excitatory interactions respectively. J(x, x′) is translationally invariant in the space x, since it is a function of (x − x′), which is essential for the network state to be neutrally stable. In the expression for the firing rate, Θ is the step function. Here, the stabilizing effect of inhibitory interactions is achieved by the divisive normalization operation in Eq. (4).
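To make the dynamics concrete, the sketch below (our own construction, not code from the paper) integrates Eqs. (1)-(4) with a forward Euler scheme on a periodic ring while a Gaussian input sweeps at constant speed, and reports the steady displacement between the bump and the stimulus. The grid size, time step, initial amplitude, and the conversions from the rescaled parameters k̃, β̃ and Ã (defined in Section 3) to the bare ones are assumptions on our part.

import numpy as np

# Minimal sketch (ours) of Eqs. (1)-(4): a CANN with STD on a periodic ring,
# tracking a Gaussian stimulus moving at constant speed.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = N / (2 * np.pi)                       # neuron density
a = 0.5
J0 = np.sqrt(2 * np.pi) * a
tau_s, tau_d = 1.0, 50.0                    # time in units of tau_s
k_t, b_t, A_t = 0.4, 0.0035, 1.8            # rescaled k~, beta~, A~ (cf. Fig. 2)
k = k_t * rho * J0**2 / (8 * np.sqrt(2 * np.pi) * a)   # k = k~ * k_c (assumed)
beta = b_t * (rho * J0)**2 / tau_d          # from beta~ = tau_d beta/(rho J0)^2
A = A_t / (rho * J0)                        # from A~ = rho J0 A
v = 0.02 * a / tau_s                        # stimulus speed

def wrap(d):                                # periodic distance on the ring
    return (d + np.pi) % (2 * np.pi) - np.pi

J = J0 / (a * np.sqrt(2 * np.pi)) * np.exp(-wrap(x[:, None] - x)**2 / (2 * a**2))

u = 2.0 / (rho * J0) * np.exp(-x**2 / (4 * a**2))      # initial bump (arbitrary)
p = np.ones(N)
z0, dt = 0.0, 0.05
for _ in range(40000):
    up = np.maximum(u, 0.0)
    r = up**2 / (1.0 + k * rho * dx * np.sum(up**2))   # Eq. (4)
    I = A * np.exp(-wrap(x - z0)**2 / (4 * a**2))      # moving input
    u += dt / tau_s * (I + rho * dx * (J @ (p * r)) - u)   # Eq. (1)
    p += dt / tau_d * (1.0 - p - tau_d * beta * p * r)     # Eq. (2)
    z0 = wrap(z0 + dt * v)

z = np.angle(np.sum(r * np.exp(1j * x)))    # bump center via circular mean
print("displacement s =", wrap(z - z0))     # s < 0: bump lags; s > 0: bump leads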
Let us consider first the case without STD by setting β = 0. Hence, p(x, t) = 1 in Eq. (1). For k ≤ k_c ≡ ρJ0²/(8√(2π)a), the network holds a continuous family of Gaussian-shaped stationary states when I^ext(x, t) = 0. These stationary states are
ũ(x) = ũ0 exp[−(x − z)²/(4a²)],    (5)
where ũ is the rescaled variable ũ ≡ ρJ0u, and ũ0 is the rescaled bump height. The parameter z,
i.e., the center of the bump, is a free parameter, implying that the stationary state of the network can
be located anywhere in the space x.
Next, we consider the case that the network receives a moving input,
I^ext(x, t) = A exp[−(x − z0(t))²/(4a²)],    (6)
where A is the magnitude of the input and z0 the stimulus position.
Without loss of generality, we consider the stimulus position at time t = 0 to be z0 = 0, and the
stimulus moves at a constant speed thereafter, i.e., z0 = vt for t ≥ 0. Let s ≡ z(t) − z0(t) be the displacement between the network state and the stimulus position. It has been shown that without STD, the steady value of the displacement is determined by [5]
v = −(As/τ_s) exp(−s²/(8a²)).    (7)
Note that s has the opposite sign of v, implying that the network state always trails behind the
stimulus (see Fig. 1(a)). This is due to the response delay of the network relative to the input.
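As a quick numerical illustration (ours; it assumes the reconstruction of Eq. (7) given above, and the parameter values are arbitrary examples), the lag s corresponding to a given speed v can be obtained by root finding:

import numpy as np
from scipy.optimize import brentq

# Solve Eq. (7), v = -(A s / tau_s) exp(-s^2/(8 a^2)), for the lag s.
A, a, tau_s = 1.8, 0.5, 1.0
for v in (0.001, 0.005, 0.02):
    f = lambda s, v=v: -(A * s / tau_s) * np.exp(-s**2 / (8 * a**2)) - v
    s = brentq(f, -2.0 * a, 0.0)            # s and v have opposite signs
    print(f"v = {v:.3f}  ->  s = {s:+.4f}")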
3 Tracking in the Presence of STD
The analysis of tracking in the presence of STD is more involved. Motivated by the nearly Gaussian-shaped profile of the network states, we adopt a perturbation approach to solve the network dynamics [5]. The key idea is to expand the network states as linear combinations of a set of orthonormal
basis functions corresponding to different distortion modes of the bump, that is,
u(x, t) = Σ_n u_n(t) φ_n(x − z),    (8)
1 − p(x, t) = Σ_n p_n(t) ψ_n(x − z),    (9)
where the basis functions are
φ_n(x − z) = [√(2π) a 2ⁿ n!]^(−1/2) H_n((x − z)/(√2 a)) exp[−(x − z)²/(4a²)],    (10)
ψ_n(x − z) = [√π a 2ⁿ n!]^(−1/2) H_n((x − z)/a) exp[−(x − z)²/(2a²)].    (11)
Here, H_n is the nth-order Hermite polynomial function. φ_n(x − z) and ψ_n(x − z) have clear physical meanings. For instance, for n = 1, 2, 3, 4, they correspond to, respectively, the height, the position, the width and the skewness changes of the Gaussian bump. Depending on the approximation precision, we can take the above expansions up to a proper order, and substitute them into Eqs. (1) and (2) to solve the network dynamics analytically.
Results obtained from the 11th order perturbation are shown in Fig. 2(a) for three representative cases. They depend on the rescaled inhibition k̃ ≡ k/k_c and the rescaled STD strength β̃ ≡ τ_d β/(ρ²J0²). When STD is weak, the tracking state lags behind the stimulus. When the STD strength increases to a critical value β̃_perfect, s becomes effectively zero in a rather broad range of stimulus velocity, achieving perfect tracking. When the STD strength is above the critical value, the tracking state leads the stimulus.
Hence delay compensation in a tracking task can be implemented at two different levels. The first one is perfect tracking, in which the tracking state has zero lag with respect to the true stimulus position, independent of the stimulus speed. The second one is anticipative tracking, in which the tracking state leads by a constant time τ_ant relative to the stimulus position; that is, the tracking state is at the position the stimulus will travel to at a later time τ_ant. Achieving a constant anticipation time requires the leading displacement to increase proportionally with the stimulus velocity, i.e., s = vτ_ant. Both forms of delay compensation have been observed in the head-direction systems of
rodents, and may serve different functional purposes.
3.1 Perfect Tracking
To analyze the parameter regime for perfect tracking, it is instructive to consider the 1st order perturbation of the network dynamics, i.e.,
u[x − z(t)] = u0(t) exp[−(x − z(t))²/(4a²)],    (12)
p[x − z(t)] = 1 − p0(t) exp[−(x − z(t))²/(2a²)] + p1(t) ((x − z(t))/a) exp[−(x − z(t))²/(2a²)].    (13)
[Figure 2 plots omitted: (a) s/a versus τ_d v/a for k̃ = 0.3, 0.4, 0.5, 0.6, 0.7 and β̃ = 0, 0.0035, 0.022; (b) β̃_perfect versus k̃, with an inset of β̃_perfect versus Ã]
Figure 2: (a) The dependence of the displacement between the bump and the stimulus on the velocity of the moving stimulus for different values of β̃. Parameters: k̃ = 0.4 and Ã = 1.8. (b) The dependence of β̃_perfect on k̃ with Ã = 1.0. Symbols: simulations. Solid line: the predicted curve of β̃_perfect. Dashed line: the boundary separating the static and metastatic phases according to the 1st order perturbation [6]. Inset: the dependence of β̃_perfect on Ã. Symbols: simulations. Lines: theoretical prediction according to the 1st order perturbation.
Substituting them into Eqs. (1) and (2) and utilizing the orthogonality of the basis functions, we get
(see Supplementary Material)
τ_s dũ0/dt = (1 − p0√(4/7)) ũ0²/B − ũ0 + Ã e^(−(vt−z)²/(8a²)),    (14)
(τ_s/(2a)) dz/dt = (ũ0/B)(2/7)^(3/2) p1 + (Ã/(√(2π) ũ0)) ((vt − z)/a) e^(−(vt−z)²/(8a²)),    (15)
τ_s dp0/dt = (τ_s/τ_d)[(β̃ũ0²/B)√(2/3)(1 − p0) − p0] − (τ_s p1/(2a)) dz/dt,    (16)
(τ_s/p0) dp1/dt = −(τ_s/τ_d)[1 + (β̃ũ0²/B)(2/3)^(3/2)](p1/p0) + (τ_s/a) dz/dt.    (17)
At the steady state, dũ0/dt = dp0/dt = dp1/dt = 0, and dz/dt = v. Furthermore, for sufficiently small displacements, i.e., |s|/a ≪ 1, one can approximate Ã exp[−(vt − z)²/(8a²)] ≈ Ã and Ã[(vt − z)/a] exp[−(vt − z)²/(8a²)] ≈ −Ãs/a.
Solving the above equations, we find that s/a can be expressed in terms of the variables ũ0/Ã, τ_s/τ_d and vτ_d/a. When vτ_d/a ≪ 1, the rescaled displacement s/a can be approximated by a power series expansion of the rescaled velocity vτ_d/a. Since the displacement reverses sign when the velocity reverses, s/a is an odd function of vτ_d/a. This means that s/a ≈ c1(vτ_d/a) + c3(vτ_d/a)³. For perfect tracking in the low velocity limit, we have c1 = 0 and find
s/a = −(C/2)(ũ0/Ã)(τ_s/τ_d)(vτ_d/a)³,    (18)
where C is a parameter less than 1 (the detailed expression can be found in Supplementary Material).
For the network tracking a moving stimulus, the input magnitude cannot be too small. This means that ũ0/Ã is not a large number. Therefore, for tracking speeds up to vτ_d/a ∼ 1, the displacement s is very small and can be regarded as effectively zero (see Fig. 2(a)). The velocity range in which the tracking is effectively perfect is rather broad, since it scales as (τ_d/τ_s)^(1/3) ≫ 1.
Equation (18) is valid when β̃ takes a particular value. This yields an estimate of β̃_perfect in the 1st order perturbation. Its expression is derived in the Supplementary Material and plotted in Fig. 2(b). For reference, we also plot the boundary that separates the metastatic phase above it from the static phase below, as reported in the study of intrinsic properties of CANNs with STD in [6]. In the static phase, the bump is stable at any position, whereas in the metastatic phase, the static bump starts to move spontaneously once it is pushed. Hence we say that the phase boundary is in a ready-to-move state. Fig. 2(b) shows that β̃_perfect is just above the phase boundary. Indeed, when Ã approaches 0, the expression of β̃_perfect reduces to the value of β̃ along the phase boundary for the 1st order perturbation.
[Figure 3 plots omitted: (a) τ_ant/τ_d versus τ_d v/a (top axis: angular velocity in degree/s; right axis: anticipatory time in ms) for (β̃, Ã) = (0.022, 1.8), (0.030, 1.5), (0.030, 1.0); (b) contours τ_ant = 0.2τ_d, 0.1τ_d, 0.02τ_d in the (k̃, β̃) plane, with the static region marked]
Figure 3: (a) The anticipatory time as a function of the speed of the stimulus. Different sets of parameters may correspond to different levels of anticipatory behavior. Parameter: k̃ = 0.4. The numerical scales are estimated from parameters in [8]. (b) The contours of constant anticipatory time in the space of rescaled inhibition k̃ and the rescaled STD strength β̃ in the limit of very small stimulus speed. Dashed line: boundary separating the static and metastatic phases. Dotted line: boundary separating the existence and non-existence phases of bumps. Calculations are done using 11th order perturbation.
The inset of Fig. 2(b) confirms that β̃_perfect does not change significantly with Ã for different values of k̃. This implies that the network with β̃ = β̃_perfect exhibits effectively perfect tracking performance because it is intrinsically in a ready-to-move state.
3.2 Anticipative Tracking
We further explore the network dynamics when the STD strength is higher than that for achieving
perfect tracking. By solving the network dynamics with the perturbation expansion up to the 11th
order, we obtain the relation between the displacement s and the stimulus speed v. The solid curve in
Fig. 2(a) shows that for strong STD, s increases linearly with v over a broad range of v. This implies
that the network achieves a constant anticipatory time τ_ant over a broad range of the stimulus speed.
To gain insights into how the anticipation time depends on the stimulus speed, we consider the
regime of small displacements. In this regime, the rescaled displacement s/a can be approximated by a power series expansion of the rescaled velocity vτ_d/a, leading to s/a = c1(vτ_d/a) + c3(vτ_d/a)³. The coefficients c1 and c3 are determined such that the anticipation time in the limit v = 0 should be τ_ant(0) = s/v, and that s/a reaches a maximum when v = vmax. This yields the
result
s/a = (τ_ant(0)/τ_d) [ (vτ_d/a) − (1/3)(a/(vmax τ_d))² (vτ_d/a)³ ].    (19)
Hence the anticipatory time is given by
τ_ant(v) = τ_ant(0) (1 − v²/(3vmax²)).    (20)
This shows that the anticipation time is effectively constant in a wide range of stimulus velocities, as
shown in Fig. 3(a). Even for v = 0.5vmax , the anticipation time is only reduced from its maximum
by 9%.
The contours of anticipatory times for slowly moving stimuli are shown in Fig. 3(b). Hence the
region of anticipative behavior effectively coincides with the metastatic phase, as indicated by the
region above the phase line (dashed) in Fig. 2(b). In summary, there is a direct correspondence
between delayed, perfect, and anticipative tracking on one hand, and the static, ready-to-move, and
spontaneously moving behaviors on the other. This demonstrates the strong correlation between the
tracking performance and the intrinsic behaviors of the CANN.
We compare the prediction of the model with experimental data. In a typical HD experiment of rodents [8], τ_s = 1 ms, a = 28.5 degree/√2, and the anticipation time drops from 20 ms at v = 0 to 15 ms at v = 360 degree/s. Substituting into Eq. (19) and assuming τ_d = 50τ_s, these parameters yield a slope of 0.41 at the origin and the maximum lead at vmax τ_d/a = 1.03. This result can be compared favorably with the curve of β̃ = 0.022 in Fig. 2(a), where the slope at the origin is 0.45 and the maximum lead is located at vmax τ_d/a = 1.01. Based on these parameters, the lowest curve plotted in Fig. 3(a) is consistent with the real data in Fig. 4 of [8].
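The arithmetic behind these numbers can be reproduced in a few lines (our own check; the input values are the ones quoted above):

import numpy as np

# tau_ant drops from 20 ms (v = 0) to 15 ms (v = 360 deg/s); inverting
# Eq. (20) gives v_max, and the slope of s/a versus (v tau_d / a) at the
# origin is tau_ant(0)/tau_d.
tau0, tau1, v1 = 20e-3, 15e-3, 360.0        # s, s, deg/s
v_max = np.sqrt(v1**2 / (3.0 * (1.0 - tau1 / tau0)))
tau_s = 1e-3
tau_d = 50 * tau_s
a = 28.5 / np.sqrt(2)                       # deg
print("slope at origin ~", tau0 / tau_d)            # ~0.40 (0.41 in the text)
print("v_max * tau_d / a ~", v_max * tau_d / a)     # ~1.03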
[Figure 4 plot omitted: s/a versus τ_d v/a for β̃ = 0.005 and β̃ = 0.010 at Ã = 1, 2, 4, with lines L1, L2, L3]
Figure 4: Confluence points at natural speeds. There are six curves in two groups with different sets of parameters. Curves in one group intersect at the confluence point with the natural speed at the corresponding value of β̃. Symbols: simulations. Thin lines: prediction of the displacement-velocity relation by 11th order perturbation. L1: natural speed at β̃ = 0.005. L2: natural speed at β̃ = 0.01. L3: the line for natural tracking in the high Ã limit. Parameter: k̃ = 0.3.
3.3 Natural Tracking
For strong enough STD, a CANN holds spontaneously moving bump states. The speed of the
spontaneously moving bump is an intrinsic property of the network depending only on the network
parameters. We call this the natural speed of the network, denoted as v_natural. An interesting issue is
the tracking performance of the network when the stimulus is moving at its natural speed.
Two sets of curves corresponding to two values of β̃ are shown in Fig. 4, when the stimulus amplitude Ã is sufficiently strong. The lines L1 and L2 indicate the corresponding natural speeds of the system for these values of β̃. Remarkably, we obtain a confluence point of these curves at the natural speed. This point is referred to as the natural tracking point. It has the important property that the lag is independent of the stimulus amplitude. This independence of s from Ã persists in the asymptotic limit of large Ã. In this limit, s approaches −v_natural τ_s, corresponding to a delay time of τ_s, showing that the response is limited by the synaptic time scale in this limit. This asymptotic limit is described by the line L3 and is identical for all values of k̃ and β̃. Hence the invariant point for natural tracking is given by (v, s) = (v_natural, −v_natural τ_s) for all values of k̃ and β̃.
We also consider natural tracking in the weak Ã limit. Again we find a confluence point of the displacement curves at the natural speed, but the delay time (and in some cases the anticipation time) depends on the value of k̃. For example, at k̃ = 0.3, the natural tracking point traces out an effectively linear curve in the space of v and s when β̃ increases, with a slope equal to 0.8τ_s. This shows that the delay time is 0.8τ_s, effectively independent of β̃ at k̃ = 0.3. Since the delay time is different from the value of τ_s applicable in the strong Ã limit, the natural tracking point is slowly drifting from the weak to the strong Ã limit. However, the magnitude of the natural time delay remains of the order of τ_s. This is confirmed by the analysis of the dynamical equations when the stimulus speed is v_natural + δv in the weak Ã limit.
3.4 Extension to other CANNs
To investigate whether the delay compensation behavior and the prediction of the natural tracking
point are general features of CANN models, we consider a network with Mexican-hat couplings.
We replace J(x, x′) in Eq. (1) by
J^MH(x, x′) = J0 [1 − (1/2)((x − x′)/a)²] exp[−(x − x′)²/(2a²)],    (21)
and r (x, t) in Eqs. (1) and (2) by
r(x, t) = Θ[u(x, t)] u(x, t)² / (1 + u(x, t)²).    (22)
[Figure 5 plots omitted: (a) τ_ant/τ_d versus τ_d v/a for A = 0.3, 0.4, 0.5; (b) v_natural τ_d/a versus β; (c) s/a versus vτ_d/a for A = 0.1, 0.2, 0.3, with line L1]
Figure 5: (a) The dependence of anticipatory time on the stimulus speed in the Mexican-hat model. Parameter: β = 0.003. (b) Natural speed of the network as a function of β. (c) Plot of s against v. There is a confluence point at the natural speed of the system. L1: the natural speed of the system at β = 0.0011. Common parameters: ρ = 128/(2π), J0 = 0.5 and a = 0.5.
Fig. 5 shows that the network exhibits the same behaviors as the model in Eqs. (1) and (2). As
shown in Fig. 5(a), the anticipatory times are effectively constant and similar in magnitude in the
range of stimulus speed comparable to experimental settings. In Fig. 5(b), the natural speed of the
bump is zero for β less than a critical value. As β increases, the natural speed increases from zero.
In Fig. 5(c), the displacement s is plotted as a function of the stimulus speed v. The invariance
of the displacement at the natural speed, independent of the stimulus amplitude, also appears in
the Mexican-hat model. The confluence point of the family of curves is close to the natural speed.
Furthermore, the displacement at the natural tracking point increases with the natural speed.
4 Conclusions
In the present study we have investigated a simple mechanism of how processing delays can be compensated in neural information processing. The mechanism is based on the intrinsic dynamics of a
neural circuit, utilizing the STD property of neuronal synapses. The latter induces translational instability of neural activities in a CANN and enhances the mobility of the network states in response
to external inputs. We found that for strong STD, the neural system can track moving stimuli with
either zero-lag or a lead of a constant time. The conditions for perfect and anticipative tracking hold
for a wide range of stimulus speeds, making them applicable in practice. By choosing biologically
plausible parameters, our model successfully justifies the experimentally observed delay compensation behaviors. We also made an interesting prediction in the network dynamics, that is, when
the speed of the stimulus coincides with the natural speed of the network state, the delay becomes
effectively independent of the stimulus amplitude. We also studied more than one kind of CANN
models to confirm the generality of our results.
Compared with other delay compensation strategies relying on corollary discharge or dynamical
routing, the mechanism we propose here is fully dependent on the intrinsic dynamics of the network,
namely, the network automatically "adjusts" its tracking speed according to the input information. There exist strong correlations between tracking performance and the intrinsic dynamics of the network. The parameter regions for delayed, perfect and anticipative tracking correspond to network states being static, ready-to-move and spontaneously moving, respectively. It has been suggested that the anticipative response of HD neurons in anterior dorsal thalamus is due to the corollary discharge of motor neurons responsible for moving the head. However, experimental studies revealed that when rats were moved passively (and hence no corollary discharge is available), either by hand or by a cart, the anticipative response of HD neurons still exists and has an even larger leading time [1].
Our model provides a possible mechanism to describe this phenomenon.
Acknowledgement
This work is supported by the Research Grants Council of Hong Kong (grant number 605010) and
the National Foundation of Natural Science of China (No.91132702, No.31221003).
References
[1] J. P. Bassett, M. B. Zugaro, G. M. Muir, E. J. Golob, R. U. Muller and J. S. Taube. Passive Movements of the Head Do Not Abolish Anticipatory Firing Properties of Head Direction Cells. J. Neurophysiol. 93, 1304-1316 (2005).
[2] R. Ben-Yishai, R. Lev Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A. 92, 3844-3848 (1995).
[3] H. T. Blair and P. E. Sharp. Anticipatory head direction signals in anterior thalamus: evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. J. Neurosci. 15, 6260-6270 (1995).
[4] M. C. Fuhs and D. S. Touretzky. A Spin Glass Model of Path Integration in Rat Medial Entorhinal Cortex. J. Neurosci. 26, 4266-4276 (2006).
[5] C. C. A. Fung, K. Y. Wong and S. Wu. Moving Bump in a Continuous Manifold: A Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks. Neural Comput. 22, 752-792 (2010).
[6] C. C. A. Fung, K. Y. M. Wong, H. Wang and S. Wu. Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy and Mobility. Neural Comput. 24, 1147-1185 (2012).
[7] A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, and J. T. Massey. Mental rotation of the neuronal population vector. Science 243, 234-236 (1989).
[8] J. P. Goodridge and D. S. Touretzky. Modeling attractor deformation in the rodent head direction system. J. Neurophysiol. 83, 3402-3410 (2000).
[9] Z. P. Kilpatrick and P. C. Bressloff. Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D 239, 547-560 (2010).
[10] B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser and M.-B. Moser. Path integration and the neural basis of the "cognitive map". Nature Rev. Neurosci. 7, 663-678 (2006).
[11] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382, 807-810 (1996).
[12] R. Nijhawan. Motion extrapolation in catching. Nature 370, 256-257 (1994).
[13] R. Nijhawan and S. Wu. Compensating time delays with neural predictions: are predictions sensory or motor? Phil. Trans. R. Soc. A 367, 1063-1078 (2009).
[14] J. O'Keefe and J. Dostrovsky. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat. Brain Res. 34, 171-175 (1971).
[15] A. Samsonovich and B. L. McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. J. Neurosci. 17, 5900-5920 (1997).
[16] M. A. Sommer and R. H. Wurtz. Influence of the thalamus on spatial visual processing in frontal cortex. Nature 444, 374-377 (2006).
[17] J. S. Taube, R. U. Muller and J. B. Ranck Jr. Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 10, 420-435 (1990).
[18] J. S. Taube and R. U. Muller. Comparisons of head direction cell activity in the postsubiculum and anterior thalamus of freely moving rats. Hippocampus 8, 87-108 (1998).
[19] M. Tsodyks, K. Pawelzik, and H. Markram. Neural Networks with Dynamic Synapses. Neural Comput. 10, 821-835 (1998).
[20] S. Wu and S. Amari. Computing with Continuous Attractors: Stability and Online Aspects. Neural Comput. 17, 2215-2239 (2005).
[21] L. C. York and M. C. W. van Rossum. Recurrent networks with short term synaptic depression. J. Comput. Neurosci. 27, 607-620 (2009).
[22] K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J. Neurosci. 16, 2112-2126 (1996).
|
4776 |@word [vw_text bag-of-words token counts omitted]
|
4,172 | 4,777 |
Learning Manifolds with K-Means and K-Flats
Guillermo D. Canas†,‡
Tomaso Poggio†,‡
Lorenzo A. Rosasco†,‡
† Laboratory for Computational and Statistical Learning - MIT-IIT
‡ CBCL, McGovern Institute - Massachusetts Institute of Technology
[email protected] [email protected] [email protected]
Abstract
We study the problem of estimating a manifold from random samples. In particular, we consider piecewise constant and piecewise linear estimators induced by
k-means and k-flats, and analyze their performance. We extend previous results
for k-means in two separate directions. First, we provide new results for k-means
reconstruction on manifolds and, secondly, we prove reconstruction bounds for
higher-order approximation (k-flats), for which no known results were previously
available. While the results for k-means are novel, some of the technical tools are
well-established in the literature. In the case of k-flats, both the results and the
mathematical tools are new.
1 Introduction
Our study is broadly motivated by questions in high-dimensional learning. As is well known, learning in high dimensions is feasible only if the data distribution satisfies suitable prior assumptions.
One such assumption is that the data distribution lies on, or is close to, a low-dimensional set embedded in a high dimensional space, for instance a low dimensional manifold. This latter assumption
has proved to be useful in practice, as well as amenable to theoretical analysis, and it has led to
a significant amount of recent work. Starting from [23, 34, 5], this set of ideas, broadly referred
to as manifold learning, has been applied to a variety of problems from supervised [35] and semisupervised learning [6], to clustering [37] and dimensionality reduction [5], to name a few.
Interestingly, the problem of learning the manifold itself has received less attention: given samples
from a d-manifold M embedded in some ambient space X , the problem is to learn a set that approximates M in a suitable sense. This problem has been considered in computational geometry, but in
a setting in which typically the manifold is a hyper-surface in a low-dimensional space (e.g. R3 ),
and the data are typically not sampled probabilistically, see for instance [26, 24]. The problem of
learning a manifold is also related to that of estimating the support of a distribution, (see [13, 14] for
recent surveys.) In this context, some of the distances considered to measure approximation quality
are the Hausdorff distance, and the so-called excess mass distance.
The reconstruction framework that we consider is related to the work of [1, 32], as well as to the
framework proposed in [30], in which a manifold is approximated by a set, with performance measured by an expected distance to this set. This setting is similar to the problem of dictionary learning
(see for instance [29], and extensive references therein), in which a dictionary is found by minimizing a similar reconstruction error, perhaps with additional constraints on an associated encoding of
the data. Crucially, while the dictionary is learned on the empirical data, the quantity of interest is
the expected reconstruction error, which is the focus of this work.
We analyze this problem by focusing on two important, and widely-used, algorithms, namely k-means and k-flats. The k-means algorithm can be seen to define a piecewise constant approximation
of M. Indeed, it induces a Voronoi decomposition on M, in which each Voronoi region is effectively
approximated by a fixed mean. Given this, a natural extension is to consider higher order approximations, such as those induced by discrete collections of k d-dimensional affine spaces (k-flats), with
possibly better resulting performance. Since M is a d-manifold, the k-flats approximation naturally
resembles the way in which a manifold is locally approximated by its tangent bundle.
Our analysis extends previous results for k-means to the case in which the data-generating distribution is supported on a manifold, and provides analogous results for k-flats. We note that the k-means
algorithm has been widely studied, and thus much of our analysis in this case involves the combination of known facts to obtain novel results. The analysis of k-flats, however, requires developing
substantially new mathematical tools.
The rest of the paper is organized as follows. In section 2, we describe the formal setting and
the algorithms that we study. We begin our analysis by discussing the reconstruction properties
of k-means in section 3. In section 4, we present and discuss our main results, whose proofs are
postponed to the appendices.
2 Learning Manifolds
Let X be a Hilbert space with inner product ⟨·, ·⟩, endowed with a Borel probability measure ρ supported over a compact, smooth d-manifold M. We assume the data to be given by a training set, in the form of samples Xn = (x1, . . . , xn) drawn identically and independently with respect to ρ.
Our goal is to learn a set Sn that approximates well the manifold. The approximation (learning
error) is measured by the expected reconstruction error
Eρ(Sn) := ∫M dρ(x) dX(x, Sn)²,    (1)
where the distance to a set S ⊆ X is dX(x, S) = inf_{x′∈S} dX(x, x′), with dX(x, x′) = ‖x − x′‖. This is the same reconstruction measure that has been the recent focus of [30, 4, 32].
It is easy to see that any set such that S ⊇ M will have zero risk, with M being the "smallest" such
set (with respect to set containment.) In other words, the above error measure does not introduce an
explicit penalty on the ?size? of Sn : enlarging any given Sn can never increase the learning error.
With this observation in mind, we study specific learning algorithms that, given the data, produce
a set belonging to some restricted hypothesis space H (e.g. sets of size k for k-means), which
effectively introduces a constraint on the size of the sets. Finally, note that the risk of Equation 1 is
non-negative and, if the hypothesis space is sufficiently rich, the risk of an unsupervised algorithm
may converge to zero under suitable conditions.
2.1 Using K-Means and K-Flats for Piecewise Manifold Approximation
In this work, we focus on two specific algorithms, namely k-means [28, 27] and k-flats [9]. Although
typically discussed in the Euclidean space case, their definition can be easily extended to a Hilbert
space setting. The study of manifolds embedded in a Hilbert space is of special interest when
considering non-linear (kernel) versions of the algorithms [15]. More generally, this setting can be
seen as a limit case when dealing with high dimensional data. Naturally, the more classical setting
of an absolutely continuous distribution over d-dimensional Euclidean space is simply a particular
case, in which X = Rd , and M is a domain with positive Lebesgue measure.
K-Means. Let H = Sk be the class of sets of size k in X . Given a training set Xn and a choice of
k, k-means is defined by the minimization over S ∈ Sk of the empirical reconstruction error
En(S) := (1/n) Σ_{i=1}^{n} dX(xi, S)².    (2)
where, for any fixed set S, En(S) is an unbiased empirical estimate of Eρ(S), so that k-means can
be seen to be performing a kind of empirical risk minimization [10, 7, 30, 8, 31].
A minimizer of Equation 2 on Sk is a discrete set of k means Sn,k = {m1 , . . . , mk }, which induces
a Dirichlet-Voronoi tiling of X : a collection of k regions, each closest to a common mean [3] (in our
notation, the subscript n denotes the dependence of Sn,k on the sample, while k refers to its size.)
By virtue of Sn,k being a minimizing set, each mean must occupy the center of mass of the samples in its Voronoi region. These two facts imply that it is possible to compute a local minimum of
the empirical risk by using a greedy coordinate-descent relaxation, namely Lloyd?s algorithm [27].
Furthermore, given a finite sample Xn , the number of locally-minimizing sets Sn,k is also finite
since (by the center-of-mass condition) there cannot be more than the number of possible partitions
of Xn into k groups, and therefore the global minimum must be attainable. Even though Lloyd?s
algorithm provides no guarantees of closeness to the global minimizer, in practice it is possible to
use a randomized approximation algorithm, such as kmeans++ [2], which provides guarantees of
approximation to the global minimum in expectation with respect to the randomization.
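As a concrete illustration of this pipeline (our own sketch, not code from the paper; the sample sizes are arbitrary), the Figure 1 experiment can be approximated by fitting k-means with k-means++ seeding and evaluating the reconstruction error on a hold-out set:

import numpy as np
from sklearn.cluster import KMeans

# Fit k-means on n points from S^19 in R^20, then estimate the expected
# reconstruction error (1) on a large hold-out sample.
rng = np.random.default_rng(0)

def sphere(n, D=20):
    X = rng.standard_normal((n, D))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

X_train, X_test = sphere(200), sphere(10000)
for k in (5, 20, 80, 160):
    km = KMeans(n_clusters=k, init="k-means++", n_init=20, random_state=0)
    km.fit(X_train)
    err = (km.transform(X_test).min(axis=1) ** 2).mean()   # hold-out risk
    print(f"k = {k:3d}   hold-out reconstruction error = {err:.4f}")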
K-Flats. Let H = Fk be the class of collections of k flats (affine spaces) of dimension d. For
any value of k, k-flats, analogously to k-means, aims at finding the set Fk ∈ Fk that minimizes the
empirical reconstruction (2) over Fk . By an argument similar to the one used for k-means, a global
minimizer must be attainable, and a Lloyd-type relaxation converges to a local minimum. Note that,
in this case, given a Voronoi partition of M into regions closest to each d-flat, new optimizing flats
for that partition can be computed by a d-truncated PCA solution on the samples falling in each
region.
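A minimal version of this Lloyd-type relaxation for k-flats might look as follows (our own sketch based on the description above; the restart rule for empty regions is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)

def k_flats(X, k, d, iters=30):
    # Alternate: (i) refit each flat by d-truncated PCA of its points,
    # (ii) reassign points to the nearest flat (squared residual distance).
    n, D = X.shape
    labels = rng.integers(k, size=n)
    for _ in range(iters):
        flats = []
        for j in range(k):
            P = X[labels == j]
            if len(P) <= d:                       # restart empty/tiny regions
                P = X[rng.choice(n, size=d + 1, replace=False)]
            c = P.mean(axis=0)
            _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
            flats.append((c, Vt[:d]))             # center + orthonormal basis
        R = [X - c - (X - c) @ V.T @ V for c, V in flats]   # residuals
        dist = np.stack([(r**2).sum(axis=1) for r in R], axis=1)
        labels = dist.argmin(axis=1)
    return flats, dist.min(axis=1).mean()         # empirical error, Eq. (2)

X = rng.standard_normal((500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # samples on S^2
flats, err = k_flats(X, k=10, d=2)                # 10 planes approximate S^2
print("empirical reconstruction error:", round(err, 5))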
2.2 Learning a Manifold with K-means and K-flats
In practice, k-means is often interpreted to be a clustering algorithm, with clusters defined by the
Voronoi diagram of the set of means Sn,k . In this interpretation, Equation 2 is simply rewritten
by summing over the Voronoi regions, and adding all pairwise distances between samples in the
region (the intra-cluster distances.) For instance, this point of view is considered in [11] where k-means is studied from an information theoretic perspective. K-means can also be interpreted to
be performing vector quantization, where the goal is to minimize the encoding error associated to
a nearest-neighbor quantizer [17]. Interestingly, in the limit of increasing sample size, this problem
coincides, in a precise sense [33], with the problem of optimal quantization of probability distributions (see for instance the excellent monograph of [18].)
When the data-generating distribution is supported on a manifold M, k-means can be seen to be
approximating points on the manifold by a discrete set of means. Analogously to the Euclidean
setting, this induces a Voronoi decomposition of M, in which each Voronoi region is effectively
approximated by a fixed mean (in this sense k-means produces a piecewise constant approximation
of M.) As in the Euclidean setting, the limit of this problem with increasing sample size is precisely
the problem of optimal quantization of distributions on manifolds, which is the subject of significant
recent work in the field of optimal quantization [20, 21].
In this paper, we take the above view of k-means as defining a (piecewise constant) approximation of
the manifold M supporting the data distribution. In particular, we are interested in the behavior of
the expected reconstruction error E? (Sn,k ), for varying k and n. This perspective has an interesting
relation with dictionary learning, in which one is interested in finding a dictionary, and an associated
representation, that allows to approximately reconstruct a finite set of data-points/signals. In this
interpretation, the set of means can be seen as a dictionary of size k that produces a maximally
sparse representation (the k-means encoding), see for example [29] and references therein. Crucially,
while the dictionary is learned on the available empirical data, the quantity of interest is the expected
reconstruction error, and the question of characterizing the performance with respect to this latter
quantity naturally arises.
Since k-means produces a piecewise constant approximation of the data, a natural idea is to consider
higher orders of approximation, such as approximation by discrete collections of k d-dimensional
affine spaces (k-flats), with possibly better performance. Since M is a d-manifold, the approximation induced by k-flats may more naturally resemble the way in which a manifold is locally
approximated by its tangent bundle. We provide in Sec. 4.2 a partial answer to this question.
3 Reconstruction Properties of k-Means
Since we are interested in the behavior of the expected reconstruction (1) of k-means and k-flats for
varying k and n, before analyzing this behavior, we consider what is currently known about this
problem, based on previous work. While k-flats is a relatively new algorithm whose behavior is not
yet well understood, several properties of k-means are currently known.
[Figure 1 plots omitted: hold-out/expected reconstruction error of k-means versus k/n on the sphere dataset (d = 20) and on MNIST, for n = 100, 200, 500, 1000]
Figure 1: We consider the behavior of k-means for data sets obtained by sampling uniformly a 19
dimensional sphere embedded in R20 (left). For each value of k, k-means (with k-means++ seeding)
is run 20 times, and the best solution kept. The reconstruction performance on a (large) hold-out set
is reported as a function of k. The results for four different training set cardinalities are reported: for
small number of points, the reconstruction error decreases sharply for small k and then increases,
while it is simply decreasing for larger data sets. A similar experiment, yielding similar results,
is performed on subsets of the MNIST (http://yann.lecun.com/exdb/mnist) database
(right). In this case the data might be thought to be concentrated around a low dimensional manifold.
For example [22] report an average intrinsic dimension d for each digit to be between 10 and 13.
Recall that k-means finds a discrete set Sn,k of size k that best approximates the samples in the sense of (2). Clearly, as k increases, the empirical reconstruction error En(Sn,k) cannot increase, and typically decreases. However, we are ultimately interested in the expected reconstruction error, and therefore would like to understand the behavior of Eρ(Sn,k) with varying k, n.
In the context of optimal quantization, the behavior of the expected reconstruction error Eρ has been considered for an approximating set Sk obtained by minimizing the expected reconstruction error itself over the hypothesis space H = Sk. The set Sk can thus be interpreted as the output of a population, or infinite-sample, version of k-means. In this case, it is possible to show that Eρ(Sk) is a non-increasing function of k and, in fact, to derive explicit rates. For example, in the case X = R^d, and under fairly general technical assumptions, it is possible to show that Eρ(Sk) = Θ(k^(−2/d)), where the constants depend on ρ and d [18].
In machine learning, the properties of k-means have been studied, for fixed k, by considering the excess reconstruction error Eρ(Sn,k) − Eρ(Sk). In particular, this quantity has been studied for X = R^d, and shown to be, with high probability, of order √(kd/n), up to logarithmic factors [31]. The case where X is a Hilbert space has been considered in [30, 8], where an upper bound of order k/√n is proven to hold with high probability. The more general setting where X is a metric space has been studied in [7].
When analyzing the behavior of Eρ(Sn,k), and in the particular case that X = R^d, the above results can be combined to obtain, with high probability, a bound of the form
Eρ(Sn,k) ≤ |Eρ(Sn,k) − En(Sn,k)| + En(Sn,k) − En(Sk) + |En(Sk) − Eρ(Sk)| + Eρ(Sk)
        ≤ C (√(kd/n) + k^(−2/d)),    (3)
up to logarithmic factors, where the constant C does not depend on k or n (a complete derivation is given in the Appendix.) The above inequality suggests a somewhat surprising effect: the expected reconstruction properties of k-means may be described by a trade-off between a statistical error (of order √(kd/n)) and a geometric approximation error (of order k^(−2/d)).
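For intuition, the two terms of (3) can be balanced numerically (our own illustration, with C set to 1); equating them predicts a minimizer of order n^(d/(d+4)):

import numpy as np

# Trade-off in (3) with C = 1: the statistical term sqrt(kd/n) grows with k,
# the approximation term k^(-2/d) shrinks, so the bound has a minimizer.
d = 10
for n in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(1, n + 1)
    bound = np.sqrt(k * d / n) + k ** (-2.0 / d)
    print(f"n = {n:>7d}   minimizing k = {k[bound.argmin()]}")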
The existence of such a trade-off between the approximation and the statistical errors may itself not
be entirely obvious, see the discussion in [4]. For instance, in the k-means problem, it is intuitive
that, as more means are inserted, the expected distance from a random sample to the means should
[Figure 2 illustrations omitted]
Figure 2: The optimal k-means (red) computed from n = 2 samples drawn uniformly on S^100 (blue). For (a) k = 1, the expected squared distance to a random point x ∈ S^100 is Eρ(Sk=1) ≈ 1.5, while for (b) k = 2, it is Eρ(Sk=2) ≈ 2.
decrease, and one might expect a similar behavior for the expected reconstruction error. This observation naturally begs the question of whether and when this trade-off really exists or if it is simply a
result of the looseness in the bounds. In particular, one could ask how tight the bound (3) is.
While the bound on Eρ(Sk) is known to be tight for k sufficiently large [18], the remaining terms (which are dominated by |Eρ(Sn,k) − En(Sn,k)|) are derived by controlling the supremum of an empirical process
sup_{S∈Sk} |En(S) − Eρ(S)|,    (4)
and it is unknown whether available bounds for it are tight [30]. Indeed, it is not clear how close the distortion redundancy Eρ(Sn,k) − Eρ(Sk) is to its known lower bound of order √(d k^(1−4/d)/n) (in expectation) [4]. More importantly, we are not aware of a lower bound for Eρ(Sn,k) itself. Indeed, as pointed out in [4], "The exact dependence of the minimax distortion redundancy on k and d is still a challenging open problem".
Finally, we note that, whenever a trade-off can be shown to hold, it may be used to justify a heuristic
for choosing k empirically as the value that minimizes the reconstruction error in a hold-out set.
In Figure 1 we perform some simple numerical simulations showing that the trade-off indeed occurs
in certain regimes. The following example provides a situation where a trade-off can be easily shown
to occur.
Example 1. Consider a setup in which n = 2 samples are drawn from a uniform distribution on the unit d = 100-sphere, though the argument holds for other n much smaller than d. Because d ≫ n, with high probability, the samples are nearly orthogonal: ⟨x1, x2⟩_X ≈ 0, while a third sample x drawn uniformly on S^100 will also very likely be nearly orthogonal to both x1, x2 [25]. The k-means solution on this dataset is clearly S_{k=1} = {(x1 + x2)/2} (Fig. 2(a)). Indeed, since S_{k=2} = {x1, x2} (Fig. 2(b)), it is E_ρ(S_{k=1}) ≈ 1.5 < 2 ≈ E_ρ(S_{k=2}) with very high probability. In this case, it is better to place a single mean closer to the origin (with E_ρ({0}) = 1) than to place two means at the sample locations. This example is sufficiently simple that the exact k-means solution is known, but the effect can be observed in more complex settings.
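The numbers in this example are easy to reproduce. Below is a minimal Monte Carlo sketch (our own illustration, not code from the paper; the sample sizes are arbitrary) estimating E_ρ(S_{k=1}) and E_ρ(S_{k=2}) for two random points on S^100:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 101                                     # S^100 is the unit sphere in R^101

def sphere_samples(n):
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

x1, x2 = sphere_samples(2)                  # nearly orthogonal with high probability
S_k1 = np.array([(x1 + x2) / 2])            # the k = 1 solution
S_k2 = np.array([x1, x2])                   # the k = 2 solution

x = sphere_samples(20000)                   # fresh draws from rho
def E_rho(S):                               # expected squared distance to nearest mean
    d2 = ((x[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

print(E_rho(S_k1), E_rho(S_k2))             # approximately 1.5 and 2.0
```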
4 Main Results
Contributions. Our work extends previous results in two different directions:
(a) We provide an analysis of k-means for the case in which the data-generating distribution is
supported on a manifold embedded in a Hilbert space. In particular, in this setting: 1) we derive
new results on the approximation error, and 2) new sample complexity results (learning rates)
arising from the choice of k by optimizing the resulting bound. We analyze the case in which
a solution is obtained from an approximation algorithm, such as k-means++ [2], to include this
computational error in the bounds.
(b) We generalize the above results from k-means to k-flats, deriving learning rates obtained from
new bounds on both the statistical and the approximation errors. To the best of our knowledge,
these results provide the first theoretical analysis of k-flats in either sense.
We note that the k-means algorithm has been widely studied in the past, and much of our analysis in this case involves the combination of known facts to obtain novel results. However, in the case of k-flats, there is currently no known analysis, and we provide novel results as well as new performance bounds for each of the components in the bounds.
Throughout this section we make the following technical assumption:
Assumption 1. M is a smooth d-manifold with metric of class C^1, contained in the unit ball in X, and with volume measure denoted by μ_I. The probability measure ρ is absolutely continuous with respect to μ_I, with density p.
4.1 Learning Rates for k-Means
The first result considers the idealized case where we have access to an exact solution for k-means.
Theorem 1. Under Assumption 1, if S_{n,k} is a solution of k-means then, for 0 < δ < 1, there are constants C and τ dependent only on d, and sufficiently large n_0 such that, by setting

    k_n = n^{d/(2(d+2))} · ( C√d / (24√π) )^{d/(d+2)} · ∫_M dμ_I(x) p(x)^{d/(d+2)},      (5)

and S_n = S_{n,k_n}, it is

    P[ E_ρ(S_n) ≤ τ · n^{−1/(d+2)} · √(ln 1/δ) · ∫_M dμ_I(x) p(x)^{d/(d+2)} ] ≥ 1 − δ,   (6)

for all n ≥ n_0, where C ≤ d/(2πe) and τ grows sublinearly with d.
Remark 1. Note that the distinction between distributions with density in M, and singular distributions, is important. The bound of Equation (6) holds only when the absolutely continuous part of ρ over M is non-vanishing. The case in which the distribution is singular over M requires a different analysis, and may result in faster convergence rates.
The following result considers the case where the k-means++ algorithm is used to compute the
estimator.
Theorem 2. Under Assumption 1, if S_{n,k} is the solution of k-means++, then for 0 < δ < 1, there are constants C and τ that depend only on d, and a sufficiently large n_0 such that, by setting

    k_n = n^{d/(2(d+2))} · ( C√d / (24√π) )^{d/(d+2)} · ∫_M dμ_I(x) p(x)^{d/(d+2)},      (7)

and S_n = S_{n,k_n}, it is

    P[ E_Z E_ρ(S_n) ≤ τ · n^{−1/(d+2)} · ( ln n + ln ‖p‖_{d/(d+2)} ) · √(ln 1/δ) · ∫_M dμ_I(x) p(x)^{d/(d+2)} ] ≥ 1 − δ,   (8)

for all n ≥ n_0, where the expectation is with respect to the random choice Z in the algorithm,

    ‖p‖_{d/(d+2)} = ( ∫_M dμ_I(x) p(x)^{d/(d+2)} )^{(d+2)/d},

C ≤ d/(2πe), and τ grows sublinearly with d.
Remark 2. In the particular case that X = ℝ^d and M is contained in the unit ball, we may further bound the distribution-dependent part of Equations (6) and (8). Using Hölder's inequality, one obtains

    ∫_M dλ(x) p(x)^{d/(d+2)} ≤ ( ∫_M dλ(x) p(x) )^{d/(d+2)} · ( ∫_M dλ(x) )^{2/(d+2)} ≤ Vol(M)^{2/(d+2)} ≤ ω_d^{2/(d+2)},   (9)

where λ is the Lebesgue measure in ℝ^d, and ω_d is the volume of the d-dimensional unit ball.
It is clear from the proof of Theorem 1 that, in this case, we may choose

    k_n = n^{d/(2(d+2))} · ( C√d / (24√π) )^{d/(d+2)} · ω_d^{2/d},

independently of the density p, to obtain a bound E_ρ(S_n) = O( n^{−1/(d+2)} √(ln 1/δ) ) with probability 1 − δ (and similarly for Theorem 2, except for an additional ln n term), where the constant only depends on the dimension.
Remark 3. Note that, according to the above theorems, choosing k requires knowledge of properties of the distribution ρ underlying the data, such as the intrinsic dimension of the support. In fact, following the ideas in [36], Sections 6.3-5, it is easy to prove that choosing k to minimize the reconstruction error on a hold-out set allows one to achieve the same learning rates (up to a logarithmic factor), adaptively, in the sense that knowledge of properties of ρ is not needed.
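For concreteness, this hold-out heuristic can be sketched as follows (our own illustration; the grid, split fraction, and the use of scikit-learn's KMeans are assumptions made only for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_k_holdout(X, grid=(2, 4, 8, 16, 32), holdout_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * holdout_frac)
    X_val, X_tr = X[idx[:cut]], X[idx[cut:]]
    best_k, best_err = None, np.inf
    for k in grid:
        S = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_tr).cluster_centers_
        # hold-out reconstruction error: mean squared distance to the nearest mean
        err = ((X_val[:, None, :] - S[None, :, :]) ** 2).sum(-1).min(axis=1).mean()
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```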
4.2 Learning Rates for k-Flats
To study k-flats, we need to slightly strengthen Assumption 1 by adding the following:
Assumption 2. Assume the manifold M to have metric of class C^3, and finite second fundamental form II [16].
One reason for the higher-smoothness assumption is that k-flats uses higher order approximation,
whose analysis requires a higher order of differentiability.
We begin by providing a result for k-flats on hypersurfaces (codimension one), and next extend it to
manifolds in more general spaces.
Theorem 3. Let X = ℝ^{d+1}. Under Assumptions 1 and 2, if F_{n,k} is a solution of k-flats, then there is a constant C that depends only on d, and sufficiently large n_0 such that, by setting

    k_n = n^{d/(2(d+4))} · ( C / (2√(2πd)) )^{d/(d+4)} · (κ_M)^{4/(d+4)},                 (10)

and F_n = F_{n,k_n}, then for all n ≥ n_0 it is

    P[ E_ρ(F_n) ≤ 2 (8πd)^{2/(d+4)} · C^{d/(d+4)} · n^{−2/(d+4)} · √(½ ln 1/δ) · (κ_M)^{4/(d+4)} ] ≥ 1 − δ,   (11)

where κ_M := κ_{|II|}(M) = ∫_M dμ_I(x) |κ_G(x)|^{1/2} is the total root curvature of M, κ_{|II|} is the measure associated with the (positive) second fundamental form, and κ_G is the Gaussian curvature on M.
In the more general case of a d-manifold M (with metric in C^3) embedded in a separable Hilbert space X, we cannot make any assumption on the codimension of M (the dimension of the orthogonal complement to the tangent space at each point). In particular, the second fundamental form II, which is an extrinsic quantity describing how the tangent spaces bend locally, is, at every x ∈ M, a map II_x : T_x M → (T_x M)^⊥ (in this case of class C^1 by Assumption 2) from the tangent space to its orthogonal complement (II(x) := B(x, x) in the notation of [16, p. 128]). Crucially, in this case, we may no longer assume the dimension of the orthogonal complement (T_x M)^⊥ to be finite.
Denote by |II_x| = sup_{r ∈ T_x M, ‖r‖ ≤ 1} ‖II_x(r)‖_X the operator norm of II_x. We have:
Theorem 4. Under Assumptions 1 and 2, if F_{n,k} is a solution to the k-flats problem, then there is a constant C that depends only on d, and sufficiently large n_0 such that, by setting

    k_n = n^{d/(2(d+4))} · ( C / (2√(2πd)) )^{d/(d+4)} · κ_M^{4/(d+4)},                   (12)

and F_n = F_{n,k_n}, then for all n ≥ n_0 it is

    P[ E_ρ(F_n) ≤ 2 (8πd)^{2/(d+4)} · C^{d/(d+4)} · n^{−2/(d+4)} · √(½ ln 1/δ) · κ_M^{4/(d+4)} ] ≥ 1 − δ,   (13)

where κ_M := ∫_M dμ_I(x) |II_x|².
Note that the better k-flats bounds stem from the higher approximation power of d-flats over points. Although this greatly complicates the setup and proofs, as well as the analysis of the constants, the resulting bounds are of order O(n^{−2/(d+4)}), compared with the slower order O(n^{−1/(d+2)}) of k-means.
4.3 Discussion
In all the results, the final performance does not depend on the dimensionality of the embedding
space (which in fact can be infinite), but only on the intrinsic dimension of the space on which the
data-generating distribution is defined. The key to these results is an approximation construction in
which the Voronoi regions on the manifold (points closest to a given mean or flat) are guaranteed to
have vanishing diameter in the limit of k going to infinity. Under our construction, a hypersurface is
approximated efficiently by tracking the variation of its tangent spaces by using the second fundamental form. Where this form vanishes, the Voronoi regions of an approximation will not be ensured
to have vanishing diameter with k going to infinity, unless certain care is taken in the analysis.
An important point of interest is that the approximations are controlled by averaged quantities,
such as the total root curvature (k-flats for surfaces of codimension one), total curvature (k-flats
in arbitrary codimensions), and d/(d + 2)-norm of the probability density (k-means), which are
integrated over the domain where the distribution is defined. Note that these types of quantities have
been linked to provably tight approximations in certain cases, such as for convex manifolds [19, 12],
in contrast with worst-case methods that place a constraint on a maximum curvature, or minimum
injectivity radius (for instance [1, 32].) Intuitively, it is easy to see that a constraint on an average
quantity may be arbitrarily less restrictive than one on its maximum. A small difficult region (e.g.
of very high curvature) may cause the bounds of the latter to substantially degrade, while the results
presented here would not be adversely affected so long as the region is small.
Additionally, care has been taken throughout to analyze the behavior of the constants. In particular,
there are no constants in the analysis that grow exponentially with the dimension, and in fact, many
have polynomial, or slower growth. We believe this to be an important point, since this ensures that
the asymptotic bounds do not hide an additional exponential dependence on the dimension.
References
[1] William K Allard, Guangliang Chen, and Mauro Maggioni. Multiscale geometric methods for data sets
ii: Geometric multi-resolution analysis. Applied and Computational Harmonic Analysis, 1:1?38, 2011.
[2] David Arthur and Sergei Vassilvitskii. k?means++: the advantages of careful seeding. In Proceedings
of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, SODA ?07, pages 1027?1035,
Philadelphia, PA, USA, 2007. SIAM.
[3] Franz Aurenhammer. Voronoi diagrams: A survey of a fundamental geometric data structure. ACM
Comput. Surv., 23:345?405, September 1991.
[4] Peter L. Bartlett, Tamas Linder, and Gabor Lugosi. The minimax distortion redundancy in empirical
quantizer design. IEEE Transactions on Information Theory, 44:1802?1813, 1998.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation.
Neural Comput., 15(6):1373?1396, 2003.
[6] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: a geometric framework
for learning from labeled and unlabeled examples. J. Mach. Learn. Res., 7:2399?2434, 2006.
[7] Shai Ben-David. A framework for statistical clustering with constant time approximation algorithms for
k-median and k-means clustering. Mach. Learn., 66(2-3):243?257, March 2007.
[8] Gérard Biau, Luc Devroye, and Gábor Lugosi. On the performance of clustering in Hilbert spaces. IEEE Transactions on Information Theory, 54(2):781–790, 2008.
[9] P. S. Bradley and O. L. Mangasarian. k-plane clustering. J. of Global Optimization, 16:23?32, January
2000.
[10] Joachim M. Buhmann. Empirical risk approximation: An induction principle for unsupervised learning.
Technical report, University of Bonn, 1998.
[11] Joachim M. Buhmann. Information theoretic model validation for clustering. In International Symposium
on Information Theory, Austin Texas. IEEE, 2010. (in press).
[12] Kenneth L. Clarkson. Building triangulations using ε-nets. In Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, STOC '06, pages 326–335, New York, NY, USA, 2006. ACM.
[13] A. Cuevas and R. Fraiman. Set estimation. In New perspectives in stochastic geometry, pages 374?397.
Oxford Univ. Press, Oxford, 2010.
[14] A. Cuevas and A. Rodríguez-Casal. Set estimation: an overview and some recent developments. In Recent advances and trends in nonparametric statistics, pages 251–264. Elsevier B. V., Amsterdam, 2003.
[15] Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis. Kernel k-means: spectral clustering and normalized
cuts. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and
data mining, KDD ?04, pages 551?556, New York, NY, USA, 2004. ACM.
[16] M. P. do Carmo. Riemannian Geometry. Theory and Applications Series. Birkhäuser, 1992.
[17] Allen Gersho and Robert M. Gray. Vector quantization and signal compression. Kluwer Academic
Publishers, Norwell, MA, USA, 1991.
[18] Siegfried Graf and Harald Luschgy. Foundations of quantization for probability distributions. SpringerVerlag New York, Inc., Secaucus, NJ, USA, 2000.
[19] P. M. Gruber. Asymptotic estimates for best and stepwise approximation of convex bodies i. Forum
Mathematicum, 15:281?297, 1993.
[20] Peter M. Gruber. Optimum quantization and its applications. Adv. Math, 186:2004, 2002.
[21] P.M. Gruber. Convex and discrete geometry. Grundlehren der mathematischen Wissenschaften. Springer,
2007.
[22] Matthias Hein and Jean-Yves Audibert. Intrinsic dimensionality estimation of submanifolds in rd. In
ICML ?05: Proceedings of the 22nd international conference on Machine learning, pages 289?296, 2005.
[23] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[24] Ravikrishna Kolluri, Jonathan Richard Shewchuk, and James F. O?Brien. Spectral surface reconstruction
from noisy point clouds. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on
Geometry processing, SGP ?04, pages 11?21, New York, NY, USA, 2004. ACM.
[25] M. Ledoux. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society, 2001.
[26] David Levin. Mesh-independent surface interpolation. In Hamann Brunnett and Mueller, editors, Geometric Modeling for Scientific Visualization, pages 37?49. Springer-Verlag, 2003.
[27] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28:129–137, 1982.
[28] J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In L. M. Le
Cam and J. Neyman, editors, Proc. of the fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281?297. University of California Press, 1967.
[29] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse
coding. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ?09,
pages 689?696, 2009.
[30] A. Maurer and M. Pontil. K?dimensional coding schemes in hilbert spaces. IEEE Transactions on
Information Theory, 56(11):5839 ?5846, nov. 2010.
[31] A. Maurer and M. Pontil. K-dimensional coding schemes in Hilbert spaces. IEEE Trans.Inf.Th, 56(11),
2010.
[32] Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In
Advances in Neural Information Processing Systems 23, pages 1786?1794. MIT Press, 2010.
[33] David Pollard. Strong consistency of k-means clustering. Annals of Statistics, 9(1):135?140, 1981.
[34] ST Roweis and LK Saul. Nonlinear dimensionality reduction by locally linear embedding. Science,
290:2323?2326, 2000.
[35] Florian Steinke, Matthias Hein, and Bernhard Schölkopf. Nonparametric regression between general Riemannian manifolds. SIAM J. Imaging Sci., 3(3):527–563, 2010.
[36] I. Steinwart and A. Christmann. Support vector machines. Information Science and Statistics. Springer,
New York, 2008.
[37] Ulrike von Luxburg. A tutorial on spectral clustering. Stat. Comput., 17(4):395?416, 2007.
One Permutation Hashing
Ping Li
Department of Statistical Science
Cornell University
Art B Owen
Department of Statistics
Stanford University
Cun-Hui Zhang
Department of Statistics
Rutgers University
Abstract
Minwise hashing is a standard procedure in the context of search, for efficiently
estimating set similarities in massive binary data such as text. Recently, b-bit
minwise hashing has been applied to large-scale learning and sublinear time nearneighbor search. The major drawback of minwise hashing is the expensive preprocessing, as the method requires applying (e.g.,) k = 200 to 500 permutations
on the data. This paper presents a simple solution called one permutation hashing.
Conceptually, given a binary data matrix, we permute the columns once and divide
the permuted columns evenly into k bins; and we store, for each data vector, the
smallest nonzero location in each bin. The probability analysis illustrates that this
one permutation scheme should perform similarly to the original (k-permutation)
minwise hashing. Our experiments with training SVM and logistic regression confirm that one permutation hashing can achieve similar (or even better) accuracies
compared to the k-permutation scheme. See more details in arXiv:1208.1259.
1 Introduction
Minwise hashing [4, 3] is a standard technique in the context of search, for efficiently computing
set similarities. Recently, b-bit minwise hashing [18, 19], which stores only the lowest b bits of
each hashed value, has been applied to sublinear time near neighbor search [22] and learning [16],
on large-scale high-dimensional binary data (e.g., text). A drawback of minwise hashing is that it
requires a costly preprocessing step, for conducting (e.g.,) k = 200 to 500 permutations on the data.
1.1 Massive High-Dimensional Binary Data
In the context of search, text data are often processed to be binary in extremely high dimensions. A standard procedure is to represent documents (e.g., Web pages) using w-shingles (i.e., w contiguous words), where w ≥ 5 in several studies [4, 8]. This means the size of the dictionary needs to be substantially increased, from (e.g.,) 10^5 common English words to 10^{5w} "super-words". In current practice, it appears sufficient to set the total dimensionality to be D = 2^64, for convenience. Text data generated by w-shingles are often treated as binary. The concept of shingling can be naturally extended to Computer Vision, either at pixel level (for aligned images) or at visual feature level [23].
In machine learning practice, the use of extremely high-dimensional data has become common. For example, [24] discusses training datasets with (on average) n = 10^11 items and D = 10^9 distinct features. [25] experimented with a dataset of potentially D = 16 trillion (1.6 × 10^13) unique features.
1.2 Minwise Hashing and b-Bit Minwise Hashing
Minwise hashing was mainly designed for binary data. A binary (0/1) data vector can be viewed as a set (the locations of the nonzeros). Consider sets S_i ⊆ Ω = {0, 1, 2, ..., D−1}, where D, the size of the space, is often set as D = 2^64 in industrial applications. The similarity between two sets, S1 and S2, is commonly measured by the resemblance, which is a version of the normalized inner product:

    R = |S1 ∩ S2| / |S1 ∪ S2| = a / (f1 + f2 − a),   where f1 = |S1|, f2 = |S2|, a = |S1 ∩ S2|.   (1)

For large-scale applications, the cost of computing resemblances exactly can be prohibitive in time, space, and energy consumption. The minwise hashing method was proposed for efficiently computing resemblances. The method requires applying k independent random permutations on the data.
Denote by π a random permutation, π : Ω → Ω. The hashed values are the two minimums of π(S1) and π(S2). The probability at which the two hashed values are equal is

    Pr( min(π(S1)) = min(π(S2)) ) = |S1 ∩ S2| / |S1 ∪ S2| = R.                           (2)
One can then estimate R from k independent permutations, π_1, ..., π_k:

    R̂_M = (1/k) Σ_{j=1}^{k} 1{ min(π_j(S1)) = min(π_j(S2)) },    Var( R̂_M ) = R(1−R)/k.   (3)

Because the indicator function 1{min(π_j(S1)) = min(π_j(S2))} can be written as an inner product between two binary vectors (each having only one 1) in D dimensions [16]:

    1{ min(π_j(S1)) = min(π_j(S2)) } = Σ_{i=0}^{D−1} 1{ min(π_j(S1)) = i } · 1{ min(π_j(S2)) = i },   (4)

we know that minwise hashing can potentially be used for training linear SVM and logistic regression on high-dimensional binary data by converting the permuted data into a new data matrix in D × k dimensions. This of course would not be realistic if D = 2^64.
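As a reference point for what one permutation hashing replaces, here is a small sketch of the standard k-permutation estimator of Eq. (2)-(3) (our own illustration; it materializes explicit permutation vectors, which is only feasible for small D):

```python
import numpy as np

def minwise_estimate(S1, S2, D, k, seed=0):
    rng = np.random.default_rng(seed)
    matches = 0
    for _ in range(k):
        pi = rng.permutation(D)              # one random permutation of Omega
        if pi[list(S1)].min() == pi[list(S2)].min():
            matches += 1                     # 1{min(pi(S1)) = min(pi(S2))}
    return matches / k                       # unbiased estimate of R, Eq. (3)

S1, S2 = {2, 4, 7, 13}, {0, 6, 13}           # here R = 1/6
print(minwise_estimate(S1, S2, D=16, k=500))
```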
The method of b-bit minwise hashing [18, 19] provides a simple solution by storing only the lowest b bits of each hashed value, reducing the dimensionality of the (expanded) hashed data matrix to just 2^b × k. [16] applied this idea to large-scale learning on the webspam dataset and demonstrated that using b = 8 and k = 200 to 500 could achieve very similar accuracies as using the original data.
1.3 The Cost of Preprocessing and Testing
Clearly, the preprocessing of minwise hashing can be very costly. In our experiments, loading the
webspam dataset (350,000 samples, about 16 million features, and about 24GB in Libsvm/svmlight
(text) format) used in [16] took about 1000 seconds when the data were stored in text format, and
took about 150 seconds after we converted the data into binary. In contrast, the preprocessing cost for
k = 500 was about 6000 seconds. Note that, compared to industrial applications [24], the webspam
dataset is very small. For larger datasets, the preprocessing step will be much more expensive.
In the testing phase (in search or learning), if a new data point (e.g., a new document or a new
image) has not been processed, then the total cost will be expensive if it includes the preprocessing.
This may raise significant issues in user-facing applications where the testing efficiency is crucial.
Intuitively, the standard practice of minwise hashing ought to be very "wasteful" in that all the nonzero elements in one set are scanned (permuted) but only the smallest one will be used.
1.4 Our Proposal: One Permutation Hashing
    Bin 1 (cols 0-3)   Bin 2 (cols 4-7)   Bin 3 (cols 8-11)   Bin 4 (cols 12-15)
    π(S1):  0 0 1 0  |  1 0 0 1  |  0 0 0 0  |  0 1 0 0
    π(S2):  1 0 0 1  |  0 0 1 0  |  0 0 0 0  |  0 1 0 0
    π(S3):  1 1 0 0  |  0 0 0 0  |  0 0 1 0  |  1 0 0 0

Figure 1: Consider S1, S2, S3 ⊆ Ω = {0, 1, ..., 15} (i.e., D = 16). We apply one permutation π on the sets and present π(S1), π(S2), and π(S3) as binary (0/1) vectors, where π(S1) = {2, 4, 7, 13}, π(S2) = {0, 6, 13}, and π(S3) = {0, 1, 10, 12}. We divide the space Ω evenly into k = 4 bins, select the smallest nonzero in each bin, and re-index the selected elements as: [2, 0, *, 1], [0, 2, *, 1], and [0, *, 2, 0]. For now, we use '*' for empty bins, which occur rarely unless the number of nonzeros is small compared to k.
As illustrated in Figure 1, the idea of one permutation hashing is simple. We view sets as 0/1 vectors
in D dimensions so that we can treat a collection of sets as a binary data matrix in D dimensions.
After we permute the columns (features) of the data matrix, we divide the columns evenly into k
parts (bins) and we simply take, for each data vector, the smallest nonzero element in each bin.
In the example in Figure 1 (which concerns 3 sets), the sample selected from π(S1) is [2, 4, *, 13], where we use '*' to denote an empty bin, for the time being. Since we only want to compare elements with the same bin number (so that we can obtain an inner product), we can actually re-index the elements of each bin to use the smallest possible representations. For example, for π(S1), after re-indexing, the sample [2, 4, *, 13] becomes [2 − 4×0, 4 − 4×1, *, 13 − 4×3] = [2, 0, *, 1].
We will show that empty bins occur rarely unless the total number of nonzeros for some set is small compared to k, and we will present strategies on how to deal with empty bins should they occur.
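The scheme of Figure 1 translates directly into code. The following sketch (ours, not the paper's implementation) assumes k divides D and uses None in place of '*':

```python
import numpy as np

def one_permutation_hash(S, pi, k, D):
    width = D // k                           # bin width; assumes k divides D
    hashed = [None] * k                      # None plays the role of '*'
    for loc in np.sort(pi[list(S)]):         # permuted nonzero locations, ascending
        j = int(loc) // width                # bin index
        if hashed[j] is None:
            hashed[j] = int(loc) % width     # re-indexed smallest nonzero in the bin
    return hashed

rng = np.random.default_rng(0)
pi = rng.permutation(16)
print(one_permutation_hash({2, 4, 7, 13}, pi, k=4, D=16))
```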
1.5 Advantages of One Permutation Hashing
Reducing k (e.g., 500) permutations to just one permutation (or a few) is much more computationally
efficient. From the perspective of energy consumption, this scheme is desirable, especially considering that minwise hashing is deployed in the search industry. Parallel solutions (e.g., GPU [17]),
which require additional hardware and software implementation, will not be energy-efficient.
In the testing phase, if a new data point (e.g., a new document or a new image) has to be first processed with k permutations, then the testing performance may not meet the demand in, for example,
user-facing applications such as search or interactive visual analytics.
One permutation hashing should be easier to implement, from the perspective of random number generation. For example, if a dataset has one billion features (D = 10^9), we can simply generate a "permutation vector" of length D = 10^9, the memory cost of which (i.e., 4GB) is not significant. On the other hand, it would not be realistic to store a "permutation matrix" of size D × k if D = 10^9 and k = 500; instead, one usually has to resort to approximations such as universal hashing [5]. Universal hashing often works well in practice although theoretically there are always worst cases.
One permutation hashing is a better matrix sparsification scheme. In terms of the original binary data matrix, the one permutation scheme simply makes many nonzero entries be zero, without further "damaging" the matrix. Using the k-permutation scheme, we store, for each permutation and each row, only the first nonzero and make all the other nonzero entries be zero; and then we have to concatenate k such data matrices. This significantly changes the structure of the original data matrix.
1.6 Related Work
One of the authors worked on another "one permutation" scheme named Conditional Random Sampling (CRS) [13, 14] since 2005. Basically, CRS continuously takes the bottom-k nonzeros after applying one permutation on the data, then it uses a simple "trick" to construct a random sample for each pair with the effective sample size determined at the estimation stage. By taking the nonzeros continuously, however, the samples are no longer "aligned" and hence we can not write the estimator as an inner product in a unified fashion. [16] commented that using CRS for linear learning does not produce as good results compared to using b-bit minwise hashing. Interestingly, in the original "minwise hashing" paper [4] (we use quotes because the scheme was not called "minwise hashing" at that time), only one permutation was used and a sample was the first k nonzeros after the permutation. Then they quickly moved to the k-permutation minwise hashing scheme [3].
We are also inspired by the work on very sparse random projections [15] and very sparse stable
random projections [12]. The regular random projection method also has the expensive preprocessing cost as it needs a large number of projections. [15, 12] showed that one can substantially reduce the preprocessing cost by using an extremely sparse projection matrix. The preprocessing cost of very sparse random projections can be as small as merely doing one projection. See www.stanford.edu/group/mmds/slides2012/s-pli.pdf for the experimental results on clustering/classification/regression using very sparse random projections.
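For concreteness, a very sparse projection matrix of the kind used in [15] can be sketched as below (our own illustration, not the papers' code; entries are +√s or −√s with probability 1/(2s) each and 0 otherwise, so only about a 1/s fraction of the D × k matrix is ever generated):

```python
import numpy as np
from scipy import sparse

def very_sparse_projection(D, k, s, seed=0):
    rng = np.random.default_rng(seed)
    signed = lambda n: np.sqrt(s) * rng.choice([-1.0, 1.0], size=n)
    # only ~Dk/s entries are nonzero, which is what makes preprocessing cheap
    return sparse.random(D, k, density=1.0 / s, random_state=seed, data_rvs=signed)

# usage: project an n x D data matrix X down to k dimensions via X @ R
R = very_sparse_projection(D=10000, k=100, s=100)
```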
This paper focuses on the "fixed-length" scheme as shown in Figure 1. The technical report (arXiv:1208.1259) also describes a "variable-length" scheme. The two schemes are more or less equivalent, although the fixed-length scheme is more convenient to implement (and it is slightly more accurate). The variable-length hashing scheme is to some extent related to the Count-Min (CM) sketch [6] and the Vowpal Wabbit (VW) [21, 25] hashing algorithms.
2 Applications of Minwise Hashing on Efficient Search and Learning
In this section, we will briefly review two important applications of the k-permutation b-bit minwise
hashing: (i) sublinear time near neighbor search [22], and (ii) large-scale linear learning [16].
2.1 Sublinear Time Near Neighbor Search
The task of near neighbor search is to identify a set of data points which are "most similar" to a query data point. Developing efficient algorithms for near neighbor search has been an active research topic since the early days of modern computing (e.g., [9]). In current practice, methods
for approximate near neighbor search often fall into the general framework of Locality Sensitive
Hashing (LSH) [10, 1]. The performance of LSH largely depends on its underlying implementation.
The idea in [22] is to directly use the bits from b-bit minwise hashing to construct hash tables.
Specifically, we hash the data points using k random permutations and store each hashed value using b bits. For each data point, we concatenate the resultant B = bk bits as a signature (e.g., bk = 16). This way, we create a table of 2^B buckets and each bucket stores the pointers of the data points whose signatures match the bucket number. In the testing phase, we apply the same k permutations to a query data point to generate a bk-bit signature and only search data points in the corresponding bucket. Since using only one table will likely miss many true near neighbors, as a remedy, we independently generate L tables. The query result is the union of data points retrieved in L tables.
    Table 1                       Table 2
    Index   Data Points           Index   Data Points
    00 00   6, 110, 143           00 00   8, 159, 331
    00 01   3, 38, 217            00 01   11, 25, 99
    00 10   (empty)               00 10   3, 14, 32, 97
    ...                           ...
    11 01   5, 14, 206            11 01   7, 49, 208
    11 10   31, 74, 153           11 10   33, 489
    11 11   21, 142, 329          11 11   6, 15, 26, 79

Figure 2: An example of hash tables, with b = 2, k = 2, and L = 2.
Figure 2 provides an example with b = 2 bits, k = 2 permutations, and L = 2 tables. The size of each hash table is 2^4. Given n data points, we apply k = 2 permutations and store b = 2 bits of each hashed value to generate n (4-bit) signatures L times. Consider data point 6. For Table 1 (left panel of Figure 2), the lowest b bits of its two hashed values are 00 and 00 and thus its signature is 0000 in binary; hence we place a pointer to data point 6 in bucket number 0. For Table 2 (right panel of Figure 2), we apply another k = 2 permutations. This time, the signature of data point 6 becomes 1111 in binary and hence we place it in the last bucket. Suppose in the testing phase, the two (4-bit) signatures of a new data point are 0000 and 1111, respectively. We then only search for the near neighbors in the set {6, 15, 26, 79, 110, 143}, instead of the original set of n data points.
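The table construction can be sketched as follows (our own illustration; the k hashed values are assumed to be available for each data point under each table's permutations):

```python
from collections import defaultdict

def signature(hashed_values, b):
    sig = 0
    for h in hashed_values:                       # k hashed values of one data point
        sig = (sig << b) | (h & ((1 << b) - 1))   # keep only the lowest b bits
    return sig                                    # a bk-bit bucket index

def build_table(all_hashed, b):
    table = defaultdict(list)                     # bucket index -> data point ids
    for i, hv in enumerate(all_hashed):
        table[signature(hv, b)].append(i)
    return table

def query(tables, hashed_per_table, b):
    candidates = set()                            # union of the L buckets
    for table, hv in zip(tables, hashed_per_table):
        candidates.update(table.get(signature(hv, b), []))
    return candidates
```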
2.2 Large-Scale Linear Learning
The recent development of highly efficient linear learning algorithms is a major breakthrough. Popular packages include SVMperf [11], Pegasos [20], Bottou's SGD SVM [2], and LIBLINEAR [7].
Given a dataset {(x_i, y_i)}_{i=1}^{n}, x_i ∈ ℝ^D, y_i ∈ {−1, 1}, the L2-regularized logistic regression solves the following optimization problem (where C > 0 is the regularization parameter):

    min_w  (1/2) wᵀw + C Σ_{i=1}^{n} log( 1 + e^{−y_i wᵀx_i} ),                          (5)

The L2-regularized linear SVM solves a similar problem:

    min_w  (1/2) wᵀw + C Σ_{i=1}^{n} max( 1 − y_i wᵀx_i, 0 ).                            (6)
In [16], they apply k random permutations on each (binary) feature vector x_i and store the lowest b bits of each hashed value, to obtain a new dataset which can be stored using merely nbk bits. At run-time, each new data point has to be expanded into a 2^b × k-length vector with exactly k 1's.
To illustrate this simple procedure, [16] provided a toy example with k = 3 permutations. Suppose for one data vector, the hashed values are {12013, 25964, 20191}, whose binary digits are respectively {010111011101101, 110010101101100, 100111011011111}. Using b = 2 bits, the binary digits are stored as {01, 00, 11} (which corresponds to {1, 0, 3} in decimal). At run-time, the (b-bit) hashed data are expanded into a new feature vector of length 2^b k = 12: {0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0}. The same procedure is then applied to all n feature vectors.
Clearly, in both applications (near neighbor search and linear learning), the hashed data have to be "aligned" in that only the hashed data generated from the same permutation are interacted. Note that, with our one permutation scheme as in Figure 1, the hashed data are indeed aligned.
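A sketch of the expansion step (our own code, reproducing the toy example: within each 2^b block, value v lights coordinate 2^b − 1 − v, which matches the ordering used above):

```python
import numpy as np

def expand_bbit(values, b):
    k, width = len(values), 1 << b
    x = np.zeros(k * width)
    for j, v in enumerate(values):
        x[j * width + (width - 1 - v)] = 1.0  # one-hot within the j-th block
    return x

print(expand_bbit([1, 0, 3], b=2))  # -> [0,0,1,0, 0,0,0,1, 1,0,0,0]: length 12, k ones
```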
3 Theoretical Analysis of the One Permutation Scheme
This section presents the probability analysis to provide a rigorous foundation for one permutation
hashing as illustrated in Figure 1. Consider two sets S1 and S2 . We first introduce two definitions,
for the number of "jointly empty bins" and the number of "matched bins," respectively:

    N_emp = Σ_{j=1}^{k} I_emp,j,        N_mat = Σ_{j=1}^{k} I_mat,j,                     (7)

where I_emp,j and I_mat,j are defined for the j-th bin as

    I_emp,j = 1 if both π(S1) and π(S2) are empty in the j-th bin; 0 otherwise.           (8)

    I_mat,j = 1 if both π(S1) and π(S2) are non-empty and the smallest element of π(S1) matches the smallest element of π(S2), in the j-th bin; 0 otherwise.   (9)

Recall the notation: f1 = |S1|, f2 = |S2|, a = |S1 ∩ S2|. We also use f = |S1 ∪ S2| = f1 + f2 − a.
Lemma 1

    Pr( N_emp = j ) = Σ_{s=0}^{k−j} (−1)^s · k! / ( j! s! (k−j−s)! ) · Π_{t=0}^{f−1} [ D(1 − (j+s)/k) − t ] / ( D − t ),   0 ≤ j ≤ k−1.   (10)

Assume D(1 − 1/k) ≥ f = f1 + f2 − a. Then

    E(N_emp)/k = Π_{j=0}^{f−1} [ D(1 − 1/k) − j ] / ( D − j ) ≤ ( 1 − 1/k )^f,            (11)

    E(N_mat)/k = R ( 1 − E(N_emp)/k ) = R ( 1 − Π_{j=0}^{f−1} [ D(1 − 1/k) − j ] / ( D − j ) ),   (12)

    Cov( N_mat, N_emp ) ≤ 0.                                                              (13)
In practical scenarios, the data are often sparse, i.e., f = f1 + f2 − a ≪ D. In this case, the upper bound (11), (1 − 1/k)^f, is a good approximation to the true value of E(N_emp)/k. Since (1 − 1/k)^f ≈ e^{−f/k}, we know that the chance of empty bins is small when f ≫ k. For example, if f/k = 5 then (1 − 1/k)^f ≈ 0.0067. For practical applications, we would expect that f ≫ k (for most data pairs), otherwise hashing probably would not be too useful anyway. This is why we do not expect empty bins will significantly impact (if at all) the performance in practical settings.
Lemma 2 shows that the following estimator R̂_mat of the resemblance is unbiased:
Lemma 2

    R̂_mat = N_mat / ( k − N_emp ),    E( R̂_mat ) = R,                                    (14)

    Var( R̂_mat ) = R(1−R) [ E( 1/(k − N_emp) ) ( 1 + 1/(f−1) ) − 1/(f−1) ],               (15)

    E( 1/(k − N_emp) ) = Σ_{j=0}^{k−1} Pr( N_emp = j ) / ( k − j ) ≥ 1 / ( k − E(N_emp) ).   (16)

The fact that E(R̂_mat) = R may seem surprising, as in general ratio estimators are not unbiased. Note that k − N_emp > 0, because we assume the original data vectors are not completely empty (all-zero). As expected, when k ≪ f = f1 + f2 − a, N_emp is essentially zero and hence Var(R̂_mat) ≈ R(1−R)/k. In fact, Var(R̂_mat) is a bit smaller than R(1−R)/k, especially for large k.
It is probably not surprising that our one permutation scheme (slightly) outperforms the original k-permutation scheme (at merely 1/k of the preprocessing cost), because one permutation hashing, which is "sampling-without-replacement", provides a better strategy for matrix sparsification.
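Lemma 2 is easy to check numerically. The following Monte Carlo sketch (ours, not from the paper) averages R̂_mat = N_mat/(k − N_emp) over many random permutations and compares it with the exact resemblance R:

```python
import numpy as np

def one_perm_Rmat(S1, S2, D, k, rng):
    pi, width = rng.permutation(D), D // k
    m1, m2 = {}, {}
    for S, m in ((S1, m1), (S2, m2)):
        for loc in pi[list(S)]:
            j = int(loc) // width
            m[j] = min(m.get(j, loc), loc)         # smallest permuted location per bin
    used = set(m1) | set(m2)                       # |used| = k - N_emp
    n_mat = sum(m1[j] == m2[j] for j in set(m1) & set(m2))
    return n_mat / len(used)

rng = np.random.default_rng(0)
D, k = 1000, 20
S1, S2 = set(range(0, 60)), set(range(30, 90))
R = len(S1 & S2) / len(S1 | S2)                    # exact resemblance = 1/3
est = np.mean([one_perm_Rmat(S1, S2, D, k, rng) for _ in range(5000)])
print(R, est)                                      # the two nearly coincide
```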
4 Strategies for Dealing with Empty Bins
In general, we expect that empty bins should not occur often because E(N_emp)/k ≈ e^{−f/k}, which is very close to zero if f/k > 5. (Recall f = |S1 ∪ S2|.) If the goal of using minwise hashing is data reduction, i.e., reducing the number of nonzeros, then we would expect that f ≫ k anyway.
Nevertheless, in applications where we need the estimators to be inner products, we need strategies to deal with empty bins in case they occur. Fortunately, we realize a (in retrospect) simple strategy which can be nicely integrated with linear learning algorithms and performs well.
Figure 3 plots the histogram of the numbers of nonzeros in the webspam dataset, which has 350,000 samples. The average number of nonzeros is about 4000, which should be much larger than k (e.g., 500) for the hashing procedure. On the other hand, about 10% (or 2.8%) of the samples have < 500 (or < 200) nonzeros. Thus, we must deal with empty bins if we do not want to exclude those data points. For example, if f = k = 500, then N_emp/k ≈ e^{−f/k} = 0.3679, which is not small.

[Figure 3: Histogram of the numbers of nonzeros in the webspam dataset (350,000 samples); x-axis: # nonzeros (0 to 10000), y-axis: frequency (×10^4).]
The strategy we recommend for linear learning is zero coding, which is tightly coupled with the strategy of hashed data expansion [16] as reviewed in Sec. 2.2. More details will be elaborated in Sec. 4.2. Basically, we can encode '*' as "zero" in the expanded space, which means N_mat will remain the same (after taking the inner product in the expanded space). This strategy, which is sparsity-preserving, essentially corresponds to the following modified estimator:

    R̂_mat^(0) = N_mat / ( √(k − N_emp^(1)) · √(k − N_emp^(2)) ),                          (17)

where N_emp^(1) = Σ_{j=1}^{k} I_emp,j^(1) and N_emp^(2) = Σ_{j=1}^{k} I_emp,j^(2) are the numbers of empty bins in π(S1) and π(S2), respectively. This modified estimator makes sense for a number of reasons.
Basically, since each data vector is processed and coded separately, we actually do not know N_emp (the number of jointly empty bins) until we see both π(S1) and π(S2). In other words, we can not really compute N_emp if we want to use linear estimators. On the other hand, N_emp^(1) and N_emp^(2) are always available. In fact, the use of √(k − N_emp^(1)) √(k − N_emp^(2)) in the denominator corresponds to the normalizing step which is needed before feeding the data to a solver for SVM or logistic regression.
When N_emp^(1) = N_emp^(2) = N_emp, (17) is equivalent to the original R̂_mat. When two original vectors are very similar (e.g., large R), N_emp^(1) and N_emp^(2) will be close to N_emp. When two sets are highly unbalanced, using (17) will overestimate R; however, in this case, N_mat will be so small that the absolute error will not be large.
4.1 The m-Permutation Scheme with 1 < m ≤ k
If one would like to further (significantly) reduce the chance of the occurrence of empty bins, here we shall mention that one does not really have to strictly follow "one permutation," since one can always conduct m permutations with k′ = k/m bins each and concatenate the hashed data. Once the preprocessing is no longer the bottleneck, it matters less whether we use 1 permutation or (e.g.,) m = 3 permutations. The chance of having empty bins decreases exponentially with increasing m.
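A quick calculation (ours) makes the effect concrete: with m permutations and k′ = k/m bins each, the per-bin empty probability (1 − 1/k′)^f ≈ e^{−mf/k} falls off exponentially in m:

```python
f, k = 500, 500
for m in (1, 2, 4):
    kp = k // m
    print(m, (1 - 1 / kp) ** f)   # ~0.368, ~0.135, ~0.018
```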
4.2 An Example of the "Zero Coding" Strategy for Linear Learning
Sec. 2.2 reviewed the data-expansion strategy used by [16] for integrating b-bit minwise hashing with linear learning. We will adopt a similar strategy, with modifications for considering empty bins.
We use a similar example as in Sec. 2.2. Suppose we apply our one permutation hashing scheme and use k = 4 bins. For the first data vector, the hashed values are [12013, 25964, 20191, *] (i.e., the 4-th bin is empty). Suppose again we use b = 2 bits. With the "zero coding" strategy, our procedure is summarized as follows:
    Original hashed values (k = 4):     12013            25964            20191            *
    Original binary representations:    010111011101101  110010101101100  100111011011111  *
    Lowest b = 2 binary digits:         01               00               11               *
    Expanded 2^b = 4 binary digits:     0010             0001             1000             0000

    New feature vector fed to a solver: (1/√(4−1)) × [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0]
We apply the same procedure to all feature vectors in the data matrix to generate a new data matrix. The normalization factor 1/√(k − N_emp^(i)) varies, depending on the number of empty bins in the i-th vector.
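Putting the pieces together, the zero-coding expansion with its normalization can be sketched as follows (our own code; None stands for '*'):

```python
import numpy as np

def zero_code(values, b):                # values: k entries, each an int or None ('*')
    k, width = len(values), 1 << b
    x = np.zeros(k * width)
    n_emp = 0
    for j, v in enumerate(values):
        if v is None:
            n_emp += 1                   # empty bin -> all-zero block (e.g., 0000)
        else:
            x[j * width + (width - 1 - v)] = 1.0
    return x / np.sqrt(k - n_emp)        # normalization 1/sqrt(k - N_emp^(i))

print(zero_code([1, 0, 3, None], b=2))   # reproduces the k = 4, b = 2 example above
```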
5 Experimental Results on the Webspam Dataset
The webspam dataset has 350,000 samples and 16,609,143 features. Each feature vector has on
average about 4000 nonzeros; see Figure 3. Following [16], we use 80% of samples for training
and the remaining 20% for testing. We conduct extensive experiments on linear SVM and logistic
regression, using our proposed one permutation hashing scheme with k ∈ {2^6, 2^7, 2^8, 2^9} and b ∈ {1, 2, 4, 6, 8}. For convenience, we use D = 2^24 = 16,777,216, which is divisible by k.
There is one regularization parameter C in linear SVM and logistic regression. Since our purpose
is to demonstrate the effectiveness of our proposed hashing scheme, we simply provide the results
for a wide range of C values and assume that the best performance is achievable if we conduct
cross-validations. This way, interested readers may be able to easily reproduce our experiments.
[Figure 4 appears here in the original layout: eight panels of test accuracy (%) versus C on webspam, for SVM (top row) and logistic regression (bottom row) with k ∈ {64, 128, 256, 512}; each panel shows curves for b = 1, 2, 4, 6, 8, the original data, one permutation hashing, and k-permutation hashing. See the caption below.]
Figure 4 presents the test accuracies for both linear SVM (upper panels) and logistic regression (bottom panels). Clearly, when k = 512 (or even 256) and b = 8, b-bit one permutation hashing achieves
similar test accuracies as using the original data. Also, compared to the original k-permutation
scheme as in [16], our one permutation scheme achieves similar (or even slightly better) accuracies.
Figure 4: Test accuracies of SVM (upper panels) and logistic regression (bottom panels), averaged over 50 repetitions. The accuracies of using the original data are plotted as dashed (red, if color is available) curves with "diamond" markers. C is the regularization parameter. Compared with the original k-permutation minwise hashing (dashed, and blue if color is available), the one permutation hashing scheme achieves similar accuracies, or even slightly better accuracies when k is large.
The empirical results on the webspam dataset are encouraging because they verify that our proposed one permutation hashing scheme performs as well as (or even slightly better than) the original k-permutation scheme, at merely 1/k of the original preprocessing cost. On the other hand, it would be more interesting, from the perspective of testing the robustness of our algorithm, to conduct experiments on a dataset (e.g., news20) where the empty bins will occur much more frequently.
6 Experimental Results on the News20 Dataset
The news20 dataset (with 20,000 samples and 1,355,191 features) is a very small dataset in not-too-high dimensions. The average number of nonzeros per feature vector is about 500, which is also small. Therefore, this is more like a contrived example and we use it just to verify that our one permutation scheme (with the zero coding strategy) still works very well even when we let k be as large as 4096 (i.e., most of the bins are empty). In fact, the one permutation scheme achieves noticeably better accuracies than the original k-permutation scheme. We believe this is because the one permutation scheme is "sampling-without-replacement" and provides a better matrix sparsification strategy, without "contaminating" the original data matrix too much.
We experiment with k ∈ {2^5, 2^6, 2^7, 2^8, 2^9, 2^10, 2^11, 2^12} and b ∈ {1, 2, 4, 6, 8}, for both the one permutation scheme and the k-permutation scheme. We use 10,000 samples for training and the other 10,000 samples for testing. For convenience, we let D = 2^21 (which is larger than 1,355,191).
[Figures 5 and 6 appear here in the original layout: panels of test accuracy (%) versus C on news20, for linear SVM (Figure 5) and logistic regression (Figure 6) with k ∈ {32, 64, 128, 256, 512, 1024, 2048, 4096}; each panel shows curves for b = 1, 2, 4, 6, 8, the original data, one permutation hashing, and k-permutation hashing. See the captions below.]
Figure 5 and Figure 6 present the test accuracies for linear SVM and logistic regression, respectively. When k is small (e.g., k ≤ 64) both the one permutation scheme and the original k-permutation scheme perform similarly. For larger k values (especially as k ≥ 256), however, our one permutation scheme noticeably outperforms the k-permutation scheme. Using the original data, the test accuracies are about 98%. Our one permutation scheme with k ≥ 512 and b = 8 essentially achieves the original test accuracies, while the k-permutation scheme could only reach about 97.5%.
Figure 5: Test accuracies of linear SVM averaged over 100 repetitions. The one permutation scheme
noticeably outperforms the original k-permutation scheme especially when k is not small.
Figure 6: Test accuracies of logistic regression averaged over 100 repetitions. The one permutation
scheme noticeably outperforms the original k-permutation scheme especially when k is not small.
7 Conclusion
A new hashing algorithm is developed for large-scale search and learning in massive binary data.
Compared with the original k-permutation (e.g., k = 500) minwise hashing (which is a standard
procedure in the context of search), our method requires only one permutation and can achieve
similar or even better accuracies at merely 1/k of the original preprocessing cost. We expect that one
permutation hashing (or its variant) will be adopted in practice. See more details in arXiv:1208.1259.
Acknowledgement: The research of Ping Li is partially supported by NSF-IIS-1249316, NSF-DMS-0808864, NSF-SES-1131848, and ONR-YIP-N000140910911. The research of Art B Owen
is partially supported by NSF-0906056. The research of Cun-Hui Zhang is partially supported by
NSF-DMS-0906420, NSF-DMS-1106753, NSF-DMS-1209014, and NSA-H98230-11-1-0205.
References
[1] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in
high dimensions. In Commun. ACM, volume 51, pages 117?122, 2008.
[2] Leon Bottou. http://leon.bottou.org/projects/sgd.
[3] Andrei Z. Broder, Moses Charikar, Alan M. Frieze, and Michael Mitzenmacher. Min-wise independent
permutations (extended abstract). In STOC, pages 327?336, Dallas, TX, 1998.
[4] Andrei Z. Broder, Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. Syntactic clustering of
the web. In WWW, pages 1157 ? 1166, Santa Clara, CA, 1997.
[5] J. Lawrence Carter and Mark N. Wegman. Universal classes of hash functions (extended abstract). In
STOC, pages 106?112, 1977.
[6] Graham Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and
its applications. Journal of Algorithm, 55(1):58?75, 2005.
[7] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. Liblinear: A library
for large linear classification. Journal of Machine Learning Research, 9:1871?1874, 2008.
[8] Dennis Fetterly, Mark Manasse, Marc Najork, and Janet L. Wiener. A large-scale study of the evolution
of web pages. In WWW, pages 669?678, Budapest, Hungary, 2003.
[9] Jerome H. Friedman, F. Baskett, and L. Shustek. An algorithm for finding nearest neighbors. IEEE
Transactions on Computers, 24:1000?1006, 1975.
[10] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604?613, Dallas, TX, 1998.
[11] Thorsten Joachims. Training linear svms in linear time. In KDD, pages 217?226, Pittsburgh, PA, 2006.
[12] Ping Li. Very sparse stable random projections for dimension reduction in l? (0 < ? ? 2) norm. In
KDD, San Jose, CA, 2007.
[13] Ping Li and Kenneth W. Church. Using sketches to estimate associations. In HLT/EMNLP, pages 708?
715, Vancouver, BC, Canada, 2005 (The full paper appeared in Commputational Linguistics in 2007).
[14] Ping Li, Kenneth W. Church, and Trevor J. Hastie. One sketch for all: Theory and applications of
conditional random sampling. In NIPS, Vancouver, BC, Canada, 2008 (Preliminary results appeared
in NIPS 2006).
[15] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Very sparse random projections. In KDD, pages
287?296, Philadelphia, PA, 2006.
[16] Ping Li, Anshumali Shrivastava, Joshua Moore, and Arnd Christian K?onig. Hashing algorithms for largescale learning. In NIPS, Granada, Spain, 2011.
[17] Ping Li, Anshumali Shrivastava, and Arnd Christian K?onig. b-bit minwise hashing in practice: Largescale batch and online learning and using GPUs for fast preprocessing with simple hash functions. Technical report.
[18] Ping Li and Arnd Christian K?onig. b-bit minwise hashing. In WWW, pages 671?680, Raleigh, NC, 2010.
[19] Ping Li, Arnd Christian K?onig, and Wenhao Gui. b-bit minwise hashing for estimating three-way similarities. In NIPS, Vancouver, BC, 2010.
[20] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver
for svm. In ICML, pages 807?814, Corvalis, Oregon, 2007.
[21] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and S.V.N. Vishwanathan. Hash
kernels for structured data. Journal of Machine Learning Research, 10:2615?2637, 2009.
[22] Anshumali Shrivastava and Ping Li. Fast near neighbor search in high-dimensional binary data. In ECML,
2012.
[23] Josef Sivic and Andrew Zisserman. Video google: a text retrieval approach to object matching in videos.
In ICCV, 2003.
[24] Simon Tong.
Lessons learned developing a practical large scale machine learning system.
http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html, 2008.
[25] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing
for large scale multitask learning. In ICML, pages 1113?1120, 2009.
9
4,174 | 4,779 |
The variational hierarchical EM algorithm for
clustering hidden Markov models
Emanuele Coviello
ECE Dept., UC San Diego
[email protected]
Antoni B. Chan
CS Dept., CityU of Hong Kong
[email protected]
Gert R.G. Lanckriet
ECE Dept., UC San Diego
[email protected]
Abstract
In this paper, we derive a novel algorithm to cluster hidden Markov models
(HMMs) according to their probability distributions. We propose a variational
hierarchical EM algorithm that i) clusters a given collection of HMMs into groups
of HMMs that are similar, in terms of the distributions they represent, and ii) characterizes each group by a ?cluster center?, i.e., a novel HMM that is representative
for the group. We illustrate the benefits of the proposed algorithm on hierarchical
clustering of motion capture sequences as well as on automatic music tagging.
1 Introduction
The hidden Markov model (HMM) [1] is a probabilistic model that assumes a signal is generated
by a double embedded stochastic process. A discrete-time hidden state process, which evolves as a
Markov chain, encodes the dynamics of the signal, and an observation process, at each time conditioned on the current state, encodes the appearance of the signal. HMMs have successfully served
a variety of applications, including speech recognition [1], music analysis [2] and identification [3],
and clustering of time series data [4, 5].
This paper is about clustering HMMs. More precisely, we are interested in an algorithm that, given
a collection of HMMs, partitions them into K clusters of ?similar? HMMs, while also learning a
representative HMM ?cluster center? that concisely and appropriately represents each cluster. This
is similar to standard k-means clustering, except that the data points are HMMs now instead of
vectors in R^d. Various applications motivate the design of HMM clustering algorithms, ranging from hierarchical clustering of sequential data (e.g., speech or motion sequences modeled by HMMs [4]) and hierarchical indexing for fast retrieval, to reducing the computational complexity of estimating mixtures of HMMs from large datasets (e.g., semantic annotation models for music and video): by clustering HMMs, efficiently estimated from many small subsets of the data, into a more compact
mixture model of all data. However, there has been relatively little work on HMM clustering and,
therefore, its applications.
Existing approaches to clustering HMMs operate directly on the HMM parameter space, by grouping HMMs according to a suitable pairwise distance defined in terms of the HMM parameters.
However, as HMM parameters lie on a non-linear manifold, a simple application of the k-means algorithm will not succeed in the task, since it assumes real vectors in a Euclidean space. In addition,
such an approach would have the additional complication that HMM parameters for a particular
generative model are not unique, i.e., a permutation of the states leads to the same generative model.
One solution, proposed in [4], first constructs an appropriate similarity matrix between all HMMs
that are to be clustered (e.g., based on the Bhattacharyya affinity, which depends non-linearly on the
HMM parameters [6]), and then applies spectral clustering. While this approach has proven successful in grouping HMMs into similar clusters [4], it does not allow generating novel HMMs as cluster
centers. Each cluster can still be represented by choosing one of the given HMMs, e.g., the HMM
which the spectral clustering procedure maps the closest to each spectral clustering center. However,
this may be suboptimal for various applications of HMM clustering, e.g., in hierarchical estimation of HMM mixtures. Spectral clustering can be based on affinity scores between HMM distributions other than the Bhattacharyya affinity, such as the KL divergence approximated with sampling [7].
Instead, in this paper we propose to cluster HMMs directly with respect to the probability distributions they represent. We derive a hierarchical expectation maximization (HEM) algorithm that,
starting from a group of HMMs, estimates a smaller mixture model that concisely represents and
clusters the input HMMs (i.e., the input HMM distributions guide the estimation of the output mixture distribution). Historically, the first HEM algorithm was designed to cluster Gaussian probability
distributions [8]. This algorithm starts from a Gaussian mixture model (GMM) and reduces it to another GMM with fewer components, where each of the mixture components of the reduced GMM
represents, i.e., clusters, a group of the original Gaussian mixture components. More recently, Chan
et al. [9] derived an HEM algorithm to cluster dynamic texture (DT) models (i.e., linear dynamical
systems, LDSs) through their probability distributions. HEM has been applied successfully to many
machine learning tasks for images [10], video [9] and music [11, 12]. The HEM algorithm is similar in spirit to Bregman-clustering [13], which is based on assigning points to cluster centers using
KL-divergence.
To extend the HEM framework for GMMs to hidden Markov mixture models (H3Ms), additional
marginalization of the hidden-state processes is required, as for DTMs. However, while Gaussians
and DTs allow tractable inference in the E-step of HEM, this is no longer the case for HMMs.
Therefore, in this work, we derive a variational formulation of the HEM algorithm (VHEM), and
then leverage a variational approximation derived in [14] (which has not been used in a learning context so far) to make the inference in the E-step tractable. The proposed VHEM algorithm for H3Ms
(VHEM-H3M) allows to cluster hidden Markov models, while also learning novel HMM centers that
are representative of each cluster, in a way that is consistent with the underlying generative model
of the input HMMs. The resulting VHEM algorithm can be generalized to handle other classes of
graphical models, for which exact computation of the E-step in standard HEM would be intractable,
by leveraging similar variational approximations. The efficacy of the VHEM-H3M algorithm is
demonstrated on hierarchical motion clustering and semantic music annotation and retrieval.
The remainder of the paper is organized as follows. We review the hidden Markov model (HMM)
and the hidden Markov mixture model (H3M) in Section 2. We present the derivation of the VHEM-H3M algorithm in Section 3, and a discussion and an experimental evaluation in Section 4.
2 The hidden Markov (mixture) model

A hidden Markov model (HMM) M assumes a sequence of τ observations y_{1:τ} is generated by a double embedded stochastic process. The hidden state process x_{1:τ} is a first-order Markov chain on S states, with transition matrix A whose entries are a_{β,β′} = P(x_{t+1} = β′ | x_t = β), and initial state distribution π = [π_1, ..., π_S], where π_β = P(x_1 = β | M). Each state β generates observations according to an emission probability density function p(y | x = β, M), which here we assume time-invariant and modeled as a Gaussian mixture with M components, i.e., p(y | x = β, M) = Σ_{m=1}^M c_{β,m} p(y | ζ = m, M), where ζ ~ multinomial(c_{β,1}, ..., c_{β,M}) is the hidden variable that selects the mixture component, c_{β,m} the mixture weight of the m-th Gaussian component, and p(y | ζ = m, M) = N(y; μ_{β,m}, Σ_{β,m}) the probability density function of a multivariate Gaussian distribution with mean μ_{β,m} and covariance matrix Σ_{β,m}. The HMM is specified by the parameters M = {π, A, {{c_{β,m}, μ_{β,m}, Σ_{β,m}}_{m=1}^M}_{β=1}^S}, which can be efficiently learned from an observation sequence y_{1:τ} with the Baum-Welch algorithm [1].

A hidden Markov mixture model (H3M) models a set of observation sequences as samples from a group of K hidden Markov models, each associated to a specific sub-behavior [5]. For a given sequence, an assignment variable z ~ multinomial(ω_1, ..., ω_K) selects the parameters of one of the K HMMs. Each mixture component is parametrized by M_z = {π^z, A^z, {{c^z_{β,m}, μ^z_{β,m}, Σ^z_{β,m}}_{m=1}^M}_{β=1}^S}, and the H3M is parametrized by M = {ω_z, M_z}_{z=1}^K. The likelihood of a random sequence y_{1:τ} ~ M is

    p(y_{1:τ} | M) = Σ_{i=1}^K ω_i p(y_{1:τ} | z = i, M),    (1)

where p(y_{1:τ} | z = i, M) is the likelihood of y_{1:τ} under the i-th HMM component. To reduce clutter, here we assume that all the HMMs have the same number S of hidden states and that all emission probabilities have M mixture components, though our derivation could easily be extended to the more general case; in the remainder of the paper we use the notation in Table 1.
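As a concrete illustration of (1) and of the forward algorithm used below, the following sketch computes the H3M log-likelihood of a sequence by combining per-component HMM likelihoods. Parameter shapes and function names are our own illustrative choices, not code from the paper.

    import numpy as np
    from scipy.special import logsumexp
    from scipy.stats import multivariate_normal

    def hmm_log_likelihood(y, pi, A, c, mu, cov):
        """Forward algorithm: log p(y_{1:tau} | HMM) with GMM emissions.
        y: (tau, D); pi: (S,); A: (S, S); c: (S, M); mu: (S, M, D); cov: (S, M, D, D)."""
        tau, S, M = y.shape[0], A.shape[0], c.shape[1]
        log_b = np.empty((tau, S))                 # log p(y_t | x_t = s)
        for s in range(S):
            comp = np.stack([multivariate_normal.logpdf(y, mu[s, m], cov[s, m])
                             for m in range(M)], axis=1)       # (tau, M)
            log_b[:, s] = logsumexp(comp + np.log(c[s]), axis=1)
        log_alpha = np.log(pi) + log_b[0]          # forward recursion, log domain
        for t in range(1, tau):
            log_alpha = log_b[t] + logsumexp(log_alpha[:, None] + np.log(A), axis=0)
        return logsumexp(log_alpha)

    def h3m_log_likelihood(y, omega, hmm_params):
        # eq. (1): log p(y | M) = logsumexp_i [ log omega_i + log p(y | M_i) ]
        ll = np.array([hmm_log_likelihood(y, *p) for p in hmm_params])
        return logsumexp(np.log(omega) + ll)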
Table 1: Notation. (b) base model, (r) reduced model.

variables                       (b)                                (r)
index for HMM comp.             i                                  j
HMM states                      β                                  ρ
HMM state sequence              β_{1:τ} = {β_1, ..., β_τ}          ρ_{1:τ} = {ρ_1, ..., ρ_τ}
index for comp. of GMM          m                                  ℓ

models
H3M                             M^(b)                              M^(r)
HMM component                   M_i^(b)                            M_j^(r)
GMM emission                    M_{i,β}^(b)                        M_{j,ρ}^(r)
component of GMM                M_{i,β,m}^(b)                      M_{j,ρ,ℓ}^(r)

probability distributions       notation                                    short-hand
HMM state seq. (b)              p(x_{1:τ} = β_{1:τ} | z^(b) = i, M^(b))     π^{(b),i}_{β_{1:τ}}
HMM state seq. (r)              p(x_{1:τ} = ρ_{1:τ} | z^(r) = j, M^(r))     π^{(r),j}_{ρ_{1:τ}}
HMM obs. likelihood (r)         p(y_{1:τ} | z^(r) = j, M^(r))               p(y_{1:τ} | M_j^(r))
GMM emit likelihood (r)         p(y_t | x_t = ρ, M_j^(r))                   p(y_t | M_{j,ρ}^(r))
Gaussian likelihood (r)         p(y_t | ζ_t = ℓ, x_t = ρ, M_j^(r))          p(y_t | M_{j,ρ,ℓ}^(r))

expectations
HMM obs. seq.                   E_{y_{1:τ} | z^(b)=i, M^(b)}[·]             E_{M_i^(b)}[·]
GMM emission                    E_{y_t | x_t=β, M_i^(b)}[·]                 E_{M_{i,β}^(b)}[·]
Gaussian component              E_{y_t | ζ_t=m, x_t=β, M_i^(b)}[·]          E_{M_{i,β,m}^(b)}[·]

3 Clustering hidden Markov models
We now derive the variational hierarchical EM algorithm for clustering HMMs (VHEM-H3M). Let M^(b) = {ω_i^(b), M_i^(b)}_{i=1}^{K^(b)} be a base hidden Markov mixture model (H3M) with K^(b) components. The goal of the VHEM-H3M algorithm is to find a reduced hidden Markov mixture model M^(r) = {ω_j^(r), M_j^(r)}_{j=1}^{K^(r)} with fewer components (i.e., K^(r) < K^(b)) that represents M^(b) well. At a high level, the VHEM-H3M algorithm estimates the reduced H3M model M^(r) from virtual samples distributed according to the base H3M model M^(b). From this estimation procedure, the VHEM algorithm provides: (i) a (soft) clustering of the original K^(b) HMMs into K^(r) groups, encoded in assignment variables ẑ_{i,j}, and (ii) novel HMM cluster centers, i.e., the HMM components of M^(r), each of them representing a group of the original HMMs of M^(b). Finally, because we take the expectation over the virtual samples, the estimation is carried out in an efficient manner that requires only knowledge of the parameters of the base model, without the need of generating actual virtual samples.

3.1 Parameter estimation
We consider a set Y of N virtual samples distributed according to the base model M^(b), such that the N_i = N ω_i^(b) samples Y_i = {y_{1:τ}^{(i,m)}}_{m=1}^{N_i} are from the i-th component (i.e., y_{1:τ}^{(i,m)} ~ M_i^(b)). We denote the entire set of samples as Y = {Y_i}_{i=1}^{K^(b)}, and, in order to obtain a consistent clustering of the input HMMs M_i^(b), we assume the entirety of samples Y_i is assigned to the same component of the reduced model [8]. Note that, in this formulation, we are not using virtual samples {x_{1:τ}^{(i,m)}, y_{1:τ}^{(i,m)}} for each base component, according to its joint distribution p(x_{1:τ}, y_{1:τ} | M_i^(b)); rather, we treat X_i = {x_{1:τ}^{(i,m)}}_{m=1}^{N_i} as "missing" information, and estimate them in the E-step. The reason is that a basis mismatch between components of M_i^(b) will cause problems when the parameters of M_j^(r) are computed from virtual samples of the hidden states of {M_i^(b)}_{i=1}^{K^(b)}.
The original formulation of HEM [8] maximizes the log-likelihood of the virtual samples, i.e., log p(Y | M^(r)) = Σ_{i=1}^{K^(b)} log p(Y_i | M^(r)), with respect to M^(r), and uses the law of large numbers to turn the virtual samples into an expectation over the base model components M_i^(b). In this paper, we will start with a slightly different objective function to derive the VHEM algorithm. To estimate M^(r), we will maximize the expected log-likelihood of the virtual samples,

    J(M^(r)) = E_{M^(b)}[ log p(Y | M^(r)) ] = Σ_{i=1}^{K^(b)} E_{M_i^(b)}[ log p(Y_i | M^(r)) ],    (2)

where the expectation is over the base model components M_i^(b).
A general framework for maximum likelihood estimation in the presence of hidden variables (which is the case for H3Ms) is the EM algorithm [15]. In this work, we take a variational perspective [16, 17, 18], which views both the E- and M-step as a maximization step. The variational E-step first obtains a family of lower bounds to the log-likelihood (i.e., to equation 2), indexed by variational parameters, and then optimizes over the variational parameters to find the tightest bound. The corresponding M-step then maximizes the lower bound (with the variational parameters fixed) with respect to the model parameters. One advantage of the variational formulation is that it allows replacing a difficult inference in the E-step with a variational approximation, by restricting the maximization to a smaller domain for which the lower bound is tractable.
3.1.1 Lower bound to an expected log-likelihood
Before proceeding with the derivation of VHEM for H3Ms, we first need to derive a lower bound to an expected log-likelihood term (e.g., (2)). We will first consider the lower bound to a log-likelihood. In all generality, let {O, H} be the observation and hidden variables of a probabilistic model, respectively, where p(H) is the distribution of the hidden variables, p(O|H) is the conditional likelihood of the observations, and p(O) = Σ_H p(O|H) p(H) is the observation likelihood. We can define a variational lower bound to the observation log-likelihood [18, 19]:

    log p(O) ≥ log p(O) − D(q(H) || p(H|O)) = Σ_H q(H) log [ p(H) p(O|H) / q(H) ],    (3)

where p(H|O) is the posterior distribution of H given observation O, and q(H) is the variational distribution (i.e., Σ_H q(H) = 1 and q(H) ≥ 0), or approximate posterior distribution. D(p||q) = ∫ p(y) log [ p(y) / q(y) ] dy is the Kullback-Leibler (KL) divergence between two distributions, p and q. When the variational distribution equals the true posterior, q(H) = p(H|O), the KL divergence is zero, and hence the lower bound reaches log p(O). When the true posterior is not possible to calculate, then typically q is restricted to some set of approximate posterior distributions that are tractable, and the best lower bound is obtained by maximizing over q,

    log p(O) ≥ max_{q∈Q} Σ_H q(H) log [ p(H) p(O|H) / q(H) ].    (4)

Using the lower bound in (4), we can now derive a lower bound to an expected log-likelihood expression. Let E_b[·] be the expectation of O with respect to a distribution p_b(O). Since p_b(O) is non-negative, taking the expectation on both sides of (4) yields

    E_b[ log p(O) ] ≥ E_b[ max_{q∈Q} Σ_H q(H) log ( p(H) p(O|H) / q(H) ) ]    (5)
                    ≥ max_{q∈Q} E_b[ Σ_H q(H) log ( p(H) p(O|H) / q(H) ) ]
                    = max_{q∈Q} Σ_H q(H) { log [ p(H) / q(H) ] + E_b[ log p(O|H) ] },    (6)

where (5) follows from Jensen's inequality (i.e., f(E[x]) ≤ E[f(x)] when f is convex), and the convexity of the max function.
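For intuition, the bound in (3) is easy to verify numerically on a small discrete model: its right-hand side equals log p(O) exactly when q is the true posterior, and is smaller for any other q. The sketch below is purely illustrative (random p(H) and p(O|H)), not part of the algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    nH, nO = 4, 6
    pH = rng.dirichlet(np.ones(nH))               # p(H)
    pO_H = rng.dirichlet(np.ones(nO), size=nH)    # p(O|H), one row per H
    pO = pH @ pO_H                                # p(O) = sum_H p(H) p(O|H)

    def bound(q, o):
        # RHS of (3): sum_H q(H) log[ p(H) p(O=o|H) / q(H) ]
        return np.sum(q * (np.log(pH * pO_H[:, o]) - np.log(q)))

    o = 2
    posterior = pH * pO_H[:, o] / pO[o]           # p(H | O=o): bound is tight
    assert np.isclose(bound(posterior, o), np.log(pO[o]))
    for _ in range(100):                          # any other q gives a lower value
        q = rng.dirichlet(np.ones(nH))
        assert bound(q, o) <= np.log(pO[o]) + 1e-12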
3.1.2 Variational lower bound
We now derive the lower bound of the expected log-likelihood cost function in (2). The derivation proceeds by successively applying the lower bound from (6) on each arising expected log-likelihood term, which results in a set of nested lower bounds. We first define the following three lower bounds:

    E_{M_i^(b)}[ log p(Y_i | M^(r)) ] ≥ L_i^{H3M},    (7)
    E_{M_i^(b)}[ log p(y_{1:τ} | M_j^(r)) ] ≥ L_{HMM}^{i,j},    (8)
    E_{M_{i,β_t}^(b)}[ log p(y_t | M_{j,ρ_t}^(r)) ] ≥ L_{GMM}^{(i,β_t),(j,ρ_t)}.    (9)

The first lower bound, L_i^{H3M}, is on the expected log-likelihood between an HMM and an H3M. The second lower bound, L_{HMM}^{i,j}, is on the expected log-likelihood of an HMM M_j^(r), marginalized over observation sequences from a different HMM M_i^(b). Although the data log-likelihood log p(y_{1:τ} | M_j^(r)) can be computed exactly using the forward algorithm [1], calculating its expectation is not analytically tractable, since y_{1:τ} | M_j^(r) is essentially an observation from a mixture with O(S^τ) components. The third lower bound is between the GMM emission densities M_{i,β_t}^(b) and M_{j,ρ_t}^(r).
H3M lower bound - Looking at an individual term in (2), p(Y_i | M^(r)) is a mixture of HMMs, and thus the observation variable is Y_i and the hidden variable is z_i (the assignment of Y_i to a component M_j^(r)). Hence, introducing the variational distribution q_i(z_i) and applying (6), we have

    E_{M_i^(b)}[ log p(Y_i | M^(r)) ]
      ≥ max_{q_i} Σ_j q_i(z_i = j) { log [ p(z_i = j) / q_i(z_i = j) ] + N_i E_{M_i^(b)}[ log p(y_{1:τ} | M_j^(r)) ] }
      ≥ max_{q_i} Σ_j q_i(z_i = j) { log [ p(z_i = j) / q_i(z_i = j) ] + N_i L_{HMM}^{i,j} }  ≜  L_i^{H3M},    (10)

where we use the fact that Y_i is a set of N_i i.i.d. samples, and we use the lower bound (8) for the expectation of log p(y_{1:τ} | M_j^(r)), which is the observation log-likelihood of an HMM and hence its expectation cannot be calculated directly. To compute L_i^{H3M}, we will restrict the variational distributions to the form q_i(z_i = j) = z_{ij} for all i, where Σ_{j=1}^{K^(r)} z_{ij} = 1, and z_{ij} ≥ 0 ∀j.
HMM lower bound - For the HMM likelihood p(y_{1:τ} | M_j^(r)), the observation variable is y_{1:τ} and the hidden variable is its state sequence ρ_{1:τ}. Hence, for the lower bound L_{HMM}^{i,j} we get

    E_{M_i^(b)}[ log p(y_{1:τ} | M_j^(r)) ]
      = Σ_{β_{1:τ}} π^{(b),i}_{β_{1:τ}} E_{M_i^(b) | β_{1:τ}}[ log p(y_{1:τ} | M_j^(r)) ]    (11)
      ≥ Σ_{β_{1:τ}} π^{(b),i}_{β_{1:τ}} max_{q^{i,j}} Σ_{ρ_{1:τ}} q^{i,j}(ρ_{1:τ} | β_{1:τ}) { log [ p(ρ_{1:τ} | M_j^(r)) / q^{i,j}(ρ_{1:τ} | β_{1:τ}) ] + Σ_t E_{M_{i,β_t}^(b)}[ log p(y_t | M_{j,ρ_t}^(r)) ] }    (12)
      ≥ Σ_{β_{1:τ}} π^{(b),i}_{β_{1:τ}} max_{q^{i,j}} Σ_{ρ_{1:τ}} q^{i,j}(ρ_{1:τ} | β_{1:τ}) { log [ p(ρ_{1:τ} | M_j^(r)) / q^{i,j}(ρ_{1:τ} | β_{1:τ}) ] + Σ_t L_{GMM}^{(i,β_t),(j,ρ_t)} }  ≜  L_{HMM}^{i,j},    (13)

where in (11) we first rewrite the expectation E_{M_i^(b)} to explicitly marginalize over the HMM state sequence β_{1:τ} from M_i^(b); in (12) we introduce a variational distribution q^{i,j}(ρ_{1:τ} | β_{1:τ}) on the state sequence ρ_{1:τ}, which depends on the particular sequence β_{1:τ}, and apply (6); and in the last line we use the lower bound, defined in (9), on each expectation.
To compute L_{HMM}^{i,j} we will restrict the variational distributions to the form of a Markov chain [14],

    q^{i,j}(ρ_{1:τ} | β_{1:τ}) = φ^{i,j}(ρ_{1:τ} | β_{1:τ}) = φ_1^{i,j}(ρ_1 | β_1) Π_{t=2}^τ φ_t^{i,j}(ρ_t | ρ_{t−1}, β_t),    (14)

where Σ_{ρ_1=1}^S φ_1^{i,j}(ρ_1 | β_1) = 1 for each value of β_1, and Σ_{ρ_t=1}^S φ_t^{i,j}(ρ_t | ρ_{t−1}, β_t) = 1 for each value of β_t and ρ_{t−1}. The variational distribution φ^{i,j}(ρ_{1:τ} | β_{1:τ}) assigns state sequences β_{1:τ} ~ M_i^(b) to state sequences ρ_{1:τ} ~ M_j^(r), based on how well (in expectation) the state sequence ρ_{1:τ} ~ M_j^(r) evolving through state sequence β_{1:τ} ~ M_i^(b) can explain an observation sequence generated by HMM M_i^(b), i.e., by p(y_{1:τ} | M_i^(b), β_{1:τ}).
GMM lower bound - In [20] we derive the lower bound (9), by marginalizing E_{M_{i,β_t}^(b)} over the GMM assignment m, introducing the variational distributions q_{β,ρ}^{i,j}(ζ = ℓ | m), and applying (6). We will restrict the variational distributions to q_{β,ρ}^{i,j}(ζ = ℓ | m) = η_{ℓ|m}^{(i,β),(j,ρ)}, where Σ_{ℓ=1}^M η_{ℓ|m}^{(i,β_t),(j,ρ_t)} = 1 ∀m, and η_{ℓ|m}^{(i,β_t),(j,ρ_t)} ≥ 0 ∀ℓ, m. Intuitively, η^{(i,β_t),(j,ρ_t)} is the responsibility matrix between Gaussian observation components for state β_t in M_i^(b) and state ρ_t in M_j^(r), where η_{ℓ|m}^{(i,β_t),(j,ρ_t)} is the probability that an observation from component m of M_{i,β_t}^(b) corresponds to component ℓ of M_{j,ρ_t}^(r).

3.2 Variational HEM algorithm
Finally, the variational lower bound of the expected log-likelihood of the virtual samples in (2) is

    J(M^(r)) = E_{M^(b)}[ log p(Y | M^(r)) ] ≥ Σ_{i=1}^{K^(b)} L_i^{H3M},    (15)

which is composed of three nested lower bounds, corresponding to different model elements (the H3M, the component HMMs, and the emission GMMs). The VHEM algorithm for HMMs consists of coordinate ascent on the right-hand side of (15).
E-step - The variational E-step (see [20] for details) calculates the variational parameters z_{ij}, φ^{i,j}(ρ_{1:τ} | β_{1:τ}) = φ_1^{i,j}(ρ_1 | β_1) Π_{t=2}^τ φ_t^{i,j}(ρ_t | ρ_{t−1}, β_t), and η_{ℓ|m}^{(i,β),(j,ρ)} for the lower bounds in (9), (13) and (10). In particular, given the nesting of the lower bounds, we proceed by first maximizing the GMM lower bound L_{GMM}^{(i,β_t),(j,ρ_t)} for each (i, j, β_t, ρ_t). Next, the HMM lower bound L_{HMM}^{i,j} is maximized for each (i, j), which is followed by maximizing L_i^{H3M} for each i. The latter gives ẑ_{ij} ∝ ω_j^(r) exp(N_i L_{HMM}^{i,j}), which is similar to the formula derived in [8, 9], but the expectation is now replaced with its lower bound. We then collect the summary statistics:

    ν_1^{i,j}(ρ_1, β_1) = π_{β_1}^{(b),i} φ_1^{i,j}(ρ_1 | β_1),
    ν_t^{i,j}(ρ_{t−1}, ρ_t, β_t) = Σ_{β_{t−1}=1}^S ν_{t−1}^{i,j}(ρ_{t−1}, β_{t−1}) a_{β_{t−1},β_t}^{(b),i} φ_t^{i,j}(ρ_t | ρ_{t−1}, β_t),
    ν_t^{i,j}(ρ_t, β_t) = Σ_{ρ_{t−1}=1}^S ν_t^{i,j}(ρ_{t−1}, ρ_t, β_t),

the last two for t = 2, ..., τ, and their aggregates, which are necessary for the M-step:

    ν̂_1^{i,j}(ρ) = Σ_{β=1}^S ν_1^{i,j}(ρ, β),
    ν̂^{i,j}(ρ, β) = Σ_{t=1}^τ ν_t^{i,j}(ρ, β),
    ν̂^{i,j}(ρ, ρ′) = Σ_{t=2}^τ Σ_{β=1}^S ν_t^{i,j}(ρ, ρ′, β).    (16)
The statistic ν̂_1^{i,j}(ρ) is the expected number of times that the HMM M_j^(r) starts from state ρ, when modeling sequences generated by M_i^(b). The quantity ν̂^{i,j}(ρ, β) is the expected number of times that the HMM M_j^(r) is in state ρ when the HMM M_i^(b) is in state β, when both are modeling sequences generated by M_i^(b). Similarly, the quantity ν̂^{i,j}(ρ, ρ′) is the expected number of transitions from state ρ to state ρ′ of M_j^(r), when modeling sequences generated by M_i^(b).
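A direct implementation of the recursions in (16) is a pair of tensor updates; the sketch below computes the aggregate statistics for one pair (i, j). Index layouts and names are our own choices rather than the authors' code.

    import numpy as np

    def summary_statistics(pi_b, A_b, phi1, phi):
        """Recursions (16) for one pair (i, j).
        pi_b: (S,), A_b: (S, S): parameters of the base HMM M_i^(b).
        phi1: (S, S): phi_1^{i,j}(rho | beta_1), indexed [beta, rho].
        phi: (tau, S, S, S): phi_t^{i,j}(rho_t | rho_{t-1}, beta_t),
             indexed [t, beta_t, rho_prev, rho]; entry t = 0 is unused."""
        tau = phi.shape[0]
        nu = pi_b[:, None] * phi1           # nu_1(rho, beta), stored as [beta, rho]
        nu1_hat = nu.sum(axis=0)            # hat-nu_1^{i,j}(rho)
        nu_hat = nu.copy()                  # accumulates hat-nu^{i,j}(rho, beta)
        trans_hat = np.zeros_like(A_b)      # hat-nu^{i,j}(rho, rho')
        for t in range(1, tau):
            # sum over beta_{t-1}: nu_{t-1}(rho_prev, beta_prev) * a^(b)_{beta_prev, beta_t}
            m = np.einsum('cr,cb->br', nu, A_b)       # [beta_t, rho_prev]
            nu3 = m[:, :, None] * phi[t]              # [beta_t, rho_prev, rho_t]
            trans_hat += nu3.sum(axis=0)              # accumulate (rho_prev, rho_t)
            nu = nu3.sum(axis=1)                      # nu_t(rho_t, beta_t) as [beta, rho]
            nu_hat += nu
        return nu1_hat, nu_hat, trans_hat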
M-step - The lower bound (15) is maximized with respect to the parameters M^(r). Defining the weighted-sum operator

    Ω_{j,ρ,ℓ}( x(i, β, m) ) = Σ_{i=1}^{K^(b)} ẑ_{i,j} ω_i^(b) Σ_{β=1}^S ν̂^{i,j}(ρ, β) Σ_{m=1}^M c_{β,m}^{(b),i} x(i, β, m),

the parameters M^(r) are updated according to (derivation in [20]):

    ω_j^{(r)*} = ( Σ_{i=1}^{K^(b)} ẑ_{i,j} ) / K^(b),

    π_ρ^{(r),j*} = Σ_{i=1}^{K^(b)} ẑ_{i,j} ω_i^(b) ν̂_1^{i,j}(ρ)  /  Σ_{i=1}^{K^(b)} Σ_{ρ′=1}^S ẑ_{i,j} ω_i^(b) ν̂_1^{i,j}(ρ′),

    a_{ρ,ρ′}^{(r),j*} = Σ_{i=1}^{K^(b)} ẑ_{i,j} ω_i^(b) ν̂^{i,j}(ρ, ρ′)  /  Σ_{i=1}^{K^(b)} Σ_{σ=1}^S ẑ_{i,j} ω_i^(b) ν̂^{i,j}(ρ, σ),

    c_{ρ,ℓ}^{(r),j*} = Ω_{j,ρ,ℓ}( η_{ℓ|m}^{(i,β),(j,ρ)} )  /  Σ_{ℓ′=1}^M Ω_{j,ρ,ℓ′}( η_{ℓ′|m}^{(i,β),(j,ρ)} ),

    μ_{ρ,ℓ}^{(r),j*} = Ω_{j,ρ,ℓ}( η_{ℓ|m}^{(i,β),(j,ρ)} μ_{β,m}^{(b),i} )  /  Ω_{j,ρ,ℓ}( η_{ℓ|m}^{(i,β),(j,ρ)} ),    (17)

    Σ_{ρ,ℓ}^{(r),j*} = Ω_{j,ρ,ℓ}( η_{ℓ|m}^{(i,β),(j,ρ)} [ Σ_{β,m}^{(b),i} + (μ_{β,m}^{(b),i} − μ_{ρ,ℓ}^{(r),j})(μ_{β,m}^{(b),i} − μ_{ρ,ℓ}^{(r),j})^T ] )  /  Ω_{j,ρ,ℓ}( η_{ℓ|m}^{(i,β),(j,ρ)} ).    (18)

Equations (17) and (18) are all weighted averages over all base models, model states, and Gaussian components. The covariance matrices of the reduced models (18) are never smaller in magnitude than the covariance matrices of the base models, due to the outer-product term. This regularization effect derives from the E-step, which averages all possible observations from the base model.
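The updates in (17) for the mixture weights, initial probabilities and transition matrices are simple weighted averages of the aggregate statistics. A sketch, vectorized over all pairs (i, j), with illustrative array names:

    import numpy as np

    def m_step_weights_pi_A(z_hat, omega_b, nu1_hat, trans_hat):
        """Updates (17) for omega^(r), pi^(r),j and a^(r),j.
        z_hat: (Kb, Kr); omega_b: (Kb,);
        nu1_hat: (Kb, Kr, S) with hat-nu_1^{i,j}(rho);
        trans_hat: (Kb, Kr, S, S) with hat-nu^{i,j}(rho, rho')."""
        Kb = z_hat.shape[0]
        omega_r = z_hat.sum(axis=0) / Kb                   # omega_j^(r)*
        w = z_hat * omega_b[:, None]                       # z_hat_ij * omega_i^(b)
        pi_num = np.einsum('ij,ijr->jr', w, nu1_hat)
        pi_r = pi_num / pi_num.sum(axis=1, keepdims=True)  # pi_rho^(r),j*
        A_num = np.einsum('ij,ijrs->jrs', w, trans_hat)
        A_r = A_num / A_num.sum(axis=2, keepdims=True)     # a_{rho,rho'}^(r),j*
        return omega_r, pi_r, A_r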
4 Discussion, Experiments and Conclusions
Jebara et al. [4] cluster a collection of HMMs by applying spectral clustering to a probability product kernel (PPK) matrix between HMMs. While this has been proven successful in grouping HMMs into similar clusters, it cannot learn novel HMM cluster centers and is therefore suboptimal for hierarchical estimation of mixture models (see Section 4.2). A second limitation is that the cost of building the PPK matrix is quadratic in the number K^(b) of input HMMs. Note that we extended the algorithm in [4] to support GMM observations instead of only Gaussians.

The VHEM-H3M algorithm clusters a collection of HMMs directly through the distributions they represent, by estimating a smaller mixture of novel HMMs that concisely models the distribution represented by the input HMMs. This is achieved by maximizing the log-likelihood of "virtual" samples generated from the input HMMs. As a result, the VHEM cluster centers are consistent with the underlying generative probabilistic framework. As a first advantage, since VHEM-H3M estimates novel HMM cluster centers, we expect the learned cluster centers to retain more information on the clusters' structure, and VHEM-H3M to produce better hierarchical clusterings than [4], which suffers from out-of-sample limitations. A second advantage is that VHEM does not build a kernel embedding as in [4], and is therefore expected to be more efficient, especially for large K^(b).

In addition, VHEM-H3M allows for efficient estimation of HMM mixtures from large datasets using a hierarchical estimation procedure. In particular, in a first stage intermediate HMM mixtures are estimated in parallel by running standard EM on small independent portions of the dataset, and the final model is estimated from the intermediate models using the VHEM algorithm. Relative to direct EM estimation on the entire data, VHEM-H3M is more time- and memory-efficient. First, it does not need to evaluate the likelihood of all the samples at each iteration, and converges to effective estimates in shorter times. Second, it no longer requires storing the entire data set in memory during parameter estimation. Another advantage is that the intermediate models implicitly provide more "samples" (virtual variations of each time-series) to the final VHEM stage. This acts as a form of regularization that prevents over-fitting and improves robustness of the learned models. Therefore, we expect models learned using the hierarchical estimation procedure to perform better than those learned with EM directly on the entire data. Note that in the second stage we could use the spectral clustering algorithm in [4] instead of VHEM: run spectral clustering over the intermediate models pooled together, and form the final H3M with the HMMs mapped closest to the K cluster centers. VHEM, however, is expected to do better since it learns novel cluster centers. As an alternative to VHEM, we tested a version of HEM that, instead of marginalizing over virtual samples, uses actual sampling and the EM algorithm [5] to learn the reduced H3M. Despite its simplicity, the algorithm requires a large number of samples for learning accurate models, and has longer learning times (since it evaluates the likelihood of all samples at each iteration).
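The two-stage procedure just described can be summarized as follows; fit_h3m_em and vhem_reduce stand in for an EM-based H3M learner and the VHEM reduction of Section 3, and the attribute names (weights, components) are hypothetical:

    import numpy as np

    def hierarchical_h3m(data_chunks, fit_h3m_em, vhem_reduce, K_s=6, K=3):
        # Stage 1: intermediate H3Ms, one per chunk (can run in parallel).
        intermediates = [fit_h3m_em(chunk, n_components=K_s) for chunk in data_chunks]
        # Stage 2: pool all intermediate components into one large base H3M ...
        base_weights = np.concatenate([h.weights / len(intermediates)
                                       for h in intermediates])
        base_hmms = [hmm for h in intermediates for hmm in h.components]
        # ... and reduce it to K novel cluster centers with VHEM.
        return vhem_reduce(base_weights, base_hmms, n_components=K)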
Table 2: Hierarchical clustering on Motion Capture data, using various algorithms. The Rand-index is the probability that any pair of motion sequences are correctly clustered with respect to each other. Results are averages of 10 trials.

                       time (s)    log-likelihood (x10^6)          Rand-index
Level (#samples)                   Level 2    Level 3   Level 4    Level 2  Level 3  Level 4
VHEM-H3M               30.97       -5.361     -5.682    -5.866     0.937    0.811    0.518
PPK-SC                 37.69       -5.399     -5.845    -6.068     0.956    0.740    0.393
SHEM-H3M (560)         843.89      -13.632    -69.746   -275.650   0.714    0.359    0.234
SHEM-H3M (2800)        3849.72     -14.645    -30.086   -52.227    0.782    0.685    0.480
EM-H3M                 667.97      -5.713     -202.55   -168.90    0.831    0.430    0.340
HEM-DTM                121.32      -7.125     -8.163    -8.532     0.897    0.661    0.412

Table 3: Annotation and retrieval on CAL500, for VHEM-H3M, PPK-SC, EM-H3M, HEM-DTM and HEM-GMM, averaged over the 97 tags with at least 30 examples in CAL500, and result of 5-fold cross-validation.

              annotation                  retrieval
              P        R        F         MAP      P@10      time (h)
VHEM-H3M      0.446    0.211    0.260     0.440    0.451     678
EM-H3M        0.415    0.214    0.248     0.423    0.422     1860
PPK-SC        0.299    0.159    0.151     0.347    0.340     1033
HEM-DTM       0.430    0.202    0.252     0.439    0.453     426
HEM-GMM       0.374    0.205    0.213     0.417    0.425     5

Figure 1: Hierarchical clustering of Motion Capture data (qualitative). Best in color. [Panels omitted: Levels 1-4 of the hierarchy learned by the VHEM-H3M algorithm (top) and by PPK-SC (bottom); motion classes: walk 1, walk 2, run, jog, jump, basket, soccer, sit.]

4.1 Experiment on hierarchical motion clustering
We tested the VHEM algorithm on hierarchical motion clustering, where each of the input HMMs to be clustered is estimated on a sequence of motion capture data from the Motion Capture dataset (http://mocap.cs.cmu.edu/). In particular, we start from K_1 = 56 motion examples from 8 different classes ("jump", "run", "jog", "walk 1" and "walk 2", which are from two different subjects, "basket", "soccer", "sit"), and learn an HMM for each of them, forming the first level of the hierarchy. A tree structure is formed by successively clustering HMMs with the VHEM algorithm, and using the learned cluster centers as the representative HMMs at the new level. Levels 2, 3, and 4 of the hierarchy correspond to K_2 = 8, K_3 = 4 and K_4 = 2 clusters.

The hierarchical clustering obtained with VHEM is illustrated in Figure 1 (top). In the first level, each vertical bar represents a motion sequence, and different colors indicate different ground-truth classes. At Level 2, the 8 HMM clusters are shown with vertical bars, with the colors indicating the proportions of the motion classes in the cluster. At Level 2, VHEM produces clusters with examples from a single motion class (e.g., "run", "jog", "jump"), but mixes some "soccer" examples with "basket", possibly because both actions consist of a sequence of movement-shot-pause. Moving up the hierarchy, VHEM clusters similar motion classes together (as indicated by the arrows), and at Level 4 it creates a dichotomy between "sit" and the other (more dynamic) motion classes. On the bottom, in Figure 1, the same experiment is repeated using spectral clustering in tandem with PPK similarity (PPK-SC). PPK-SC clusters motion sequences properly; however, at Level 2 it incorrectly aggregates "sit" and "soccer", which have quite different dynamics, and Level 4 is not as interpretable as the one by VHEM. Table 2 provides a quantitative comparison. While VHEM has a lower Rand-index than PPK-SC at Level 2 (0.937 vs. 0.956), it has a higher Rand-index at Level 3 (0.811 vs. 0.740) and Level 4 (0.518 vs. 0.393). In addition, VHEM-H3M has higher data log-likelihood than PPK-SC at each level, and is more efficient. This suggests that the novel HMM cluster centers learned by VHEM-H3M retain more information on the clusters' structure than the spectral cluster centers, which is increasingly visible moving up the hierarchy. Finally, VHEM-H3M performs better and is more efficient than the HEM version based on actual sampling (SHEM-H3M), the EM applied directly on the motion sequences, and the HEM-DTM algorithm [9].
4.2 Experiment on automatic music tagging

We evaluated VHEM-H3M on content-based music auto-tagging on CAL500 [11], a collection of 502 songs annotated with respect to a vocabulary V of 149 tags. For each song, we extract a time series Y = {y_1, ..., y_T} of 13 Mel-frequency cepstral coefficients (MFCC) [1] over half-overlapping windows of 46ms, with first and second instantaneous derivatives. We formulate music auto-tagging as supervised multi-class labeling [10], where each class is a tag from V and is modeled as an H3M probability distribution estimated from audio sequences (of T = 125 audio features, i.e., approximately 3s of audio) extracted from the relevant songs in the database, using the VHEM-H3M algorithm. First, for each song the EM algorithm is used to learn an H3M with K^(s) = 6 components (as many as the structural parts of most pop songs). Then, for each tag, the relevant song-level H3Ms are pooled together and the VHEM-H3M algorithm is used to learn the final H3M tag model with K = 3 components.

We compare the proposed VHEM-H3M algorithm to PPK-SC,¹ direct EM estimation (EM-H3M) [5] from the relevant songs' audio sequences, HEM-DTM [12] and HEM-GMM [11]. The last two use an efficient HEM algorithm for learning, and are state-of-the-art baselines for music tagging. We were not able to successfully estimate tag-H3Ms with the sampling version of HEM-H3M. Annotation (precision P, recall R, and f-score F) and retrieval (mean average precision MAP, and top-10 precision P@10) are reported in Table 3. VHEM-H3M is the most efficient algorithm for learning H3Ms, as it requires only 36% of the time of EM-H3M, and 65% of the time of PPK-SC. VHEM-H3M capitalizes on the song-level H3Ms learned in the first stage (about one third of the total time), by efficiently using them to learn the final tag models. The gain in computational efficiency does not negatively affect the quality of the resulting models. On the contrary, VHEM-H3M achieves better performance than EM-H3M (differences are statistically significant based on a paired t-test with 95% confidence), since it has the benefit of regularization, and outperforms PPK-SC. Designed for clustering HMMs, PPK-SC does not produce accurate annotation models, since it discards information on the clusters' structure by approximating it with one of the original HMMs. Instead, VHEM-H3M generates novel HMM cluster centers that effectively summarize each cluster. VHEM-H3M outperforms HEM-GMM, which does not model temporal information in the audio signal. Finally, HEM-DTM, based on LDSs (a continuous-state model), can model only stationary time-series in a linear subspace. In contrast, VHEM-H3M uses HMMs with discrete states and GMM emissions, and can also adapt to non-stationary time-series on a non-linear manifold. Hence, VHEM-H3M outperforms HEM-DTM on the human MoCap data (see Table 2), which has non-linear dynamics, while the two perform similarly on the music data (differences were statistically significant only on annotation P), where the audio features are stationary over short time frames.
4.3 Conclusion

We presented a variational HEM algorithm that clusters HMMs through their distributions and generates novel HMM cluster centers. The efficacy of the algorithm was demonstrated on hierarchical motion clustering and automatic music tagging, with improvement over current methods.

Acknowledgments

The authors acknowledge support from Google, Inc. E.C. and G.R.G.L. acknowledge support from Qualcomm, Inc., Yahoo! Inc., and the National Science Foundation (grants CCF-083053, IIS-1054960 and EIA-0303622). A.B.C. acknowledges support from the Research Grants Council of the Hong Kong SAR, China (CityU 110610). G.R.G.L. acknowledges support from the Alfred P. Sloan Foundation.

¹ It was necessary to implement PPK-SC with song-level H3Ms with K^(s) = 1. K^(s) = 2 took about quadruple the time with no improvement in performance. Larger K^(s) would lead to impractical learning times.
References
[1] L. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Upper Saddle River (NJ, USA), 1993.
[2] Y. Qi, J. W. Paisley, and L. Carin. Music analysis using hidden Markov mixture models. Signal Processing, IEEE Transactions on, 55(11):5209-5224, 2007.
[3] E. Batlle, J. Masip, and E. Guaus. Automatic song identification in noisy broadcast audio. In IASTED International Conference on Signal and Image Processing. Citeseer, 2002.
[4] T. Jebara, Y. Song, and K. Thadani. Spectral clustering and embedding with hidden Markov models. Machine Learning: ECML 2007, pages 164-175, 2007.
[5] P. Smyth. Clustering sequences with hidden Markov models. In Advances in Neural Information Processing Systems, 1997.
[6] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. The Journal of Machine Learning Research, 5:819-844, 2004.
[7] B. H. Juang and L. R. Rabiner. A probabilistic distance measure for hidden Markov models. AT&T Technical Journal, 64(2):391-408, February 1985.
[8] N. Vasconcelos and A. Lippman. Learning mixture hierarchies. In Advances in Neural Information Processing Systems, 1998.
[9] A. B. Chan, E. Coviello, and G. R. G. Lanckriet. Clustering dynamic textures with the hierarchical EM algorithm. In Intl. Conference on Computer Vision and Pattern Recognition, 2010.
[10] G. Carneiro, A. B. Chan, P. J. Moreno, and N. Vasconcelos. Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3):394-410, 2007.
[11] D. Turnbull, L. Barrington, D. Torres, and G. Lanckriet. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech and Language Processing, 16(2):467-476, February 2008.
[12] E. Coviello, A. Chan, and G. Lanckriet. Time series models for semantic music annotation. Audio, Speech, and Language Processing, IEEE Transactions on, 5(19):1343-1359, 2011.
[13] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. The Journal of Machine Learning Research, 6:1705-1749, 2005.
[14] J. R. Hershey, P. A. Olsen, and S. J. Rennie. Variational Kullback-Leibler divergence for hidden Markov models. In Automatic Speech Recognition & Understanding, 2007. ASRU. IEEE Workshop on, pages 323-328. IEEE, 2008.
[15] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38, 1977.
[16] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. NATO ASI Series D: Behavioural and Social Sciences, 89:355-370, 1998.
[17] I. Csiszár, G. Tusnády, et al. Information geometry and alternating minimization procedures. Statistics and Decisions, 1984.
[18] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[19] Tommi S. Jaakkola. Tutorial on variational approximation methods. In Advanced Mean Field Methods: Theory and Practice, pages 129-159. MIT Press, 2000.
[20] Anonymous. Derivation of the Variational HEM Algorithm for Hidden Markov Mixture Models. Technical report, Anonymous, 2012.
4779 |@word kong:2 trial:1 kondor:1 version:3 proportion:1 covariance:3 citeseer:1 shot:1 initial:1 series:7 score:2 efficacy:2 zij:4 bhattacharyya:2 outperforms:3 existing:1 current:2 assigning:1 visible:1 partition:1 j1:2 moreno:1 designed:2 interpretable:1 v:3 stationary:3 generative:4 fewer:2 intelligence:1 accordingly:1 capitalizes:1 ith:2 short:2 provides:2 complication:1 direct:2 qualitative:1 consists:2 fitting:1 introduce:1 manner:1 pairwise:1 tagging:6 expected:14 behavior:1 ldss:2 multi:1 little:1 actual:3 window:1 tandem:1 estimating:2 underlying:2 notation:3 maximizes:2 ghosh:1 impractical:1 nj:1 temporal:1 quantitative:1 ti:5 act:1 exactly:1 k2:1 grant:2 before:1 treat:1 despite:1 quadruple:1 approximately:1 eb:5 china:1 collect:1 suggests:1 hmms:48 statistically:2 averaged:1 unique:1 acknowledgment:1 practice:1 implement:1 lippman:1 procedure:5 evolving:1 asi:1 confidence:1 get:1 cannot:2 marginalize:1 operator:1 prentice:1 context:1 applying:4 map:3 demonstrated:2 center:18 baum:1 yt:7 missing:1 maximizing:4 starting:1 convex:1 welch:1 formulate:1 simplicity:1 assigns:1 nesting:1 embedding:2 handle:1 gert:2 coordinate:1 variation:1 sar:1 updated:1 diego:2 hierarchy:5 exact:1 smyth:1 us:3 lanckriet:4 element:1 recognition:4 approximated:1 database:1 bottom:1 capture:5 calculate:1 wj:1 movement:1 mz:2 dempster:1 convexity:1 complexity:1 dynamic:6 motivate:1 rewrite:1 creates:1 negatively:1 efficiency:1 basis:1 cityu:3 easily:1 joint:1 isp:1 various:3 represented:2 carneiro:1 derivation:6 fast:1 effective:1 sc:12 dichotomy:1 aggregate:2 labeling:1 choosing:1 whose:1 encoded:1 quite:1 larger:1 rennie:1 statistic:3 qualcomm:1 noisy:1 laird:1 final:5 sequence:30 advantage:4 took:1 propose:2 product:3 remainder:2 relevant:3 csisz:1 az:1 juang:2 cluster:42 double:2 p:5 intl:1 produce:3 generating:1 incremental:1 converges:1 derive:9 illustrate:1 bers:1 ij:1 c:2 entirety:1 indicate:1 tommi:1 annotated:1 stochastic:2 human:1 virtual:13 clustered:3 anonymous:2 hall:1 ground:1 exp:1 k3:1 achieves:1 estimation:13 council:1 successfully:3 weighted:2 minimization:1 mit:1 gaussian:10 jaakkola:2 derived:3 emission:7 properly:1 improvement:2 likelihood:30 hk:1 contrast:1 baseline:1 inference:3 entire:4 typically:1 hidden:31 mth:1 interested:1 selects:2 yahoo:1 art:1 uc:2 marginal:1 equal:1 construct:1 never:1 vasconcelos:2 ady:1 sampling:4 field:1 represents:5 carin:1 report:1 composed:1 divergence:6 national:1 individual:1 replaced:1 geometry:1 eyt:2 evaluation:1 mixture:28 chain:3 accurate:2 bregman:2 emit:1 necessary:2 shorter:1 indexed:1 tree:1 euclidean:1 incomplete:1 walk:4 soft:1 modeling:3 assignment:4 maximization:3 turnbull:1 cost:2 introducing:2 subset:1 entry:1 successful:2 hem:28 reported:1 density:3 fundamental:1 ppk:15 river:1 international:1 retain:2 probabilistic:4 together:3 successively:2 broadcast:1 possibly:1 derivative:1 li:8 pooled:2 coefficient:1 inc:3 explicitly:1 sloan:1 depends:2 tion:1 view:2 responsibility:1 characterizes:1 portion:1 start:4 parallel:1 annotation:10 formed:1 ni:5 merugu:1 efficiently:3 maximized:2 correspond:1 yield:1 rabiner:2 identification:2 served:1 comp:2 mfcc:1 explain:1 reach:1 suffers:1 basket:3 evaluates:1 frequency:1 associated:1 mi:24 gain:1 dataset:2 recall:1 knowledge:1 color:3 improves:1 organized:1 higher:2 dt:1 supervised:2 hershey:1 rand:3 formulation:4 evaluated:1 though:1 eia:1 generality:1 stage:4 hand:2 banerjee:1 google:1 quality:1 indicated:1 barrington:1 building:1 effect:2 usa:1 true:2 ccf:1 hence:5 assigned:1 
analytically:1 regularization:3 alternating:1 leibler:2 dhillon:1 semantic:5 illustrated:1 neal:1 during:1 mel:1 soccer:4 hong:2 generalized:1 m:1 performs:1 motion:19 ranging:1 variational:33 image:3 novel:12 recently:1 instantaneous:1 multinomial:2 extend:1 significant:2 paisley:1 automatic:5 rd:1 pm:2 similarly:2 emanuele:1 language:2 moving:2 similarity:2 longer:3 base:11 pkq:1 closest:2 multivariate:1 chan:5 posterior:5 perspective:1 optimizes:1 discard:1 inequality:1 yi:11 additional:2 expecta:1 determine:1 maximize:1 mocap:2 signal:6 ii:2 mix:1 sound:1 reduces:1 jog:3 technical:2 adapt:1 cross:1 dept:3 retrieval:7 paired:1 dtm:7 qi:9 calculates:1 variant:1 essentially:1 expectation:13 cmu:1 vision:1 iteration:2 represent:3 cz:1 kernel:3 achieved:1 addition:3 appropriately:1 operate:1 coviello:3 ascent:1 subject:1 contrary:1 leveraging:1 spirit:1 gmms:2 jordan:1 structural:1 leverage:1 presence:1 intermediate:4 variety:1 marginalization:1 affect:1 zi:9 restrict:3 suboptimal:2 reduce:1 expression:1 song:11 speech:6 proceed:1 cause:1 action:1 clutter:1 h3m:49 reduced:7 generate:1 http:1 tutorial:1 estimated:5 arising:1 correctly:1 alfred:1 discrete:2 group:9 iasted:1 pb:2 k4:1 gmm:18 sum:1 run:4 family:1 seq:3 ob:3 dy:1 summarizes:1 decision:1 bound:35 followed:1 fold:1 quadratic:1 precisely:1 encodes:2 tag:7 generates:3 abchan:1 ey1:1 relatively:1 according:6 smaller:4 slightly:1 em:36 increasingly:1 evolves:1 intuitively:1 invariant:1 indexing:1 restricted:1 behavioural:1 equation:2 turn:1 tractable:5 gaussians:2 tightest:1 apply:1 hierarchical:20 appropriate:1 spectral:10 alternative:1 robustness:1 original:5 assumes:3 clustering:39 running:1 top:2 graphical:2 music:14 calculating:1 ghahramani:1 build:1 especially:1 k1:1 approximating:1 february:2 society:1 objective:1 tusn:1 quantity:2 affinity:3 subspace:1 distance:2 mapped:1 hmm:50 parametrized:2 outer:1 manifold:2 reason:1 thadani:1 modeled:3 index:5 difficult:1 negative:1 ized:1 design:1 perform:2 upper:1 vertical:2 observation:19 markov:23 datasets:2 howard:1 acknowledge:2 ecml:1 incorrectly:1 extended:2 looking:1 hinton:1 y1:24 frame:1 ucsd:2 jebara:3 pair:1 required:1 kl:4 specified:1 concisely:3 learned:8 dts:1 pop:1 able:1 bar:2 proceeds:1 dynamical:1 pattern:2 mismatch:1 including:1 max:9 video:2 memory:2 royal:1 suitable:1 pause:1 advanced:1 representing:1 historically:1 carried:1 acknowledges:2 hm:8 auto:2 extract:1 review:1 understanding:1 marginalizing:2 relative:1 law:1 embedded:2 expect:2 permutation:1 limitation:2 proven:2 validation:1 foundation:2 consistent:3 rubin:1 storing:1 summary:1 last:3 guide:1 allow:2 side:2 saul:1 taking:1 cepstral:1 sparse:1 benefit:2 distributed:2 calculated:1 vocabulary:1 transition:2 forward:1 collection:5 jump:3 san:2 author:1 far:1 social:1 transaction:4 approximate:2 compact:1 obtains:1 implicitly:1 kullback:2 olsen:1 nato:1 xi:1 continuous:1 table:7 mj:32 learn:6 dtms:1 lgm:1 domain:1 pk:8 linearly:1 arrow:1 repeated:1 x1:7 representative:4 torres:1 precision:3 sub:1 lie:1 third:2 learns:1 formula:1 antoni:1 xt:6 specific:1 jensen:1 grouping:2 workshop:1 intractable:1 derives:1 restricting:1 sequential:1 sit:4 cal500:3 effectively:1 texture:2 magnitude:1 conditioned:1 justifies:1 appearance:1 saddle:1 forming:1 prevents:1 applies:1 nested:2 corresponds:1 truth:1 extracted:1 succeed:1 conditional:1 goal:1 asru:1 replace:1 content:1 except:1 reducing:1 total:1 ece:3 experimental:1 indicating:1 support:5 latter:1 evaluate:1 audio:9 tested:2
|
4,175 | 478 |
A comparison between a neural network model for
the formation of brain maps and experimental data
K. Obermayer
Beckman-Institute
University of Illinois
Urbana, IL 61801
K. Schulten
Beckman-Institute
University of Illinois
Urbana, IL 61801
G.G. Blasdel
Harvard Medical School
Harvard University
Boston, MA 02115
Abstract
Recently, high resolution images of the simultaneous representation of
orientation preference, orientation selectivity and ocular dominance have
been obtained for large areas in monkey striate cortex by optical imaging
[1-3]. These data allow for the first time a "local" as well as "global"
description of the spatial patterns and provide strong evidence for correlations between orientation selectivity and ocular dominance.
A quantitative analysis reveals that these correlations arise when a five-dimensional feature space (two dimensions for retinotopic space, one each
for orientation preference, orientation specificity, and ocular dominance) is
mapped into the two available dimensions of cortex while locally preserving
topology. These results provide strong evidence for the concept of topology
preserving maps which have been suggested as a basic design principle of
striate cortex [4-7].
Monkey striate cortex contains a retinotopic map in which are embedded the highly
repetitive patterns of orientation selectivity and ocular dominance. The retinotopic
projection establishes a "global" order, while maps of variables describing other
stimulus features, in particular line orientation and ocularity, dominate cortical
organization locally. A large number of pattern models [8-12] as well as models
of development [6,7,13-21] have been proposed to describe the spatial structure of
these patterns and their development during ontogenesis. However, most models
have not been compared with experimental data in detail. There are two reasons
for this: (i) many model-studies were not elaborated enough to be experimentally
testable and (ii) a sufficient amount of experimental data obtained from large areas
of striate cortex was not available.
Figure 1: Spatial pattern of orientation preference and ocular dominance in monkey striate cortex (left) compared with predictions of the SOFM-model (right). Iso-orientation lines (gray) are drawn in intervals of 11.25° (left) and 15.0° (right), respectively. Black lines indicate the borders (w_5(r) = 0) of ocular dominance bands. The areas enclosed by black rectangles mark corresponding elements of organization in monkey striate cortex and in the simulation result (see text). Left: Data obtained from a 3.1mm x 4.2mm patch of the striate cortex of an adult macaque (macaca nemestrina) by optical imaging [1-3]. The region is located near the border with area 18, close to midline. Right: Model-map generated by the SOFM-algorithm. The figure displays a small section of a network of size N = 512, d = 5. The parameters of the simulation were: ε = 0.02, σ_h = 5, v_r^max = 20.48, σ_x = 15.36, 9·10^6 iterations, with retinotopic initial conditions and periodic boundary conditions.

1 Orientation and ocular dominance columns in monkey striate cortex
Recent advances in optical imaging [1-3,22,23] now make it possible to obtain high resolution images of the spatial pattern of orientation selectivity and ocular dominance from large cortical areas. Prima vista analysis of data from monkey striate cortex reveals that the spatial pattern of orientation preference and ocular dominance is continuous and highly repetitive across cortex. On a global scale orientation preferences repeat along every direction of cortex with similar periods. Locally, orientation preferences are organized as parallel slabs (arrow 1, Fig. 1a) in linear zones, which start and end at singularities (arrow 2, Fig. 1a), point-like discontinuities, around which orientation preferences change by ±180° in a pinwheel-like fashion. Both types of singularities appear in equal numbers (359:354 for maps obtained from four adult macaques) with a density of 5.5/mm² (for regions close to
    Fourier transforms:     w̃_j(k) = Σ_r exp(ik·r) w_j(r)

    Correlation functions:  C_ij(p) = ⟨w_i(r) w_j(r + p)⟩_r

    Feature gradients:      |∇_r w_j(r)| = {(w_j(r₁+1, r₂) − w_j(r₁, r₂))² + (w_j(r₁, r₂+1) − w_j(r₁, r₂))²}^(1/2)

    Gabor transforms:       g̃_j(k, r) = (2πσ_g²)⁻¹ ∫ d²r′ w_j(r′) exp(−(r′ − r)²/(2σ_g²) + ik·(r′ − r))

Table 1: Quantitative measures used to characterize cortical maps.
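These measures are straightforward to evaluate on a discretized map. The following sketch computes each of them with NumPy; the array conventions, periodic boundary handling, and the finite-difference gradient are assumptions made for illustration, not code from the original study.

```python
import numpy as np

def fourier_transform(w):
    """Fourier transform w~_j(k) of one map component w_j(r)."""
    return np.fft.fft2(w)

def cross_correlation(wi, wj):
    """C_ij(p) = <w_i(r) w_j(r + p)>_r, via FFT with periodic wrap-around."""
    F = np.fft.fft2
    return np.real(np.fft.ifft2(np.conj(F(wi)) * F(wj))) / wi.size

def feature_gradient(w):
    """|grad_r w_j(r)| from nearest-neighbor differences (Table 1)."""
    d1 = np.roll(w, -1, axis=0) - w   # w(r1+1, r2) - w(r1, r2)
    d2 = np.roll(w, -1, axis=1) - w   # w(r1, r2+1) - w(r1, r2)
    return np.sqrt(d1**2 + d2**2)

def gabor_transform(w, k, r0, sigma_g):
    """g~_j(k, r0): Gaussian-windowed Fourier coefficient at location r0."""
    n1, n2 = w.shape
    r1, r2 = np.mgrid[0:n1, 0:n2]
    dr1, dr2 = r1 - r0[0], r2 - r0[1]
    window = np.exp(-(dr1**2 + dr2**2) / (2.0 * sigma_g**2))
    phase = np.exp(1j * (k[0] * dr1 + k[1] * dr2))
    return (w * window * phase).sum() / (2.0 * np.pi * sigma_g**2)
```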
Figure 1a reveals that the iso-orientation lines cross ocular dominance bands at nearly right angles most of the time (region number 2) and that singularities tend to align with the centers of the ocular dominance bands (region number 1). Where orientation preferences are organized as parallel slabs (region number 2), the iso-orientation contours are often equally spaced and orientation preferences change linearly with distance.
These results are confirmed by a quantitative analysis (see Table 1). For the following we denote cortical location by a two-dimensional vector r. At each location we denote the (average) position of receptive field centroids in visual space by (w₁(r), w₂(r)). Orientation selectivity is described by a two-dimensional vector (w₃(r), w₄(r)), whose length and direction code for orientation tuning strength and preferred orientation, respectively [1,10]. Ocular dominance is described by a real-valued function w₅(r), which denotes the difference in response to stimuli presented to the left and right eye. Data acquisition and postprocessing are described in detail in [1-3].
A Fourier transform of the map of orientation preferences reveals a spectrum which is a nearly circular band (Fig. 2a), showing that orientation preferences repeat with similar periods in every direction in cortex. Neglecting the slight anisotropy in the experimental data¹, a power spectrum can be approximated by averaging amplitudes over all directions of the wave-vector (Fig. 2b, dots). The location of the peak corresponds to an average period Λ₀ = 710 µm ± 50 µm and its width to a coherence length of 820 µm ± 130 µm. The coherence length indicates the typical distance over which orientation preferences can change linearly and corresponds to the average size of linear zones in Fig. 1a. The corresponding autocorrelation functions (Fig. 2c) have a Mexican hat shape. The minimum occurs near 300 µm, which indicates that orientation preferences in regions separated by this distance tend to be orthogonal. In summary, the spatial pattern of orientation preference is characterized by local correlation and global "disorder".
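The radially averaged power spectrum used here can be computed by binning Fourier energies by the magnitude of the wave-vector. A minimal sketch (the bin layout and normalization are assumptions):

```python
import numpy as np

def radial_power_spectrum(w, nbins=50):
    """Average |w~(k)|^2 over all directions of the wave-vector k."""
    energy = np.abs(np.fft.fftshift(np.fft.fft2(w)))**2
    n1, n2 = w.shape
    k1, k2 = np.mgrid[0:n1, 0:n2]
    kmag = np.hypot(k1 - n1 // 2, k2 - n2 // 2)      # |k| of each mode
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1       # bin index per mode
    total = np.bincount(idx, weights=energy.ravel(), minlength=nbins + 1)
    count = np.bincount(idx, minlength=nbins + 1)
    return total[:nbins] / np.maximum(count[:nbins], 1)  # mean power per |k| bin
```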
¹Along axes parallel to the ocular dominance slabs, orientation preferences repeat on average every 660 µm ± 40 µm; perpendicular to the stripes, every 840 µm ± 40 µm. The slight horizontal elongation reflects the fact that iso-orientation slabs tend to connect the centers of ocular dominance bands.

²All quantities regarding experimental data are averages over four animals, nm1-nm4, unless stated otherwise. Error margins indicate standard deviations.
[Figure 2 panels (a)-(c): Fourier spectra, power spectra plotted against spatial frequency (normalized), and correlation functions plotted against distance (normalized), each comparing animal nm2 (dots) with theory (line).]
Figure 2: Fourier analysis and correlation functions of the orientation map in monkey striate cortex (animal nm2) compared with the predictions of the SOFM-model. Simulation results were taken from the data set described in Fig. 1, right. (a) Fourier spectra of nm2 (left) and simulation results (right). Each pixel represents one mode; location and gray value of the pixel indicate wave-vector and energy, respectively. (b) Approximate power spectrum (normalized) obtained by averaging the Fourier spectra in (a) over all directions of the wave-vector. Peak frequency of 1.0 corresponds to 1.4/mm for nm2. (c) Correlation functions (normalized). A distance of 1.0 corresponds to 725 µm for nm2.
Local properties of the spatial patterns, as well as correlations between orientation preference and ocular dominance, can be quantitatively characterized using Gabor-Helstrom transforms (see Table 1). If the radius σ_g of the Gaussian function in the Gabor filter is smaller than the coherence length, the Gabor transform of any of the quantities w₃(r′), w₄(r′) and w₅(r′) typically consists of two localized regions of high energy located on opposite sides of the origin. The length |k_i| of the vectors k_i, i ∈ {3,4,5}, which correspond to the centroids of these regions, fluctuates around the characteristic wave-number 2π/Λ₀ of this pattern, and its direction gives the normal to the ocular dominance bands and iso-orientation slabs at the location r where the Gabor transform was performed.
[Figure 3: two 3-D histograms of the percentage of map locations against θ₁ (from "parallel slabs" at 0° to "singularities" at 90°) and θ₂; left: monkeys nm1-nm4, right: theory.]
Figure 3: Gabor-analysis of cortical maps. The percentage of map locations is plotted against the parameters θ₁ and θ₂ (see text) for 3,421 locations randomly selected from the cortical maps of four monkeys, nm1-nm4 (left), and for 1,755 locations randomly selected from simulation results (right). Error bars indicate standard deviations. Simulation results were taken from the data set described in Fig. 1. σ_g was 150 µm for the experimental data and 28 pixels for the SOFM-map.
Results of this analysis are shown in Fig. 3 (left) for 3,434 samples selected randomly from data of four animals. The angle between k₃ and k₄ is represented along the θ₁ axis. Histograms at the back, where θ₁ = 0°, represent regions where iso-orientation lines are parallel. Histograms in the front, where θ₁ = 90°, represent regions containing singularities. The intersection angle of iso-orientation slabs and ocular dominance bands is represented along the θ₂ axis. The proportion of sampled regions increases steadily with decreasing θ₁. As θ₁ approaches zero, values accumulate at the right, where orientation and ocular dominance bands are orthogonal. Thus linear zones and singularities are important elements of cortical organization, but linear zones (back rows) are the most prominent features in monkey striate cortex³. Where iso-orientation regions are organized as parallel slabs, orientation slabs intersect ocular dominance bands at nearly right angles (back and right corner of the diagrams).

³Data from area 17 of the cat indicate that in this species, although both elements are present, singularities are more important [23].

2  Topology preserving maps
Recently, topology preserving maps have been suggested as a basic design principle underlying these patterns, and it was proposed that these maps are generated by simple and biologically plausible pattern formation processes [4,6,7]. In the following we will test these models against the recent experimental data.
We consider a five-dimensional feature space V which is spanned by quantities describing the most prominent receptive field properties of cortical cells: position of a receptive field in retinotopic space (v₁, v₂), orientation preference and tuning strength (v₃, v₄), and ocular dominance (v₅). If all combinations of these properties are represented in striate cortex, each point in this five-dimensional feature space is mapped onto one point on the two-dimensional cortical surface A.
In order to generate these maps we employ the feature map (SOFM) algorithm of Kohonen [15,16], which is known to generate topology preserving maps between spaces of different dimensionality [4,5]⁴. The algorithm describes the development of these patterns as unsupervised learning, i.e. the features of the input patterns determine the features to be represented in the network [4]. Mathematically, the algorithm assigns feature vectors w(r), which are points in the feature space, to cortical units r, which are points on the cortical surface. In our model the surface is divided into N × N small patches, units r, which are arranged on a two-dimensional lattice (network layer) with periodic boundary conditions (to avoid edge effects). The average receptive field properties of neurons located in each patch are characterized by the feature vector w(r), whose components w_j(r) are interpreted as receptive field properties of these neurons. The algorithm follows an iterative procedure. At each step an input vector v, which is of the same dimensionality as w(r), is chosen at random according to a probability distribution P(v). Then the unit s whose feature vector w(s) is closest to the input pattern v,

    |w(s) − v| = min_r |w(r) − v|,    (1)

is selected, and the components w_j(r) of the feature vectors are changed according to the feature map learning rule [15,16],

    w_j(r, t+1) = w_j(r, t) + ε h(r, s) [v_j − w_j(r, t)],    (2)

where ε is the learning step size and h(r, s) is a neighborhood function (e.g. a Gaussian of width σ_h) centered on the selected unit s.
P(v) was chosen to be constant within a cylindrical manifold in feature space, where v_r^max and v₅^max are some real constants, and zero elsewhere.
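For concreteness, here is a minimal sketch of one SOFM iteration as described above. The Gaussian neighborhood function and the parameter names (eps, sigma_h) are assumptions chosen to match the caption of Fig. 1; details of the original rule in [15,16] may differ.

```python
import numpy as np

def sofm_step(W, v, eps=0.02, sigma_h=5.0):
    """One iteration of the feature map algorithm.
    W: (N, N, 5) array; W[r] is the feature vector w(r) of cortical unit r.
    v: (5,) input vector drawn from P(v)."""
    N = W.shape[0]
    # Eq. (1): select the unit s whose feature vector is closest to v.
    s = np.unravel_index(np.argmin(((W - v)**2).sum(axis=2)), (N, N))
    # Eq. (2): move feature vectors toward v, weighted by a neighborhood
    # function h(r, s); lattice distances are periodic to avoid edge effects.
    r1, r2 = np.mgrid[0:N, 0:N]
    d1 = np.minimum(np.abs(r1 - s[0]), N - np.abs(r1 - s[0]))
    d2 = np.minimum(np.abs(r2 - s[1]), N - np.abs(r2 - s[1]))
    h = np.exp(-(d1**2 + d2**2) / (2.0 * sigma_h**2))
    W += eps * h[:, :, None] * (v - W)
    return W
```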
Figure 4 shows a typical map, a surface in feature space, generated by the SOFM algorithm. For the sake of illustration the five-dimensional feature space is projected onto a three-dimensional subspace spanned by the coordinate axes corresponding to retinotopic location (v₁ and v₂) and ocular dominance (v₅). The locations of feature vectors assigned to the cortical units are indicated by the intersections of a grid in feature space. Preservation of topology requires that the feature vectors assigned to neighboring cortical units must locally have equal distance and must be arranged on a planar square lattice in feature space. Consequently, large changes in one feature, e.g. ocular dominance v₅, along a given direction on the network correlate with small changes of the other features, e.g. retinotopic location v₁ and v₂, along the same direction (crests and troughs of the waves in Fig. 4) and vice versa. Other correlations arise at points where the map exhibits maximal changes in two features. For example, for retinotopic location (v₁) and ocular dominance (v₅) to vary at a maximal rate, the surface in Fig. 4 must be parallel to the (v₁, v₅)-plane. Obviously, at such points the directions of maximal change of retinotopic location and ocular dominance are orthogonal on the surface.
In order to compare model predictions with experimental data, the surface in the five-dimensional feature space has to be projected into the three-dimensional subspace spanned by orientation preferences (v₃ and v₄) and ocular dominance (v₅).

⁴The exact form of the algorithm is not essential, however. Algorithms based on similar principles, e.g. the elastic net algorithm [6], predict similar patterns.
Figure 4: Typical map generated by the SOFM-algorithm. The five-dimensional feature space is projected into the three-dimensional subspace spanned by the three coordinates (v₁, v₂ and v₅). Locations of feature vectors which are mapped to the units in the network are indicated by the intersections of a grid in feature space. Only every fourth vector is shown.
This projection cannot be visualized easily because the surface completely fills space, intersecting itself multiple times. However, the same line of reasoning applies: (i) regions where orientation preferences change quickly correlate with regions where ocular dominance changes slowly, and (ii) in regions where orientation preferences change most rapidly along one direction, ocular dominance has to change most rapidly along the orthogonal direction. Consequently, we expect discontinuities of the orientation map to be located in the centers of the ocular dominance bands and iso-orientation slabs to intersect ocular dominance bands at steep angles.
Figures 1, 2 and 3 show simulation results in comparison with experimental data. The algorithm generates all the prominent features of lateral cortical organization: singularities (arrow 1), linear zones (arrow 2), and parallel ocular dominance bands. Singularities are aligned with the centers of ocular dominance bands (region 1) and iso-orientation slabs intersect ocular dominance stripes at nearly right angles (region 2). The shape of the Fourier and power spectra, as well as of the correlation functions, agrees quantitatively with the experimental data (see Fig. 2). Isotropic spectra are the result of the invariance of eqs. (1) and (2) under rotation with respect to cortical coordinates r; global disorder and singularities are a consequence of their invariance under translation. The emergence of singularities can also be understood from an entropy argument. Since dimension-reducing maps which exhibit these features have increased entropy, they are generated with higher probability. Correlations between orientation preference and ocular dominance, however, follow from geometrical constraints and are inherent properties of the topology preserving maps.
3  Conclusions
On the basis of our findings the following picture of orientation and ocular dominance columns in monkey striate cortex emerges. Orientation preferences are organized into linear zones and singularities, but areas where iso-orientation regions form parallel slabs are apparent across most of the cortical surface. In linear zones,
iso-orientation slabs indeed intersect ocular dominance slabs at right angles, as initially suggested by Hubel and Wiesel [8]. Orientation preferences, however, are arranged in an orderly fashion only in regions 0.8 mm in size, and the pattern is characterized by local correlation and global disorder.
These patterns can be explained as the result of topology-preserving, dimension
reducing maps. Local correlations follow from geometrical constraints and are a
direct consequence of the principle of dimension reduction. Global disorder and
singularities are consistent with this principle but reflect their generation by a local
and stochastic self-organizing process.
Acknowledgements
The authors would like to thank H. Ritter for fruitful discussions and comments
and the Boehringer-Ingelheim Fonds for financial support by a scholarship to K.O. This research has been supported by the National Science Foundation (grant
numbers DIR 90-17051 and DIR 91-22522). Computer time on the Connection
Machine CM-2 has been made available by the National Center for Supercomputer
Applications at Urbana-Champaign funded by NSF.
References
[1] Blasdel G.G. and Salama G. (1986), Nature 321, 579-585.
[2] Blasdel G.G. (1992), J. Neurosci., in press.
[3] Blasdel G.G. (1992), J. Neurosci., in press.
[4] Kohonen T. (1987), Self-Organization and Associative Memory, Springer-Verlag, New York.
[5] Ritter H. and Schulten K. (1988), Biol. Cybern. 60, 59-71.
[6] Durbin R. and Mitchison G. (1990), Nature 343, 644-647.
[7] Obermayer K. et al. (1990), Proc. Natl. Acad. Sci. USA 87, 8345-8349.
[8] Hubel D.H. and Wiesel T.N. (1974), J. Comp. Neurol. 158, 267-294.
[9] Braitenberg V. and Braitenberg C. (1979), Biol. Cybern. 33, 179-186.
[10] Swindale N.V. (1982), Proc. R. Soc. Lond. B 215, 211-230.
[11] Baxter W.T. and Dow B.M. (1989), Biol. Cybern. 61, 171-182.
[12] Rojer A.S. and Schwartz E.L. (1990), Biol. Cybern. 62, 381-391.
[13] Malsburg C. (1973), Kybernetik 14, 85-100.
[14] Takeuchi A. and Amari S. (1979), Biol. Cybern. 35, 63-72.
[15] Kohonen T. (1982a), Biol. Cybern. 43, 59-69.
[16] Kohonen T. (1982b), Biol. Cybern. 44, 135-140.
[17] Linsker R. (1986), Proc. Natl. Acad. Sci. USA 83, 8779-8783.
[18] Soodak R. (1987), Proc. Natl. Acad. Sci. USA 84, 3936-3940.
[19] Kammen D.M. and Yuille A.L. (1988), Biol. Cybern. 59, 23-31.
[20] Miller K.D. et al. (1989), Science 245, 605-615.
[21] Miller K.D. (1989), Soc. Neurosci. Abs. 15, 794.
[22] Grinvald A. et al. (1986), Nature 324, 361-364.
[23] Bonhoeffer T. and Grinvald A. (1991), Nature 353, 429-431.
4,176 | 4,780 |
To appear in: Neural Information Processing Systems (NIPS),
Lake Tahoe, Nevada. December 3-6, 2012.
Hierarchical spike coding of sound
Yan Karklin*
Howard Hughes Medical Institute,
Center for Neural Science
New York University
[email protected]
Chaitanya Ekanadham*
Courant Institute of Mathematical Sciences
New York University
[email protected]
Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science,
and Courant Institute of Mathematical Sciences
New York University
[email protected]
Abstract
Natural sounds exhibit complex statistical regularities at multiple scales. Acoustic events underlying speech, for example, are characterized by precise temporal
and frequency relationships, but they can also vary substantially according to the
pitch, duration, and other high-level properties of speech production. Learning
this structure from data while capturing the inherent variability is an important
first step in building auditory processing systems, as well as understanding the
mechanisms of auditory perception. Here we develop Hierarchical Spike Coding,
a two-layer probabilistic generative model for complex acoustic structure. The
first layer consists of a sparse spiking representation that encodes the sound using kernels positioned precisely in time and frequency. Patterns in the positions
of first layer spikes are learned from the data: on a coarse scale, statistical regularities are encoded by a second-layer spiking representation, while fine-scale
structure is captured by recurrent interactions within the first layer. When fit to
speech data, the second layer acoustic features include harmonic stacks, sweeps,
frequency modulations, and precise temporal onsets, which can be composed to
represent complex acoustic events. Unlike spectrogram-based methods, the model
gives a probability distribution over sound pressure waveforms. This allows us to
use the second-layer representation to synthesize sounds directly, and to perform
model-based denoising, on which we demonstrate a significant improvement over
standard methods.
1  Introduction
Natural sounds, such as speech and animal vocalizations, consist of complex acoustic events occurring at multiple scales. Precise timing and frequency relationships among these events convey
important information about the sound, while intrinsic variability confounds simple approaches to
sound processing and understanding. Speech, for example, can be described as a sequence of words,
which are composed of precisely interrelated phones, but each utterance may have its own prosody,
with variable duration, loudness, and/or pitch. An auditory representation that captures the corresponding structure while remaining invariant to this variability would provide a useful first step for
many applications in auditory processing.
*Contributed equally
Many recent efforts to learn auditory representations in an unsupervised setting have focused on
sparse decompositions chosen to capture structure inherent in sound ensembles. The dictionaries
can be chosen by hand [1, 2] or learned from data. For example, Klein et al. [3] adapted a set of
time-frequency kernels to represent spectrograms of speech signals and showed that the resulting
kernels were localized and bore resemblance to auditory receptive fields. Lee et al. [4] trained a
two-layer deep belief network on spectrogram patches and used it for several auditory classification
tasks.
These approaches have several limitations. First, they operate on spectrograms (rather than the original sound waveforms), which impose limitations on both time and frequency resolution. In addition,
most models built on spectrograms rely on block-based partitioning of time, and thus are susceptible to artifacts: precisely-timed acoustic events can appear across multiple blocks, and events can appear at different temporal offsets relative to the block, making their identification and representation difficult [5]. The features learned by these models are tied to specific frequencies, and must be replicated at different frequency offsets to accommodate pitch shifts that occur in natural sounds. Finally,
the linear generative models underlying most methods are unsuitable for constructing hierarchical
models, since the composition of multiple linear stages is again linear.
To address these limitations, we propose a two-layer hierarchical model that encodes complex acoustic events using a representation that is shiftable in both time and frequency. The first layer is a "spikegram" representation of the sound pressure waveform, as developed in [6, 5]. The prior probabilities for coefficients in the first layer are modulated by the output of the second layer, combined with a recurrent component that operates within the first layer. When trained on speech, the kernels learned at the second layer encode complex acoustic events which, when positioned at specific times and frequencies, compactly represent the first-layer spikegram, which is itself a compact description of the sound pressure waveform. Despite its very sparse activation, the second-layer representation retains much of the acoustic information: sounds sampled according to the generative model approximate well the original sound. Finally, we demonstrate that the model performs well on a denoising task, particularly when the noise is structured, suggesting that the higher-order representation provides a useful statistical description of speech.
2  Hierarchical spike coding
In the "spikegram" representation [5], a sound is encoded using a linear combination of sparse, time-shifted kernels γ_f(t):

    x_t = Σ_{τ,f} S_{τ,f} γ_f(t − τ) + ε_t    (1)

where ε_t denotes Gaussian white noise and the coefficients S_{τ,f} are mostly zero. As in [5], the γ_f(t) are gammatone functions with varying center frequencies, indexed by f. In order to encode the signal, a sparse set of "spikes" (i.e., nonzero coefficients at specific times and frequencies) is estimated using an approximate inference method, such as matching pursuit [7]. The resulting spikegram, shown in Fig. 1b, offers an efficient representation of sounds [8] that avoids the blocking artifacts and time-frequency trade-offs associated with more traditional spectrogram representations.
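Equation (1) is a shift-invariant synthesis model: each spike drops a scaled, time-shifted kernel into the waveform. A minimal NumPy sketch of this reconstruction follows; the gammatone parameterization is a common textbook form with a fixed bandwidth, assumed here for brevity rather than taken from [5] (real gammatone filterbanks scale the bandwidth with center frequency).

```python
import numpy as np

def gammatone(fc, fs, dur=0.05, order=4, bandwidth=100.0):
    """A unit-norm gammatone kernel with center frequency fc (Hz).
    bandwidth is fixed here for brevity; real filterbanks scale it with fc."""
    t = np.arange(int(dur * fs)) / fs
    g = t**(order - 1) * np.exp(-2 * np.pi * bandwidth * t) * np.cos(2 * np.pi * fc * t)
    return g / np.linalg.norm(g)

def synthesize(spikes, fs, n_samples):
    """Eq. (1): x_t = sum_{tau,f} S_{tau,f} gamma_f(t - tau) (plus noise).
    spikes: iterable of (tau_in_samples, center_freq_hz, amplitude)."""
    x = np.zeros(n_samples)
    for tau, fc, amp in spikes:
        if tau >= n_samples:
            continue
        g = gammatone(fc, fs)
        end = min(tau + len(g), n_samples)
        x[tau:end] += amp * g[:end - tau]
    return x
```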
We aim to model the statistical regularities present in the spikegram representations. Spikegrams exhibit clear statistical structure, both at coarse (Fig. 1b,c) and at fine temporal scales (Fig. 1e,f). Spikes placed at precise locations in time and frequency reveal acoustic features, harmonic structures, as well as slow modulations in the sound envelope. The coarse-scale non-stationarity is likely caused by higher-order acoustic events, such as phoneme utterances that span a much larger time-frequency range than the individual gammatone kernels. On the other hand, the fine-scale correlations are due to some combination of the correlations inherent in the gammatone filterbank and the precise temporal structure present in speech.

We introduce the hierarchical spike coding (HSC) model, illustrated in Fig. 2, to capture the structure in the spikegrams (S⁽¹⁾) on both coarse and fine scales. We add a second layer of unobserved spikes (S⁽²⁾), assumed to arise from a Poisson process with constant rate λ. These spikes are convolved with a set of time-frequency "rate kernels" (K^r) to yield the logarithm of the firing rate of the first-layer spikes on a coarse scale. On a fine scale, the logarithm of the firing rate of first-layer spikes is modulated using recurrent interactions, by convolving the local spike history with a set of "coupling kernels" (K^c).
[Figure 1 panels (a)-(f): speech waveform; spikegram (center frequency in Hz vs. time in sec); time/freq cross-correlation (Δ sec vs. Δ log Hz); magnified waveform with two gammatone kernels; magnified spikegram; inter-spike-interval histograms.]
Figure 1: Coarse (top row) and fine (bottom row) scale structure in spikegram encodings of speech. a. The sound pressure waveform of a spoken sentence and b. the corresponding spikegram. Each spike (dot) has an associated time (abscissa) and center frequency (ordinate) as well as an amplitude (dot size). c. Cross-correlation function for a spikegram ensemble reveals correlations across large time/frequency scales. d. Magnification of a portion of (a), with two gammatone kernels (red and blue), corresponding to the red and blue spikes in (e). e. Magnification of the corresponding portion of (b), revealing that spike timing exhibits strong regularities at a fine scale. f. Histograms of inter-spike-intervals for two frequency channels corresponding to the colored spikes in (e) reveal strong temporal dependencies.
The amplitudes of the first-layer spikes are also specified hierarchically: the logarithm of the amplitudes is assumed to be normally distributed, with a mean specified by the convolution of second-layer spikes with "amplitude kernels" (K^a, not shown), without any recurrent contribution, and the variance fixed at σ². The model parameters are denoted by Θ = {K^r, K^a, K^c, b^r, b^a}, where b^r, b^a are the bias vectors corresponding to the log-rate and log-amplitude of the first-layer coefficients, respectively. The model specifies a conditional probability density over first-layer coefficients,
    P(S⁽¹⁾_{t,f} | S⁽²⁾; Θ) = (1 − p) δ(S⁽¹⁾_{t,f}) + p N(log S⁽¹⁾_{t,f}; A_{t,f}, σ²)   for S⁽¹⁾_{t,f} ≥ 0, ∀ t, f    (2)

where

    p = Δt Δf e^{R_{t,f}},   N(x; µ, σ²) = (2πσ²)^(−1/2) e^(−(x−µ)²/(2σ²))    (3)

and

    R_{t,f} = b^r_f + (K^c ∗ 1_{S⁽¹⁾})_{t,f} + Σ_i (K^r_i ∗ S⁽²⁾_i)_{t,f}    (4)

    A_{t,f} = b^a_f + Σ_i (K^a_i ∗ S⁽²⁾_i)_{t,f}    (5)

In Eq. (2), δ(·) is the Dirac delta function. In Eq. (3), Δt and Δf are the time and frequency bin sizes. In Eqs. (4-5), ∗ denotes convolution and 1_x is 1 if x ≠ 0, and 0 otherwise.
3  Learning
The joint log-probability of the first and second layer can be expressed as a function of the model parameters Θ and the (unobserved) second-layer spikes S⁽²⁾:

    L(Θ, S⁽²⁾) = log P(S⁽¹⁾, S⁽²⁾; Θ, λ) = log P(S⁽¹⁾ | S⁽²⁾; Θ) + log P(S⁽²⁾; λ)    (6)

               = Σ_{(t,f)∈S⁽¹⁾} [ R_{t,f} − (1/(2σ²)) (log S⁽¹⁾_{t,f} − A_{t,f})² ] − Σ_{t,f} e^{R_{t,f}} Δt Δf + log(λ Δt Δf) ‖S⁽²⁾‖₀ + const    (7)
Figure 2: Illustration of the hierarchical spike coding model. Second-layer spikes S⁽²⁾ associated with 3 features (indicated by color) are sampled in time and frequency according to a Poisson process, with exponentially-distributed amplitudes (indicated by dot size). These are convolved with corresponding rate kernels K^r (outlined in colored rectangles), summed together, and passed through an exponential nonlinearity to drive the instantaneous rate of the first-layer spikes on a coarse scale. The first-layer spike rate is also modulated on a fine scale by a recurrent component that convolves previous spikes with coupling kernels K^c. At a given time step (vertical line), spikes S⁽¹⁾ are generated according to a Poisson process whose rate depends on the top-down and the recurrent terms.
where the equality in Eq. (7) holds in the limit Δt Δf → 0. Maximizing the data likelihood requires integrating L over all possible second-layer representations S⁽²⁾, which is computationally intractable. Instead, we choose to approximate the optimal Θ by maximizing L jointly over Θ and S⁽²⁾. If S⁽²⁾ is known, then the model falls within the well-known class of generalized linear models (GLMs) [9], and Eq. (6) is convex in Θ. Conversely, if Θ is known then Eq. (6) is convex in S⁽²⁾ except for the L0 penalty term corresponding to the prior on S⁽²⁾. Motivated by these facts, we adopt a coordinate-descent approach by alternating between the following steps:

    S⁽²⁾ ← argmax_{S⁽²⁾} L(Θ, S⁽²⁾)    (8)

    Θ ← Θ + η ∇_Θ L(Θ, S⁽²⁾)    (9)

where η is a fixed learning rate. Section 4 describes a method for approximate inference of the second-layer spikes (solving Eq. (8)). The gradients used in Eq. (9) are straightforward to compute and are given by

    ∂L/∂b^r_f = (# first-layer spikes in channel f) − Σ_t e^{R_{t,f}} Δt Δf    (10)

    ∂L/∂b^a_f = (1/σ²) Σ_t (log S⁽¹⁾_{t,f} − A_{t,f})    (11)

    ∂L/∂K^r_{τ,φ,i} = Σ_{(t,f)∈S⁽¹⁾} S⁽²⁾_i(t − τ, f − φ) − Σ_{t,f} e^{R_{t,f}} S⁽²⁾_{t−τ, f−φ, i} Δt Δf    (12)

    ∂L/∂K^c_{τ,f,f′} = Σ_{t∈S⁽¹⁾_f} 1_{S⁽¹⁾_{t−τ, f′}} − Σ_t e^{R_{t,f}} 1_{S⁽¹⁾_{t−τ, f′}} Δt Δf    (13)
[Figure 3: top, 20 rate kernels (time 0-0.4 sec vs. frequency in octaves); bottom, coupling kernels for channels with center frequencies 111 Hz, 246 Hz, 546 Hz and 1214 Hz (time 0-0.02 sec vs. frequency in octaves).]
Figure 3: Example model kernels learned on the TIMIT data set. Top: rate kernels (colormaps individually rescaled). Bottom: four representative coupling kernels (scaling indicated by colorbar).
4  Inference
Inference of the second-layer spikes S⁽²⁾ (Eq. (8)) involves maximizing the trade-off between the GLM likelihood term, which we denote by L̃(Θ, S⁽²⁾), and the last term, which penalizes the number of spikes (‖S⁽²⁾‖₀). Solving Eq. (8) exactly is NP-hard. We adopt a variant of the well-known matching pursuit algorithm [7] to approximate the solution. First, S⁽²⁾ is initialized to 0. Then the following two steps are repeated:

1. Select the coefficient that maximizes a second-order Taylor approximation of L̃(Θ, ·) about the current solution S⁽²⁾:

    (τ*, φ*, i*) = argmax_{τ,φ,i}  −(∂L̃/∂S⁽²⁾_{τ,φ,i})² / (∂²L̃/∂(S⁽²⁾_{τ,φ,i})²)    (14)

2. Perform a line search to determine the step size for this coefficient that maximizes L̃(Θ, ·). If the maximal improvement does not outweigh the cost −log(λ Δt Δf) of adding a spike, terminate. Otherwise update S⁽²⁾ using this step and repeat Step 1.
5  Results

Model parameters learned from speech
We applied the model to the TIMIT speech corpus [10]. First, we obtained spikegrams by encoding sounds to 20 dB precision using a set of 200 gammatone filters with center frequencies spaced evenly on a logarithmic scale (see [5] for details). For each audio sample, this gave us a spikegram with fine time and frequency resolution (6.25×10⁻⁵ s and 3.8×10⁻² octaves, respectively). We trained a model with 20 rate and 20 amplitude kernels, with frequency resolution equivalent to that of the spikegram and time resolution of 20 ms. These kernels extended over 400 ms × 3.8 octaves (spanning 20 time and 100 frequency bins). Coupling kernels were defined independently for each frequency channel; they extended over 20 ms and 2.7 octaves around the channel center frequency with the same time/frequency resolution as the spikegram. All parameters were initialized randomly, and were learned according to Eqs. (8-9).
Fig. 3 displays the learned rate kernels (top) and coupling kernels (bottom). Among the patterns learned by the rate kernels are harmonic stacks of different durations and pitch shifts (e.g., kernels 4, 9, 11, 18), ramps in frequency (kernels 1, 7, 15, 16), sharp temporal onsets and offsets (kernels 7, 13, 19), and acoustic features localized in time and frequency (kernels 5, 10, 12, 20) (example sounds synthesized by turning on single features are available in the supplementary materials). The corresponding amplitude kernels (not shown) contain patterns highly correlated with the rate kernels, suggesting a strong dependence in the spikegram between spike rate and magnitude. For most frequency channels, the coupling kernels are strongly negative at times immediately following the spike and at adjacent frequencies, representing "refractory periods" observed in the spikegrams. Positive peaks in the coupling kernels encode precise alignment of spikes across time and frequency.
[Figure 4: second-layer spikes S⁽²⁾ and encoded log firing rates for phone pairs aa+r (left) and ao+l (right); frequency vs. time (0-0.4 sec), one row per speaker.]
Figure 4: Model representation of phone pairs aa+r (left) and ao+l (right), as uttered by four speakers (rows: two male, two female). Each row shows inferred second-layer spikes, the rate kernels most correlated with the utterance of each phone pair, shifted to their corresponding spikes' frequencies (colored on left), and the encoded log firing rate centered on the phone pair utterance.
Second-layer representation
The learned kernels combine in various ways to represent complex acoustic events. For example, Fig. 4 illustrates how features can combine to represent two different phone pairs. Vowel phones are approximated by a harmonic stack (outlined in yellow) together with a ramp in frequency (outlined in orange and dark blue). Because the rate kernels add to specify the logarithm of the firing rate, their superposition results in a multiplicative modulation of the intensities at each level of the harmonic stack. In addition, the 'r' consonant in the first example is characterized by a high concentration of energy at the high frequencies and is largely accounted for by the kernel outlined in red. The 'l' consonant following 'ao' contains a frequency modulation captured by the v-shaped feature (outlined in cyan).

Translating the kernels in log-frequency allows the same set of fundamental features to participate in a range of acoustic events: the same vocalizations at different pitch are often represented by the same set of features. In Fig. 4, the same set of kernels is used in a similar configuration across different speakers and genders. It should be noted that the second-layer representation does not discard precise time and frequency information (this information is carried in the times and frequencies of the second-layer spikes). However, the identities of the features that are active remain invariant to pitch and frequency modulations.
Synthesis
One can further understand the acoustic information that is captured by second-layer spikes by sampling a spikegram according to the generative model. We took the second-layer encoding of a single sentence from the TIMIT speech corpus [10] (Fig. 5, middle) and sampled two spikegrams: one with only the hierarchical component (left), and one that included both hierarchical and coupling components (right). At a coarse scale the two samples closely resemble the spikegram of the original sound. However, at the fine time scale, only the spikegram sampled with coupling contains the regularities observed in speech data (Fig. 5, bottom row). Sounds were also generated from these spikegram samples by superimposing gammatone kernels as in [5].
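Sampling from the fitted model amounts to drawing Poisson spikes bin by bin from the rate e^{R} and log-normal amplitudes around A, sweeping forward in time so the coupling term can see past spikes. A sketch (array conventions and the per-lag coupling matrices are assumptions consistent with the earlier sketches):

```python
import numpy as np
from scipy.signal import fftconvolve

def sample_spikegram(S2, Kr, Ka, Kc, br, ba, sigma, dt, df, rng):
    """Draw first-layer spikes S1 given second-layer spikes S2.
    Kc: (ct, F, F); Kc[tau, f, fp] couples a spike in channel fp at lag
    tau to the current log-rate of channel f."""
    n, T, F = S2.shape
    R_top = np.tile(br, (T, 1))          # hierarchical part of the log-rate
    A = np.tile(ba, (T, 1))              # log-amplitude field
    for i in range(n):
        R_top += fftconvolve(S2[i], Kr[i], mode="full")[:T, :F]
        A += fftconvolve(S2[i], Ka[i], mode="full")[:T, :F]
    S1 = np.zeros((T, F))
    for t in range(T):
        recur = np.zeros(F)
        for tau in range(1, min(Kc.shape[0], t + 1)):
            recur += Kc[tau] @ (S1[t - tau] != 0).astype(float)
        p = np.minimum(np.exp(R_top[t] + recur) * dt * df, 1.0)
        fire = rng.random(F) < p         # Poisson spiking per small bin
        S1[t, fire] = np.exp(A[t, fire] + sigma * rng.standard_normal(int(fire.sum())))
    return S1
```

A driver would pass rng = np.random.default_rng(seed); the waveform itself can then be rendered from S1 with the earlier synthesize sketch.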
[Figure 5: spikegrams (frequency in log Hz vs. time in sec): second layer (176 spikes), data (2544 spikes), hierarchical only (2741 spikes), coupling + hierarchical (2358 spikes).]
Figure 5: Synthesis from inferred second-layer spikes. Middle bottom: spikegram representation
of the sentence in Fig. 1; Middle top: Inferred second-layer representation; Left: first-layer spikes
generated using only the hierarchical model component; Right: first-layer spikes generated using
hierarchical and coupling kernels. Synthesized waveforms are included in the supplementary materials.
                          white noise                sparse temporally modulated noise
noise level   Wiener   wav thr     MP     HSC      Wiener   wav thr     MP     HSC
  -10dB        -7.00     2.41     2.26    2.50      -8.68    -8.73    -5.12   -4.37
   -5dB         0.00     4.93     4.79    5.01      -3.09    -3.63    -0.96   -0.38
    0dB         5.49     7.94     7.71    7.99       1.90     1.23     2.97    3.30
    5dB         7.84    11.15    11.01   11.33       6.37     6.06     7.11    7.40
   10dB        10.31    14.64    14.49   14.83       9.68    11.28    11.58   11.88

Table 1: Denoising accuracy (dB SNR) for speech corrupted with white noise (left) or with sparse, temporally modulated noise (right).
Despite the fact that the second-layer representation contains over 15 times fewer spikes than the first-layer spikegrams, the synthesized sounds are intelligible, and the addition of the coupling filters provides a noticeable improvement (audio examples in supplementary materials).
Denoising
Although the model parameters have been adapted to the data ensemble, obtaining an estimate of the likelihood of the data ensemble under the model is difficult, as it requires integrating over unobserved variables (S⁽²⁾). Instead, we can use performance on unsupervised signal processing tasks, such as denoising, to validate the model and compare it to other methods that explicitly or implicitly represent data density. In the noiseless case, a spikegram is obtained by running matching pursuit until the decrease in the residual falls below a threshold; in the presence of noise, this encoding process can be formulated as a denoising operation, terminated when the improvement in the log-likelihood (variance of the residual divided by the variance of the noise) is less than the cost of adding a spike (the negative log-probability of spiking). We incorporate the HSC model directly into this denoising algorithm by replacing the fixed probability of spiking at the first layer with the rate specified by the second layer. Since neither the first- nor second-layer spike code for the noisy signal is known, we first infer the first and then the second layer using MAP estimation, and then recompute the first layer given both the data and the second layer. The denoised waveform is obtained by reconstructing from the resulting first-layer spikes.
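A high-level skeleton of this model-based stopping rule is below; the unit-norm dictionary of time-shifted gammatones and the form of the per-atom log-rate R are assumptions, and the real encoder in [5] operates more efficiently than this dense version.

```python
import numpy as np

def denoise(x, dictionary, R, noise_var, dt, df, max_iter=100000):
    """Model-based matching pursuit denoising.
    dictionary: (n_atoms, n_samples) unit-norm, time-shifted gammatones;
    R: per-atom log-rate from the second layer (constant for plain MP)."""
    residual = x.copy()
    for _ in range(max_iter):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        amp = corr[k]
        gain = amp**2 / (2.0 * noise_var)        # log-likelihood improvement
        cost = -np.log(np.exp(R[k]) * dt * df)   # -log P(spiking at atom k)
        if gain <= cost:
            break                                # a spike no longer pays off
        residual -= amp * dictionary[k]
    return x - residual                          # denoised reconstruction
```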
To the extent that the parameters learned by HSC reflect statistical properties of the signal, incorporating the more sophisticated spikegram prior into a denoising algorithm should allow us to better distinguish signal from noise. We tested this by denoising speech waveforms (held out during model training) that have been corrupted by additive white Gaussian noise. We compared the model's performance to that of the matching pursuit encoding (sparse signal representation without a hierarchical model), as well as to two standard denoising methods, Wiener filtering and wavelet-threshold denoising (implemented with MATLAB's wden function, using symlets, SURE estimator for soft threshold selection; other parameters optimized for performance on the training data set) [11].
HSC-based denoising is able to outperform standard methods, as well as matching pursuit denoising
(Table 1 left). Although the performance gains are modest, the fact that the HSC model, which is not
optimized for the task or trained on noisy data, can match the performance of adaptive algorithms
like wavelet filtering denoising suggests that it has learned a representation that successfully exploits
the statistical regularities present in the data.
To test more rigorously the benefit of a structured prior, we evaluated denoising performance on
signals corrupted with non-stationary noise whose power is correlated over time. This is a more
challenging task, but it is also more relevant to real-world applications, where sources of noise are
often non-stationary. Algorithms that incorporate specific (but often incorrect) noise models (e.g.,
Wiener filtering) tend to perform poorly in this setting. We generated sparse temporally modulated
noise by scaling white Gaussian noise with a temporally smooth envelope (given as a convolution of a Gaussian function with st. dev. of 0.02 s with a Poisson process with rate 16 s⁻¹). All methods fare
worse on this task. Again, the hierarchical model outperforms other methods (Table 1 right), but
here the improvement in performance is larger, especially at high noise levels where the model prior
plays a greater role. The reconstruction SNR does not fully convey the manner in which different
algorithms handle noise: perceptually, we find that the sounds denoised by the hierarchical model
sound more similar to the original (audio examples in supplementary materials).
6  Discussion
We developed a hierarchical spike code model that captures complex structure in sounds. Our
work builds on the spikegram representation of [5], thus avoiding the limitations arising from
spectrogram-based methods, and makes a number of novel contributions. Unlike previous work
[3, 4], the learned kernels are shiftable in both time and log-frequency, which enables the model to
learn time- and frequency-relative patterns and use a small number of kernels efficiently to represent
a wide variety of sound features. In addition, the model describes acoustic structure on multiple
scales (via a hierarchical component and a recurrent component), which capture fundamentally different kinds of statistical regularities.
Technical contributions of this work include methods for learning and performing approximate inference in a generalized linear model in which some of the inputs are unobserved and sparse (in
this case the second-layer spikes). The computational framework developed here is general, and
may have other applications in modeling sparse data with partially observed variables. Because the
model is nonlinear, multi-layer cascades could lead to substantially more powerful models.
Applying the model to complex natural sounds (speech), we demonstrated that it can learn nontrivial features, and we have shown how these features can be composed to form basic acoustic
units. We also showed a simple application to denoising, demonstrating improved performance to
wavelet thresholding. The framework provides a general methodology for learning higher-order
features of sounds, and we expect that it will prove useful in representing other structured sounds
such as music, animal vocalizations, or ambient natural sounds.
6.1  Acknowledgments
We thank Richard Turner and Josh McDermott for helpful discussions.
References
[1] C. Fevotte, B. Torresani, L. Daudet, and S. Godsill, "Sparse linear regression with structured priors and application to denoising of musical audio," Audio, Speech, and Language Processing, IEEE Transactions on, vol. 16, pp. 174-185, Jan. 2008.
[2] M. Plumbley, T. Blumensath, L. Daudet, R. Gribonval, and M. Davies, "Sparse representations in audio and music: From coding to source separation," Proceedings of the IEEE, vol. 98, pp. 995-1005, June 2010.
[3] D. J. Klein, P. König, and K. P. Körding, "Sparse spectrotemporal coding of sounds," EURASIP J. Appl. Signal Process., vol. 2003, pp. 659-667, Jan. 2003.
[4] H. Lee, Y. Largman, P. Pham, and A. Y. Ng, "Unsupervised feature learning for audio classification using convolutional deep belief networks," in Advances in Neural Information Processing Systems, pp. 1096-1104, The MIT Press, 2009.
[5] E. Smith and M. S. Lewicki, "Efficient coding of time-relative structure using spikes," Neural Computation, vol. 17, no. 1, pp. 19-45, 2005.
[6] M. Lewicki and T. Sejnowski, "Coding time-varying signals using sparse, shift-invariant representations," in Advances in Neural Information Processing Systems, pp. 730-736, The MIT Press, 1999.
[7] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans Sig Proc, vol. 41, pp. 3397-3415, December 1993.
[8] E. Smith and M. S. Lewicki, "Efficient auditory coding," Nature, vol. 439, no. 7079, 2006.
[9] P. McCullagh and J. A. Nelder, Generalized Linear Models (Second edition). London: Chapman & Hall, 1989.
[10] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren, "DARPA TIMIT acoustic phonetic continuous speech corpus CDROM," 1993.
[11] S. Mallat, A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way. Academic Press, 3rd ed., 2008.
4,177 | 4,781 |
Volume Regularization for Binary Classification
Tal Wagner*
Faculty of Mathematics and Computer Science
Weizmann Institute of Science
Rehovot, 76100, Israel
[email protected]
Koby Crammer
Department of Electrical Engineering
The Technion - Israel Institute of Technology
Haifa, 32000 Israel
[email protected]
Abstract
We introduce a large-volume box classification for binary prediction, which maintains a subset of weight vectors, and specifically axis-aligned boxes. Our learning
algorithm seeks a box of large volume that contains "simple" weight vectors, most of which are accurate on the training set. Two versions of the learning process are cast as convex optimization problems, and it is shown how to solve them
efficiently. The formulation yields a natural PAC-Bayesian performance bound
and it is shown to minimize a quantity directly aligned with it. The algorithm outperforms SVM and the recently proposed AROW algorithm on a majority of 30
NLP datasets and binarized USPS optical character recognition datasets.
1  Introduction
Linear models are widely used for a variety of tasks including classification and regression. Support
vector machines [3, 22] (SVMs) are considered a primary method to efficiently build linear classifiers from data, yielding state-of-the-art performance. SVMs and many other methods are often
easy to implement and efficient, yet return only a single weight-vector with no additional information about alternative models nor about confidence in prediction.
An alternative approach is taken by Bayesian methods [21, 13]. The primary object is a (posterior)
distribution over models that is updated using Bayes rule. Unfortunately, the posterior is very complicated even for simple models, such as Bayesian logistic regression [15], and it is not known how
to perform the update analytically, and approximations are required.
In this work we integrate the advantages of both approaches. We propose to model uncertainty
over weight-vectors by maintaining a (simple) set of possible weight-vectors, rather than a single
weight-vector. Learning is motivated by principles of discriminative learning rather than Bayes'
rule, and it optimizes a combination of a hand-crafted regularization term and the empirical
loss. Specifically, our algorithm maintains an axis-aligned box, which requires only twice as many
parameters as maintaining a single weight-vector, the dominant model for many tasks.
We use a similar conceptual reasoning as used in Bayes point machines (BPM) [13]. Both approaches maintain a set of possible weights, which can be thought of as a posterior. BPMs use the
version space, the set of all consistent weight vectors, which is a convex polyhedron. Since the size
of the polyhedron?s representation grows with the number of training examples, BPMs approximate
the polyhedron with a single weight-vector, the Bayes point. Our algorithms model the set as a box,
with a representation that is fixed in the size of the input, and find an optimal prediction box.
We cast learning as a convex optimization problem and propose methods to solve it efficiently. We
further provide generalization bounds using PAC-Bayesian theory, and show that our algorithm is
?
The research was performed while TW was a student at the Technion.
1
minimizing a quantity directly related to the generalization bound. We give two formulations, or
versions, of the algorithm: one that is closely related to the bound, and one that is smooth.
We experiment with 30 binary text categorization datasets from various tasks: sentiment classification, predicting domain of product-review, assigning topics to news items, tagging spam emails, and
classifying posts to news-groups. The results indicate that our algorithms outperform SVM and the
recently proposed AROW [4] algorithm, which was shown to be the state-of-the-art in numerous
NLP tasks. Additional support for the superiority and robustness of our algorithms, especially in
high-noise setting, is provided using experiments with 45 pairs of binarized USPS OCR problems.
Notation: Given a vector $x \in \mathbb{R}^d$, we denote its $k$-th element by $x_k \in \mathbb{R}$, and by $|x| \in \mathbb{R}^d$ the vector with component-wise absolute values of its elements, $|x| = (|x_1|, \ldots, |x_d|)$.
2
Large-Volume Box Classifiers
Standard linear classification learning algorithms maintain and return a single weight vector $w^* \in \mathbb{R}^d$ used to predict the label of any test point. We study a generalization of these algorithms where
hypotheses are uncertainty (sub)sets of weight vectors w. Such a hypothesis can be seen as a randomized linear classifier or a voting process. To classify an instance x, a parameter vector w is
drawn according to the hypothesis and predicts the label $\mathrm{sign}(w \cdot x)$. Herbrich et al. [13, 12] argued in a similar context that such a randomization yields a more robust solution. PAC-Bayesian
analysis and its generalization bounds give additional justification to this approach (see Sec 5).
The uncertainty subsets we study are axis-aligned boxes parametrized with two vectors $u, v \in \mathbb{R}^d$, where we assume $u_k \le v_k$ for all $k = 1 \ldots d$. In words, $u$ is the vertex with the lowest coordinates, and $v$ is the vertex with the largest coordinates. The projection of the box onto the $k$-axis yields the interval $[u_k, v_k]$. The set of weight vectors contained in the box is denoted by $Q = \{w : u_k \le w_k \le v_k \text{ for } k = 1 \ldots d\}$. Given an instance $x$ to be classified, a Gibbs classifier samples a weight vector $w \in Q$ uniformly at random from the box and returns $\mathrm{sign}(w \cdot x)$. A deterministic alternative we use in practice is to employ the center of mass defined by $\mu = \frac{1}{2}(u + v)$ and return $\mathrm{sign}(\mu \cdot x)$. For linear classifiers, the majority prediction with Gibbs sampling coincides with predicting using the center of mass. We also define the uncertainty intervals $\sigma = \frac{1}{2}(v - u)$. Intuitively, the uncertainty in the weight associated with the $k$-th feature is $\sigma_k$. Clearly, $v = \mu + \sigma$ and $u = \mu - \sigma$.
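To make the two prediction rules concrete, here is a minimal Python sketch (our own illustration; the function names and the use of NumPy are assumptions, not code from the paper):

```python
import numpy as np

def predict_center(u, v, x):
    """Deterministic rule: classify with the center of mass mu = (u + v) / 2."""
    mu = 0.5 * (u + v)
    return np.sign(mu @ x)

def predict_gibbs(u, v, x, rng):
    """Gibbs rule: draw w uniformly from the box [u, v] and classify with it."""
    w = rng.uniform(u, v)
    return np.sign(w @ x)
```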
3
Learning as Optimization
Given a labeled sample $S = \{(x_i, y_i)\}_{i=1}^n$, a common practice in learning linear models $w$ is to perform structural risk minimization (SRM) [25], which picks a weight-vector that is both "simple" (e.g. small norm) and performs well on the training set. Learning is cast as an optimization problem,
$$w^* = \arg\min_w \frac{1}{n}\sum_i \ell(w, (x_i, y_i)) + D\, R(w). \qquad (1)$$
The first term is the empirical loss evaluated on the training set with some loss function $\ell(w, (x, y))$,
and the second term is a regularization that penalizes weight-vectors according to their complexity.
The parameter D > 0 is a tradeoff parameter.
Learning with uncertainty sets invites us to balance three desires rather than two as when learning
a single weight-vector. The first two desires are generalizations of the structural risk minimization principle [25] to boxes: we prefer boxes containing weight-vectors that attain both low loss
$\ell(w, (x_i, y_i))$ and are "simple" (e.g. small norm). This alone though is not enough, as if the loss and
regularization functions are strictly convex then the optimal box would in fact be a single weight-vector. The third desire is thus to prefer boxes with large volume. Intuitively, if during training
an algorithm finds a box with large volume, such that all weight-vectors belonging to it attain low
training error and are simple, we expect the classifier based on the center of mass to be robust to
noise or fluctuations. This will be formally stated in the analysis described in Sec 5. We formalize
this requirement by adding a term that is inversely proportional to the volume of the box Q.
We take a worst-case approach, and define the loss of the box Q given an example (x, y) denoted
by $\ell(Q, (x, y))$ to be the loss of the worst member $w \in Q$. Similarly, we define the complexity of the box $Q$ to be the complexity of the most complex member of the box $w \in Q$; formally, $\ell(Q, (x, y)) = \sup_{w \in Q} \ell(w, (x, y))$ and $R(Q) = \sup_{w \in Q} R(w)$.
Putting it all together, we replace (1) with,
$$Q^* = \arg\min_{Q \in \mathcal{Q}}\ \sup_{w \in Q}\ \left(\frac{1}{m}\sum_i \ell(w, (x_i, y_i)) + D\,R(w)\right), \qquad (2)$$
where $\mathcal{Q}$ is a set of boxes with some minimal volume. In other words, the algorithm seeks a
set of alternative weight-vectors, all of which perform well on the training data. We expect
this formulation to be robust, as a box is evaluated with its worst performing member.
We modify the problem by removing the constraint $Q \in \mathcal{Q}$ and adding an equivalent penalty term to
the objective, namely the log-volume of the box. We use the log-volume function for three reasons.
First, it is a common barrier function in optimization [26], and in our case it keeps the box from
actually shrinking to a zero volume box. Second, this choice is supported by the analysis below,
and third, it is additive in the dimension of the data d, like all other quantities of the objective.
Additionally, we bound the supremum over w with a sum of supremum operators. To conclude, we
cast the learning problem as the following optimization problem over boxes,
$$\arg\min_Q\ \frac{1}{m}\sum_i \sup_{w \in Q} \ell(w, (x_i, y_i)) - C \log \mathrm{vol}\, Q + D \sup_{w \in Q} R(w), \qquad (3)$$
where C, D > 0 are two trade-off parameters used to balance the three goals. (In the analysis
below it will be shown that $D$ can also be interpreted as a Lagrange multiplier of a constrained
optimization problem.) We further develop the last equation by making additional assumptions over
the loss function and the regularization. We assume that the loss is a monotonically decreasing
function of the product $y(x^\top w)$, often called the margin (or the signed margin). This is a property
of many popular loss functions for binary classification, including the hinge-loss and its square used
by SVMs [3, 22], exp-loss used by boosting [9], logistic-regression [11] and the Huber-loss [14].
Under this assumption we compute analytically the first term of the objective (3).
Lemma 1 If the loss function is monotonically decreasing in the margin, $\ell(w, (x, y)) = \ell(y\, x^\top w)$, then $\sup_{w \in Q} \ell(w, (x_i, y_i)) = \ell(y(x^\top \mu) - |x|^\top \sigma)$.
Proof: From the monotonicity of $\ell(\cdot)$ we have $\sup_{w \in Q} \ell(y(x^\top w)) = \ell(\inf_{w \in Q} y(x^\top w))$. Computing the infimum we get,
$$\inf_{w \in Q} y(x^\top w) = \sum_{k=1}^d \inf_{w_k \in [u_k, v_k]} (y x_k)\, w_k = \sum_{k=1}^d \begin{cases} (y x_k)\, u_k & (y x_k) \ge 0 \\ (y x_k)\, v_k & (y x_k) < 0 \end{cases} = \sum_{k=1}^d (y x_k)\big(\mu_k - \mathrm{sign}(y x_k)\, \sigma_k\big) = y(x^\top \mu) - |x|^\top \sigma,$$
using $u = \mu - \sigma$ and $v = \mu + \sigma$ as stated above.
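The closed form of Lemma 1 can be checked numerically by evaluating the margin at the minimizing vertex; the following sketch (our own, with hypothetical variable names) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
mu = rng.normal(size=d)                  # box center
sigma = rng.uniform(0.1, 1.0, size=d)    # box half-widths
x = rng.normal(size=d)
y = rng.choice([-1.0, 1.0])

# Closed form from Lemma 1: inf_{w in Q} y x^T w = y x^T mu - |x|^T sigma
closed_form = y * (x @ mu) - np.abs(x) @ sigma

# The infimum of a linear function over a box is attained coordinate-wise:
# pick u_k = mu_k - sigma_k when y x_k >= 0, and v_k = mu_k + sigma_k otherwise.
w_star = mu - np.sign(y * x) * sigma
assert np.isclose(closed_form, y * (x @ w_star))
```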
The lemma highlights the need to constrain the volume to be strictly larger than zero: due to monotonicity and the fact that $\sigma \ge 0$ (component-wise) we have $\ell(y(x^\top \mu) - |x|^\top \sigma) \ge \ell(y(x^\top \mu))$, so the loss is always minimized when we set $\sigma = 0$. We next turn to analyse the third term of (3) with
the following lemma.
Lemma 2 (1) Assuming $R(w)$ is convex, then $\sup_{w \in Q} R(w)$ is attained on vertices of the box $Q$. (2) Additionally, if $R(w)$ is strictly convex then the supremum is attained only on vertices.
Proof: We use the fact that every point in the box can be represented as a convex combination of the vertices. Formally, given a point in the box $w \in Q$, there exists a vector $\lambda \in \mathbb{R}^{2^d}$ with non-negative elements and $\sum_t \lambda_t = 1$ such that $w = \sum_t \lambda_t z_t$, where the $z_t$ are the vertices of the box. Convexity of $R(\cdot)$ yields $R(w) \le \sum_t \lambda_t R(z_t) \le \max_t \{R(z_t)\}$. Thus, if $w$ attains the supremum $\sup_{w \in Q} R(w)$ then so does at least one vertex. Additionally, if $R(w)$ is a strictly convex function, then the first inequality in the last equation is a strict inequality, and thus a non-vertex cannot attain the supremum.
Common regularization functions are defined as sums over individual features, that is $R(w) = \sum_k r(w_k)$. In this case the supremum is attained on each coordinate independently as follows.
Corollary 3 Assuming $R(w)$ is a sum of scalar-convex functions $\sum_k r(w_k)$, we have,
$$\sup_{w \in Q} R(w) = \sum_k \max\{r(u_k), r(v_k)\} = \sum_k \max\{r(\mu_k - \sigma_k), r(\mu_k + \sigma_k)\}.$$
The corollary follows from the lemma since a supremum of a scalar-function over a box is equivalent
to taking the supremum over the box projected to a single coordinate.
Finally, the volume of a box is given by the product of the lengths of its axes, that is, $\mathrm{vol}(Q) = \prod_k (v_k - u_k) = \prod_k (2\sigma_k) = 2^d \prod_k \sigma_k$.
To summarize, the learning problem of the large-volume box algorithm is cast by solving the following minimization problem, in terms of the center $\mu$ and the size (or dimensions) $\sigma$,
$$\min_{\sigma \ge 0,\, \mu}\ \frac{1}{m}\sum_{i=1}^m \ell\big(y_i(x_i^\top \mu) - |x_i|^\top \sigma\big) - C\sum_k \log \sigma_k + D\sum_k \max\{r(\mu_k - \sigma_k), r(\mu_k + \sigma_k)\}, \qquad (4)$$
where $\ell(\cdot)$ is a monotonically decreasing function, $r(\cdot)$ is a convex function, and $C, D > 0$ are two trade-off parameters used to balance our three desires. We denote by
$$z_{i,+} = y_i x_i + |x_i| \in \mathbb{R}^d, \qquad z_{i,-} = y_i x_i - |x_i| \in \mathbb{R}^d. \qquad (5)$$
The $k$-th element of $z_{i,+}$ ($z_{i,-}$) is twice the $k$-th element of $|x_i|$ if the sign of the $k$-th element of $x_i$ agrees (disagrees) with $y_i$, and zero otherwise.
This problem can equivalently be written in terms of the two "extreme" vertices $u$ and $v$ as follows,
$$\min_{v \ge u}\ \frac{1}{m}\sum_{i=1}^m \ell\Big(\tfrac{1}{2}\big(v^\top z_{i,-} + u^\top z_{i,+}\big)\Big) - C\sum_k \log(v_k - u_k) + D\sum_k \max\{r(v_k), r(u_k)\}, \qquad (6)$$
by using the relation $y_i x_i^\top(v + u) - |x_i|^\top(v - u) = v^\top z_{i,-} + u^\top z_{i,+}$. Note, if the loss function $\ell(\cdot)$
is convex, then both formulations (4) and (6) of the learning problem are convex in their arguments,
as each is a sum of convex functions of linear combinations of the arguments, and a maximum of
convex functions is convex.
We conclude this section with an additional alternative formulation, which for convenience we present in the notation of (6). Although the above problem is convex, the regularization term $\sum_k \max\{r(v_k), r(u_k)\}$ is not smooth because of the max operator. In this alternative, we replace it with a smooth term, by changing the max to a sum, yielding $\sum_k r(v_k) + r(u_k) = R(u) + R(v)$. The problem then becomes,
$$\min_{v \ge u}\ \frac{1}{m}\sum_{i=1}^m \ell\Big(\tfrac{1}{2}\big(v^\top z_{i,-} + u^\top z_{i,+}\big)\Big) - C\sum_k \log(v_k - u_k) + D\,\big(R(u) + R(v)\big). \qquad (7)$$
The two alternatives are related via the following chain of inequalities: $0.5\max\{r(v_k), r(u_k)\} \le 0.5(r(v_k) + r(u_k)) \le \max\{r(v_k), r(u_k)\} \le r(v_k) + r(u_k)$. In other words, given either one
of the problems (6) or (7), we can lower and upper bound it with the other problem with a proper
choice of trade-off parameter D. We call the two versions BoW for box-of-weights algorithm, and
refer to them as BoW-M(ax) and BoW-S(um), respectively.
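For concreteness, both objectives can be evaluated directly from a data matrix. The sketch below is our own illustration (it assumes the hinge loss and $r(x) = x^2$, both used later in the paper) and computes (6) for BoW-M or (7) for BoW-S:

```python
import numpy as np

def bow_objective(u, v, X, y, C, D, smooth=False):
    """Objective (6) (BoW-M); with smooth=True, objective (7) (BoW-S).
    X is (m, d); y is (m,) with entries in {-1, +1}."""
    z_plus = y[:, None] * X + np.abs(X)          # z_{i,+}
    z_minus = y[:, None] * X - np.abs(X)         # z_{i,-}
    margins = 0.5 * (z_minus @ v + z_plus @ u)   # worst-case margins (Lemma 1)
    loss = np.maximum(0.0, 1.0 - margins).mean()        # hinge loss
    log_vol = np.sum(np.log(v - u))                     # log-volume barrier
    if smooth:
        reg = np.sum(u ** 2) + np.sum(v ** 2)           # R(u) + R(v)
    else:
        reg = np.sum(np.maximum(u ** 2, v ** 2))        # sum_k max{r(u_k), r(v_k)}
    return loss - C * log_vol + D * reg
```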
4
Optimization Algorithm
We now present an algorithm to solve (6) for the special case $r(x) = x^2$. The algorithm is based
on COMID [8] and its convergence analysis follows directly from the analysis of COMID, which
is omitted due to lack of space. The algorithm works in iterations. On each iteration a (stochastic)
gradient descent step is performed, followed by a regularization-optimization step. Formally, the
algorithm picks a random example i and updates,
$$(\tilde{u}, \tilde{v}) \leftarrow (u, v) - \frac{\eta\,\lambda}{2}\,\big(z_{i,+},\, z_{i,-}\big) \qquad \text{for}\quad \lambda = \ell'\Big(\tfrac{1}{2}\big(v^\top z_{i,-} + u^\top z_{i,+}\big)\Big),$$
where $\eta$ is the step size.
The algorithm then solves the following regularization-oriented optimization problem,
$$\min_{u,v}\ \frac{1}{2}\|u - \tilde{u}\|^2 + \frac{1}{2}\|v - \tilde{v}\|^2 - C\sum_k \log(v_k - u_k) + D\sum_k \max\{v_k^2, u_k^2\}.$$
The objective of the last problem decomposes over individual pairs $u_k, v_k$, so we reduce the optimization to $d$ independent problems, each defined over two scalars $u$ and $v$ (omitting the index $k$),
$$\min_{u,v}\ F(u, v) = \frac{1}{2}(u - \tilde{u})^2 + \frac{1}{2}(v - \tilde{v})^2 - C\log(v - u) + D\max\{v^2, u^2\}. \qquad (8)$$
We denote the half-plane $H = \{(u, v) \in \mathbb{R}^2 : v > u\}$ and partition it into three subsets: $G_1 = \{(u, v) \in H : v > -u\}$, $G_2 = \{(u, v) \in H : v < -u\}$, and the line $L = \{(u, v) \in \mathbb{R}^2 : v = -u\}$.
The following lemma describes the optimal solution of (8).
Lemma 4 Exactly one of the items below holds and describes the optimal solution of (8).
1. If there exists $(u, v) \in G_1$ such that $v$ is a root of $f(v) = \alpha v^2 + \beta v + \gamma$ and $u = \tilde{u} - 2Dv + (\tilde{v} - v)$, where $\alpha = 2(1 + D)(1 + 2D)$, $\beta = -\tilde{u}(1 + 2D) - \tilde{v}(3 + 4D)$, and $\gamma = \tilde{v}^2 + \tilde{u}\tilde{v} - C$, then it is a global minimum of $F$.
2. If there exists $(u, v) \in G_2$ such that $u$ is a root of $f(u) = \alpha u^2 + \beta u + \gamma$ and $v = \tilde{v} - 2Du + (\tilde{u} - u)$, where $\alpha = 2(1 + D)(1 + 2D)$, $\beta = -\tilde{v}(1 + 2D) - \tilde{u}(3 + 4D)$, and $\gamma = \tilde{u}^2 + \tilde{v}\tilde{u} - C$, then it is a global minimum of $F$. Furthermore, such a point and a point described in 1 cannot exist simultaneously.
3. If no points as described in 1 nor 2 exist, then the global minimum of $F$ is $(u, -u)$ such that $u$ is a root of $f(u) = \alpha u^2 + \beta u + \gamma$, where $\alpha = 2 + 2D$, $\beta = \tilde{v} - \tilde{u}$, $\gamma = -C$.
Proof sketch: By definition, the function $F$ is smooth and convex on $G_1$. The condition in 1 is equivalent to satisfying $\nabla F(u, v) = 0$, and therefore any point that satisfies it is a minimum of $F|_{G_1}$. A similar argument applies to $G_2$ with 2. The convexity of $F$ on the entire set $H$ yields that any such point is also a global minimum of $F$, and that if no such point exists then $F$ attains a global minimum on $L$ (which is derived in 3). The latter is sure to exist since $\lim_{v \to 0} F|_L = \lim_{v \to \infty} F|_L = \infty$. The algebraic derivation is omitted due to lack of space.
Similarly, we develop the update for solving (7). Here, after the gradient step, we need to solve the following problem per coordinate $k$: $\min_{u,v} F(u, v) = \frac{1}{2}(u - \tilde{u})^2 + \frac{1}{2}(v - \tilde{v})^2 - C\log(v - u) + D(v^2 + u^2)$. The following lemma characterizes the optimal solution.
Lemma 5 The optimal solution $(u, v) \in \{(u, v) \in \mathbb{R}^2 : v - u > 0\}$ of the last problem is such that $u$ is a root of the polynomial $f(u) = \alpha u^2 + \beta u + \gamma$, where $\alpha = 2 + 2D + 6D + 8D^2$, $\beta = -(\tilde{v} + 2D\tilde{v} + \tilde{u} + 6D\tilde{u}) - 2\tilde{u}$, $\gamma = \tilde{u}^2 + \tilde{u}\tilde{v} - 4C - 2CD$, and $v = (\tilde{v} + \tilde{u} - u(1 + 2D))/(1 + 2D)$.
Its proof is similar to the proof of Lemma 4, but simpler and omitted due to lack of space.
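To illustrate how the pieces fit together, one COMID iteration for the smooth variant (7) can be sketched as follows. This is our own sketch, not the authors' code: it uses the hinge loss, folds the step size $\eta$ into the prox trade-offs (as is standard for COMID), takes the polynomial coefficients verbatim from Lemma 5, and selects the root with $v - u > 0$:

```python
import numpy as np

def prox_coord(u_t, v_t, C, D):
    """Minimize (u-u_t)^2/2 + (v-v_t)^2/2 - C log(v-u) + D(u^2+v^2) via Lemma 5."""
    a = 2.0 + 2.0 * D + 6.0 * D + 8.0 * D ** 2
    b = -(v_t + 2.0 * D * v_t + u_t + 6.0 * D * u_t) - 2.0 * u_t
    c = u_t ** 2 + u_t * v_t - 4.0 * C - 2.0 * C * D
    disc = np.sqrt(max(b ** 2 - 4.0 * a * c, 0.0))
    for u in ((-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)):
        v = (v_t + u_t - u * (1.0 + 2.0 * D)) / (1.0 + 2.0 * D)
        if v - u > 0.0:                        # feasibility selects the root
            return u, v
    raise RuntimeError("no feasible root found")

def comid_step_bow_s(u, v, x, y, C, D, eta):
    """One COMID iteration for (7): subgradient step, then coordinate-wise prox."""
    z_plus = y * x + np.abs(x)
    z_minus = y * x - np.abs(x)
    margin = 0.5 * (v @ z_minus + u @ z_plus)
    lam = -1.0 if margin < 1.0 else 0.0        # hinge subgradient l'(margin)
    u_t = u - 0.5 * eta * lam * z_plus         # gradient step -> (u_tilde, v_tilde)
    v_t = v - 0.5 * eta * lam * z_minus
    new = [prox_coord(u_t[k], v_t[k], eta * C, eta * D) for k in range(len(u))]
    return np.array([p[0] for p in new]), np.array([p[1] for p in new])
```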
5
Analysis
PAC-Bayesian bounds were introduced by McAllester [19], were further refined later (e.g. [17, 23]),
and applied to analyze SVMs [18]. They have often been shown to be quite tight.
We first introduce some notation needed for the discussion of these bounds. Let $\bar\ell(w, (x, y))$ denote the zero-one loss, that is, $\bar\ell(w, (x, y)) = 1$ if $\mathrm{sign}(w \cdot x) \ne y$ and $\bar\ell(w, (x, y)) = 0$ otherwise. Let $D$ be a distribution over the labeled examples $(x, y)$, and denote by $\bar\ell(w, D)$ the expected zero-one loss of a linear classifier characterized by its weight vector $w$: $\bar\ell(w, D) = \Pr_{(x,y) \sim D}[\mathrm{sign}(w \cdot x) \ne y] = \mathbb{E}_{(x,y) \sim D}[\bar\ell(w, (x, y))]$. We abuse notation and denote by $\bar\ell(w, S)$ the expected loss $\bar\ell(w, D_S)$ for the empirical distribution $D_S$ of a sample $S$.
PAC-Bayesian analysis states generalization bounds in terms of two distributions - prior and posterior - over all hypotheses (i.e. over weight-vectors w). Below, we identify a compact set with a
uniform distribution over the set, and in particular we identify a box Q with a uniform distribution
5
over all weight vectors it contains (and zero mass otherwise). Similarly, we identify any compact
body P with a uniform distribution over its elements. In other words, we refer to the prior P and
the posterior Q both as two uniform distributions and as their support (which are subsets). We
also denote by $\bar\ell(Q, D)$ the expectation of $\bar\ell(w, D)$ over weight vectors $w$ drawn according to the distribution $Q$. We quote Cor. 2.2 of Germain et al. [10]:
Corollary 6 ([10]): For any distribution $D$, for any set $H$ of weight-vectors, for any distribution $P$ of support $H$, for any $\delta \in (0, 1]$, and any positive number $\lambda$, the following statement holds with probability $\ge 1 - \delta$ over samples $S$ of size $n$,
$$\bar\ell(Q, D) \le \frac{1}{1 - e^{-\lambda}}\left\{1 - \exp\left[-\left(\lambda\,\bar\ell(Q, S) + \frac{1}{n}\left(D_{KL}(Q\|P) + \ln\frac{1}{\delta}\right)\right)\right]\right\}. \qquad (9)$$
The corollary states that the expected number of mistakes over examples drawn according to some
fixed and unknown distribution D over inputs, and over weight-vectors drawn from the box Q uniformly, is bounded by the right term, which is a monotonic function of the following sum,
$$\bar\ell(Q, S) + \frac{1}{n\lambda}\, D_{KL}(Q\|P). \qquad (10)$$
For uniform distributions we have the following,
$$D_{KL}(Q\|P) = \begin{cases} \log\dfrac{\mathrm{vol}(P)}{\mathrm{vol}(Q)} & Q \subseteq P, \\ \infty & \text{otherwise.} \end{cases} \qquad (11)$$
Additionally, we bound the empirical training error,
$$\bar\ell(Q, S) = \frac{1}{n}\sum_i \frac{1}{\mathrm{vol}\,Q}\int_{w \in Q} \bar\ell(w, (x_i, y_i))\, dw \;\le\; \frac{1}{n}\sum_i \ell\Big(\inf_{w \in Q} y_i(x_i^\top w)\Big), \qquad (12)$$
where the equality is the definition of $\bar\ell(Q, S)$, and the inequality follows by choosing a loss function $\ell(\cdot)$ which upper bounds the zero-one loss (e.g. the hinge loss), by bounding an expectation with the supremum value, and from Lemma 1.
We get that to minimize the generalization bound of (9) we can minimize a bound on (10) which is
obtained by substituting (11) and (12) in (10). Omitting constants we get,
$$\min_Q\ \frac{1}{n}\sum_i \ell\Big(\inf_{w \in Q} y_i\, w^\top x_i\Big) - \frac{1}{n\lambda}\log \mathrm{vol}\,Q \quad \text{s.t.}\quad Q \subseteq P. \qquad (13)$$
Next, we set P to be a ball of radius R about the origin, and, as in Sec 2, we set Q as a box
parametrized with the vectors u and v. We use the following lemma, of which proof is omitted due
to lack of space,
Lemma 7 If $P$ is a ball of radius $R$ about the origin and $Q$ is a box parametrized using $u$ and $v$, we have $Q \subseteq P \iff \sum_k \max\{v_k^2, u_k^2\} \le R^2$.
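Combining (11) with Lemma 7, the KL term is a difference of log-volumes whenever the box fits inside the ball. A short sketch of our own (the ball volume uses the standard Gamma-function formula):

```python
import numpy as np
from scipy.special import gammaln

def kl_box_in_ball(u, v, R):
    """D_KL(Q || P) from (11) for Q = box [u, v] and P = ball of radius R."""
    d = len(u)
    if np.sum(np.maximum(u ** 2, v ** 2)) > R ** 2:   # Lemma 7: Q not inside P
        return np.inf
    log_vol_box = np.sum(np.log(v - u))
    # log volume of a d-ball: (d/2) log(pi) - log Gamma(d/2 + 1) + d log R
    log_vol_ball = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0) + d * np.log(R)
    return log_vol_ball - log_vol_box
```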
Finally, plugging Lemma 7 and Lemma 1 in (13), we get the following problem, which is monotonically related to a bound of the generalization loss,
$$\min_{v \ge u}\ \frac{1}{n}\sum_{i=1}^m \ell\Big(\tfrac{1}{2}\big(v^\top z_{i,-} + u^\top z_{i,+}\big)\Big) - \frac{1}{n\lambda}\sum_k \log(v_k - u_k) \quad \text{subject to}\quad \sum_k \max\{r(v_k), r(u_k)\} \le R^2.$$
To solve the last problem we write its Lagrangian,
$$\max_{\tau}\ \min_{v \ge u}\ \frac{1}{n}\sum_{i=1}^m \ell\Big(\tfrac{1}{2}\big(v^\top z_{i,-} + u^\top z_{i,+}\big)\Big) - \frac{1}{n\lambda}\sum_k \log(v_k - u_k) + \tau\sum_k \max\{r(v_k), r(u_k)\} - \tau R^2, \qquad (14)$$
where $\tau$ is the Lagrange multiplier ensuring the constraint. Comparing (14), whose objective is used in the PAC-Bayesian bound, with our learning algorithm in (6), we observe that the three terms in both objectives are the same by setting $C = \frac{1}{n\lambda}$ and identifying the optimal value of the Lagrange
Figure 1: Fraction of error on text classification datasets of BoW-M and BoW-S vs SVM (two left plots); and
BoW-M and BoW-S vs AROW (two right plots). Markers above the line indicate superior BoW performance.
multiplier with the trade-off constant $\tau = D$. In fact, each value of the radius $R$ yields a unique
optimal value of the Lagrange multiplier $\tau$. Thus, we can interpret the role of the constant $D$ as
setting implicitly the effective radius of the prior ball P.
Few comments are in order. First, the KL-divergence between distributions is minimized more
effectively if both P and Q are of the same form, e.g. both P and Q are boxes. However, we chose
Q to be a box, as it has a nice interpretation of uncertainty over features, and P to be a ball, as
it decomposes (as opposed to an $\ell_\infty$ ball), which allows simpler optimization algorithms. Second,
as noted above, BoW-S is indeed smoother than BoW-M, yet, from (14) it follows that the latter is
better motivated from the PAC-Bayesian bound, as we want $Q \subseteq P$. Third, the bound is small if
the volume of the box Q is large, which motivates seeking for large-volume boxes, whose members
perform well.
6
Empirical Evaluation
We evaluated BoW-M and BoW-S on NLP tasks experimenting with all the 12 datasets used by
Dredze et al. [6] (sentiment classification in 6 Amazon domains, 3 pairs of 20 newsgroups and 3
pairs of Reuters (RCV1)). We defined an additional task from the 6 Amazon domains (book, dvd,
music, video, electronics, kitchen). Given reviews from two domains, the goal is to identify the domain identity. We used all $6 \cdot 5/2 = 15$ unordered pairs of domains. Additionally, we selected 3 users
from task A of the 2006 ECML/PKDD Discovery Challenge spam data set. The goal is to classify
an email as either a spam or a not-spam. This yielded a total of 30 datasets. For each problem we
selected about 2,000 instances and represented them with vectors of uni/bi-gram counts. Feature
extraction followed a previous protocol [6, 2]. Each dataset was randomly divided for 10-fold cross
validation. We also experimented with USPS OCR data, which we binarized into 45 all-pairs problems, maintaining the standard split into training and test sets. Given an image of one of two digits,
the goal is to detect which of the two digits is shown in the image.
We implemented BoW-M and BoW-S both with Hinge loss and Huber loss. The performance of the
latter was slightly worse than the former, thus we report results only for the Hinge loss. We also tried
AdaGrad [7] but surprisingly it did not work as well as COMID. We compared BoW with support
vector machines (SVM) [3] and AROW [4] which was shown to outperform many algorithms on
NLP tasks. (Other algorithms we evaluated, including maximum-entropy and SGD with Huber-loss, performed worse than either of these two algorithms and thus are omitted.) It is not clear at
this point how to incorporate Mercer kernels into BoW, and thus we are restricted to evaluate all
algorithms on data that can be classified well with linear models.
Classifiers parameters (C for SVM, r for AROW and C, D for BoW) were tuned for each task on
a single additional randomized run over the data splitting it into 80%, used for training, and the
remaining 20% of examples were used to choose the parameters. Results are reported for NLP tasks
as the average error over the 10 folds per problem, while for USPS the standard test sets are used.
The mean error for 30 NLP tasks over 10 folds of BoW-M and BoW-S vs SVM is summarized in the
two left panels of Fig. 1. Markers above the line indicate superior BoW performance. Clearly, both
BoW versions outperform SVM obtaining lower test error on most (26) datasets and higher only on
few (at most 3). The right two panels compare the performance of both BoW versions with AROW.
Here the trend remains yet with a smaller gap, BoW-M outperforms AROW in 20 datasets, and is
outperformed in 9, while BoW-S outperforms AROW in 19 datasets and outperformed in 12. Note,
AROW was previously shown [4] to have superior performance on text data over other algorithms.
Figure 2: No. of USPS 1-vs-1 datasets (out of 45) for which one algorithm is better than the other (see legend)
shown for four levels of label noise during training: 0%, 10%, 20% and 30% (left to right). Higher values
indicate better performance.
The results of the experiments with USPS are summarized in Fig. 2. Each panel shows the number of
datasets (out of 45) for which one algorithm outperforms another algorithm, for four levels of label
noise (i.e. the probability of flipping the correct label) during training: 0%, 10%, 20% and 30%. The
four pairs compared are BoW vs SVM (two left panels, BoW-M leftmost) and BoW vs AROW
(two right panels, BoW-S rightmost). A left bar higher than a middle bar (in each group in each
panel) indicates superior BoW performance. With no label noise (left group in each panel) SVM
outperforms both BoW algorithms (e.g. SVM attains lower test error than BoW-S on 20 datasets
and higher on 12 datasets, with a tie in 13 datasets). The average test error of SVM is 1.81, AROW
is 1.98 and BoW-S is 1.97. When the level of noise increases both BoW algorithms outperform
AROW and SVM. With maximal level of 30% label noise, the average test error is 16.1% for SVM,
14.8 for AROW, and 6.1% for BoW-S. BoW-M achieves lower test error on 27 datasets (compared
both with SVM and AROW), while BoW-S achieves lower test error than SVM on 38 datasets and
than AROW on 40 datasets. Interestingly, while, in general, BoW-M achieved lower test error than
BoW-S on the NLP problems, the situation is reversed in the USPS data where BoW-S achieves in
general lower test error.
7
Related Work
There is much previous work on a related topic of incorporating additional constraints, using prior
knowledge of the problem. Shivaswamy and Jebara [24] use a geometric motivation to modify
SVMs. Their effort and other related works [16, 20, 1] first deduce some additional knowledge about the
problem and keep it fixed while learning. In contrast, our method learns together the
classifier and some additional information.
Another line of research concerns algorithms that maintain a Gaussian distribution over
weights, as opposed to the uniform distribution in our case: either AROW [4] and its predecessors in the online setting,
or Gaussian Margin Machines (GMMs) [5] in the batch setting. Our motivation is similar to the motivation behind GMMs, yet it differs in a few important aspects. (1) BoW
maintains only 2d parameters, while GMM employs d + d(d + 1)/2 as it maintains a full covariance
matrix. (2) As a consequence, GMMs are not feasible to run on data with more than hundreds of
features, which is further supported by the fact that GMMs were evaluated only on data of dimension 64 [5]. (3) We use directly a specialized PAC-Bayes bound for convex loss functions [10] while
the analysis of GMMs uses a bound designed for the 0 ? 1 loss which is then further bounded. (4)
The optimization problem of both versions of BoW is convex, while the optimization problem of
GMMs is not convex, and it is only approximated with a convex problem. (5) Therefore, we can and
do employ COMID [8] which is theoretically justified and fast in practice, while GMMs are trained
using another technique with no convergence (to local minima) guarantees. (6) Conceptually, BoW
maintains a compact set (box) while the set of possible weights for GMM is not compact. This
allows us to extend our work to other types of sets (in progress), while it is not clear how to extend
the GMMs approach from Gaussian distributions to other objects.
8
Conclusion
We extend the commonly used linear classifiers to subsets of the class of possible classifiers, or
in other words uniform distributions over weight vectors. Our learning algorithm is based on a
worst-case margin minimization principle, and it benefits from strong theoretical guarantees based
on tight PAC-Bayesian bounds. The empirical evaluation presented shows that our method performs
favourably with respect to SVMs and AROW, and is more robust in the presence of label noise. We
plan to study the integration of kernels, extend our framework for various shapes and problems, and
develop specialized large scale algorithms.
Acknowledgments: The paper was partially supported by an Israeli Science Foundation grant ISF1567/10 and by a Google research award.
References
[1] J. Bi and T. Zhang. Support vector classification with input data uncertainty. In NIPS, 2004.
[2] J. Blitzer, M. Dredze, and F. Pereira. Biographies, bollywood, boom-boxes and blenders:
Domain adaptation for sentiment classification. In ACL, 2007.
[3] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, September 1995.
[4] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weighted vectors. In
NIPS, 2009.
[5] K. Crammer, M. Mohri, and F. Pereira. Gaussian margin machines. In AISTATS, 2009.
[6] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In ICML,
2008.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and
stochastic optimization. In COLT, 2010.
[8] J. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In
COLT, pages 250–264, 2010.
[9] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. In Euro-COLT, pages 23–37, 1995.
[10] P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. Pac-bayesian learning of linear
classifiers. In ICML, 2009.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining,
Inference, and Prediction. Springer, 2001.
[12] R. Herbrich, T. Graepel, and C. Campbell. Robust Bayes point machines. In ESANN 2000,
pages 49–54, 2000.
[13] R. Herbrich, T. Graepel, and C. Campbell. Bayes point machines. JMLR, 1:245–279, 2001.
[14] P.J. Huber. Robust estimation of a location parameter. Annals of Statistics, 53:73–101, 1964.
[15] T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression models and
their extensions. In Workshop on Artificial Intelligence and Statistics, 1997.
[16] G. Lanckriet, L. Ghaoui, C. Bhattacharyya, and M. Jordan. A robust minimax approach to
classification. JMLR, 3:555–582, 2002.
[17] J. Langford and M. Seeger. Bounds for averaging classifiers. Technical report, CMU-CS-01-102, 2002.
[18] J. Langford and J. Shawe-Taylor. PAC-bayes and margins. In NIPS, 2002.
[19] D. McAllester. PAC-Bayesian model averaging. In COLT, 1999.
[20] J. Nath, C. Bhattacharyya, and M. Murty. Clustering based large margin classification: A
scalable approach using SOCP formulation. In KDD, 2006.
[21] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Morgan Kaufmann, 1988.
[22] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
[23] M. Seeger. PAC-Bayesian generalization bounds for Gaussian processes. JMLR, 3:233–269,
2002.
[24] P. Shivaswamy and T. Jebara. Ellipsoidal kernel machines. In AISTATS, 2007.
[25] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[26] M.H. Wright. The interior-point revolution in optimization: history, recent developments, and
lasting consequences. Bull. Amer. Math. Soc., 42:39–56, 2005.
Scalable imputation of genetic data with a discrete
fragmentation-coagulation process
Lloyd T. Elliott
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square
London WC1N 3AR, U.K.
[email protected]
Yee Whye Teh
Department of Statistics
University of Oxford
1 South Parks Road
Oxford OX1 3TG, U.K.
[email protected]
Abstract
We present a Bayesian nonparametric model for genetic sequence data in which
a set of genetic sequences is modelled using a Markov model of partitions. The
partitions at consecutive locations in the genome are related by the splitting and
merging of their clusters. Our model can be thought of as a discrete analogue of
the continuous fragmentation-coagulation process [Teh et al 2011], preserving the
important properties of projectivity, exchangeability and reversibility, while being
more scalable. We apply this model to the problem of genotype imputation, showing improved computational efficiency while maintaining accuracies comparable
to other state-of-the-art genotype imputation methods.
1
Introduction
The increasing availability of genetic data (for example, from the Thousand Genomes project [1])
and the importance of genetics in scientific and medical applications requires the development of
scalable and accurate models for genetic sequences which are informed by genetic processes. Although standard models such as the coalescent with recombination [2] are accurate, they suffer from
intractable posterior computations. To address this, various hidden Markov model (HMM) based
approaches have been proposed in the literature as more scalable alternatives (e.g. [3, 4]).
Due to gene conversion and chromosomal crossover, genetic sequences exhibit a local "mosaic"-like
structure wherein sequences are composed of prototypical segments called haplotypes [5]. Locally,
these prototypical segments are shared by a cluster of sequences: each sequence in the cluster is
described well by a haplotype that is specific to the location on the chromosome of the cluster. An
example of such a structure is shown in Figure 1. HMMs can capture this structure by having each
latent state correspond to one of the haplotypes [3, 6]. Unfortunately, this leads to symmetries in
the posterior distribution arising from the nonidentifiability of the state labels [7, 8]. Furthermore,
current state-of-the-art HMM methods often involve costly model selection procedures in order to
choose the number of latent states.
A continuous fragmentation-coagulation process (CFCP) has recently been proposed for modelling
local mosaic structure in genetic sequences [9]. The CFCP is a nonparametric model defined directly on unlabelled partitions, thereby avoiding both costly model selection and the label switching
problem [8]. Although inference algorithms derived for the CFCP scale linearly in the number and
length of the sequences [9], since the CFCP is a Markov jump process the computational overhead
needed to model the arbitrary number of latent events located between two consecutive observations
might preclude scalability to large datasets.
In this work, we present a novel fragmentation-coagulation process defined on a discrete grid (called
the DFCP) which provides the advantages of the CFCP while being more scalable. The DFCP
Figure 1: Haplotype structure of the CEU and YRI populations from HapMap [10] found by DFCP.
Data consists of single nucleotide polymorphisms (SNPs) from TAP2 gene. Horizontal axis indicates SNP location and label. Vertical axis represents clusters from last sample of an MCMC chain
converging to DFCP posterior. Letters inside clusters indicate base identity.
describes location-dependent unlabelled partitions such that at each location on the chromosome the
clusters will split into multiple clusters which then merge to form the clusters at the next location. As
with the CFCP, the DFCP avoids the label switching problem by defining a probability distribution
directly on the space of unlabelled partitions.
The splitting and merging of clusters across the chromosome forms a mosaic structure of haplotypes.
Figure 1 gives an example of the structure discovered by the DFCP. We describe the DFCP in
section 2, and a forward-backward inference algorithm in section 3. Sections 4 and 5 report some
experimental results showing good performance on an imputation problem, and in section 6 we
conclude.
2
The discrete fragmentation-coagulation process
In humans, most of the bases on a chromosome are the same for all individuals in a population.
Genetic variations arise through mutations such as single nucleotide polymorphisms (SNPs), which
are locations in the genome where a single base was altered by a mutation at some time in the
ancestry of the chromosome. At each SNP location, a particular chromosome has one of usually two
possible bases (referred to as the major and minor allele). Consequently, SNP data for a chromosome
can be modelled as a binary sequence, with each entry indicating which of the two bases is present
at that location. In this paper we consider SNP data consisting of n binary sequences x = (xi )ni=1 ,
where each sequence xi = (xit )Tt=1 is of length T and corresponds to the T SNPs on a segment of
a chromosome in an individual. The t-th entry xit of sequence i is equal to zero if individual i has
the major allele at location t and equal to one otherwise.
We will model these sequences using a discrete fragmentation-coagulation process (DFCP) so that
the sequence values at the SNP at location $t$ are described by the latent partition $\pi_t$ of the sequences. Each cluster in the partition corresponds to a haplotype. The DFCP models the sequence of partitions using a discrete Markov chain as follows: starting with $\pi_t$, we first fragment each cluster in $\pi_t$ into smaller clusters, forming a finer partition $\rho_t$. Then we coagulate the clusters in $\rho_t$ to form the clusters of $\pi_{t+1}$. In the remainder of this section, we will first give some background theory on partitions, and
random fragmentation and coagulation operations and then we will describe the DFCP as a Markov
chain over partitions. Finally, we will describe the likelihood model used to relate the sequence of
partitions to the observed sequences.
2.1
Random partitions, fragmentations and coagulations
A partition of a set S is a clustering of S into non-overlapping non-empty subsets of S whose union
is all of S. The Chinese restaurant process (CRP) forms a canonical family of distributions on
partitions. A random partition $\pi$ of a set $S$ is said to follow the law $\mathrm{CRP}(S, \alpha, \sigma)$ if:
$$\Pr(\pi) = \frac{[\alpha + \sigma]_\sigma^{\#\pi - 1}}{[\alpha + 1]_1^{\#S - 1}} \prod_{a \in \pi} [1 - \sigma]_1^{\#a - 1} \qquad (1)$$
where $[x]_d^n = x(x + d)\cdots(x + (n - 1)d)$ is Kramp's symbol and $\alpha > -\sigma$, $\sigma \in [0, 1)$ are the concentration and discount parameters respectively [11]. A CRP can also be described by the following
analogy: customers (elements of $S$) enter a Chinese restaurant and choose to sit at tables (clusters in $\pi$). The first customer chooses any table. Subsequently, the $i$-th customer sits at a previously chosen table $a$ with probability proportional to $\#a - \sigma$, where $\#a$ is the number of customers already sitting there, and at some unoccupied table with probability proportional to $\alpha + \sigma\#\pi$, where $\#\pi$ is the total number of tables already sat at by previous customers.
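The restaurant analogy translates directly into a sequential sampler. The following is a minimal Python sketch of our own (the function name is hypothetical):

```python
import numpy as np

def sample_crp(items, alpha, sigma, rng):
    """Draw a partition of `items` from CRP(items, alpha, sigma)."""
    partition = []
    for n, item in enumerate(items):
        if n == 0:
            partition.append([item])
            continue
        weights = [len(a) - sigma for a in partition]   # occupied tables: #a - sigma
        weights.append(alpha + sigma * len(partition))  # new table: alpha + sigma #pi
        w = np.asarray(weights, dtype=float)
        k = rng.choice(len(w), p=w / w.sum())
        if k < len(partition):
            partition[k].append(item)
        else:
            partition.append([item])
    return partition
```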
The fragmentation and coagulation operators are random operations on partitions. The fragmentation $\mathrm{FRAG}(\pi, \alpha, \sigma)$ of a partition $\pi$ is formed by partitioning further each cluster $a$ of $\pi$ according to $\mathrm{CRP}(a, \alpha, \sigma)$ and then taking the union of the resulting partitions, yielding a partition of $S$ that is finer than $\pi$. Conversely, the coagulation $\mathrm{COAG}(\pi, \alpha, \sigma)$ of $\pi$ is formed by partitioning the set of clusters of $\pi$ (i.e., the set $\pi$ itself) according to $\mathrm{CRP}(\pi, \alpha, \sigma)$ and then replacing each cluster with the union of its elements, yielding a partition that is coarser than $\pi$. The fragmentation and coagulation operators are linked through the following theorem by Pitman [12].
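Both operators reduce to applications of the CRP; a sketch of our own, reusing sample_crp from above:

```python
def frag(partition, alpha, sigma, rng):
    """FRAG(pi, alpha, sigma): refine each cluster with an independent CRP
    and take the union of the resulting partitions."""
    out = []
    for a in partition:
        out.extend(sample_crp(a, alpha, sigma, rng))
    return out

def coag(partition, alpha, sigma, rng):
    """COAG(pi, alpha, sigma): partition the clusters themselves with a CRP,
    then merge each group of clusters into a single cluster."""
    groups = sample_crp(list(range(len(partition))), alpha, sigma, rng)
    return [[item for j in group for item in partition[j]] for group in groups]
```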
Theorem 1. Let $S$ be a set and let $A_1, B_1, A_2, B_2$ be random partitions of $S$ such that:
$$A_1 \sim \mathrm{CRP}(S, \alpha\sigma_2, \sigma_1\sigma_2), \qquad B_1 \,|\, A_1 \sim \mathrm{FRAG}(A_1, -\sigma_1\sigma_2, \sigma_2),$$
$$B_2 \sim \mathrm{CRP}(S, \alpha\sigma_2, \sigma_2), \qquad A_2 \,|\, B_2 \sim \mathrm{COAG}(B_2, \alpha, \sigma_1).$$
Then, for all partitions A and B of the set S such that B is a refinement of A:
$$\Pr(A_1 = A, B_1 = B) = \Pr(A_2 = A, B_2 = B). \qquad (2)$$
2.2
The discrete fragmentation-coagulation process
The DFCP is parameterized by a concentration $\mu > 0$ and rates $(R_t)_{t=1}^{T-1}$ with $R_t \in [0, 1)$. Under the DFCP, the marginal distribution of the partition $\pi_t$ is $\mathrm{CRP}(S, \mu, 0)$, and so $\mu$ controls the number of clusters that are found at each location. The rate parameter $R_t$ controls the strength of dependence between $\pi_t$ and $\pi_{t+1}$, with $R_t = 0$ implying that $\pi_t = \pi_{t+1}$ and $R_t \to 1$ implying independence.
Given $\mu$ and $(R_t)_{t=1}^{T-1}$, the DFCP on a set of sequences indexed by the set $S = \{1, \ldots, n\}$ is described by the following Markov chain. First we draw a partition $\pi_1 \sim \mathrm{CRP}(S, \mu, 0)$. This CRP
describes the clustering of $S$ at location $t = 1$. Subsequently, we draw $\rho_t \,|\, \pi_t$ from $\mathrm{FRAG}(\pi_t, 0, R_t)$, which fragments each of the clusters in $\pi_t$ into smaller clusters in $\rho_t$, and then $\pi_{t+1} \,|\, \rho_t$ from $\mathrm{COAG}(\rho_t, \mu/R_t, 0)$, which coagulates clusters in $\rho_t$ into larger clusters in $\pi_{t+1}$.
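Sampling the whole chain is then a few lines (again our own sketch, reusing the operators above; it assumes each $R_t \in (0, 1)$ so that $\mu/R_t$ is finite):

```python
def sample_dfcp(n, T, mu, rates, rng):
    """Sample (pi_1, ..., pi_T) from the DFCP prior; `rates` has length T - 1."""
    pi = [sample_crp(list(range(n)), mu, 0.0, rng)]
    for t in range(T - 1):
        rho = frag(pi[-1], 0.0, rates[t], rng)           # rho_t | pi_t
        pi.append(coag(rho, mu / rates[t], 0.0, rng))    # pi_{t+1} | rho_t
    return pi
```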
Each $\pi_t$ has $\mathrm{CRP}(S, \mu, 0)$ as its invariant marginal distribution and each $\rho_t$ is marginally distributed as $\mathrm{CRP}(S, \mu, R_t)$. This can be seen by applying Theorem 1 with the substitution $\sigma_1 = 0$, $\sigma_2 = R_t$, $\alpha = \mu/R_t$. In population genetics the CRP appears as (and was predated by) Ewens' sampling
formula [13], a counting formula for the number of alleles appearing in a population, observed at
a given location. Over a short segment of the chromosome where recombination rates are low,
haplotypes behave like alleles and so a CRP prior on the number of haplotypes at a location is
reasonable.
Further, since fragmentation and coagulation operators are defined in terms of CRPs which are projective and exchangeable, the Markov chain is projective and exchangeable in S as well. Projectivity
and exchangeability are desirable properties for Bayesian nonparametric models because they imply
that the marginal distribution of a given data item does not depend on the total number of other data
items or on the order in which the other data items are indexed. In genetics, this captures the fact
that usually only a small subset of a population is observed.
Finally, the theorem also shows that conditioned on $\pi_{t+1}$, $\rho_t$ has distribution $\mathrm{FRAG}(\pi_{t+1}, 0, R_t)$, while $\pi_t \,|\, \rho_t$ has distribution $\mathrm{COAG}(\rho_t, \mu/R_t, 0)$, meaning that the Markov chain defining the DFCP
is reversible. Chromosome replication is directional and so statistics for genetic processes along the
chromosome are not reversible. But the strength of this effect on SNP data is not currently known
and many genetic models such as the coalescent with recombination [14] assume reversibility for
simplicity. The non-reversibility displayed by models such as fastPHASE is an artifact of their
construction rather than an attempt to capture non-reversible aspects of genetic sequences.
2.3
Likelihood model for sequence observations
Given the sequence of partitions $(\pi_t)_{t=1}^T$, we model the observations in each cluster at each location $t$ independently. For each cluster $a \in \pi_t$ at location $t$, we adopt a discrete likelihood model in which
$$\begin{aligned}
\pi_1 &\sim \mathrm{CRP}(S, \mu, 0), & \log\mu &\sim \mathcal{N}(m, v),\\
\rho_t \,|\, \pi_t &\sim \mathrm{FRAG}(\pi_t, 0, R_t), & \log R_t &\sim \mathrm{Uniform}(\log R_{\min}, 0),\\
\pi_{t+1} \,|\, \rho_t &\sim \mathrm{COAG}(\rho_t, \mu/R_t, 0), & \theta_{ta} \,|\, \phi_t &\sim \mathrm{Bernoulli}(\phi_t),\\
x_{it} \,|\, a_{it} &= \theta_{t a_{it}}, & \phi_t \,|\, \delta_t &\sim \mathrm{Beta}\big(\tfrac{\delta_t}{2}, \tfrac{\delta_t}{2}\big),\\
& & \log\delta_t &\sim \mathrm{Uniform}(\log\delta_{\min}, 0).
\end{aligned} \qquad (3)$$
Figure 2: Left: Graphical model for the discrete fragmentation-coagulation process. Hyperparameters are not shown. Right: Generative process for genetic sequences $x_{it}$.
the same observation is emitted for each sequence in the cluster. For each sequence $i$, let $a_{it} \in \pi_t$ be the cluster in $\pi_t$ containing $i$. Let $\theta_{ta}$ be the emission of cluster $a$ at location $t$. Since SNP data has binary labels, $\theta_{ta} \in \{0, 1\}$ is a Bernoulli random variable. Let the mean of $\theta_{ta}$ be $\phi_t$ (this is the latent allele frequency at location $t$). We assume that conditioned on the partitions and the parameters, the observations $x_{it}$ are independent, and determined by the cluster parameter $\theta_{ta}$. Thus the probability $\Pr(\theta_{ta} = 1 \,|\, \phi_t) = \phi_t$ and the probability $\Pr(x_{it} \,|\, a_{it} = a, \theta_{ta}) = \mathbb{1}(x_{it} = \theta_{ta})$, where $\mathbb{1}$ is an indicator function (i.e., it is one if $x_{it} = \theta_{ta}$ and zero otherwise).
We place a beta prior on $\phi_t$ with mean parameter $1/2$ and mass parameter $\delta_t$. The mass parameters are themselves marginally independent and we place on them an uninformative log-uniform prior over a range: $p(\delta_t) \propto \delta_t^{-1}$, $\delta_t \ge \delta_{\min}$. Since this distribution is heavy tailed, the $\phi_t$ variables will have more mass near 0 and 1 than they would have if $\delta_t$ were fixed, adding sparsity to the latent allele frequencies. This phenomenon is empirically observed in SNP data. We also place an uninformative log-uniform prior on $R_t$ over a range: $p(R_t) \propto R_t^{-1}$, $R_t \ge R_{\min}$. Note that the prior gives more mass to values of $R_t$ close to $R_{\min}$, which we set close to zero, since we expect the partitions of consecutive locations to be relatively similar so that the mosaic haplotype structure can be formed. Finally, we place a truncated log-normal prior on $\mu$ with mean $m$ and variance $v$: $\log\mu \sim \mathcal{N}(m, v)$, $\mu > 0$. The graphical model for this generative process is shown in Figure 2.
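Given the partitions, the likelihood model is sampled location by location. A sketch of our own, using the symbols of our reconstruction above ($\phi_t$, $\theta_{ta}$, $\delta_t$):

```python
import numpy as np

def sample_emissions(pi, delta, rng):
    """Sample phi_t, theta_{ta}, and observations x_{it} = theta_{t, a_{it}}."""
    T = len(pi)
    n = sum(len(a) for a in pi[0])
    X = np.zeros((n, T), dtype=int)
    for t in range(T):
        phi_t = rng.beta(delta[t] / 2.0, delta[t] / 2.0)  # latent allele frequency
        for a in pi[t]:
            theta_ta = rng.binomial(1, phi_t)             # cluster emission
            for i in a:
                X[i, t] = theta_ta                        # deterministic emission
    return X
```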
2.4
Relationship with the continuous fragmentation-coagulation process
The continuous version of the fragmentation-coagulation process [9], which we refer to as the CFCP, is a partition-valued Markov jump process (MJP). (The "time" variable for this MJP is the chromosome location, viewed as a continuous variable.) The CFCP is a pure jump process and can be defined in terms of its rates for various jump events. There are two types of events in the CFCP: binary fragmentation events, in which a single cluster $a$ is split into two clusters $b$ and $c$ at a rate of $R\,\Gamma(\#b)\Gamma(\#c)/\Gamma(\#a)$, and binary coagulation events, in which two clusters $b$ and $c$ merge to form one cluster $a$ at a rate of $R/\mu$.
As was shown in [9], the CFCP can be realised as a continuous limit of the DFCP. Consider a DFCP with concentration $\mu$ and constant rate parameter $R\epsilon$. Then as $\epsilon \to 0$ the probability that the coagulation and fragmentation operations at a specific time step $t$ induce no change in the partition structure $\pi_t$ approaches 1. Conversely, the probability that these operations are the binary events given above scales as $O(\epsilon)$, while all other events scale as larger powers of $\epsilon$. If we rescale the time steps by $t \mapsto \epsilon t$, then the expected number of binary events over a finite interval approaches the length of the interval times the rates given above, and the expected number of all other events goes to zero, yielding the CFCP.
In the CFCP fragmentation and coagulation events are binary: they involve either one cluster fragmenting into two new clusters, or two clusters coagulating into one new cluster. However, for the
DFCP the fragmentation and coagulation operators can describe more complicated haplotype structures without introducing more latent events. For example one cluster splitting into three clusters
(as happens to the second haplotype from the top of Figure 1 after the 18th SNP) can be described
by the DFCP using just one fragmentation operator. The order of the latent events required by the CFCP does not matter, adding unnecessary symmetry to its posterior.
3
Inference with the discrete fragmentation coagulation process
We derive a Gibbs sampler for posterior simulation in the DFCP by making use of the exchangeability of the process. Each iteration of the sampler updates the trajectory of cluster assignments of one
sequence i through the partition structure. To arrive at the updates, we first derive the conditional
distribution of the i-th trajectory given the others, which can be shown to be a Markov chain. Coupled with the deterministic likelihood terms, we then use a backwards-filtering/forwards-sampling
algorithm to obtain a new trajectory for sequence i. In this section, we derive the conditional distribution of trajectory i using the definition of fragmentation and coagulation and also the posterior
distributions of the parameters $R_t$, $\mu$, which we will update using slice sampling [15].
3.1
Conditional probabilities for the trajectory of sequence i
We will refer to the projections of the partitions $\pi_t$ and $\rho_t$ onto $S \setminus \{i\}$ by $\pi_t^{-i}$ and $\rho_t^{-i}$ respectively. Let $a_t$ (respectively $b_t$) be the cluster assignment of sequence $i$ at location $t$ in $\pi_t$ (respectively $\rho_t$). If the sequence $i$ is placed in a new cluster by itself in $\pi_t$ (i.e., it forms a singleton cluster) we will denote this by $a_t = \emptyset$, and for $\rho_t^{-i}$ we will denote the respective event by $b_t = \emptyset$. Otherwise, if the sequence $i$ is placed in an existing cluster in $\pi_t^{-i}$ (respectively $\rho_t^{-i}$) we will denote this by $a_t \in \pi_t^{-i}$ (respectively $b_t \in \rho_t^{-i}$). Thus the state spaces of $a_t$ and $b_t$ are respectively $\pi_t^{-i} \cup \{\emptyset\}$ and $\rho_t^{-i} \cup \{\emptyset\}$.
Starting at t = 1, since the initial distribution is π_1 ∼ CRP(S, μ, 0), the conditional cluster assignment of the sequence i in π_1 is given by the CRP probabilities from (1):

Pr(a_1 = a | π_1^{−i}) = #a/(n − 1 + μ)   if a ∈ π_1^{−i},
                         μ/(n − 1 + μ)    if a = ∅.   (4)
To find the conditional distribution of b_t given a_t, we use the definition of the fragmentation operation as independent CRP partitions of each cluster in π_t. If a_t = ∅, then the sequence i is in a cluster by itself in π_t and so it will remain in a cluster by itself after fragmenting. Thus, b_t = ∅ with probability 1. If a_t = a ∈ π_t^{−i} then b_t must be one of the clusters in ρ_t into which a fragments. This can be a singleton cluster, in which case b_t = ∅, or it can be one of the clusters in ρ_t^{−i}. We will refer to this set of clusters in ρ_t^{−i} by F_t(a). Since a is fragmented according to CRP(a, 0, R_t), when the i-th sequence is added to this CRP it is placed in a cluster b ∈ F_t(a) with probability proportional to (#b − R_t), and it is placed in a singleton cluster with probability proportional to R_t #F_t(a). Normalizing these probabilities yields the following joint distribution:

Pr(b_t = b | a_t = a, π_t^{−i}, ρ_t^{−i}) =
  (#b − R_t)/#a     if a ∈ π_t^{−i}, b ∈ F_t(a),
  R_t #F_t(a)/#a    if a ∈ π_t^{−i}, b = ∅,
  1                 if a = b = ∅,
  0                 otherwise.   (5)
Similarly, to find the conditional distribution of a_{t+1} given b_t = b, we use the definition of the coagulation operation. If b ≠ ∅, then the sequence i was not in a singleton cluster in ρ_t^{−i} and so it must follow the rest of the sequences in b to the unique a ∈ π_{t+1}^{−i} such that b ⊆ a (i.e., b coagulates with other clusters to form a). We will refer to the set of clusters in ρ_t^{−i} that coagulate to form a by C_t(a). If b = ∅ then the sequence i is in a singleton cluster in ρ_t, and so we can imagine it being the last customer added to the coagulating CRP(ρ_t, μ/R_t, 0) of the clusters of ρ_t. Hence the probability that sequence i is placed in a cluster a ∈ π_{t+1}^{−i} is proportional to #C_t(a), while the probability that it forms a cluster by itself in π_{t+1} is proportional to μ/R_t. This yields the following joint probability:

Pr(a_{t+1} = a | b_t = b, π_{t+1}^{−i}, ρ_t^{−i}) =
  1                                   if a ∈ π_{t+1}^{−i}, b ∈ C_t(a),
  R_t #C_t(a)/(μ + R_t #ρ_t^{−i})     if a ∈ π_{t+1}^{−i}, b = ∅,
  μ/(μ + R_t #ρ_t^{−i})               if a = b = ∅,
  0                                   otherwise.   (6)
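To make (4)-(6) concrete, the following is a minimal Python sketch of these conditionals (an illustration, not the authors' implementation). Clusters are represented as frozensets of sequence indices with sequence i already removed, and the hypothetical marker EMPTY stands for the new-singleton event ∅.

```python
# Minimal sketch of the conditionals (4)-(6); not the authors' code.
EMPTY = None  # stands for the new-singleton event (the empty-set symbol above)

def crp_init_probs(pi1, n, mu):
    """Eq. (4): conditional CRP probabilities for a_1 given pi_1^{-i}."""
    probs = {a: len(a) / (n - 1 + mu) for a in pi1}
    probs[EMPTY] = mu / (n - 1 + mu)
    return probs

def frag_probs(a, frag_sets, R_t):
    """Eq. (5): Pr(b_t | a_t = a), where frag_sets = F_t(a), the clusters of
    rho_t^{-i} into which a fragments (their sizes sum to #a)."""
    if a is EMPTY:
        return {EMPTY: 1.0}
    size_a = len(a)
    probs = {b: (len(b) - R_t) / size_a for b in frag_sets}
    probs[EMPTY] = R_t * len(frag_sets) / size_a
    return probs

def coag_probs(b, pi_next, coag_sets, n_rho, mu, R_t):
    """Eq. (6): Pr(a_{t+1} | b_t = b); coag_sets maps each a in pi_{t+1}^{-i}
    to C_t(a), and n_rho = #rho_t^{-i}."""
    if b is not EMPTY:
        # deterministic: follow b to the unique a with b a subset of a
        a = next(a for a in pi_next if b <= a)
        return {a: 1.0}
    Z = mu + R_t * n_rho
    probs = {a: R_t * len(coag_sets[a]) / Z for a in pi_next}
    probs[EMPTY] = mu / Z
    return probs
```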
3.2
Message passing and sampling for the sequences of the DFCP
Once the conditional probabilities are defined, it is straightforward to derive messages that allow
us to conduct backwards-filtering/forwards-sampling to resample the trajectory of sequence i in the
DFCP. This provides an exact Gibbs update for the trajectory of that sequence conditioned on the
trajectories of all the other sequences and the data. The messages we will define are the conditional
distribution of all the data seen after a given location in the sequence conditioned on the cluster
assignment of sequence i at that location. The messages are defined as follows:
m_t^C(a) = Pr(x_{i,(t+1):T} | a_t = a, π_{t:T}^{−i}, ρ_{t:(T−1)}^{−i}),   (7)
m_t^F(b) = Pr(x_{i,(t+1):T} | b_t = b, π_{t:T}^{−i}, ρ_{t:(T−1)}^{−i}).   (8)
We define the last messages to be m_T^C(a) = 1. These messages are computed as follows:

m_t^F(b) = Σ_{a ∈ π_{t+1}^{−i} ∪ {∅}} m_{t+1}^C(a) · δ(x_{i,(t+1)} = θ_{(t+1),a}) · Pr(a_{t+1} = a | b_t = b, π_{t+1}^{−i}, ρ_t^{−i}),   (9)

m_t^C(a) = Σ_{b ∈ ρ_t^{−i} ∪ {∅}} m_t^F(b) · Pr(b_t = b | a_t = a, π_t^{−i}, ρ_t^{−i}),   (10)

where δ(x_{i,(t+1)} = θ_{(t+1),a}) is the likelihood term, and the transition factors are the coagulation probabilities from (6) and the fragmentation probabilities from (5), respectively.
As the fragmentation and coagulation conditional probabilities are only supported on clusters a, b such that b ⊆ a, these sums can be expanded so that only non-zero terms are summed over. For simplicity we do not provide these expanded forms here. Given these computations it is easy to define backwards messages using the reversibility of the process. The backwards messages can be used to compute marginal probabilities of the observations, as in the forward-backward algorithm.
definition of the messages. Starting at location 1, we have:
?i
Pr(a1 = a|xi , ?1:T
, ??i
1:(T ?1) )
?i
? Pr(a1 = a|?1?i ) Pr(xi1 |a1 = a) Pr(xi,2:T |a1 = a, ?1:T
, ??i
1:(T ?1) ),
= Pr(a1 = a|?1?i ) ?(x1 = ?1a ) m1C (a).
{z
}
|
{z
}|
CRP probabilities (1).
(11)
Likelihood.
For subsequent bt and at+1 for locations t = 1, . . . , T ? 1,
?i
?i
Pr(bt = b|at = a, xi , ?1:T
, ?1:(T
?1) )
?i
?i
? Pr(bt = b|at = a, ?t?i , ??i
t ) Pr(xi,(t+1):T |bt = b, ?t:T , ?t:(T ?1) ),
t
= Pr(bt = b|at = a, ?t?i , ??i
t ) mF (b).
|
{z
}
(12)
Fragmentation probabilities from (5).
?i
Pr(at = a|bt?1 = b, xi , ?1:T
, ??i
1:(T ?1) )
?i
?i
? Pr(at = a|bt?1 = b, ?t?i , ??i
t?1 ) Pr(xit |at = a) Pr(xi,(t+1):T |at = a, ?t:T , ?t:(T ?1) ),
t
= Pr(at = a|bt?1 = b, ?t?i , ??i
t?1 ) ?(xit = ?ta ) mC (a).
{z
}
|
{z
}|
Coagulation probability from (6).
(13)
Likelihood.
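Given the messages, the forwards-sampling pass (11)-(13) is a sequence of categorical draws. A minimal sketch (the helper `sample_from` and the tabulated inputs are hypothetical, as in the previous sketch):

```python
import random

def sample_from(weights):
    """Draw a key proportionally to its (unnormalized) weight."""
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for k, w in weights.items():
        acc += w
        if r <= acc:
            return k
    return k  # guard against floating-point round-off

def forward_sample(T, states_a, states_b, init, coag, frag, lik, mC, mF):
    """Sample a_1, b_1, ..., b_{T-1}, a_T following (11)-(13)."""
    a = sample_from({x: init[x] * lik[1][x] * mC[1][x]
                     for x in states_a[1]})                          # eq. (11)
    traj = [a]
    for t in range(1, T):
        b = sample_from({y: frag[t][a].get(y, 0.0) * mF[t][y]
                         for y in states_b[t]})                      # eq. (12)
        a = sample_from({x: coag[t][b].get(x, 0.0) * lik[t + 1][x] * mC[t + 1][x]
                         for x in states_a[t + 1]})                  # eq. (13)
        traj += [b, a]
    return traj
```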
The complexity of this update is O(KT ) where K is the expected number of clusters in the posterior.
This complexity class is the same as for the continuous fragmentation-coagulation process and other
related HMM methods such as fastPHASE. But there is no exact Gibbs update for the trajectories in
the CFCP. Instead the CFCP sampler relies on uniformization [16] which has slower mixing times
than exact Gibbs and so the update for the DFCP is, theoretically, more efficient.
3.3
Parameter updates
We use slice sampling [15] to update the μ and R_t parameters conditioned on the partition structure. Using Bayes' rule, the definition (3) of the DFCP, and the identity [a]_b^n = b^n Γ(a/b + n)/Γ(a/b),
Figure 3: Allele imputation for X chromosomes from the Thousand Genomes project. Left: Accuracy for prediction of held-out alleles for the continuous (CFCP) and discrete (DFCP) versions of the fragmentation-coagulation process and for the popular methods BEAGLE and fastPHASE. BEAGLE accuracies in the 90% missing data condition are truncated by the axis limits to emphasize the other conditions. Right: Runtime versus accuracy for 500 MCMC iterations for the DFCP and CFCP in the 50% missing data condition. Points are averaged over 20 datasets and 25 consecutive samples.
the posterior probabilities of μ and R_t given the partitions π_{1:T} and ρ_{1:(T−1)} are as follows:

Pr(μ | π, ρ) ∝ Pr(μ) Pr(π_1 | μ, R_1) Pr(ρ_1 | π_1, μ, R_1) ··· Pr(π_T | ρ_{T−1}, μ, R_{T−1})
  ∝ Pr(μ) · μ^{−T + Σ_{t=1}^{T} #π_t} · Γ(μ)/Γ(μ + n) · Π_{t=1}^{T−1} Γ(μ/R_t)/Γ(μ/R_t + #ρ_t),   (14)

Pr(R_t | μ, π, ρ) ∝ Pr(R_t) Pr(ρ_t | π_t, μ, R_t) Pr(π_{t+1} | ρ_t, μ, R_t)
  ∝ Pr(R_t) · R_t^{#ρ_t − #π_t − #π_{t+1} + 1} · [Γ(μ/R_t) Γ(1 − R_t)^{−#ρ_t} / Γ(#ρ_t + μ/R_t)] · Π_{b ∈ ρ_t} Γ(#b − R_t).   (15)
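Since (14) and (15) are only known up to normalization, a generic univariate slice sampler [15] suffices. The sketch below assumes R_t ∈ (0, 1) (it plays the role of a discount parameter in the fragmenting CRPs) and omits Neal's stepping-out phase for brevity; `log_post_Rt` follows the reconstruction of (15) given above, so its exact exponents should be checked against one's own derivation.

```python
import math, random

def slice_sample(logp, x0, lo, hi, width=0.1, n_steps=20):
    """Generic univariate slice sampler on (lo, hi); logp is the log of an
    unnormalized density such as (15). Stepping-out omitted for brevity."""
    x = x0
    for _ in range(n_steps):
        logy = logp(x) + math.log(random.random())   # vertical slice level
        left = max(lo, x - width * random.random())  # window placed around x
        right = min(hi, left + width)
        while True:                                  # shrinkage sampling
            x_new = random.uniform(left, right)
            if logp(x_new) >= logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
    return x

def log_post_Rt(R, mu, pi_t, rho_t, n_pi_next, prior_logpdf):
    """Log of (15) up to a constant; pi_t, rho_t are lists of cluster sizes,
    n_pi_next = number of clusters in the next partition pi_{t+1}."""
    lg = math.lgamma
    val = prior_logpdf(R)
    val += (len(rho_t) - len(pi_t) - n_pi_next + 1) * math.log(R)
    val += lg(mu / R) - lg(len(rho_t) + mu / R)
    val -= len(rho_t) * lg(1.0 - R)
    val += sum(lg(nb - R) for nb in rho_t)
    return val
```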
4
Experiments
To examine the accuracy and scalability of the DFCP we conducted an allele imputation experiment on SNP data from the Thousand Genomes project.¹ We also compared the runtime of the samplers
for the DFCP and CFCP on data simulated from the coalescent with recombination model [14]. In
this section, we describe the setup of these experiments and in section 5 we present the results.
For the allele imputation experiment, we considered SNPs from 524 male X chromosomes. We
chose 20 intervals randomly, each containing 500 consecutive SNPs. In five conditions we held out
nested sets of between 10% and 90% of the alleles uniformly over all pairs of sites and individuals,
and used fastPHASE [3], BEAGLE [17], CFCP [9] and the DFCP to predict the held out alleles.
We used the most recent versions of BEAGLE and fastPHASE software available to us. We implemented the DFCP with many of the same libraries and programming techniques as the CFCP and
both versions were optimized. In each missing data condition, the CFCP and DFCP were run with
five random restarts and 46 MCMC iterations per restart (26 of which were discarded for burnin and
thinning). The accuracies for the DFCP and CFCP were computed by thresholding the empirical
marginal probabilities of the held out alleles at 0.5. The priors on the hyper parameters and the
likelihood specification of the two models were matched and the samplers were initialized using a
sequential Monte Carlo method based on the trajectory updates.
The posterior distributions of the concentration parameter μ for the two methods are different. In order to match the expected number of clusters in the posterior, we also conducted allele imputation in the 50% missing data condition with μ fixed at 10.0 for both models. We simulated 500 MCMC
iterations with no random restarts. We then computed the accuracy of the samples by predicting
held out alleles based on the cluster assignments of the sample.
¹March 2012 v3 release of the Thousand Genomes Project.
In a second experiment we simulated datasets from the coalescent with recombination model consisting of between 10,000 and 50,000 sequences using the software ms [14]. We conducted posterior
MCMC simulation in both models and compared the computation time required per iteration.
5
Results
The accuracy of the DFCP in the allele imputation experiment was comparable to that of the CFCP
and fastPHASE in all missing data conditions (Figure 3, left). For the 70% and 90% missing data
conditions, BEAGLE performed poorly (its median accuracy in the 90% condition was 93.90%, while the mean chance accuracy across all conditions was 93.44%). In Figure 3 (right) we compare the accuracy
and runtime for the 50% missing data condition. This figure shows that the runtime required for each
iteration is lower for the DFCP and the sequential Monte Carlo initialization is better (i.e., closer
to a posterior mode) for the DFCP. No difference in mixing time is suggested by the figure. As an
aside, we estimated the Shannon entropy in these samples and found that the DFCP had slightly
more entropy per sample than the CFCP (the difference was small but statistically significant). This
could indicate that the DFCP has better mixing.
For the second experiment, we plot the runtime per iteration of both models against the number
of sequences in the simulated dataset (Figure 4). The DFCP was around 2.5 times faster than the
CFCP for the condition with 50,000 sequences. In both models, most of the computation time was
spent calculating the messages in the backwards-filtering step. The CFCP has an arbitrary number of
latent events between consecutive observations and it is likely that the runtime improvement shown
by the DFCP is due to its reduced number of required message calculations.
6
Discussion
In this paper we have presented a discrete fragmentation-coagulation process. The DFCP is a partition-valued Markov chain, where partitions change along the chromosome by a fragmentation operation followed by a coagulation operation. The DFCP is designed to model the mosaic haplotype structure observed in genetic sequences. We applied the DFCP to an allele prediction task on data from the Thousand Genomes Project, yielding accuracies comparable to state-of-the-art methods and runtimes that were lower than the runtimes of the continuous fragmentation-coagulation process [9].
Figure 4: Runtimes per iteration per sequence of DFCP and CFCP on simulated datasets consisting of large numbers of sequences. Lines indicate the mean. Shaded region indicates the standard deviation.
The DFCP and CFCP induce different joint distributions on the partitions at adjacent locations. The CFCP is a Markov jump process with an arbitrary number of latent binary events wherein a single cluster is split into two clusters, or two clusters are merged into one. The DFCP, however,
can model any partition structure with one pair of fragmentation and coagulation operations. Exact Gibbs updates for the partitions are possible in the DFCP whereas sampling in the CFCP uses
uniformization [16] which, although fast in practice, has in theory slower mixing than exact Gibbs.
In future work we will explore better calling and calibration methods to improve imputation accuracies. Another avenue of future research is to understand how other genetic processes can be
incorporated into the fragmentation-coagulation framework, including population admixture and
gene conversion. Although haplotype structure is a local property, the Markov assumption does not
hold in real genetic data. This could be reflected through hierarchical FCP models or adaptation of
other dependent nonparametric models such as the spatially normalized Gamma process [18].
Acknowledgements
We thank the Gatsby Charitable Foundation for funding. We also thank Andriy Mnih, Vinayak Rao
and Anna Goldenberg for helpful discussion and the anonymous reviewers for their suggestions.
References
[1] The 1000 Genomes Project Consortium. A map of human genome variation from population-scale sequencing. Nature, 467:1061–1073, 2010.
[2] R. R. Hudson. Properties of a neutral allele model with intragenic recombination. Theoretical Population Biology, 23(2):183–201, 1983.
[3] P. Scheet and M. Stephens. A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotypic phase. The American Journal of Human Genetics, 78(4):629–644, 2006.
[4] J. Marchini, B. Howie, S. Myers, G. McVean, and P. Donnelly. A new multipoint method for genome-wide association studies by imputation of genotypes. Nature Genetics, 39(7):906–913, 2007.
[5] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and R. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29:229–232, 2001.
[6] J. Marchini, D. Cutler, N. Patterson, M. Stephens, E. Eskin, E. Halperin, S. Lin, Z. S. Qin, H. M. Munro, G. R. Abecasis, P. Donnelly, and the International HapMap Consortium. A comparison of phasing algorithms for trios and unrelated individuals. The American Journal of Human Genetics, 78(3):437–450, 2006.
[7] M. Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795–809, 2000.
[8] A. Jasra, C. C. Holmes, and D. A. Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50–67, 2005.
[9] Y. W. Teh, C. Blundell, and L. T. Elliott. Modelling genetic variations using fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[10] The International HapMap Consortium. The international HapMap project. Nature, 426:789–796, 2003.
[11] J. Pitman. Combinatorial stochastic processes. Springer-Verlag, 2006.
[12] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870–1902, 1999.
[13] W. J. Ewens. The sampling theory of selectively neutral alleles. Theoretical Population Biology, 3:87–112, 1972.
[14] R. R. Hudson. Generating samples under a Wright-Fisher neutral model of genetic variation. Bioinformatics, 18:337–338, 2002.
[15] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[16] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2011.
[17] B. L. Browning and S. R. Browning. A unified approach to genotype imputation and haplotype-phase inference for large data sets of trios and unrelated individuals. American Journal of Human Genetics, 84:210–223, 2009.
[18] V. Rao and Y. W. Teh. Spatial normalized gamma processes. In Advances in Neural Information Processing Systems, volume 22, pages 1554–1562, 2009.
Optimal Neural Tuning Curves for Arbitrary
Stimulus Distributions: Discrimax, Infomax and
Minimum Lp Loss
Alan A. Stocker
Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Zhuo Wang
Department of Mathematics
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Daniel D. Lee
Department of Electrical and Systems Engineering
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
In this work we study how the stimulus distribution influences the optimal coding
of an individual neuron. Closed-form solutions to the optimal sigmoidal tuning
curve are provided for a neuron obeying Poisson statistics under a given stimulus
distribution. We consider a variety of optimality criteria, including maximizing
discriminability, maximizing mutual information and minimizing estimation error under a general Lp norm. We generalize the Cramer-Rao lower bound and
show how the Lp loss can be written as a functional of the Fisher Information
in the asymptotic limit, by proving the moment convergence of certain functions
of Poisson random variables. In this manner, we show how the optimal tuning
curve depends upon the loss function, and the equivalence of maximizing mutual
information with minimizing Lp loss in the limit as p goes to zero.
1
Introduction
A neuron represents sensory information via its spike train. Rate coding maps an input stimulus to a spiking rate via the neuron's tuning. Previous work in computational neuroscience has tried to
explain this mapping via optimality criteria. An important factor determining the optimal shape of
the tuning curve is the input statistics of the stimulus. It has previously been observed that environmental statistics can influence the neural tuning curves of sensory neurons [1, 2, 3, 4, 5]. However,
most theoretical analysis has usually assumed the input stimulus distribution to be uniform. Only
recently, theoretical work has been demonstrating how non-uniform prior distributions will affect
the optimal shape of the neural tuning curves [6, 7, 8, 9, 10].
An important factor in determining the optimal tuning curve is the optimality criterion [11]. Most
previous work used local Fisher Information [12, 13, 14], the estimation square loss or discriminability (discrimax) [15, 16] or the mutual information (infomax) [9, 17] to evaluate neural codes.
It has been shown that both the square loss and the mutual information are related to the Fisher Information via lower bounds: the lower bound of estimation square loss is provided by the Cramer-Rao
lower bound [18, 19] and the mutual information can be lower bounded by a functional of Fisher
Information as well [7]. It has also been proved that both lower bounds can be attained on the condition that the encoding time is long enough and the estimator behaves well in the asymptotic limit.
However, there has been no previous study to integrate those two lower bounds into a more general
framework.
In this paper, we ask the question: what tuning curve optimally encodes a stimulus with an arbitrary prior distribution such that the Lp estimation loss is minimized? We are able to provide analytical solutions to this question. With an asymptotic analysis of the maximum likelihood estimator (MLE), we can show how the Lp loss converges to a functional of the Fisher information in the limit of long encoding time. The optimization of this functional can be conducted for an arbitrary stimulus prior and for all p ≥ 0 in general. The special case p = 2 and the limit p → 0 correspond to discrimax and infomax, respectively. The general result offers a framework that helps us understand the infomax problem from a new point of view: maximizing mutual information is equivalent to minimizing the expected L0 loss.
2
Model and Methods
2.1
Encoding and Decoding Model
Throughout this paper we denote by s the scalar stimulus. The stimulus follows an arbitrary prior distribution π(s). The encoding process involves a probabilistic mapping from stimulus to a random number of spikes. For each s, the neuron will fire at a predetermined firing rate h(s), representing the neuron's tuning curve. The encoded information will contain some noise due to neural variability. According to the conventional Poisson noise model, if the available coding time is T, then the observed spike count N has a Poisson distribution with parameter λ = h(s)T:

P[N = k] = (1/k!) (h(s)T)^k e^{−h(s)T}.   (1)
The tuning curve h(s) is assumed to be sigmoidal, i.e. monotonically increasing, but limited to a certain range h_min ≤ h(s) ≤ h_max due to biological constraints. The decoding process is the reverse of the encoding process. The estimator ŝ = ŝ(N) should be a function of the observed count N. One conventional choice is the MLE. First, the MLE of the mean firing rate λ is λ̂ = N/T; therefore the MLE of the stimulus s is simply ŝ = h^{−1}(λ̂).
2.2
Fisher Information and Reversal Formula
The Fisher information can be used to describe how well one can distinguish a specific distribution from its neighboring distributions within the same family. For a family of distributions with scalar parameter s, the Fisher information is defined as

I(s) = ∫ (∂ log P(N|s)/∂s)² P(N|s) dN.   (2)
For a tuning function h(s) with the Poisson spiking model, the Fisher information is (see [12, 7])

I_h(s) = T h′(s)²/h(s).   (3)
Further, with the sigmoidal assumption, by solving the above ordinary differential equation we can derive the inverse formula in Eq. (4) and an equivalent constraint on the Fisher information in Eq. (5):

h(s) = (√h_min + (1/(2√T)) ∫_{−∞}^{s} √(I_h(t)) dt)²   (4)

∫_{−∞}^{∞} √(I_h(t)) dt ≤ 2√T (√h_max − √h_min)   (5)

This constraint is closely related to Jeffreys' prior, which claims that π*(s) ∝ √(I(s)) is the least informative prior. The above inequality means that the normalization factor of Jeffreys' prior is finite, as long as the range of firing rates is limited to h_min ≤ h(s) ≤ h_max.
3
Two Bounds on Loss Function via Fisher Information
3.1
Cramer-Rao Bound
The Cramer-Rao bound [18] for unbiased estimators is

E[(ŝ − s)² | s] ≥ 1/I(s).   (6)
We can achieve maximum discriminability χ^{−1} by minimizing the mean asymptotic squared error (MASE), defined in [15] as

χ² = E[(ŝ − s)²] ≥ ∫ π(s)/I_h(s) ds.   (7)
Even if Eq. (7) is only a lower bound, it is attained asymptotically by the MLE of s. In order to optimize the right side of Eq. (7) under the constraint in Eq. (5), the variational method can be applied, and the optimal condition and the optimal solution can be written as

√(I_h(s)) ∝ π(s)^{2/3},   h_2(s) = (√h_min + (√h_max − √h_min) · ∫_{−∞}^{s} π(t)^{1/3} dt / ∫_{−∞}^{∞} π(t)^{1/3} dt)²   (8)
3.2
Mutual Information Bound
Similar to the Cramer-Rao bound, Brunel and Nadal [7] gave a lower bound on the mutual information between the MLE ŝ and the environmental stimulus s:

I_mutual(ŝ, s) ≥ H_π − (1/2) ∫ π(s) log(2πe/I_h(s)) ds,   (9)

where H_π is the entropy of the stimulus prior π(s). Although this is a lower bound on the mutual information, which we want to maximize, the equality holds asymptotically for the MLE of s, as stated in [7]. To maximize the mutual information, we can therefore maximize the right side of Eq. (9) under the constraint of Eq. (5). Applying the variational method again yields the optimal condition and optimal solution

I_h(s) ∝ π(s)²,   h_0(s) = (√h_min + (√h_max − √h_min) · ∫_{−∞}^{s} π(t) dt / ∫_{−∞}^{∞} π(t) dt)²   (10)
To see the connection between solutions in Eq.(8) and Eq.(10), we need the result of the following
section.
4
Asymptotic Behavior of Estimators
In general, minimizing the lower bound does not imply that the measures of interest, e.g. the left sides of Eq. (7) and Eq. (9), are optimized. In order to make the lower bounds useful, we need to know the conditions under which there exist "good" estimators that can reach these theoretical lower bounds.

First we introduce some definitions of estimator properties. Let T be the encoding time for a neuron with Poisson noise, and let ŝ_T be the MLE at time T. If we denote Y′_T = √T (ŝ_T − s) and Z′ ∼ N(0, T/I(s)), then the notions mentioned above are defined as below:

E[Y′_T] → 0   (asymptotic consistency)   (11)
Var[Y′_T] → T/I(s)   (asymptotic efficiency)   (12)
Y′_T → Z′ in distribution   (asymptotic normality)   (13)
E[|Y′_T|^p] → E[|Z′|^p]   (p-th moment convergence)   (14)
Generally, the above four conditions are listed from the weakest to the strongest, top to bottom. For the equality in Eq. (7) to hold, we need asymptotically consistent and asymptotically efficient estimators. For the equality in Eq. (9) to hold, we need asymptotically normal estimators (see [7]). If we want to generalize the problem even further, i.e. to find the tuning curve which minimizes the Lp estimation loss, then we need estimators whose p-th moments converge for all p.
Here we give two theorems to prove that the MLE ŝ of the true stimulus s satisfies all four of the above properties in Eq. (11)-(14). Let h(s) be the tuning curve of a neuron with Poisson spiking noise. The MLE of s is given by ŝ = h^{−1}(λ̂). We will show that the limiting distribution of √T (ŝ_T − s) is a Gaussian distribution with mean 0 and variance h(s)/h′(s)². We will also show that any positive p-norm of √T (ŝ_T − s) converges to the p-norm of the corresponding Gaussian distribution. The proofs of Theorems 1 and 2 are provided in Appendix A.
Theorem 1. Let X_i be i.i.d. Poisson distributed random variables with mean λ. Let S_n = Σ_{i=1}^{n} X_i be the partial sum. Then
(a) S_n has a Poisson distribution with mean nλ.
(b) Y_n = √n (S_n/n − λ) converges to Z ∼ N(0, λ) in distribution.
(c) The p-th moment of Y_n converges, and lim_{n→∞} E_λ[|Y_n|^p] = E[|Z|^p] for all p > 0.
One direct application of this theorem is that, if the tuning curve is h(s) = s for s > 0 and the encoding time is T, then the estimator ŝ = N/T is asymptotically efficient, since as T → ∞, Var[ŝ] ≈ E[|Z/√T|²] = s/T = 1/I(s).
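Theorem 1(c) is easy to check numerically: by part (a), S_n can be drawn directly as a Poisson(nλ) variable, and the empirical p-th absolute moment of Y_n should match that of N(0, λ). A quick sketch (parameter values are illustrative):

```python
import math
import numpy as np

def check_moments(lam=2.0, n=10_000, p=1.5, trials=200_000, seed=0):
    """Empirical E|Y_n|^p vs. E|Z|^p for Z ~ N(0, lam) (Theorem 1(c))."""
    rng = np.random.default_rng(seed)
    S_n = rng.poisson(lam * n, size=trials)        # S_n ~ Poisson(n*lam) by (a)
    Y_n = math.sqrt(n) * (S_n / n - lam)
    Z = rng.normal(0.0, math.sqrt(lam), size=trials)
    return np.mean(np.abs(Y_n) ** p), np.mean(np.abs(Z) ** p)

print(check_moments())   # the two numbers should agree to a few digits
```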
Theorem 2. Let X_i, S_n be defined as in Theorem 1. Let g(x) be any function with bounded derivative |g′(x)| ≤ M. Then
(a) Y′_n = √n (g(S_n/n) − g(λ)) converges to Z′ ∼ N(0, λ g′(λ)²) in distribution.
(b) The p-th moment of Y′_n converges, and lim_{n→∞} E_λ[|Y′_n|^p] = E[|Z′|^p] for all p > 0.
Theorem 1 indicates that we can always estimate the firing rate λ = h(s) efficiently by the estimator λ̂ = N/T. Theorem 2 indicates, however, that we can also estimate a smooth transformation of the firing rate efficiently in the asymptotic limit T → ∞. Now, if we go back to the conventional setting of the tuning curve λ = h(s), we can estimate the stimulus by the estimator ŝ = h^{−1}(λ̂). To meet the boundedness requirement on g, |g′(λ)| ≤ M, we need 1/g′(λ) = h′(s) ≥ 1/M; hence this theory only works for stimuli from a compact set s ∈ [−M, M], although M can be chosen as large as desired. The larger M is, the longer the encoding time T necessary to observe the asymptotic normality and the convergence of moments.
The estimator ŝ = h^{−1}(λ̂) is biased for finite T, but it is asymptotically unbiased and efficient. This is because, as T → ∞,

E_s[√T (ŝ_T − s)] → E[Z′] = 0   (15)

Var_s[√T (ŝ_T − s)] → E[|Z′|²] = λ ((h^{−1})′(λ))² = h(s)/h′(s)² = T/I(s)   (16)
From the above analysis we can see that the property of L_p(ŝ, s) = E_s[|ŝ_T − s|^p] saturating the lower bound relies fully upon the asymptotic normality. With asymptotic normality, we can do more than just optimize I_mutual(N, s) and L_p(ŝ, s). In general we can find the optimal tuning curve which minimizes the expected Lp loss L_p(ŝ, s), since as T → ∞

E[|√T (ŝ_T − s)|^p] → E[|Z′|^p],   (17)

where Z′ = ξ/√(I(s)/T) and ξ ∼ N(0, 1). To calculate the right side of the above limit, we can use
the fact that for any p ≥ 0,

K(p) = E[|ξ|^p] = 2^{p/2} Γ((p + 1)/2) / Γ(1/2),   (18)

where Γ(·) is the gamma function

Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt.   (19)
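Equation (18) is straightforward to verify against Monte Carlo draws; a small sketch:

```python
import math
import numpy as np

def K(p):
    """Absolute p-th moment of a standard Gaussian, eq. (18)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.gamma(0.5)

rng = np.random.default_rng(0)
xi = rng.standard_normal(1_000_000)
for p in (0.1, 0.5, 1.0, 2.0):
    print(p, K(p), np.mean(np.abs(xi) ** p))  # analytic vs. empirical
```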
The general conclusion is that (the Cramer-Rao lower bound is the special case with p = 2)

E_s[|√T (ŝ_T − s)|^p] → E[|Z′|^p] = K(p) / (I(s)/T)^{p/2}.   (20)

Figure 1: (A) Illustration of the Lp loss as a function of |ŝ − s| for different values of p. When p = 2 the loss is the squared loss, and as p → 0 the Lp loss converges pointwise to the 0-1 loss. (B) The p-th absolute moment K(p) = E[|ξ|^p] of a standard Gaussian random variable ξ, for p ∈ [0, 4].
5
Optimal Tuning Curves: Infomax, Discrimax and More
With the asymptotic normality and moment convergence, we know the asymptotic expected Lp loss will approach E[|Z′|^p] for each s. Hence

E[|ŝ − s|^p] → ∫ π(s) E_s[|Z′|^p] ds = K(p) ∫ π(s)/I(s)^{p/2} ds.   (21)
To obtain the optimal tuning curve for the Lp loss, we need to solve a simple variational problem:

minimize over h:   ∫ π(s) f_p(I_h(s)) ds   (22)
subject to:        ∫ √(I_h(s)) ds ≤ const   (23)

with f_p(x) = x^{−p/2}, so that f_p′(x) ∝ −x^{−p/2−1}. To solve this variational problem, the Euler-Lagrange equation and the Lagrange multiplier method can be used to derive the optimal condition

0 = ∂/∂I_h [π(s) f_p(I_h(s)) − λ √(I_h(s))] = π(s) f_p′(I_h(s)) − (λ/2) I_h(s)^{−1/2}   (24)

√(I_h(s)) ∝ π(s)^{1/(p+1)}   (25)
Therefore the f_p-optimal tuning curve, which minimizes the asymptotic Lp loss, is given by the equations below, which follow from (4) and (25). For some examples of Lp-optimal tuning curves, see Fig. 2.

h_p(s) = (√h_min + (√h_max − √h_min) · ∫_{−∞}^{s} π(t)^{1/(p+1)} dt / ∫_{−∞}^{∞} π(t)^{1/(p+1)} dt)²   (26)

I_p(s) = 4T (√h_max − √h_min)² · π(s)^{2/(p+1)} / (∫ π(t)^{1/(p+1)} dt)²   (27)

Following from (21) and (27), the optimal expected Lp loss is

E[|ŝ − s|^p] = K(p) · (4T)^{−p/2} (√h_max − √h_min)^{−p} (∫ π(t)^{1/(p+1)} dt)^{p+1}   (28)
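Since (26) depends on the prior only through the normalized cumulative integral of π^{1/(p+1)}, the f_p-optimal tuning curve is easy to tabulate on a grid. A sketch for a standard Gaussian prior, in the spirit of Fig. 2 (the grid, h_min and h_max are illustrative choices):

```python
import numpy as np

def fp_optimal_tuning(s, prior_pdf, p, h_min=0.0, h_max=10.0):
    """Tabulate h_p(s) from eq. (26) on the grid s (assumed fine and wide
    enough to cover essentially all of the prior mass)."""
    w = prior_pdf(s) ** (1.0 / (p + 1.0))
    cdf = np.cumsum(w)
    cdf /= cdf[-1]                      # normalized integral of pi^{1/(p+1)}
    root = np.sqrt(h_min) + (np.sqrt(h_max) - np.sqrt(h_min)) * cdf
    return root ** 2

s = np.linspace(-4, 4, 2001)
gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
h_infomax = fp_optimal_tuning(s, gauss, p=0.0)    # p -> 0: infomax
h_discrimax = fp_optimal_tuning(s, gauss, p=2.0)  # p = 2: discrimax
```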
A very interesting observation is that, by taking the limit p → 0, we end up with the infomax tuning curve. This shows that the infomax tuning curve simultaneously optimizes the mutual information as well as the expected L0 norm of the error ŝ − s. The L0 norm can be considered as the 0-1 loss, i.e. L(ŝ, s) = 0 if ŝ = s and L(ŝ, s) = 1 otherwise. To put this in a different way, we may consider the natural log function as a limit of power functions:

log z = lim_{p→0} (1 − z^{−p/2}) / (p/2)   (29)

∫ π(s) log I(s) ds = lim_{p→0} (2/p) ∫ π(s) (1 − I(s)^{−p/2}) ds   (30)

and we can conclude that minimizing ∫ π(s) I(s)^{−p/2} ds in the limit p → 0 (L0 loss) is the same as maximizing ∫ π(s) log I(s) ds and hence the mutual information.
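The limit (29) is elementary but worth a quick numerical sanity check:

```python
import math

z = 3.7
for p in (1.0, 0.1, 0.01, 0.001):
    approx = (1 - z ** (-p / 2)) / (p / 2)
    print(p, approx, math.log(z))   # approx -> log z as p -> 0
```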
Figure 2: For stimulus with standard Gaussian prior distribution (inset figure) and various values of
p, (A) shows the optimal allocation of Fisher Information Ip (s) and (B) shows the fp -optimal tuning
curve hp (s). When p = 2 the f2 -optimal (discrimax) tuning curve minimizes the squared loss and
when p = 0 the f0 -optimal (infomax) tuning curve maximizes the mutual information.
6
Simulation Results
Numerical simulations were performed in order to validate our theory. In each iteration, a random stimulus s was chosen from the standard Gaussian distribution or the exponential distribution with mean one. A Poisson neuron was simulated to generate spikes in response to that stimulus. The difference between the MLE ŝ and s was recorded to analyze the Lp loss. In one simple task, we compared the numerical value vs. the theoretical value of the Lp loss for the f_q-optimal tuning curve:

E[|ŝ − s|^p] = K(p) · (4T)^{−p/2} (√h_max − √h_min)^{−p} (∫ π(t)^{1/(q+1)} dt)^p ∫ π(s)^{1 − p/(q+1)} ds   (31)
The above theoretical prediction works well for distributions with compact support s ∈ [A, B]. It also requires q > p − 1 for any distribution whose tail decays at least exponentially fast, π(s) ∼ e^{−Cs}, such as a Gaussian or exponential distribution. Otherwise the integral in the last term will blow up in general.
The numerical and theoretical predictions of the Lp loss are plotted for both the Gaussian N(0, 1) prior (Fig. 3A) and the Exp(1) prior (Fig. 3B). The vertical axis shows (1/p) · log E[|ŝ − s|^p] so that all p-norms are displayed in the same units.
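The simulation loop itself is short: draw s from the prior, draw N ∼ Poisson(h(s)T), decode via ŝ = h^{−1}(N/T) by numerically inverting the monotone tuning curve, and average |ŝ − s|^p. A minimal sketch reusing `fp_optimal_tuning` from the earlier sketch (T and the trial count are illustrative):

```python
import numpy as np

def simulate_lp_loss(s_grid, h, prior_sampler, p, T=1000.0,
                     n_trials=100_000, seed=0):
    """Monte Carlo estimate of E|s_hat - s|^p for tuning curve h on s_grid."""
    rng = np.random.default_rng(seed)
    s = prior_sampler(rng, n_trials)
    rate = np.interp(s, s_grid, h) * T        # Poisson rate h(s) * T
    lam_hat = rng.poisson(rate) / T           # MLE of the firing rate
    s_hat = np.interp(lam_hat, h, s_grid)     # invert the monotone curve
    return np.mean(np.abs(s_hat - s) ** p)

s_grid = np.linspace(-4, 4, 2001)
gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
h = fp_optimal_tuning(s_grid, gauss, p=1.0, h_min=0.01, h_max=10.0)
loss = simulate_lp_loss(s_grid, h, lambda rng, n: rng.standard_normal(n), p=1.0)
```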
Figure 3: The comparison between numerical result (solid curves) and theoretical prediction (dashed
curves). (A) For standard Gaussian prior. (B) For exponential prior with parameter 1.
7
Discussion
In this paper we have derived a closed-form solution for the optimal tuning curve of a single neuron given an arbitrary stimulus prior π(s) and for a variety of optimality criteria. Our work offers a principled explanation for the observed non-linearity in neural tuning: each neuron should adapt its tuning curve to reallocate the limited amount of Fisher information it can carry and minimize the Lp error. We have shown in Section 2 that each sigmoidally tuned neuron with Poisson spiking noise has an upper bound on the integral of the square root of its Fisher information, and the f_p-optimal tuning curve has the form

h_p(s) = (√h_min + (√h_max − √h_min) · ∫_{−∞}^{s} π(t)^{1/(p+1)} dt / ∫_{−∞}^{∞} π(t)^{1/(p+1)} dt)²   (32)

where the f_p-optimal tuning curve minimizes the estimation Lp loss E[|ŝ − s|^p] of the decoding process in the limit of long encoding time T. Two special and well-known cases are maximum mutual information (p = 0) and maximum discriminability (p = 2).
To obtain this result, we established two theorems regarding the asymptotic behavior of the MLE ŝ = h^{−1}(λ̂). Asymptotically, the MLE converges to a Gaussian not only in distribution, but also in terms of its p-th moments for arbitrary p > 0. By calculating the p-th moments of the Gaussian random variable, we can predict the Lp loss of the encoding-decoding process and optimize the tuning curve by minimizing the attainable limit. The Cramer-Rao lower bound and the mutual information lower bound proposed by [7] are special cases with p = 2 and p = 0, respectively.
So far, we have focused on a single neuron with a sigmoidal tuning curve. However, the conclusions of Theorems 1 and 2 still hold for neuronal populations of bell-shaped neurons, with correlated or uncorrelated noise. The optimal condition for the Fisher information can be calculated regardless of the format of the tuning curve(s). Depending on the assumptions about the number of neurons and the shape of the tuning curves, the optimized Fisher information can then be inverted to derive the optimal tuning curves via the same type of analysis as presented in this paper.
One theoretical limitation of our method is that we only addressed the problem for long encoding times, which is usually not the typical scenario in real sensory systems; however, the long encoding time limit can be replaced by a short encoding time with many identically tuned neurons. It is still an interesting problem to find the optimal tuning curve for an arbitrary prior in the sense of the Lp loss function. Some work [16, 20] has been done to maximize mutual information or L2 performance for uniformly distributed stimuli. Another issue is that the asymptotic behavior is not uniform if the stimulus space is not compact. The asymptotic behavior will take longer to be observed if the slope of the tuning function is too close to zero. In Theorem 2 we made the assumption that |g′(s)| ≤ M, and that is the reason we cannot evaluate the estimation error for s with large absolute value; hence we do not have a perfect match for low p values in the simulation section (see Fig. 3).
A
Proof of Theorems in Section 4
Proof of Theorem 1.
(a) This follows immediately from the Poisson distribution; use induction on n.
(b) Apply the Central Limit Theorem, noticing that E[X_i] = Var[X_i] = λ for Poisson random variables.
(c) In general, convergence in distribution does not imply convergence of the p-th moment. However, in our case we do have the convergence property for all p-th moments. To show this, we need to show that for all p > 0, |Y_n|^p is uniformly integrable, i.e. for any ε > 0 there exists a K such that

E[|Y_n|^p · 1{|Y_n| ≥ K}] ≤ ε.   (33)

This follows from the Cauchy-Schwarz inequality and Markov's inequality:

E[|Y_n|^p · 1{|Y_n| ≥ K}]² ≤ E[|Y_n|^{2p}] · P[|Y_n| ≥ K] ≤ E[|Y_n|^{2p}] · E[|Y_n|]/K → 0.   (34)

To see the last limit, we use the fact that for all p > 0, sup_n E[|Y_n|^p] < ∞. According to [21], for even p,

E[|S_n − nλ|^p] = Σ_{a=0}^{p} (nλ)^a S2(p, a),   (35)

where S2(p, a) denotes the number of partitions of a set of size p into a subsets with no singletons (i.e. no subsets with only one element). For our purpose, notice that S2(p, a) = 0 for a > p/2 and S2(p, a) ≤ p^a. Therefore the supremum of E[|Y_n|^p] is bounded, since

E[|Y_n|^p] = E[|√n (S_n/n − λ)|^p] ≤ n^{−p/2} Σ_{a=0}^{p/2} (nλ)^a p^a ≤ n (λp)^{p/2+1} / (nλp − 1) ≤ C (λp)^{p/2}.   (36)

For arbitrary q, choose any even number p such that p > q; by Jensen's inequality, E[|Y_n|^q] ≤ E[|Y_n|^p]^{q/p}. Thus for all p > 0 and all n, E[|Y_n|^p] < ∞.
Proof of Theorem 2.
(a) Denote λ̂_n = S_n/n. Apply the mean value theorem to g(x) near λ:

g(λ̂_n) − g(λ) = g′(λ*)(λ̂_n − λ)   (37)

for some λ* between λ̂_n and λ. Therefore

√n (g(λ̂_n) − g(λ)) = g′(λ*) √n (λ̂_n − λ) → g′(λ) Z in distribution.   (38)

Note that λ̂_n → λ in probability, so λ* → λ in probability and g′(λ*) → g′(λ) in probability; together with the fact that √n (λ̂_n − λ) → Z in distribution, apply Slutsky's theorem and the conclusion follows.
(b) Using the Taylor expansion and Slutsky's theorem again,

|√n (g(λ̂_n) − g(λ))|^p = |g′(λ*)|^p |Y_n|^p ≈ |g′(λ)|^p |Y_n|^p.   (39)

To see that |Y′_n|^p is uniformly integrable, notice that |Y′_n|^p ≥ K implies |Y_n|^p ≥ K M^{−p}. The rest follows in a similar manner as in the proof of Theorem 1(c).
References
[1] TM Maddess and SB Laughlin. Adaptation of the motion-sensitive neuron H1 is generated locally and governed by contrast frequency. Proc. R. Soc. Lond. B Biol. Sci., 225:251–275, 1985.
[2] J Atick. Could information theory provide an ecological theory of sensory processing? Network, 3:213–251, 1992.
[3] RA Harris, DC O'Carroll, and SB Laughlin. Contrast gain reduction in fly motion adaptation. Neuron, 28:595–606, 2000.
[4] I Dean, NS Harper, and D McAlpine. Neural population coding of sound level adapts to stimulus statistics. Nature Neuroscience, 8:1684–1689, 2005.
[5] AA Stocker and EP Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9:578–585, 2006.
[6] J-P Nadal and N Parga. Non linear neurons in the low noise limit: a factorial code maximizes information transfer, 1994.
[7] N Brunel and J-P Nadal. Mutual information, Fisher information and population coding. Neural Computation, 10(7):1731–1757, 1998.
[8] Tvd Twer and DIA MacLeod. Optimal nonlinear codes for the perception of natural colours. Network: Computation in Neural Systems, 12(3):395–407, 2001.
[9] MD McDonnell and NG Stocks. Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations. Phys. Rev. Lett., 101:058103, 2008.
[10] D Ganguli and EP Simoncelli. Implicit encoding of prior probabilities in optimal neural populations. Adv. Neural Information Processing Systems, 23:658–666, 2010.
[11] HB Barlow. Possible principles underlying the transformation of sensory messages. M.I.T. Press, 1961.
[12] HS Seung and H Sompolinsky. Simple models for reading neuronal population codes. Proc. of the National Aca. of Sci. of the U.S.A., 90:10749–10753, 1993.
[13] K Zhang and TJ Sejnowski. Neuronal tuning: to sharpen or broaden? Neural Computation, 11:75–84, 1999.
[14] A Pouget, S Deneve, J-C Ducom, and PE Latham. Narrow versus wide tuning curves: What's best for a population code? Neural Computation, 11:85–90, 1999.
[15] M Bethge, D Rotermund, and K Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Computation, 14:2317–2351, 2002.
[16] M Bethge, D Rotermund, and K Pawelzik. Optimal neural rate coding leads to bimodal firing rate distributions. Netw. Comput. Neural Syst., 14:303–319, 2003.
[17] S Yarrow, E Challis, and P Seriès. Fisher and Shannon information in finite neural populations. Neural Computation, in press, 2012.
[18] TM Cover and J Thomas. Elements of Information Theory. Wiley, 1991.
[19] SI Amari, H Nagaoka, and D Harada. Methods of Information Geometry. Translations of Mathematical Monographs. American Mathematical Society, 2007.
[20] AP Nikitin, NG Stocks, RP Morse, and MD McDonnell. Neural population coding is optimized by discrete tuning curves. Phys. Rev. Lett., 103:138101, 2009.
[21] N Privault. Generalized Bell polynomials and the combinatorics of Poisson central moments. Electronic Journal of Combinatorics, 18, 2011.
Factorial LDA:
Sparse Multi-Dimensional Text Models
Michael J. Paul and Mark Dredze
Human Language Technology Center of Excellence (HLTCOE)
Center for Language and Speech Processing (CLSP)
Johns Hopkins University
Baltimore, MD 21218
{mpaul,mdredze}@cs.jhu.edu
Abstract
Latent variable models can be enriched with a multi-dimensional structure to
consider the many latent factors in a text corpus, such as topic, author perspective
and sentiment. We introduce factorial LDA, a multi-dimensional model in which a
document is influenced by K different factors, and each word token depends on a
K-dimensional vector of latent variables. Our model incorporates structured word
priors and learns a sparse product of factors. Experiments on research abstracts
show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce
test perplexity and improve human interpretability of the discovered factors.
1
Introduction
There are many factors that contribute to a document's word choice: topic, syntax, sentiment, author perspective, and others. Latent variable "topic models" such as latent Dirichlet allocation (LDA)
implicitly model a single factor of topical content [1]. More in-depth analyses of corpora call for
models that are explicitly aware of additional factors beyond topic. Some topic models have been
used to model specific factors like sentiment [2], and more general models, like the topic aspect model [3] and sparse additive generative models (SAGE) [4], have jointly considered both topic and another factor, such as perspective. Most prior work has only considered two factors at once.¹
This paper presents factorial LDA, a general framework for multi-dimensional text models that capture an arbitrary number of factors. While standard topic models associate each word token with
a single latent topic variable, a multi-dimensional model associates each token with a vector of
multiple factors, such as (topic, political ideology) or (product type, sentiment, author age).
Scaling to an arbitrary number of factors poses challenges that cannot be addressed with existing two-dimensional models. First, we must ensure consistency across different word distributions which have the same components. For example, the word distributions associated with the
(topic, perspective) pairs (ECONOMICS,LIBERAL) and (ECONOMICS,CONSERVATIVE) should both
give high probability to words about economics. Additionally, increasing the number of factors results in a multiplicative increase in the number of possible tuples that can be formed, and not all
tuples will be well-supported by the data. We address these two issues by adding additional structure to our model: we impose structured word priors that link tuples with common components, and
we place a sparse prior over the space of possible tuples. We demonstrate that both of these model
structures lead to improvements in model performance.
In the next section, we introduce our model, where our main contributions are to:
¹A recent variant of SAGE modeled three factors in historic documents: topic, time, and location [5].
• introduce a general model that can accommodate K different factors (dimensions) of language,
• design structured priors over the word distributions that tie together common factors,
• enforce a sparsity pattern which excludes unsupported combinations of components (tuples).
We then discuss our inference procedure (Section 4) and share experimental results (Section 5).
2
Factorial LDA: A Multi-Dimensional Generative Model
Latent Dirichlet allocation (LDA) [1] assumes we have a set of Z latent components (usually called "topics" in the context of text modeling), and each data point (a document) has a discrete distribution θ over these topics. The set of topics can be thought of as a vector of length Z, where each cell is a pointer into a discrete distribution over words, parameterized by φ_z. Under LDA, a document is generated by choosing the topic distribution θ from a Dirichlet prior, then for each token we sample a latent topic t from this distribution before sampling a word w from the t-th word distribution φ_t. Without additional structure, LDA tends to learn distributions which correspond to semantic topics (such as SPORTS or ECONOMICS) [6] which dominate the choice of words in a document, rather than syntax, perspective, or other aspects of document content.
Imagine that instead of a one-dimensional vector of Z topics, we have a two-dimensional matrix of Z1 components along one dimension (rows) and Z2 components along the other (columns). This structure makes sense if a corpus is composed of two different factors, and the two dimensions might correspond to factors such as news topic and political perspective (if we are modeling newspaper editorials), or research topic and discipline (if we are modeling scientific papers). Individual cells of the matrix would represent pairs such as (ECONOMICS,CONSERVATIVE) or (GRAMMAR,LINGUISTICS), and each is associated with a word distribution φ_z. Conceptually, this is the idea behind the two-dimensional models of TAM [3] and SAGE [4].
Let us expand this idea further by assuming K factors modeled with a K-dimensional array, where each cell of the array has a pointer to a word distribution corresponding to that particular K-tuple. For example, in addition to topic and perspective, we might want to model a third factor of the author's gender in newspaper editorials, yielding triples such as (ECONOMICS,CONSERVATIVE,MALE). Conceptually, each K-tuple t functions as a topic in LDA (with an associated word distribution φ_t), except that K-tuples imply a structure, e.g. the pairs (ECONOMICS,CONSERVATIVE) and (ECONOMICS,LIBERAL) are related. This is the idea behind factorial LDA (f-LDA).
At its core, our model follows the basic template of LDA, but each word token is associated with a K-tuple rather than a single topic value. Under f-LDA, each document has a distribution over tuples, and each tuple indexes into a distribution over words. Of course, without additional structure, this would simply be equivalent to LDA with Π_k Z_k topics. In f-LDA, we induce a factorial structure by creating priors which tie together tuples that share components: distributions involving the pair (ECONOMICS,CONSERVATIVE) should have commonalities with distributions for (ECONOMICS,LIBERAL). The key ingredients of our new model are:
• We model the intuition that tuples which share components should share other properties. For example, we expect the word distributions for (ECONOMICS,CONSERVATIVE) and (ECONOMICS,LIBERAL) to both give high probability to words about economics, while the pairs (ECONOMICS,LIBERAL) and (ENVIRONMENT,LIBERAL) should both reflect words about liberalism. Similarly, we want each document's distribution over tuples to reflect the same type of consistency. If a document is written from a liberal perspective, then we believe that pairs of the form (*,LIBERAL) are more likely to have high probability than pairs with CONSERVATIVE as the second component. This consistency across factors is encouraged by sharing parameters across the word and topic prior distributions in the model: this encodes our a priori assumption that distributions which share components should be similar.
• Additionally, we allow for sparsity across the set of tuples. As the dimensionality of the array increases, we are going to encounter problems of overparameterization, because the model will likely contain more tuples than are observed in the data. We handle this by having an auxiliary multi-dimensional array which encodes a sparsity pattern over tuples. The priors over tuples are augmented with this sparsity pattern. These priors model the belief that the Cartesian product of factors should be sparse; the posterior may 'opt out' of some tuples.
[Figure 1 appears here: (a) the f-LDA graphical model; (b) an illustration in which background word weights ω^(0), topic weights ω^(1) for the topic WORDS, and discipline weights ω^(2) for EDUCATION are summed and exponentiated to form the Dirichlet prior for the tuple (WORDS,EDUCATION), from which the posterior word distribution is drawn.]
Figure 1: (a) Factorial LDA as a graphical model. (b) An illustration of word distributions in f-LDA with two factors. When applying f-LDA to a collection of scientific articles from various disciplines, we learn weights ω corresponding to a topic we call WORDS and the discipline EDUCATION as well as background words. These weights are combined to form the Dirichlet prior, and the distribution for (WORDS,EDUCATION) is drawn from this prior: this distribution describes writing education.
The generative story (we'll describe the individual pieces below) is as follows.

1. Draw the various hyperparameters ω and α from N(0, Iσ²).
2. For each tuple t = (t_1, t_2, . . . , t_K):
   (a) Sample word distribution φ_t ∼ Dir(ω̃^(t))
   (b) Sample sparsity 'bit' b_t ∼ Beta(γ_0, γ_1)
3. For each document d ∈ D:
   (a) Draw document component weights α^(d,k) ∼ N(0, Iσ²) for each factor k
   (b) Sample distribution over tuples θ^(d) ∼ Dir(B ∘ α̃^(d))
   (c) For each token:
       i. Sample component tuple z ∼ θ^(d)
       ii. Sample word w ∼ φ_z

where the Dirichlet vectors ω̃ and α̃ are defined as:

ω̃_w^(t) ≜ exp{ ω^(B) + ω_w^(0) + Σ_k ω^(k)_{t_k,w} },   α̃_t^(d) ≜ exp{ α^(B) + Σ_k ( α^(D,k)_{t_k} + α^(d,k)_{t_k} ) }   (1)

See Figure 1a for the graphical model, and Figure 1b for an illustration of how the weight vectors ω^(0) and ω^(k) are combined to form ω̃ for a particular tuple that was inferred by our model. The words shown have the highest weight after running our inference procedure (see §5 for experimental details).
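To make the generative story concrete, the following is a minimal NumPy sketch of the process above; the dimensions, random seed, and fixed bias values are illustrative assumptions, not settings from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
V, Z = 6, (3, 2)              # vocabulary size and Z_k per factor (toy values)
K = len(Z)
tuples = [(i, j) for i in range(Z[0]) for j in range(Z[1])]

# Step 1: hyperparameters omega, alpha ~ N(0, I sigma^2); biases fixed here for brevity
omega_B, alpha_B = -5.0, -5.0
omega_0 = rng.normal(0.0, 1.0, V)                          # corpus-wide word weights
omega_k = [rng.normal(0.0, 1.0, (Z[k], V)) for k in range(K)]
alpha_Dk = [rng.normal(0.0, 1.0, Z[k]) for k in range(K)]  # corpus-wide component biases

def omega_tilde(t):  # word side of Eq. (1)
    return np.exp(omega_B + omega_0 + sum(omega_k[k][t[k]] for k in range(K)))

# Step 2: per-tuple word distributions and sparsity bits
phi = {t: rng.dirichlet(omega_tilde(t)) for t in tuples}
b = np.array([rng.beta(0.1, 0.1) for _ in tuples])         # U-shaped Beta prior

# Step 3: one document
alpha_dk = [rng.normal(0.0, 1.0, Z[k]) for k in range(K)]  # document-specific biases
alpha_tilde = np.array([np.exp(alpha_B + sum(alpha_Dk[k][t[k]] + alpha_dk[k][t[k]]
                                             for k in range(K))) for t in tuples])
theta = rng.dirichlet(b * alpha_tilde + 1e-12)             # Dir(B o alpha_tilde); jitter keeps params positive
for _ in range(5):                                         # draw a few tokens
    t = tuples[rng.choice(len(tuples), p=theta)]
    print(t, rng.choice(V, p=phi[t]))
```

Even at toy scale, this shows the key design point: tuples sharing a component t_k reuse the same row of omega_k, so their word distributions are correlated through the prior.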
As discussed above, the only difference between f-LDA and LDA is that structure has been added
to the Dirichlet priors for the word and topic distributions. We use a form of Dirichlet-multinomial
regression [7] to formulate the priors for φ and θ in terms of the log-linear functions in Eq. 1. We
will now describe these priors in more detail.
Prior over φ: We formulate the priors of φ to encourage word distributions to be consistent across components of each factor. For example, tuples that reflect the same topic should share words. To achieve this goal, we link the priors for tuples that share common components by utilizing a log-linear parameterization of the Dirichlet prior of φ (Eq. 1). Formally, we place a prior Dirichlet(ω̃^(t)) over φ_t, the word distribution for tuple t = (t_1, t_2, . . . , t_K). The Dirichlet vector ω̃^(t) controls the precision and focus of the prior. It is a function of three types of hyperparameters. First, a single corpus-wide bias scalar ω^(B), and second, a vector over the vocabulary ω^(0), which reflects the relative likelihood of different words. These respectively increase the overall precision of words and the default likelihood of each word. Finally, ω^(k)_{t_k,w} introduces bias parameters for each word w for component t_k of the kth factor. By increasing the weight of a particular ω^(k)_{t_k,w}, we increase the expected relative log-probabilities of word w in φ_z for all z that contain component t_k, thereby tying these priors together.
Prior over θ: We use a similar formulation for the prior over θ. Recall that we want documents to naturally favor tuples that share components, i.e. favoring both (ECONOMICS,CONSERVATIVE) and (EDUCATION,CONSERVATIVE) if the document favors CONSERVATIVE in general. To address this, we let θ^(d) be drawn from Dirichlet(α̃^(d)), where instead of a corpus-wide prior, each document has a vector α̃^(d) which reflects the independent contributions of the factors via a log-linear function. This function contains three types of hyperparameters. First, α^(B) is the corpus-wide precision parameter (the bias); this is shared across all documents and tuples. Second, α^(D,k)_{t_k} indicates the bias for the kth factor's component t_k across the entire corpus D, which enables the model to favor certain components a priori. Finally, α^(d,k)_{t_k} is the bias for the kth factor's component t_k specifically in document d. This allows documents to favor certain components over others, such as the perspective CONSERVATIVE in a specific document. We assume all ωs and αs are independent and normally distributed around 0, which gives us L2 regularization during optimization.
Sparsity over tuples: Finally, we describe the generation of the sparsity pattern over tuples in the corpus. We assume a K-dimensional binary array B, where an entry b_t corresponds to tuple t. If b_t = 1, then t is active: that is, we are allowed to choose t to generate a token and we learn φ_t; otherwise we do not. We modify the prior over θ to include a binary mask of the tuples: θ^(d) ∼ Dirichlet(B ∘ α̃^(d)), where ∘ is the Hadamard (cell-wise) product. θ will not include tuples for which b_t = 0; otherwise the prior remains unchanged.

We would ideally model B so that its values are in {0,1}. While we could use a Beta-Bernoulli model (a finite Indian Buffet Process [8]) to generate a finite binary matrix (array), this model is typically learned over continuous data; learning over discrete observations (tuples) can be exceedingly difficult, since forcing the model to change a bit can yield large changes to the observations, which makes mixing very slow.2 To aid learning, we relax the constraint that B must be binary and instead allow b_t to be real-valued in (0, 1). This is a common approximation used in other models, such as artificial neural networks and deep belief networks. To encourage sparsity, we place a 'U-shaped' Beta(γ_0, γ_1) prior over b_t, where γ < 1, which yields a density function that is concentrated around the edges 0 and 1. Empirically, we will show that this effectively learns a sparse binary B. The effect is that the prior assigns tiny probabilities to some tuples instead of strictly 0.
3 Related Work
Previous work on multi-dimensional modeling includes the topic aspect model (TAM) [3], multi-view LDA (mv-LDA) [10], cross-collection LDA [11], and sparse additive generative models
(SAGE) [4], which jointly consider both topic and another factor. Other work has jointly modeled topic and sentiment [2]. Zhang et al. [12] apply PLSA [13] to multi-dimensional OLAP data,
but not with a joint model. Our work is the first to jointly model an arbitrary number of factors. A
rather different approach considered different dimensions of clustering using spectral methods [14],
in which K different clusterings are obtained by considering K different eigenvectors. For example,
product reviews can be clustered not only by topic, but also by sentiment and author attributes.
We contrast this body of work with probabilistic matrix and tensor factorization models [15, 16]
which model data that has already been organized in multiple dimensions; for example, topic-like
models have been used to model the movie ratings within a matrix of users and movies. f-LDA and
the models described above, however, operate over flat input (text documents), and it is only the
latent structure that is assumed to be organized along multiple dimensions.
An important contribution of f-LDA is the use of priors to tie together word distributions with the same components. Previous work with two-dimensional models, such as TAM and mv-LDA, assumes conditional independence among all φ, and there is no explicit encouragement of correlation. An alternative approach would be to strictly enforce consistency, such as through a 'product of experts' model [17], in which each factor has independent word distributions that are multiplied together and renormalized to form the distribution for a particular tuple, i.e. φ_t ∝ Π_k φ_{t_k}. Syntactic topic models [18] and shared components topic models [19] follow this approach. Our structured word prior generalizes both of these approaches. By setting all ω^(k) to 0, the factors have no influence on the prior and we obtain the distribution independence of TAM. If instead we have large ω values, then the model behaves like a product of experts; as precision increases, the posterior converges to the prior. By learning ω our model can determine the optimal amount of coherence among the φ.
2 One approach is to (approximately) collapse out the sparsity array [9], but this is difficult when working over the entire corpus of tokens. Experiments with Metropolis-Hastings samplers, split-merge based samplers, and alternative prior structures all suffered from mixing problems.
Another key part of f-LDA is the inclusion of a sparsity pattern. There have been several recent approaches that enforce sparsity in topic models. Various applications of sparsity can be organized into
three categories. First, one could enforce sparsity over the topic-specific word distributions, forcing
each topic to select a subset of relevant words. This is the idea behind sparse topic models [20],
which restrict topics to a subset of the vocabulary, and SAGE [4], which applies L1 regularization to
word weights. A second approach is to enforce sparsity in the document-specific topic distributions,
focusing each document on a subset of relevant topics. This is the idea in focused topic models
[9]. Finally, our contribution is to impose sparsity among the set of topics (or K-tuples) that are
available to the model. Among sparsity-inducing regularizers, one that closely relates to our goals
is the group lasso [21]. While the standard lasso will drive vector elements to 0, the group lasso will
drive entire vectors to 0.
4 Inference and Optimization
f-LDA turns out to be fairly similar to LDA in terms of inference. In both models, words are
generated by first sampling a latent variable (in our case, a latent tuple) from a distribution θ, then sampling the word from φ conditioned on the latent variable. The differences between LDA and
f-LDA lie in the parameters of the Dirichlet priors. The presentation of our optimization procedure
focuses on these parameters.
We follow the common approach of alternating between sampling the latent variables and direct
optimization of the Bayesian hyperparameters [22]. We use a Gibbs sampler to estimate E[z], and given the current estimate of this expectation, we optimize the parameters ω, α and B. These two
steps form a Monte Carlo EM (MCEM) routine.
4.1 Latent Variable Sampling
The latent variables z are sampled using the standard collapsed Gibbs sampler for LDA [23], with the exception that the basic Dirichlet priors have been replaced with our structured priors for φ and θ. The sampling equation for z for token i, given all other latent variable assignments z, the corpus w and the parameters (ω, α, and B) becomes:

p(z_i = t | z \ {z_i}, w, ω, α, B) ∝ ( n_t^(d) + b_t α̃_t^(d) ) · ( n_w^t + ω̃_w^(t) ) / ( Σ_{w'} n_{w'}^t + ω̃_{w'}^(t) )   (2)

where n_a^b denotes the number of times a occurs in b.
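For illustration, the per-token sampling distribution in Eq. (2) could be computed as follows; the count arrays and precomputed prior vectors are assumed to be maintained by a surrounding sampler (tuples are flattened to indices 0..T-1 here).

```python
import numpy as np

def gibbs_probs(d, w, n_doc_tuple, n_tuple_word, alpha_tilde, omega_tilde, b):
    """Normalized p(z_i = t | rest) from Eq. (2), for all T tuples at once.

    n_doc_tuple[d]     -- (T,) tokens in document d currently assigned to each tuple
    n_tuple_word[t, w] -- tokens of word w currently assigned to tuple t
    alpha_tilde[d]     -- (T,) document-side prior vector; omega_tilde is (T, V)
    (counts must already exclude the token being resampled)
    """
    doc_side = n_doc_tuple[d] + b * alpha_tilde[d]
    word_side = (n_tuple_word[:, w] + omega_tilde[:, w]) / \
                (n_tuple_word.sum(axis=1) + omega_tilde.sum(axis=1))
    p = doc_side * word_side
    return p / p.sum()

# usage: t_new = rng.choice(T, p=gibbs_probs(d, w, ...)), then update both count arrays
```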
4.2 Optimizing the Sparsity Array and Hyperparameters
For mathematical convenience, we reparameterize B in terms of the logistic function σ, such that b_t ≜ σ(η_t). We optimize η ∈ R to obtain b ∈ (0, 1). The derivative of σ(x) has the simple form σ(x)σ(−x). For a tuple t, the gradient of the corpus log likelihood L with respect to η_t is:

∂L/∂η_t = (γ_0 − 1)σ(−η_t) + (γ_1 − 1)(−σ(η_t))
        + Σ_{d∈D} σ(η_t)σ(−η_t) α̃_t^(d) [ Ψ(n_t^d + σ(η_t)α̃_t^(d)) − Ψ(σ(η_t)α̃_t^(d)) + Ψ(Σ_u σ(η_u)α̃_u^(d)) − Ψ(Σ_u n_u^d + σ(η_u)α̃_u^(d)) ]   (3)

where Ψ is the digamma function and the γ values are the Beta parameters. The top terms are a result of the Beta prior over b_t, while the summation over documents reflects the gradient of the Dirichlet-multinomial compound.
Standard non-convex optimization methods can be used on this gradient. To avoid shallow local
minima, we optimize this gradually by taking small gradient steps, performing a single iteration of
gradient ascent after each Gibbs sampling iteration (see §5 for more details).
The gradients for the ω and α variables have a similar form to (3); the main difference with ω is
that the gradient involves a sum over components rather than over documents. We similarly update
these values through gradient ascent.
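A sketch of this gradient computation for a single tuple, using SciPy's digamma; the precomputed per-document quantities passed in are assumptions about how a surrounding implementation might organize the sufficient statistics.

```python
import numpy as np
from scipy.special import digamma, expit  # expit is the logistic function sigma

def d_eta(eta_t, n_dt, alpha_tilde_dt, sum_n_d, sum_mass_d, gamma0=0.1, gamma1=0.1):
    """Gradient of Eq. (3) for one tuple t (a sketch; inputs assumed precomputed).

    n_dt           -- (D,) count of tuple t in each document
    alpha_tilde_dt -- (D,) alpha_tilde_t^(d) per document
    sum_n_d        -- (D,) total tokens per document, sum_u n_u^d
    sum_mass_d     -- (D,) sum_u sigma(eta_u) * alpha_tilde_u^(d) per document
    """
    s = expit(eta_t)
    grad = (gamma0 - 1.0) * (1.0 - s) - (gamma1 - 1.0) * s   # Beta prior terms
    a = s * alpha_tilde_dt                                    # sigma(eta_t) * alpha_tilde
    grad += np.sum(s * (1.0 - s) * alpha_tilde_dt *
                   (digamma(n_dt + a) - digamma(a)
                    + digamma(sum_mass_d) - digamma(sum_n_d + sum_mass_d)))
    return grad
```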
5 Experiments
We experiment with two data sets that could contain multiple factors. The first is a collection of 5000
computational linguistics abstracts from the ACL Anthology (ACL). The second combines these
abstracts (C) with several journals in the fields of linguistics (L), education (E), and psychology (P).
We use 1000 articles from each discipline (CLEP). For both corpora, we keep an additional 1000
documents for development and 1000 for test (uniformly representative of the 4 CLEP disciplines).
We used Z = (Z_1, 2, 2) for ACL and Z = (Z_1, 4) for CLEP for various numbers of 'topics' Z_1 ∈ {5, . . . , 50}. While we cannot say in advance what each factor will represent, we observed that when Z_k is large, components along this factor correspond to topics. Therefore, we set Z_1 > Z_{k>1} and assume the first factor is topic. While our model presentation assumed latent factors, we could observe factors, such as knowing the journal of each article in CLEP. However, our experiments strictly focus on the unsupervised setting to measure what the model can infer on its own.
We will compare our complete model against simpler models by ablating parts of f-LDA. If we
remove the structured word priors and array sparsity, we are left with a basic multi-dimensional
model (base). We will compare against models where we add back in the structured word priors (W)
and array sparsity (S), and finally the full f-LDA model (SW). All variants are identical except that
we fix all ω^(k) = 0 to remove structured word priors and fix B = 1 to remove sparsity.
We also compare against the topic aspect model (TAM) [3], a two-dimensional model, using the
public implementation.3 TAM is similar to the 'base' two-factor f-LDA model except that f-LDA has a single θ per document with priors that are independently weighted by each factor, whereas TAM has K independent θs, with a different θ_k for each factor. If the Dirichlet precision in f-LDA is very high, then it should exhibit similar behavior as having separate θs. TAM only models two
dimensions so we are restricted to running it on the two-dimensional CLEP data set.
For hyperparameters, we set γ_0 = γ_1 = 0.1 in the Beta prior over b_t, and we set σ² = 10 for α and 1 for ω in the Gaussian prior over weights. Bias parameters (α^(B), ω^(B)) are initialized to −5 for weak initial priors. Our sampling algorithm alternates between a full pass over tokens and a single gradient step on the parameters (step size of 10⁻² for η; 10⁻³ for ω and α). Results are averaged or
pooled from five trials of randomly initialized chains, which are each run for 10,000 iterations.
Perplexity Following standard practice, we measure perplexity on held-out data by fixing all parameters during training except document-specific parameters (α^(d,k), θ^(d)), which are computed from the test document. We use the 'document completion' method: we infer parameters from half a document and measure perplexity on the remaining half [24]. Monte Carlo EM is run on test data for 200 iterations. Average perplexity comes from another 10 iterations.
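Given inferred parameters, the document-completion score itself reduces to an exponentiated average negative log-likelihood; a minimal sketch (the theta and phi arrays are assumed to come from the inference procedure above):

```python
import numpy as np

def completion_perplexity(heldout_tokens, theta_d, phi):
    """Perplexity of the held-out half of one document.

    heldout_tokens -- word ids of the held-out half
    theta_d        -- (T,) tuple distribution inferred from the first half
    phi            -- (T, V) word distributions
    """
    log_lik = sum(np.log(theta_d @ phi[:, w]) for w in heldout_tokens)
    return np.exp(-log_lik / len(heldout_tokens))
```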
Figure 2a shows that the structured word priors yield lower perplexity, while results for sparse models are mixed. On ACL, sparsity consistently improves perplexity once the number of topics exceeds
20, while on CLEP sparsity does worse. Experiments with varying K yielded similar orderings, suggesting that differences are data dependent and not dependent on K. On CLEP, we find that TAM
performs worse than f-LDA with a lower number of topics (which is what we find to work best qualitatively), but catches up as the number of topics increases. (Beyond 50 topics, we find that TAM's perplexity stays about the same, and then begins to increase again once Z ≥ 75.) Thus, in addition
to scaling to more factors, f-LDA is more predictive than simpler multi-dimensional models.
Qualitative Results To illustrate model behavior we include a sample of output on ACL (Figure 3). We consider the component-specific weights ω^(k)_{t_k} for each factor, which present an 'overview' of each component, as well as the tuple-specific word distributions φ_t. Upon examination, we determined that the first factor (Z_1 = 20) corresponds to topic, the second (Z_2 = 2) to approach (empirical vs. theoretical), and the third (Z_3 = 2) to focus (methods vs. applications). The top row shows words common across all components for each factor. The bottom row shows specific φ_t. Consider the topic SPEECH: the triple (SPEECH,METHODS,THEORETICAL) emphasizes the linguistic side of speech processing (phonological, prosodic, etc.) while (SPEECH,APPLICATIONS,EMPIRICAL) is predominantly about dialogue systems and speech interfaces. We also see tuple sparsity (shaded tuples, in which b_t ≤ 0.5) for poor tuples. For example, under the topic of DATA, a mostly empirical topic, tuples along the THEORETICAL component are inactive.

3 Most other two-dimensional models, including SAGE [4] and multi-view LDA [10], assume that the second factor is fixed and observed. Our focus in this paper is fully unsupervised models.
[Figure 2 appears here: (a) held-out perplexity (nats) vs. number of 'topics' (5 to 50) on ACL (K = 3) and CLEP (K = 2) for the Base, S, W, SW, and TAM models; (b) a histogram of sparsity values b in [0, 1], with the best-fit and prior densities overlaid.]
Figure 2: (a) The document completion perplexity on two data sets. Models with 'W' use structured word priors, and those with 'S' use sparsity. Error bars indicate 90% confidence intervals. When pooling results across all numbers of topics ≥ 20, we find that S is significantly better than Base with p = 1.4 × 10⁻⁴ and SW is better than W with p = 5 × 10⁻⁵ on the ACL corpus. (b) The distribution of sparsity values induced on the ACL corpus with Z = (20, 2, 2).
Model             Intrusion Accuracy       Relatedness Score (1-5)
                  ACL       CLEP           ACL              CLEP
TAM               n/a       46%            n/a              2.29 ± 0.26
Baseline          39%       38%            2.35 ± 0.31      2.55 ± 0.37
Sparsity (S)      51%       43%            2.61 ± 0.37      2.53 ± 0.48
Word Priors (W)   76%       45%            3.56 ± 0.36      2.59 ± 0.33
Combined (SW)     73%       67%            3.90 ± 0.37      2.67 ± 0.55

Table 1: Results from human judgments. The best scoring model for each data set is in bold. 90% confidence intervals are indicated for scores; scores were more varied on the CLEP corpus.
Human Judgments Perplexity may not correlate with human judgments [6], which are important
for f-LDA since structured word priors and array sparsity are motivated in part by semantic coherence. We measured interpretability based on the notion of relatedness: among components that are
inferred to belong to the same factor, how many actually make sense together? Seven annotators
provided judgments for two related tasks. First, we presented annotators with two word lists (ten
most frequent words assigned to each tuple4) that are assigned to the same topic, along with a word
list randomly selected from another topic. Annotators are asked to choose the word list that does
not belong, i.e. an intrusion test [6]. If the two tuples from the same topic are strongly related, the
random list should be easy to identify. Second, annotators are presented with pairs of word lists
from the same topic and asked to judge the degree of relation using a 5-point Likert scale.
We ran these experiments on both corpora with 20 topics. For the two models without the structured word priors, we use a symmetric prior (by optimizing only ω^(B) and fixing ω^(0) = 0), since symmetric word priors can lead to better interpretability [22].5 We exclude tuples with b_t ≤ 0.5.
Across all data sets and models, annotators labeled 362 triples in the intrusion experiment and 333
pairs in the scoring experiment. The results (Table 1) differ slightly from the perplexity results. The
word priors help in all cases, but much more so on ACL. The models with sparsity are generally
better than those without, even on CLEP, in contrast to perplexity where sparse models did worse.
This suggests that removing tuples with small b_t values removes nonsensical tuples. Overall, the
judgments are worse for the CLEP corpus; this appears to be a difficult corpus to model due to
high topic diversity and low overlap across disciplines. TAM is judged to be worse than all f-LDA
variants when directly scored by annotators. The intrusion performance with TAM is better than or
comparable to the ablated versions of f-LDA, but worse than the full model. It thus appears that both
the structured priors and sparsity yield more interpretable word clusters.
4 We use frequency instead of the actual posterior because including the learned priors (which share many words) could make the task unfairly easy.
5 We used an asymmetric prior for the perplexity experiments, which gave slightly better results.
[Figure 3 appears here: top words for sample components of the three factors ('Topic', e.g. SPEECH, M.T., I.R.; 'Approach': EMPIRICAL vs. THEORETICAL; 'Focus': METHODS vs. APPLICATIONS), and top words for the 3-tuples formed from four sample topics (SPEECH, DATA, MODELING, GRAMMAR), each annotated with its sparsity value b.]
Figure 3: Example output from the ACL corpus with Z = (20, 2, 2). Above: The top words (based on their ω values) for a few components from three factors. Below: A three-dimensional table showing a sample of four topics (i.e. components of the first factor) with their top words (based on their φ values) as they appear in all combinations of factors. The components in the top table are combined to create 3-tuples in the bottom table. Shaded cells (b ≤ 0.5) are inactive. The names of factors and their components in quotes are manually assigned through post-hoc analysis.
Sparsity Patterns Finally, we examine the learned sparsity patterns: how much of B is close to 0 or 1? Figure 2b shows a histogram of b_t values (ACL with 20 topics, 3 factors) pooled across five sampling chains. The majority of values are close to 0 or 1, effectively capturing a sparse binary array. The higher variance near 0 relative to 1 suggests that the model prefers to keep bits 'on' (and give tuples tiny probability) rather than 'off.' This suggests that a model with a hard constraint might struggle to 'turn off' bits during inference.
While we fixed the Beta parameters in our experiments, these can be tuned to control sparsity. The model will favor more 'on' than 'off' bits by setting γ_1 > γ_0, or vice versa. When γ > 1, the Beta distribution no longer favors sparsity; we confirmed empirically that this leads to b_t values that are closer to 0.8 or 0.9 rather than 1. In contrast, setting γ ≪ 0.1 yields more extreme values near 0 and 1 than with γ = 0.1 (e.g. .9999 instead of .991), but this does not greatly affect the number of non-binary values. Thus, a sparse prior alone cannot fully satisfy our preference that B is binary.
Comparison to LDA The runtimes of samplers for LDA and f-LDA are on the same order (but
we have not investigated differences in mixing time). Our f-LDA implementation is one to two
times slower per iteration than our own comparable LDA implementation (with hyperparameter
optimization using the methods in [25]). We did not observe a consistent pattern regarding the
perplexity of the two models. Averaged across all numbers of topics, the perplexity of LDA was
97% the perplexity of f-LDA on ACL and 104% on CLEP. Note that our experiments always use a comparable number of word distributions; thus Z = (20, 2, 2) is the same as Z = 80 topics in LDA.
6 Conclusion
We have presented factorial LDA, a multi-dimensional text model that can incorporate an arbitrary
number of factors. To encourage the model to learn the desired patterns, we developed two new
types of priors: word priors that share features across factors, and a sparsity prior that restricts the
set of active tuples. We have shown both qualitatively and quantitatively that f-LDA is capable of
discovering interpretable patterns even in multi-dimensional spaces.
Acknowledgements
We are grateful to Jason Eisner, Matthew Gormley, Nicholas Andrews, David Mimno, and the
anonymous reviewers for helpful discussions and feedback. This work was supported in part by
a National Science Foundation Graduate Research Fellowship under Grant No. DGE-0707427.
References
[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[2] Q. Mei, X. Ling, M. Wondra, H. Su, and C. Zhai. Topic sentiment mixture: modeling facets and opinions
in weblogs. In WWW, 2007.
[3] M. Paul and R. Girju. A two-dimensional topic-aspect model for discovering multi-faceted topics. In
AAAI, 2010.
[4] J. Eisenstein, A. Ahmed, and E. P. Xing. Sparse additive generative models of text. In ICML, 2011.
[5] W. Y. Wang, E. Mayfield, S. Naidu, and J. Dittmar. Historical analysis of legal opinions with a sparse mixed-effects latent variable model. In ACL, pages 740–749, July 2012.
[6] J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. Reading tea leaves: How humans interpret
topic models. In NIPS, 2009.
[7] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with dirichlet-multinomial
regression. In UAI, 2008.
[8] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS,
2006.
[9] S. Williamson, C. Wang, K. Heller, and D. Blei. The IBP-compound dirichlet process and its application
to focused topic modeling. In ICML, 2010.
[10] A. Ahmed and E. P. Xing. Staying informed: supervised and semi-supervised multi-view topical analysis of ideological perspective. In EMNLP, pages 1140–1150, 2010.
[11] M. Paul and R. Girju. Cross-cultural analysis of blogs and forums with mixed-collection topic models. In EMNLP, pages 1408–1417, August 2009.
[12] D. Zhang, C. Zhai, J. Han, A. Srivastava, and N. Oza. Topic modeling for OLAP on multidimensional
text databases: topic cube and its applications. Statistical Analysis and Data Mining, 2, 2009.
[13] T. Hofmann. Probabilistic latent semantic indexing. In SIGIR, 1999.
[14] S. Dasgupta and V. Ng. Mining clustering dimensions. In ICML, 2010.
[15] I. Porteous, E. Bart, and M. Welling. Multi-HDP: a non-parametric Bayesian model for tensor factorization. In AAAI, pages 1487–1490, 2008.
[16] L. Mackey, D. Weiss, and M. I. Jordan. Mixed membership matrix factorization. In ICML, 2010.
[17] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Comput., 14:1771–1800, August 2002.
[18] J. Boyd-Graber and D. Blei. Syntactic topic models. In NIPS, 2008.
[19] M. R. Gormley, M. Dredze, B. Van Durme, and J. Eisner. Shared components topic models. In NAACL,
2010.
[20] C. Wang and D. Blei. Decoupling sparsity and smoothness in the discrete hierarchical Dirichlet process.
In NIPS, 2009.
[21] L. Meier, S. van de Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society Series B, 70(1):53–71, 2008.
[22] H. Wallach, D. Mimno, and A. McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.
[23] T. Griffiths and M. Steyvers. Finding scientific topics. In Proceedings of the National Academy of Sciences
of the United States of America, 2004.
[24] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents.
In UAI, 2004.
[25] Michael J. Paul. Mixed membership Markov models for unsupervised conversation modeling. In EMNLPCoNLL, 2012.
4,181 | 4,785 |
Scalable nonconvex inexact proximal splitting
Suvrit Sra
Max Planck Institute for Intelligent Systems
72076 T?ubigen, Germany
[email protected]
Abstract
We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives.
This class includes the extensively studied class of convex composite objective
problems as a subclass. To solve composite nonconvex problems we introduce a
powerful new framework based on asymptotically nonvanishing errors, avoiding
the common stronger assumption of vanishing errors. Within our new framework
we derive both batch and incremental proximal splitting algorithms. To our knowledge, our work is the first to develop and analyze incremental nonconvex proximal-splitting algorithms, even if we were to disregard the ability to handle nonvanishing errors. We illustrate one instance of our general framework by showing an
application to large-scale nonsmooth matrix factorization.
1 Introduction
This paper focuses on nonconvex composite objective problems having the form

minimize  Φ(x) := f(x) + h(x),   x ∈ X,   (1)

where f : Rⁿ → R is continuously differentiable, h : Rⁿ → R ∪ {∞} is lower semi-continuous (lsc) and convex (possibly nonsmooth), and X is a compact convex set. We also make the common assumption that ∇f is locally (in X) Lipschitz continuous, i.e., there is a constant L > 0 such that

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖   for all x, y ∈ X.   (2)
Problem (1) is a natural but far-reaching generalization of composite objective convex problems,
which enjoy tremendous importance in machine learning; see e.g., [2, 3, 11, 34]. Although convex
formulations are extremely useful, for many difficult problems a nonconvex formulation is natural. Familiar examples include matrix factorization [20, 23], blind deconvolution [19], dictionary
learning [18, 23], and neural networks [4, 17].
The primary contribution of this paper is theoretical. Specifically, we present a new algorithmic
framework: Nonconvex Inexact Proximal Splitting (NIPS). Our framework solves (1) by 'splitting'
the task into smooth (gradient) and nonsmooth (proximal) parts. Beyond splitting, the most notable
feature of NIPS is that it allows computational errors. This capability proves critical to obtaining
a scalable, incremental-gradient variant of NIPS, which, to our knowledge, is the first incremental
proximal-splitting method for nonconvex problems.
NIPS further distinguishes itself in how it models computational errors. Notably, it does not require
the errors to vanish in the limit, which is a more realistic assumption as often one has limited to no
control over computational errors inherent to a complex system. In accord with the errors, NIPS also
does not require stepsizes (learning rates) to shrink to zero. In contrast, most incremental-gradient
methods [5] and stochastic gradient algorithms [16] do assume that the computational errors and
stepsizes decay to zero. We do not make these simplifying assumptions, which complicates the
convergence analysis a bit, but results in perhaps a more satisfying description.
Our analysis builds on the remarkable work of Solodov [29], who studied the simpler setting of differentiable nonconvex problems (which correspond with h ≡ 0 in (1)). NIPS is strictly more general: unlike [29] it solves a non-differentiable problem by allowing a nonsmooth regularizer h ≢ 0, and this h is tackled by invoking proximal-splitting [8].
Proximal-splitting has proved to be exceptionally fruitful and effective [2, 3, 8, 11]. It retains the
simplicity of gradient-projection while handling the nonsmooth regularizer h via its proximity operator. This approach is especially attractive because for several important choices of h, efficient
implementations of the associated proximity operators exist [2, 22, 23]. For convex problems, an
alternative to proximal splitting is the subgradient method; similarly, for nonconvex problems one
may use a generalized subgradient method [7, 12]. However, as in the convex case, the use of subgradients has drawbacks: it fails to exploit the composite structure, and even when using sparsity
promoting regularizers it does not generate intermediate sparse iterates [11].
Among batch nonconvex splitting methods, an early paper is [14]. More recently, in his pioneering
paper on convex composite minimization, Nesterov [26] also briefly discussed nonconvex problems.
Both [14] and [26], however, enforced monotonic descent in the objective value to ensure convergence. Very recently, Attouch et al. [1] have introduced a generic method for nonconvex nonsmooth
problems based on Kurdyka-Łojasiewicz theory, but their entire framework too hinges on descent. A method that uses nonmonotone line-search to eliminate dependence on strict descent is [13].
In general, the insistence on strict descent and exact gradients makes many of the methods unsuitable
for incremental, stochastic, or online variants, all of which usually lead to nonmonotone objective values, especially due to inexact gradients. Among nonmonotonic methods that apply to (1), we are
aware of the generalized gradient-type algorithms of [31] and the stochastic generalized gradient
methods of [12]. Both methods, however, are analogous to the usual subgradient-based algorithms
that fail to exploit the composite objective structure, unlike proximal-splitting methods.
But proximal-splitting methods do not apply out-of-the-box to (1): nonconvexity raises significant
obstructions, especially because nonmonotonic descent in the objective function values is allowed
and inexact gradients might be used. Overcoming these obstructions to achieve a scalable non-descent
based method that allows inexact gradients is what makes our NIPS framework novel.
2 The NIPS Framework
To simplify presentation, we replace h by the penalty function
g(x) := h(x) + δ(x|X),   (3)

where δ(·|X) is the indicator function for X: δ(x|X) = 0 for x ∈ X, and δ(x|X) = ∞ for x ∉ X. With this notation, we may rewrite (1) as the unconstrained problem:

min_{x∈Rⁿ}  Φ(x) := f(x) + g(x),   (4)
and this particular formulation is our primary focus. We solve (4) via a proximal-splitting approach,
so let us begin by defining our most important component.
Definition 1 (Proximity operator). Let g : Rⁿ → R be an lsc, convex function. The proximity operator for g, indexed by λ > 0, is the nonlinear map [see e.g., 28; Def. 1.22]:

P_λ^g : y ↦ argmin_{x∈Rⁿ}  g(x) + (1/2λ)‖x − y‖².   (5)
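For concreteness, two standard instances of (5) that admit closed forms; taking g to be the ℓ1 norm gives componentwise soft-thresholding, and taking g = δ(·|X) recovers orthogonal projection onto X (here, a box):

```python
import numpy as np

def prox_l1(y, lam):
    """P_lam^g for g(x) = ||x||_1: componentwise soft-thresholding at lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_box(y, lo=-1.0, hi=1.0):
    """P_lam^g for g = indicator of the box [lo, hi]^n: orthogonal projection."""
    return np.clip(y, lo, hi)
```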
The operator (5) was introduced by Moreau [24] (1962) as a generalization of orthogonal projections. It is also key to Rockafellar's classic proximal point algorithm [27], and it arises in a host of proximal-splitting methods [2, 3, 8, 11], most notably in forward-backward splitting (FBS) [8].

FBS is particularly attractive because of its simplicity and algorithmic structure. It minimizes convex composite objective functions by alternating between 'forward' (gradient) steps and 'backward' (proximal) steps. Formally, suppose f in (4) is convex; for such f, FBS performs the iteration

x^{k+1} = P_{η_k}^g(x^k − η_k ∇f(x^k)),   k = 0, 1, . . . ,   (6)

where {η_k} is a suitable sequence of stepsizes. The usual convergence analysis of FBS is intimately tied to convexity of f. Therefore, to tackle nonconvex f we must take a different approach. As
previously mentioned, such approaches were considered by Fukushima and Mine [14] and Nesterov
[26], but both proved convergence by enforcing monotonic descent.
This insistence on descent severely impedes scalability. Thus, the key challenge is: how to retain
the algorithmic simplicity of FBS and allow nonconvex losses, without sacrificing scalability?
We address this challenge by introducing the following inexact proximal-splitting iteration:
x^{k+1} = P_{η_k}^g(x^k − η_k ∇f(x^k) + η_k e(x^k)),   k = 0, 1, . . . ,   (7)
where e(x^k) models the computational errors in computing the gradient ∇f(x^k). We also assume that for η > 0 smaller than some stepsize η̄, the computational error is uniformly bounded, that is,

η‖e(x)‖ ≤ ε,   for some fixed error level ε ≥ 0, and ∀x ∈ X.   (8)
Condition (8) is weaker than the typical vanishing error requirements

Σ_k η_k‖e(x^k)‖ < ∞,   lim_{k→∞} η_k‖e(x^k)‖ = 0,
which are stipulated by most analyses of methods with gradient errors [4, 5]. Obviously, since errors
are nonvanishing, exact stationarity cannot be guaranteed. We will, however, show that the iterates
produced by (7) do progress towards reasonable inexact stationary points. We note in passing that
even if we assume the simpler case of vanishing errors, NIPS is still the first nonconvex proximal-splitting framework that does not insist on monotonicity, which complicates convergence analysis
but ultimately proves crucial to scalability.
Algorithm 1 Inexact Nonconvex Proximal Splitting (NIPS)
Input: Operator P_η^g, and a sequence {η_k} satisfying
    c ≤ lim inf_k η_k,   lim sup_k η_k ≤ min{1, 2/L − c},   0 < c < 1/L.   (9)
Output: Approximate solution to (7)
k ← 0; Select arbitrary x^0 ∈ X
while ¬converged do
    Compute approximate gradient ∇̃f(x^k) := ∇f(x^k) − e(x^k)
    Update: x^{k+1} = P_{η_k}^g(x^k − η_k ∇̃f(x^k))
    k ← k + 1
end while
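A minimal sketch of Algorithm 1 on a toy nonconvex composite problem; the smooth loss, the ℓ1 regularizer, the noise model for e(x), and the crude Lipschitz estimate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5)); y = rng.normal(size=20)

def f(x):                             # nonconvex smooth loss: sum of phi(r) = r^2/(1+r^2)
    r = A @ x - y
    return np.sum(r**2 / (1 + r**2))

def grad_f(x):
    r = A @ x - y
    return A.T @ (2 * r / (1 + r**2) ** 2)

def prox_l1(v, t):                    # P_t^g for g = lam*||.||_1 is soft-thresholding at t*lam
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.1
L = 2 * np.linalg.norm(A, 2) ** 2     # since |phi''| <= 2, this bounds the Lipschitz constant of grad f
c = 0.5 / L
eta = min(1.0, 2.0 / L - c)           # constant stepsize satisfying (9)
x = np.zeros(5)
for k in range(300):
    e = 0.01 * rng.normal(size=5)     # bounded, nonvanishing gradient error
    x = prox_l1(x - eta * (grad_f(x) - e), eta * lam)
print(f(x) + lam * np.abs(x).sum())   # objective Phi at the final iterate
```

Because the errors do not vanish, the objective is not monotone along the iterates; the point of the analysis below is that progress is still made down to a residual level proportional to the error.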
2.1 Convergence analysis
We begin by characterizing inexact stationarity. A point x* is a stationary point for (4) if and only if it satisfies the inclusion

0 ∈ ∂_C Φ(x*) := ∇f(x*) + ∂g(x*),   (10)

where ∂_C Φ denotes the Clarke subdifferential [7]. A brief exercise shows that this inclusion may be equivalently recast as the fixed-point equation (which augurs the idea of proximal-splitting)

x* = P_λ^g(x* − λ∇f(x*)),   for λ > 0.   (11)
This equation helps us define a measure of inexact stationarity: the proximal residual

ρ(x) := x − P_1^g(x − ∇f(x)).   (12)

Note that for an exact stationary point x* the residual norm ‖ρ(x*)‖ = 0. Thus, we call a point x ε-stationary if, for a prescribed error level ε(x), the corresponding residual norm satisfies

‖ρ(x)‖ ≤ ε(x).   (13)

Assuming the error level ε(x) (say if ε̄ = lim sup_k ε(x^k)) satisfies the bound (8), we prove below that the iterates x^k generated by (7) satisfy an approximate stationarity condition of the form (13), by allowing the stepsize η to become correspondingly small (but strictly bounded away from zero). We start by recalling two basic facts, stated without proof as they are standard knowledge.
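In code, the residual (12) and the stationarity test (13) are one-liners once a proximity operator is available; a sketch:

```python
import numpy as np

def prox_residual(x, grad_f, prox_g):
    """rho(x) = x - P_1^g(x - grad f(x)) from Eq. (12); prox_g must implement P_1^g."""
    return x - prox_g(x - grad_f(x))

def is_eps_stationary(x, grad_f, prox_g, eps):
    """The inexact stationarity test (13)."""
    return np.linalg.norm(prox_residual(x, grad_f, prox_g)) <= eps
```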
Lemma 2 (Lipschitz-descent [see e.g., 25; Lemma 2.1.3]). Let f ∈ C¹_L(X). Then,

|f(x) − f(y) − ⟨∇f(y), x − y⟩| ≤ (L/2)‖x − y‖²,   ∀x, y ∈ X.   (14)

Lemma 3 (Nonexpansivity [see e.g., 9; Lemma 2.4]). The operator P_λ^g is nonexpansive, that is,

‖P_λ^g(x) − P_λ^g(y)‖ ≤ ‖x − y‖,   ∀x, y ∈ Rⁿ.   (15)
Next we prove a crucial monotonicity property that actually subsumes similar results for projection
operators derived by Gafni and Bertsekas [15; Lem. 1], and may therefore be of independent interest.
Lemma 4 (Prox-Monotonicity). Let y, z ∈ Rⁿ, and λ > 0. Define the functions

p_g(λ) := (1/λ)‖P_λ^g(y − λz) − y‖,   and   q_g(λ) := ‖P_λ^g(y − λz) − y‖.   (16)

Then, p_g(λ) is a decreasing function of λ, and q_g(λ) an increasing function of λ.
Proof. Our proof exploits properties of Moreau-envelopes [28; pp. 19,52], and we present it in the language of proximity operators. Consider the 'deflected' proximal objective

m_g(x, λ; y, z) := ⟨z, x − y⟩ + (1/2λ)‖x − y‖² + g(x),   for some y, z ∈ X.   (17)

Associate to the objective m_g the deflected Moreau-envelope

E_g(λ) := inf_{x∈X} m_g(x, λ; y, z),   (18)

whose infimum is attained at the unique point P_λ^g(y − λz). Thus, E_g(λ) is differentiable, and its derivative is given by E'_g(λ) = −(1/2λ²)‖P_λ^g(y − λz) − y‖² = −(1/2)p_g(λ)². Since E_g is convex in λ, E'_g is increasing ([28; Thm. 2.26]), or equivalently p_g(λ) is decreasing. Similarly, define ê_g(γ) := E_g(1/γ); this function is concave in γ as it is a pointwise infimum (indexed by x) of functions linear in γ [see e.g., §3.2.3 in 6]. Thus, its derivative ê'_g(γ) = (1/2)‖P_{1/γ}^g(y − γ⁻¹z) − y‖² = (1/2)q_g(1/γ)² is a decreasing function of γ. Set λ = 1/γ to conclude the argument about q_g(λ).
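Lemma 4 is easy to sanity-check numerically; a sketch with g = ‖·‖₁, chosen only because its proximity operator has a closed form:

```python
import numpy as np

def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(1)
y, z = rng.normal(size=4), rng.normal(size=4)
for lam in [0.1, 0.5, 1.0, 2.0, 5.0]:
    step = prox_l1(y - lam * z, lam) - y
    p, q = np.linalg.norm(step) / lam, np.linalg.norm(step)
    print(f"lam={lam:4.1f}  p={p:.4f} (should decrease)  q={q:.4f} (should increase)")
```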
We now proceed to bound the difference between objective function values from iteration k to k + 1,
by developing a bound of the form
Φ(x^k) − Φ(x^{k+1}) ≥ h(x^k).   (19)

Obviously, since we do not enforce strict descent, h(x^k) may be negative too. However, we show
that for sufficiently large k the algorithm makes enough progress to ensure convergence.
Lemma 5. Let x^{k+1}, x^k, η_k, and X be as in (7), and suppose that η_k‖e(x^k)‖ ≤ ε(x^k) holds. Then,

Φ(x^k) − Φ(x^{k+1}) ≥ ((2 − Lη_k)/2η_k)‖x^{k+1} − x^k‖² − (1/η_k)ε(x^k)‖x^{k+1} − x^k‖.   (20)
Proof. For the deflected Moreau envelope (17), consider the directional derivative dm_g with respect to x in the direction w; at x = x^{k+1}, this derivative satisfies the optimality condition

dm_g(x^{k+1}, η; y, z)(w) = ⟨z + η⁻¹(x^{k+1} − y) + s^{k+1}, w⟩ ≥ 0,   s^{k+1} ∈ ∂g(x^{k+1}).   (21)

Set z = ∇f(x^k) − e(x^k), y = x^k, and w = x^k − x^{k+1} in (21), and rearrange to obtain

⟨∇f(x^k) − e(x^k), x^{k+1} − x^k⟩ ≤ ⟨η⁻¹(x^{k+1} − x^k) + s^{k+1}, x^k − x^{k+1}⟩.   (22)

From Lemma 2 it follows that

Φ(x^{k+1}) ≤ f(x^k) + ⟨∇f(x^k), x^{k+1} − x^k⟩ + (L/2)‖x^{k+1} − x^k‖² + g(x^{k+1}),   (23)

whereby upon adding and subtracting e(x^k), and then using (22), we further obtain

Φ(x^{k+1}) ≤ f(x^k) + ⟨∇f(x^k) − e(x^k), x^{k+1} − x^k⟩ + (L/2)‖x^{k+1} − x^k‖² + g(x^{k+1}) + ⟨e(x^k), x^{k+1} − x^k⟩
  ≤ f(x^k) + g(x^{k+1}) + ⟨s^{k+1}, x^k − x^{k+1}⟩ + (L/2 − 1/η_k)‖x^{k+1} − x^k‖² + ⟨e(x^k), x^{k+1} − x^k⟩
  ≤ f(x^k) + g(x^k) − ((2 − Lη_k)/2η_k)‖x^{k+1} − x^k‖² + ⟨e(x^k), x^{k+1} − x^k⟩
  ≤ Φ(x^k) − ((2 − Lη_k)/2η_k)‖x^{k+1} − x^k‖² + ‖e(x^k)‖‖x^{k+1} − x^k‖
  ≤ Φ(x^k) − ((2 − Lη_k)/2η_k)‖x^{k+1} − x^k‖² + (1/η_k)ε(x^k)‖x^{k+1} − x^k‖.

The second inequality above follows from convexity of g, the third one from Cauchy-Schwarz, and the last one by assumption on ε(x^k). Now flip signs and apply (23) to conclude the bound (20).
Next we further bound (20) by deriving two-sided bounds on ‖x^{k+1} − x^k‖.

Lemma 6. Let x^{k+1}, x^k, and ε(x^k) be as before; also let c and η_k satisfy (9). Then,

c‖ρ(x^k)‖ − ε(x^k) ≤ ‖x^{k+1} − x^k‖ ≤ ‖ρ(x^k)‖ + ε(x^k).   (24)

Proof. First observe from Lemma 4 that for η_k > 0 it holds that

if 1 ≤ η_k then q_g(1) ≤ q_g(η_k),   and if η_k ≤ 1 then p_g(1) ≤ p_g(η_k) = (1/η_k)q_g(η_k).   (25)

Using (25), the triangle inequality, and Lemma 3, we have

min{1, η_k}q_g(1) = min{1, η_k}‖ρ(x^k)‖
  ≤ ‖P_{η_k}^g(x^k − η_k∇f(x^k)) − x^k‖
  ≤ ‖x^{k+1} − x^k‖ + ‖x^{k+1} − P_{η_k}^g(x^k − η_k∇f(x^k))‖
  ≤ ‖x^{k+1} − x^k‖ + ‖η_k e(x^k)‖
  ≤ ‖x^{k+1} − x^k‖ + ε(x^k).

From (9) it follows that for sufficiently large k we have ‖x^{k+1} − x^k‖ ≥ c‖ρ(x^k)‖ − ε(x^k). For the upper bound note that

‖x^{k+1} − x^k‖ ≤ ‖x^k − P_{η_k}^g(x^k − η_k∇f(x^k))‖ + ‖P_{η_k}^g(x^k − η_k∇f(x^k)) − x^{k+1}‖
  ≤ max{1, η_k}‖ρ(x^k)‖ + ‖η_k e(x^k)‖
  ≤ ‖ρ(x^k)‖ + ε(x^k).
Lemma 5 and Lemma 6 help prove the following crucial corollary.

Corollary 7. Let x^k, x^{k+1}, η_k, and c be as above and k sufficiently large so that c and η_k satisfy (9). Then, Φ(x^k) − Φ(x^{k+1}) ≥ h(x^k) holds with h(x^k) given by

h(x^k) := (L²c³/2(2 − Lc))‖ρ(x^k)‖² − (L²c²/(2 − Lc) + 1/c)‖ρ(x^k)‖ε(x^k) − (1/c − L²c/2(2 − Lc))ε(x^k)².   (26)

Proof. Plug in the bounds (24) into (20), invoke (9), and simplify; see [32] for details.
We now have all the ingredients to state the main convergence theorem.
Theorem 8 (Convergence). Let f ∈ C¹_L(X) be such that inf_X f > −∞, and let g be lsc, convex on X.
Let {x^k} ⊂ X be a sequence generated by (7), and let condition (8) on each ‖e(x^k)‖ hold. Then there
exists a limit point x* of the sequence {x^k}, and a constant K > 0, such that ‖ρ(x*)‖ ≤ K ε(x*).
If Φ(x^k) converges, then for every limit point x* of {x^k} it holds that ‖ρ(x*)‖ ≤ K ε(x*).
Proof. Lemmas 5 and 6 and Corollary 7 have done all the hard work. Indeed, they allow us to reduce
our convergence proof to the case where the analysis of the differentiable case becomes applicable,
and an appeal to the analysis of [29; Thm. 2.1] grants us our claim.
Theorem 8 says that we can obtain an approximate stationary point for which the norm of the residual is bounded by a linear function of the error level. The statement of the theorem is written in a
conditional form, because nonvanishing errors e(x) prevent us from making a stronger statement.
In particular, once the iterates enter a region where the residual norm falls below the error threshold, the behavior of {x^k} may be arbitrary. This, however, is a small price to pay for having the
added flexibility of nonvanishing errors. Under the stronger assumption of vanishing errors (and
diminishing stepsizes), we can also obtain guarantees to exact stationary points.
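To ground the preceding analysis, the following minimal Python sketch (our illustration; the least-squares f, the ℓ1 penalty g, and the injected error e are assumptions, not the paper's experiments) runs the main NIPS update x^{k+1} = P_{η}^g(x^k − η(∇f(x^k) + e(x^k))) with a bounded, nonvanishing error and a fixed stepsize:

    import numpy as np

    def prox_l1(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
    grad_f = lambda x: A.T @ (A @ x - b)

    x, eta = np.zeros(20), 1.0 / L          # fixed (nonvanishing) stepsize
    for k in range(200):
        e = 0.01 * rng.standard_normal(20)  # bounded computational error
        x = prox_l1(x - eta * (grad_f(x) + e), eta * lam)

    # Approximate stationarity: the prox-residual stays on the order of
    # the error level, as Theorem 8 predicts.
    print(np.linalg.norm(x - prox_l1(x - grad_f(x), lam)))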
3 Scaling up NIPS: incremental variant

We now apply NIPS to the large-scale setting, where we have composite objectives of the form
    Φ(x) := Σ_{t=1}^T f_t(x) + g(x),    (27)
where each f_t : R^n → R is a C¹_{L_t}(X) function. For simplicity, we use L = max_t L_t in the sequel.
It is well-known that for such decomposable objectives it can be advantageous to replace the full
gradient Σ_t ∇f_t(x) by an incremental gradient ∇f_{σ(t)}(x), where σ(t) is some suitable index.
Nonconvex incremental methods for differentiable problems have been extensively analyzed, e.g.,
backpropagation algorithms [5, 29], which correspond to g(x) ≡ 0. However, when g(x) ≠ 0, the
only incremental methods that we are aware of are the stochastic generalized gradient methods of [12]
or the generalized gradient methods of [31]. As previously mentioned, both of these fail to exploit
the composite structure of the objective function, a disadvantage even in the convex case [11].
In stark contrast, we do exploit the composite structure of (27). Formally, we propose the following
incremental nonconvex proximal-splitting iteration:
    x^{k+1} = M(x^k − η_k Σ_{t=1}^T ∇f_t(x^{k,t})),    k = 0, 1, . . . ,
    x^{k,1} = x^k,    x^{k,t+1} = O(x^{k,t} − η_k ∇f_t(x^{k,t})),    t = 1, . . . , T − 1,    (28)
where O and M are appropriate operators, different choices of which lead to different algorithms.
For example, when X = R^n, g(x) ≡ 0, M = O = Id, and η_k → 0, then (28) reduces to the classic
incremental gradient method (IGM) [4], and to the IGM of [30] if lim η_k = η̄ > 0. If X is a closed
convex set, g(x) ≡ 0, M is orthogonal projection onto X, O = Id, and η_k → 0, then iteration (28)
reduces to (projected) IGM [4, 5].
We may consider the four variants of (28) in Table 1; to our knowledge, all of these are new. Which
of the four variants one prefers depends on the complexity of the constraint set X and the cost to apply
P^g. The analysis of all four variants is similar, so we present details only for the most general case.

    X      | g              | M   | O   | Penalty and constraints  | Proximity operator calls
    R^n    | ≢ 0            | P^g | Id  | penalized, unconstrained | once every major (k) iteration
    R^n    | ≢ 0            | P^g | P^g | penalized, unconstrained | once every minor (k, t) iteration
    Convex | h(x) + δ(X|x)  | P^g | Id  | penalized, constrained   | once every major (k) iteration
    Convex | h(x) + δ(X|x)  | P^g | P^g | penalized, constrained   | once every minor (k, t) iteration

Table 1: Different variants of incremental NIPS (28).
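As an illustration of the last row of Table 1 (M = O = P^g, with the proximity operator applied once every minor (k, t) iteration), here is a minimal Python sketch; the quadratic f_t terms and ℓ1 penalty g are our own toy choices, not the paper's:

    import numpy as np

    def prox_l1(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(2)
    T, n = 10, 15
    As = [rng.standard_normal((5, n)) for _ in range(T)]
    bs = [rng.standard_normal(5) for _ in range(T)]
    lam, eta = 0.05, 1e-2                   # fixed stepsize, cf. iteration (28)

    x = np.zeros(n)
    for k in range(300):
        z, gsum = x.copy(), np.zeros(n)
        for t in range(T):                  # minor (k, t) iterations
            g_t = As[t].T @ (As[t] @ z - bs[t])    # grad f_t at x^{k,t}
            gsum += g_t
            z = prox_l1(z - eta * g_t, eta * lam)  # O = P_eta^g
        x = prox_l1(x - eta * gsum, eta * lam)     # major step, M = P_eta^g

    obj = sum(0.5 * np.linalg.norm(A @ x - b) ** 2 for A, b in zip(As, bs))
    print(obj + lam * np.abs(x).sum())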
3.1 Convergence analysis

Specifically, we analyze convergence for the case M = O = P^g_η by generalizing the differentiable
case treated by [30]. We begin by rewriting (28) in a form that matches the main iteration (7):
    x^{k+1} = P^g_{η_k}(x^k − η_k Σ_{t=1}^T ∇f_t(x^{k,t}))
            = P^g_{η_k}(x^k − η_k Σ_{t=1}^T ∇f_t(x^k) + η_k Σ_{t=1}^T (∇f_t(x^k) − ∇f_t(x^{k,t})))    (29)
            = P^g_{η_k}(x^k − η_k Σ_t ∇f_t(x^k) + η_k e(x^k)).
To show that iteration (29) is well-behaved and actually fits the main NIPS iteration (7), we must
ensure that the norm of the error term is bounded. We show this via a sequence of lemmas.
Lemma 9 (Bounded-increment). Let x^{k,t+1} be computed by (28), and let s_t ∈ ∂g(x^{k,t}). Then,
    ‖x^{k,t+1} − x^{k,t}‖ ≤ 2η_k‖∇f_t(x^{k,t}) + s_t‖.    (30)
Proof. From the definition of a proximity operator (5), we have the inequality
    ½‖x^{k,t+1} − x^{k,t} + η_k∇f_t(x^{k,t})‖² + η_k g(x^{k,t+1}) ≤ ½‖η_k∇f_t(x^{k,t})‖² + η_k g(x^{k,t}),
    ⟹ ½‖x^{k,t+1} − x^{k,t}‖² ≤ η_k⟨∇f_t(x^{k,t}), x^{k,t} − x^{k,t+1}⟩ + η_k(g(x^{k,t}) − g(x^{k,t+1})).
Since s_t ∈ ∂g(x^{k,t}), we have g(x^{k,t+1}) ≥ g(x^{k,t}) + ⟨s_t, x^{k,t+1} − x^{k,t}⟩. Therefore,
    ½‖x^{k,t+1} − x^{k,t}‖² ≤ η_k(⟨s_t, x^{k,t} − x^{k,t+1}⟩ + ⟨∇f_t(x^{k,t}), x^{k,t} − x^{k,t+1}⟩)
                           ≤ η_k‖s_t + ∇f_t(x^{k,t})‖‖x^{k,t} − x^{k,t+1}‖
    ⟹ ‖x^{k,t+1} − x^{k,t}‖ ≤ 2η_k‖∇f_t(x^{k,t}) + s_t‖.
Lemma 9 proves helpful in bounding the overall error.
Lemma 10 (Bounded error). If for all x^k ∈ X, ‖∇f_t(x^k)‖ ≤ M and ‖∂g(x^k)‖ ≤ G, then there
exists a constant K₁ > 0 such that ‖e(x^k)‖ ≤ K₁.
Proof. To bound the error of using x^{k,t} instead of x^k, first define the term
    ε_t := ‖∇f_t(x^{k,t}) − ∇f_t(x^k)‖,    t = 1, . . . , T.    (31)
Then, an inductive argument (see [32] for details) shows that for 2 ≤ t ≤ T
    ε_t ≤ 2η_k L Σ_{j=1}^{t−1} (1 + 2η_k L)^{t−1−j} ‖∇f_j(x^k) + s_j‖.    (32)
Since ‖e(x^k)‖ ≤ Σ_{t=1}^T ε_t, and ε₁ = 0, (32) then leads to the bound
    Σ_{t=2}^T ε_t ≤ 2η_k L Σ_{t=2}^T Σ_{j=1}^{t−1} (1 + 2η_k L)^{t−1−j} β_j
                 = 2η_k L Σ_{t=1}^{T−1} β_t Σ_{j=0}^{T−t−1} (1 + 2η_k L)^j
                 ≤ Σ_{t=1}^{T−1} (1 + 2η_k L)^{T−t} β_t
                 ≤ (1 + 2η_k L)^{T−1} Σ_{t=1}^{T−1} ‖∇f_t(x^k) + s_t‖
                 ≤ C₁(T − 1)(M + G) =: K₁,
where β_t := ‖∇f_t(x^k) + s_t‖.
Thus, the error norm ‖e(x^k)‖ is bounded from above by a constant, whereby it satisfies the requirement (8), making the incremental NIPS method (28) a special case of the general NIPS framework.
This allows us to invoke the convergence result of Theorem 8 without further ado.
4 Illustrative application

The main contribution of our paper is the new NIPS framework, and a specific application is not
one of the prime aims of this paper. We do, however, provide an illustrative application of NIPS to
a challenging nonconvex problem: sparsity regularized low-rank matrix factorization
    min_{X,A≥0} ½‖Y − XA‖²_F + ψ₀(X) + Σ_{t=1}^T ψ_t(a_t),    (33)
where Y ∈ R^{m×T}, X ∈ R^{m×K} and A ∈ R^{K×T}, with a₁, . . . , a_T as its columns. Problem (33)
generalizes the well-known nonnegative matrix factorization (NMF) problem of [20] by permitting
arbitrary Y (not necessarily nonnegative), and adding regularizers on X and A. A related class of
problems was studied in [23], but with a crucial difference: the formulation in [23] does not allow
nonsmooth regularizers on X. The class of problems studied in [23] is in fact a subset of those covered by NIPS. On a more theoretical note, [23] considered stochastic-gradient like methods whose
analysis requires computational errors and stepsizes to vanish, whereas our method is deterministic
and allows nonvanishing stepsizes and errors.
Following [23] we also rewrite (33) in a form more amenable to NIPS. We eliminate A and consider
    min_X Φ(X) := Σ_{t=1}^T f_t(X) + g(X),    where g(X) := ψ₀(X) + δ(X | ≥ 0),    (34)
and where each f_t(X) for 1 ≤ t ≤ T is defined as
    f_t(X) := min_a ½‖y_t − Xa‖² + g_t(a),    (35)
where g_t(a) := ψ_t(a) + δ(a | ≥ 0). For simplicity, assume that (35) attains its unique minimum¹,
say a*; then f_t(X) is differentiable and we have ∇_X f_t(X) = (Xa* − y_t)(a*)ᵀ. Thus, we can
instantiate (28), and all we need is a subroutine for solving (35).²
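For the unpenalized baseline (ψ_t ≡ 0, leaving only the nonnegativity constraint on a), the inner problem (35) is nonnegative least squares, so the gradient of f_t is cheap to form once a* is found. A minimal Python sketch (our choice of SciPy's nnls as the subroutine is an assumption, not a prescription of the paper):

    import numpy as np
    from scipy.optimize import nnls

    def ft_grad(X, y_t):
        # Solve (35) with psi_t = 0: a* = argmin_{a >= 0} 0.5*||y_t - X a||^2,
        # then grad_X f_t(X) = (X a* - y_t) (a*)^T.
        a_star, _ = nnls(X, y_t)
        return np.outer(X @ a_star - y_t, a_star)

    rng = np.random.default_rng(3)
    X = np.abs(rng.standard_normal((30, 5)))
    y_t = np.abs(rng.standard_normal(30))
    print(ft_grad(X, y_t).shape)    # (30, 5), matching X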
We present empirical results on the following two variants of (34): (i) pure unpenalized NMF (ψ_t ≡ 0
for 0 ≤ t ≤ T) as a baseline; and (ii) sparsity penalized NMF where ψ₀(X) ≡ λ‖X‖₁ and
ψ_t(a_t) ≡ γ‖a_t‖₁. Note that without the nonnegativity constraints, (34) is similar to sparse-PCA.
We use the following datasets and parameters: (i) RAND: 4000 × 4000 dense random (uniform
[0, 1]); rank-32 factorization; (λ, γ) = (10⁻⁵, 10); (ii) CBCL: CBCL database [33]; 361 × 2429;
rank-49 factorization; (iii) YALE: Yale B Database [21]; 32256 × 2414 matrix; rank-32 factorization;
(iv) WEB: web graph from Google; sparse 714545 × 739454 matrix (empty rows and columns removed;
ID 2301 in the sparse matrix collection [10]); rank-4 factorization; (λ = γ = 10⁻⁶).

¹ Otherwise, at the expense of more notation, we can add a small strictly convex perturbation to ensure
uniqueness; this perturbation can then be absorbed into the overall computational error.
² In practice, it is better to use mini-batches, and we used the same sized mini-batches for all the algorithms.
[Figure 1: three panels of objective function value versus running time (seconds) for NIPS and SPAMS.]
Figure 1: Running times of NIPS (Matlab) versus SPAMS (C++) for NMF on the RAND, CBCL, and YALE
datasets. Initial objective values and tiny runtimes have been suppressed for clarity of presentation.
On the NMF baseline (Fig. 1), we compare NIPS against the well-optimized state-of-the-art C++
toolbox SPAMS (version 2.3) [23]. We compare against SPAMS only on dense matrices, as its NMF
code seems to be optimized for this case. Obviously, the comparison is not fair: unlike SPAMS,
NIPS and its subroutines are all implemented in MATLAB, and they run equally easily on large
sparse matrices. Nevertheless, NIPS proves to be quite competitive: Fig. 1 shows that our MATLAB
implementation runs only slightly slower than SPAMS. We expect a well-tuned C++ implementation
of NIPS to run at least 4-10 times faster than the MATLAB version; the dashed line in the plots
visualizes what such a mere 3X speedup to NIPS might mean.
Figure 2 shows numerical results comparing the stochastic generalized gradient (SGGD) algorithm
of [12] against NIPS, when started at the same point. As is well-known, SGGD requires careful
stepsize tuning; so we searched over a range of stepsizes, and have reported the best results. NIPS
too requires some stepsize tuning, but substantially less than SGGD. As predicted, the solutions
returned by NIPS have objective function values lower than SGGD, and have greater sparsity.
[Figure 2: objective function value and sparsity of the factors X and A for NIPS versus SGGD.]
Figure 2: Sparse NMF: NIPS versus SGGD. The bar plots show the sparsity (higher is better) of the factors
X and A. Left plots for the RAND dataset; right plots for WEB. As expected, SGGD yields slightly worse objective
function values and less sparse solutions than NIPS.
5 Discussion

We presented a new framework called NIPS, which solves a broad class of nonconvex composite
objective problems. NIPS permits nonvanishing computational errors, which can be practically useful. We specialized NIPS to also obtain a scalable incremental version. Our numerical experiments
on large scale matrix factorization indicate that NIPS is competitive with state-of-the-art methods.
We conclude by mentioning that NIPS includes numerous other algorithms as special cases, for example batch and incremental convex FBS, convex and nonconvex gradient projection, and the proximal-point algorithm, among others. Theoretically, however, the most exciting open problem resulting
from this paper is: extend NIPS in a scalable way when even the nonsmooth part is nonconvex.
This case will require a very different convergence analysis, and is left to the future.
References
[1] H. Attouch, J. Bolte, and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Programming Series A, Aug. 2011. Online First.
[2] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
[3] A. Beck and M. Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sciences, 2(1):183-202, 2009.
[4] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999.
[5] D. P. Bertsekas. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey. Technical Report LIDS-P-2848, MIT, August 2010.
[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[7] F. H. Clarke. Optimization and nonsmooth analysis. John Wiley & Sons, Inc., 1983.
[8] P. L. Combettes and J.-C. Pesquet. Proximal Splitting Methods in Signal Processing. arXiv:0912.3522v4, May 2010.
[9] P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168-1200, 2005.
[10] T. A. Davis and Y. Hu. The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 2011. To appear.
[11] J. Duchi and Y. Singer. Online and Batch Learning using Forward-Backward Splitting. J. Mach. Learning Res. (JMLR), Sep. 2009.
[12] Y. M. Ermoliev and V. I. Norkin. Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization. Cybernetics and Systems Analysis, 34:196-215, 1998.
[13] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Selected Topics in Sig. Proc., 1(4):586-597, 2007.
[14] M. Fukushima and H. Mine. A generalized proximal point algorithm for certain non-convex minimization problems. Int. J. Systems Science, 12(8):989-1000, 1981.
[15] E. M. Gafni and D. P. Bertsekas. Two-metric projection methods for constrained optimization. SIAM Journal on Control and Optimization, 22(6):936-964, 1984.
[16] A. A. Gaivoronski. Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1. Optimization Methods and Software, 4(2):117-134, 1994.
[17] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, 1st edition, 1994.
[18] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T.-W. Lee, and T. J. Sejnowski. Dictionary learning algorithms for sparse representation. Neural Computation, 15:349-396, 2003.
[19] D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE Signal Processing Magazine, 13(3), May 1996.
[20] D. D. Lee and H. S. Seung. Algorithms for Nonnegative Matrix Factorization. In NIPS, 2000.
[21] K. C. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intelligence, 27(5):684-698, 2005.
[22] J. Liu and J. Ye. Efficient Euclidean projections in linear time. In ICML, Jun. 2009.
[23] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. JMLR, 11:10-60, 2010.
[24] J. J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math., 255:2897-2899, 1962.
[25] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004.
[26] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 2007/76, Université catholique de Louvain, September 2007.
[27] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM J. Control and Optimization, 14, 1976.
[28] R. T. Rockafellar and R. J.-B. Wets. Variational analysis. Springer, 1998.
[29] M. V. Solodov. Convergence analysis of perturbed feasible descent methods. J. Optimization Theory and Applications, 93(2):337-353, 1997.
[30] M. V. Solodov. Incremental gradient algorithms with stepsizes bounded away from zero. Computational Optimization and Applications, 11:23-35, 1998.
[31] M. V. Solodov and S. K. Zavriev. Error stability properties of generalized gradient-type algorithms. J. Optimization Theory and Applications, 98(3):663-680, 1998.
[32] S. Sra. Nonconvex proximal-splitting: Batch and incremental algorithms. Sep. 2012. arXiv:1109.0258v2.
[33] K.-K. Sung. Learning and Example Selection for Object and Pattern Recognition. PhD thesis, MIT, 1996.
[34] L. Xiao. Dual averaging method for regularized stochastic learning and online optimization. In NIPS, 2009.
4,182 | 4,786 |
The Time-Marginalized Coalescent Prior for
Hierarchical Clustering
Max Welling
Department of Computer Science
University of California, Irvine
Irvine, CA 92617
[email protected]
Levi Boyles
Department of Computer Science
University of California, Irvine
Irvine, CA 92617
[email protected]
Abstract
We introduce a new prior for use in Nonparametric Bayesian Hierarchical Clustering. The prior is constructed by marginalizing out the time information of
Kingman's coalescent, providing a prior over tree structures which we call the
Time-Marginalized Coalescent (TMC). This allows for models which factorize
the tree structure and times, providing two benefits: more flexible priors may be
constructed and more efficient Gibbs type inference can be used. We demonstrate
this on an example model for density estimation and show the TMC achieves competitive experimental results.
1 Introduction
Hierarchical clustering models aim to fit hierarchies to data, and enjoy the property that clusterings of varying size can be obtained by "pruning" the tree at particular levels. In contrast, standard
clustering models must specify the number of clusters beforehand, while Nonparametric Bayesian
(NPB) clustering models such as the Dirichlet Process Mixture (DPM) [5, 13] directly infer the (effective) number of clusters. Hierarchical clustering is often used in population genetics for inferring
ancestral history and bioinformatics for genetic clustering, and has also seen use in computer vision
[18, 1] and topic modelling [3, 1].
NPB models are a class of models of growing popularity. Being Bayesian, these models can easily
quantify the uncertainty in the resulting inferences, and being nonparametric, they can seamlessly
adapt to increasingly complicated data, avoiding the model selection problem. NPB hierarchical
clustering models are an important regime of such models, and have been shown to have superior
performance to alternative models in many domains [8]. Thus, further advances in the applicability
of these models are important.
There has been substantial work on NPB models for hierarchical clustering. Dirichlet Diffusion
Trees (DDT) [16], Kingman's Coalescent [9, 10, 4, 20], and Pitman-Yor Diffusion Trees (PYDT)
[11] all provide models in which data is generated from a Continuous-Time Markov Chain (CTMC)
that lives on a tree that splits (or coalesces) according to some continuous-time process. The nested
CRP and DP [3, 17] and Tree-Structured Stick Breaking (TSSB) [1] define priors over tree structures
from which data is directly generated.
Although there is extensive and impressive literature on the subject demonstrating its useful clustering properties, NPB hierarchical clustering has yet to see widespread use. The expensive computational cost typically associated with these models is a likely inhibitor to the adoption of these
models. The CTMC based models are typically more computationally intensive than the direct generation models, and there has been substantial work in improving the speed of inference in these
models. [12] introduces a variational approximation for the DDT, and [7, 6] provide more efficient
SMC schemes for the Coalescent. The direct generation models are typically faster, but usually
1
Figure 1: Coalescent tree construction. (left) A pair is uniformly drawn from N = 5 points to coalesce. (middle) The coalescence time t₅ is drawn from Exp(C(5, 2)), and another pair on the remaining 4 points is drawn
uniformly. (right) After drawing t₄ ~ Exp(C(4, 2)), the coalescence time for the newly coalesced pair is t₅ + t₄.
Figure 2: Consider the trees one might construct by uniformly picking pairs of points to join, starting with four
leaves {a, b, c, d}. One can join a and b first, and then c and d (and then the two parents), or c and d and then a
and b, to construct the tree on the left. By defining a uniform prior over ψ_n, and then marginalizing out the order
of the internal nodes R (equivalently, the order in which pairs are joined), we then have a prior over φ_n that
puts more mass on balanced trees than unbalanced ones. For example the tree on the right can only be constructed
in one way by node joining.
come at some cost or limitation; for example the TSSB allows (and requires) that data live at some
of its internal nodes.
Our contribution is a new prior over tree structures that is simpler than many of the priors described
above, yet still retains the exchangeability and consistency properties of an NPB model. The prior
is derived by marginalizing out the times and ordering of internal nodes of the coalescent. The
remaining distribution is an exchangeable and consistent prior over tree structures. This prior may
be used directly with a data generation process, or a notion of time may be reintroduced, providing
a prior with a factorization between the tree structures and times. The simplicity of the prior allows
for great flexibility and the potential for more efficient inference algorithms. For the purposes of
this paper, we focus on one such possible model, wherein we generate branch lengths according to
a process similar to stick-breaking.
We introduce the proposed prior on tree structures in Section 2, the distribution over times conditioned on tree structure in 3.1, and the data generation process in 3.2. We show experimental results
in Section 4, and conclude in Section 5.
2 Coalescent Prior on Tree Structures

2.1 Kingman's Coalescent
Kingman's Coalescent provides a prior over balanced, edge-weighted trees, wherein the weights
are often interpreted as representing some notion of time. See Figure 1. A coalescent tree can be
sampled as follows: start with n points and n dangling edges hanging from them, and all weights set
to 0. Sample a time from Exp(C(n, 2)), and add this value to the weight for each of the dangling edges.
Then pick a pair uniformly at random to coalesce (giving rise to their mutual parent, whose new
dangling edge has weight 0). Repeat this process on the remaining n − 1 points until a full weighted
tree is constructed. Note however, that the weights do not influence the choice of tree structures
sampled, which suggests we can marginalize out the times and still retain an exchangeable and
consistent distribution over trees. What remains is simply a process in which a uniformly chosen
pair of points is joined at every iteration.
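A minimal Python sketch of this sampling procedure (the encoding of the tree as a child map is our own convention, not the paper's):

    import random

    def sample_coalescent(n, seed=0):
        rng = random.Random(seed)
        lineages = list(range(n))          # dangling edges; leaves are 0..n-1
        time = {i: 0.0 for i in range(n)}  # coalescence times (0 at leaves)
        t, nxt, children = 0.0, n, {}
        while len(lineages) > 1:
            m = len(lineages)
            t += rng.expovariate(m * (m - 1) / 2.0)  # Exp(C(m, 2)) waiting time
            a, b = rng.sample(lineages, 2)           # uniform pair coalesces
            children[nxt], time[nxt] = (a, b), t
            lineages.remove(a)
            lineages.remove(b)
            lineages.append(nxt)
            nxt += 1
        return children, time

    children, time = sample_coalescent(5)
    print(children, time)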
2.2 Coalescent Distribution over Trees
We consider two types of tree-like structures: generic (rooted) unweighted tree graphs, which we
denote φ_n, living in Φ_n; and trees of the previous type but with a specified ordering R on the
internal (non-leaf) nodes of the tree, denoted (φ_n, R) = ψ_n ∈ Ψ_n. Marginalizing out the times
of the coalescent gives a uniform prior over ordered tree structures ψ_n. The order information is
Figure 3: (left) A sample from the described prior with stick-breaking parameter B(1, 1) (uniform). (middle)
A sample using B(2, 2). (right) A sample using B(4, 2).
necessary because for a given φ_n there are multiple ways of constructing it by uniformly picking
pairs to join; see Figure 2. If there are i remaining nodes to join, there are C(i, 2) ways of joining them,
so we have for the probability of a particular ψ_n:
    p(ψ_n) = Π_{i=2}^n C(i, 2)⁻¹
This defines an exchangeable and consistent prior over ψ_n: exchangeable because p(ψ_n) does not
depend on the order in which the data is seen, and consistent because the conditional prior¹ is well
defined; we can imagine adding a new leaf to an existing ψ_n, which creates a new internal node.
Let y_i denote the ith internal node² of ψ_n, i ∈ {1 . . . n−1}, and let y* denote the new internal node.
There are n ways of attaching the new internal node below y₁, n−1 ways of attaching below y₂, and
so on, giving n(n+1)/2 = C(n+1, 2) ways of attaching y* into ψ_n. Thus if we make this choice uniformly at
random, we get that the probability of the new tree is p(ψ_{n+1}) = p(ψ_n) C(n+1, 2)⁻¹ = Π_{i=2}^{n+1} C(i, 2)⁻¹.
2
It is possible to marginalize out the ordering information in the coalescent tree structures ?n to derive
exchangeable, consistent priors on ?unordered? tree structures ?n . We can perform this marginalization by counting how many ordered tree structures ?n ? ?n are consistent with a particular
unordered tree structure ?n .
Lemma 1. A tree ?n has T (?n ) = Q(n?1)!
possible orderings on its internal nodes, where mi is
n?1
i=1 mi
the number of internal nodes in the subtree rooted at node i.
(For proof see the supplementary material.) This is in agreement with what we would expect: for
an unbalanced tree, mi = {1, 2, ..., n ? 1}, so this gives T = 1. Since an unbalanced tree imposes
a full ordering on the internal nodes, there can only be one unbalanced ordered tree that maps to the
corresponding unbalanced unordered tree. As the tree becomes more balanced, the mi s decrease,
increasing T .
Thus the probability of a particular φ_n is T(φ_n) times the probability of an ordered tree ψ_n under
the coalescent:³
    p(φ_n) = T(φ_n) Π_{i=2}^n C(i, 2)⁻¹ = ((n−1)! / Π_{i=1}^{n−1} m_i) Π_{i=2}^n C(i, 2)⁻¹    (1)
Theorem 1. p(φ_n) defines an exchangeable and consistent prior over Φ_n.
p(φ_n) is clearly still exchangeable as it does not depend on any order of the data, and it was defined by
marginalizing out a consistent process, so its conditional priors are still well defined and thus p(φ_n)
is consistent. For a more explicit proof see the supplementary material.
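Lemma 1 and Equation (1) are easy to evaluate on a concrete tree. A small Python sketch (the tree encoding and function names are ours):

    from math import comb, factorial, prod

    def tmc_prior(children, root):
        # children maps each internal node to its two children; leaves absent.
        n = len(children) + 1               # number of leaves
        m = {}                              # m_i: internal nodes under node i
        def count(v):
            if v not in children:
                return 0
            m[v] = 1 + sum(count(c) for c in children[v])
            return m[v]
        count(root)
        T = factorial(n - 1) // prod(m.values())   # Lemma 1
        p = T
        for i in range(2, n + 1):
            p /= comb(i, 2)                 # Equation (1)
        return T, p

    # Balanced tree on leaves {0,1,2,3} with internal nodes 4, 5, 6.
    children = {4: (0, 1), 5: (2, 3), 6: (4, 5)}
    print(tmc_prior(children, 6))           # T = 2, p = 2/18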
¹ The sequential sampling scheme often associated with NPB models; for example the conditional prior for
the CRP is the prior probability of adding the (n+1)st point to one of the existing clusters (or a new cluster)
given the clustering on the first n points.
² When times are given, we index the internal nodes from most recent to root. Otherwise, nodes are ordered
such that parents always succeed children.
³ It has been brought to our attention that this prior and its connection to the coalescent have been studied
before in [2] as the beta-splitting model with parameter β = 0, and later in [14] under the framework of
Gibbs fragmentation trees.
Figure 4: The subtree S_l rooted at the red node l is pruned in preparation for an MCMC move. We perform
slice sampling to determine where the pruned subtree's parent should be placed next in the remaining tree θ_l.
Figure 5: We compute the posterior pdf for each branch that we might attach to. If α or β is greater than one,
the Beta prior on branch lengths can cause these pdfs to go to zero at the limits of their domain. Thus, to enable
moves across branches, we compute the extrema of each pdf so that all valid intervals are found.
3 Data Generation Model

Given a prior over tree structures such as (1), we can define a data generating process in many
ways (indeed, any L₁-bounded martingale will do [19]); here we restrict our attention to generative
models in which we first sample times given a tree structure, and then sample the data according to
some process described on those times (in our case Brownian motion). Examples of other potential
data generation models include those in [1], such as the "Generalized Gaussian Diffusions," and the
multinomial likelihood often used with the Coalescent.
3.1 Branch Lengths

Given a tree structure φ, we can sample branch lengths s_i = t_{p_i} − t_i, with t_i the time of coalescent
event i, with t = 0 at the leaves, and p_i the parent of i. Consider the following construction, similar
to a stick-breaking process. Start with a stick of unit length. Starting at the root, travel down the
given φ, and at each split point duplicate the current stick into two sticks, assigning one to each
child. Then, sample a Beta random variable B for each of the two sticks where the corresponding
children are not leaves. B will be the proportion of the remaining stick attributed to that branch of
the tree until the next split point (sticks afterwards will be of length proportional to (1 − B)). We
have B_i = 1 − (t_i / t_{p_i}) = s_i / t_{p_i}. The total prior over branch lengths can thus be written as:
    p({B_i} | φ) = Π_{i=1}^{N−2} B(B_i | α, β)    (2)
See Figure 3 for samples from this prior. Note that any distribution whose support is the unit interval
may be used, and in fact more innovative schemes for sampling the times may be used as well; one
of the major advantages of the TMC over the Coalescent and DDT is that the times may be defined
in a variety of ways.
There is a single Beta random variable attributed to each internal node of the tree (except the root,
which has B set to 1). Since the order in which we see the data does not affect the way in which we
sample these stick proportions, the process remains exchangeable. We denote pairs (φ_n, {B_i}) as
θ, i.e. a tree structure with branch lengths.
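A Python sketch of this stick-breaking construction of times (variable names and the child-map encoding are ours):

    import numpy as np

    def sample_times(children, root, alpha, beta, rng):
        # Root gets t = 1; each non-root internal node v with parent time t_p
        # draws B ~ Beta(alpha, beta) and sets t(v) = (1 - B) * t_p, since
        # B_v = 1 - t_v / t_p. Leaves sit at t = 0 implicitly.
        t = {root: 1.0}
        def recurse(v):
            for c in children.get(v, ()):
                if c in children:            # only internal children get a B
                    B = rng.beta(alpha, beta)
                    t[c] = (1.0 - B) * t[v]
                    recurse(c)
        recurse(root)
        return t

    children = {4: (0, 1), 5: (2, 3), 6: (4, 5)}
    print(sample_times(children, 6, 2.0, 2.0, np.random.default_rng(4)))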
3.2 Brownian Motion

Given a tree θ we can define a likelihood for continuous data x_i ∈ R^p using Brownian motion.
We denote the length of each branch segment of the tree s_i. Data is generated as follows: we start
at some unknown location in R^p at time t = 1 and immediately split into two independent Wiener
processes (with parameter Λ), each denoted y_i, where i is the index of the relevant branch in θ. Each
of the processes evolves for time s_i; then a new independent Wiener process is instantiated at the
time of each split, and this continues until all processes reach the leaves of θ (i.e. t = 0), at which
point the y_i at the leaves are associated with the data x. This is a similar likelihood to the ones used
for Dirichlet Diffusion Trees [16] and the Coalescent [20] for continuous data.

Figure 6: (left) Approximated log-density using a DP mixture. (midleft) Log-density using a Dirichlet Diffusion Tree model. (midright) Log-density using our model directly. (right) Log-density using our model with a
heavy-tailed noise model at the leaves. Contours are spaced 1 apart, for a total of 15 contours. In the probability
domain the various densities look similar.

Figure 7: Posterior sample from our model applied to the leukemia dataset. Best viewed in color. Each pure
subtree is painted a color unique to the class associated with it. The OTHERS class is a set of datapoints to
which no diagnostic label was assigned. A larger view of this figure can be found in the supplementary material.
3.2.1 Likelihood Computation

The likelihood p(x|θ) can be calculated using a single bottom-up sweep of message passing. As in
[20], by marginalizing out from the leaves towards the root, the message at an internal node i is a
Gaussian with mean ŷ_i and variance ν̂_i.
The ν̂ and ŷ messages can be written for any number of incoming nodes in a single form:
    ν̂_i⁻¹ = Σ_{j∈c(i)} (ν̂_j + s_j)⁻¹;    ŷ_i = ν̂_i Σ_{j∈c(i)} ŷ_j / (ν̂_j + s_j)
where c(i) are the nodes sending incoming messages to i. We can compute the likelihood using any
arbitrary node as the root for message passing. Fixing a particular node as root, we can write the
total likelihood of the tree as:
    p(x|θ) = Π_{i=1}^{n−1} Z_{c(i)}(x, θ)    (3)
When |c(i)| = 1 (e.g. when passing through the root at t = 1), Z_{c(i)} = 1. When |c(i)| = 2 and
|c(i)| = 3 (the latter when collecting at an arbitrary node i chosen as the root):
    Z_{l_i,r_i}(x, θ) = |2πν̄_i|^{−1/2} exp(−½ ‖ŷ_{r_i} − ŷ_{l_i}‖²_{ν̄_i});    ν̄_i = Λ(ν̂_{l_i} + ν̂_{r_i} + s_{l_i} + s_{r_i})    (4)
    Z_{p_i,l_i,r_i}(x, θ) = |2πΛ|⁻¹ ν̄^{−1/2} exp(−½ (ν̂*_{p_i}‖ŷ_{l_i} − ŷ_{r_i}‖²_{Λν̄} + ν̂*_{r_i}‖ŷ_{p_i} − ŷ_{l_i}‖²_{Λν̄} + ν̂*_{l_i}‖ŷ_{p_i} − ŷ_{r_i}‖²_{Λν̄}))
    ν̂*_{p_i} = ν̂_{p_i} + s_i;    ν̂*_{l_i} = ν̂_{l_i} + s_{l_i};    ν̂*_{r_i} = ν̂_{r_i} + s_{r_i};    ν̄ = ν̂*_{p_i}ν̂*_{l_i} + ν̂*_{l_i}ν̂*_{r_i} + ν̂*_{r_i}ν̂*_{p_i}    (5)
where ‖·‖_Σ denotes the Mahalanobis norm with covariance Σ. These messages are derived using
the product of Gaussian pdf identities.
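For intuition, here is a minimal Python sketch of the upward sweep for one-dimensional data with Λ = σ², so every message is a scalar pair (ν̂_i, ŷ_i); the general multivariate case replaces these scalars with the Mahalanobis forms above. The tree encoding is ours:

    def upward_messages(children, root, t, x, sigma2):
        # Bottom-up Gaussian message passing: leaves carry data x with
        # nu_hat = 0; a branch contributes variance sigma2 * (branch length).
        nu, y = {}, {}
        def up(v):
            if v not in children:            # leaf
                nu[v], y[v] = 0.0, x[v]
                return
            prec, mean = 0.0, 0.0
            for c in children[v]:
                up(c)
                s_c = sigma2 * (t[v] - t.get(c, 0.0))  # leaves sit at t = 0
                w = 1.0 / (nu[c] + s_c)
                prec += w
                mean += w * y[c]
            nu[v], y[v] = 1.0 / prec, mean / prec
        up(root)
        return nu, y

    children = {4: (0, 1), 5: (2, 3), 6: (4, 5)}
    t = {4: 0.3, 5: 0.4, 6: 1.0}
    x = {0: -1.0, 1: -0.8, 2: 0.9, 3: 1.1}
    print(upward_messages(children, 6, t, x, 1.0))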
3.3 MCMC Inference

We propose an MCMC procedure that samples from the posterior distribution over θ as follows.
First, a random node l is pruned from the tree (so that its parent p_l has no parent and only one child),
giving the pruned subtree S_l and remaining tree θ_l. See Figure 4. We then consider all possible
moves that would place p_l into a valid location elsewhere in the tree. For each branch indexed by
the node i "below" it, we compute the posterior density function of where p_l should be placed on
that branch. We then slice sample on this collection of density functions. See Figure 5. By cycling
through the nodes to prune and reattach, we achieve a Gibbs sampler over θ.
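The bookkeeping for the prune step can be sketched in a few lines of Python (the child/parent dictionaries are our own encoding of θ; the slice-sampled reattachment is omitted):

    def prune(children, parent, l):
        # Detach node l together with its parent p_l: the subtree S_l stays
        # rooted at p_l (which now has l as its only child); the sibling of l
        # is spliced into l's grandparent, leaving the remaining tree theta_l.
        p = parent[l]
        sib = next(c for c in children[p] if c != l)
        g = parent.get(p)                   # grandparent, None if p is root
        children[p] = (l,)
        if g is not None:
            children[g] = tuple(sib if c == p else c for c in children[g])
            parent[sib] = g
            parent.pop(p)
        else:
            parent.pop(sib, None)           # sibling becomes the new root
        return p                            # root of the pruned subtree S_l

    children = {4: (0, 1), 5: (2, 3), 6: (4, 5)}
    parent = {0: 4, 1: 4, 2: 5, 3: 5, 4: 6, 5: 6}
    print(prune(children, parent, 0), children, parent)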
We can efficiently compute the relative change in the likelihood p(x|θ) through a combination of
belief propagation and local computations. First we perform belief propagation on θ_l to give upward
and downward messages, and on S_l to give only upward messages. Denote by θ(S, i, t) the tree
formed by attaching S above node i at time t in θ. For the new tree we imagine collecting messages
to node p_l, resulting in a new factor Z_{i,l,p_i}(x, θ_l(S_l, i, t)). The messages directly downstream of this
factor are Z_{c(i)}(x, θ_l) and Z_{p_{p_i},r_{p_i}}(x, θ_l) (if l_{p_i} = i, i.e. i is the "left" child of its parent). If we
now imagine that the original likelihood was computed by collecting to node p_i, then we see that the
first factor should replace the factor Z_{l_{p_i},r_{p_i},p_{p_i}}(x, θ_l) at node p_i, while the latter factor was already
included in Z_{l_{p_i},r_{p_i},p_{p_i}}(x, θ_l). All other factors do not change. The total (multiplicative) change in
the likelihood is thus
    ΔZ(θ_l(S_l, i, t)) = Z_{i,l,p_i}(x, θ_l(S_l, i, t)) Z_{p_{p_i},r_{p_i}}(x, θ_l) / Z_{l_{p_i},r_{p_i},p_{p_i}}(x, θ_l)    (6)
The update in prior probability for adding the parent of l in the segment (i, p_i) (with times t_i and
t_{p_i}) at time t is proportional to the product of the Beta pdfs in (2) that arise when θ_l(S_l, i, t) is
constructed, and inversely proportional to the Beta pdf that is removed from θ_l, as well as being
proportional to the overall prior probability over φ_n:⁴
    p(θ_l(S_l, i, t)) ∝ (1 / B(1 − t_i/t_{p_i}; α, β)) B(1 − t_i/t; α, β) B(1 − t_l/t; α, β) B(1 − t/t_{p_i}; α, β) p(φ(θ_l(S_l, i, t)))    (7)
where φ(·) gives the structure part of θ = (φ, {B_i}). p(φ(θ_l(S_l, i, t))) can be computed for all i
in linear time via dynamic programming (it does not depend on the actual value of t). By taking the
product of (6) and (7) we get the joint posterior of (i, t):
    p(θ_l(S_l, i, t) | X) ∝ ΔZ(θ_l(S_l, i, t)) p(θ_l(S_l, i, t))    (8)
p(θ_l(S_l, i, t)|X) defines the distribution from which we would like to sample. We propose a slice
sampler that can propose adding S_l to any segment in θ_l. For a fixed i, p(θ_l(S_l, i, t)|X) is typically
unimodal, and in any case has a small number of modes at most. If we can find all of the extrema of the
posterior, we can easily find the intervals that contain positive probability density for slice sampling
moves (see Figure 5). Thus this slice sampling procedure will mix as quickly as slice sampling on a
single unimodal distribution. We find the extrema of these functions using Newton methods.
The overall sampling procedure is then to sample a new location for each node (both leaves and
internal nodes) of the tree using the Gibbs sampling scheme explained above.
3.4 Hyperparameter Inference

As we do not know the structure of the data beforehand, we may not want to predetermine the
specific values of α, β and Λ. Thus we define hyperpriors on these parameters and infer them as
well. For simplicity we assume the form Λ = kI for the Brownian motion covariance parameter.
We use an Inverse-Gamma prior on k, so that k⁻¹ ~ G(a, b). Then
    k⁻¹ | X ~ G((N−1)p/2 + a, ½ Σ_{i=1}^{N−1} d_i + b)

⁴ Note that if either l or i is a leaf, then the prior term will be simpler than the one listed here.
where N is the number of datapoints, p is the dimension, d_i = ‖ŷ_{l_i} − ŷ_{r_i}‖² / (ν̂_{l_i} + ν̂_{r_i} + s_{l_i} + s_{r_i}),
and ‖·‖ is the Euclidean norm.
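The resulting Gibbs update for k is a one-line conjugate draw; a Python sketch (the shape/rate convention for the Gamma is our assumption):

    import numpy as np

    def gibbs_update_k(d, p, a, b, rng):
        # k^{-1} | X ~ Gamma((N-1)p/2 + a, rate = sum(d)/2 + b); return k.
        shape = len(d) * p / 2.0 + a        # len(d) = N - 1
        rate = 0.5 * np.sum(d) + b
        return 1.0 / rng.gamma(shape, 1.0 / rate)   # numpy gamma uses scale

    rng = np.random.default_rng(5)
    print(gibbs_update_k(np.array([0.5, 1.2, 0.8]), 2, 1.0, 1.0, rng))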
By putting a G(c, d) prior on α − 1 and β − 1, we achieve a posterior for these parameters:
    p(α, β | c, d, X) ∝ (α − 1)^{c−1} (β − 1)^{c−1} e^{−d((α−1)+(β−1))} Π_{i=1}^{N−1} (1/B(α, β)) (1 − t_i/t_{p_i})^{α−1} (t_i/t_{p_i})^{β−1}
This posterior is log-concave and thus unimodal. We perform slice sampling to update α and β.
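Because the posterior is log-concave, a standard univariate slice sampler with stepping-out (in the style of Neal's scheme; this implementation is ours) suffices, e.g. for updating α with β held fixed:

    import math, random

    def slice_sample(logp, x0, w=1.0, rng=random):
        # One slice-sampling update for a univariate log-density logp.
        logy = logp(x0) + math.log(rng.random())   # auxiliary slice level
        u = rng.random()
        L, R = x0 - w * u, x0 + w * (1.0 - u)      # initial bracket
        while logp(L) > logy:                      # step out to the left
            L -= w
        while logp(R) > logy:                      # step out to the right
            R += w
        while True:                                # shrink until accepted
            x1 = L + rng.random() * (R - L)
            if logp(x1) > logy:
                return x1
            if x1 < x0:
                L = x1
            else:
                R = x1

    # Example: a log-concave density proportional to x^2 * exp(-x) on x > 0.
    lp = lambda x: 2.0 * math.log(x) - x if x > 0 else -math.inf
    print(slice_sample(lp, 1.0))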
3.5 Predictive Density

Given a set of samples from the posterior, we can approximate the posterior density estimate by sampling a test point located at y_t into each of these trees repeatedly (giving new trees θ* = θ(y_t, i, s)
for various values of i and s) and approximating p(y_t | X) as:
    p(y_t | X) = ∫ p(θ|X) p(y_t | θ, X) dθ = ∫ dθ p(θ|X) ∫ dθ* p(θ*, y_t | θ, X)
              ≈ Σ_{θ_i ~ θ|X} Σ_{θ*_j ~ p(θ*_j | θ_i, X)} p(y_t | θ*_j, X)
where p(θ* | θ, X) = ∫ p(θ*, y_t | θ, X) dy_t. By integrating out ŷ_{l_i} in (5), we get a modification of (8)
that is proportional to p(θ* | θ, X). Slice sampling from this gives us several new trees θ*_j for each
θ_i, where one of the leaves is not observed. p(y_t | θ*_j, X) is then available by message passing to the
leaf associated with y_t (denoted l), which results in a Gaussian over y_t. Thus the final approximated
density is a mixture of Gaussians with a component for each of the θ*_j.
Performing the aforementioned integration (after replacing ŷ_{l_i} with y_t), we get:
    Z_pred(i, t) ∝ ∫ dy_t Z_{i,p_i,l}(x, θ*(y_t))
                = |2πΛ|^{−1/2} (ν̂*_i + ν̂*_{p_i})^{−1/2} exp(−½ (ν̂*_l + (ν̂*_i⁻¹ + ν̂*_{p_i}⁻¹)⁻¹) d_{i,p_i})
where d_{i,p_i} = ‖ŷ_i − ŷ_{p_i}‖²_{Λν̄*}. This gives the posterior density for the location of the unobserved
point:
    p(θ* = θ({l}, i, t) | θ, X) ∝ Z_pred(i, t) (Z_{l_i,r_i}(x, θ_l) / Z_{l_i,r_i,p_i}(x, θ_l)) p(θ({l}, i, t))
where p(θ({l}, i, t)) is as in (7).
4 Experiments
We compare our model to Dirichlet Diffusion Trees (DDT) [16] and to Dirichlet Process Mixtures
(DPM) [5, 13]. We used Radford Neal's Flexible Bayesian Modeling package [15] for both the DDT
and the DPM experiments. All algorithms were run with vague hyperpriors, except for the DPM
concentration parameter which we set to .1 as we did not expect many modes for these experiments.
4.1 Synthetic Data
To qualitatively compare our method to Dirichlet Process Mixtures and Dirichlet Diffusion Trees,
we ran all three methods on a simulated dataset with N = 200, p = 2. The data is generated from a
mixture of heavy tailed distributions to demonstrate the differences between these algorithms when
presented with outliers. As can be seen in Figure 6, the DDT fits a density with reasonably heavy
tails, whereas our model fits a narrower distribution. This is a result of the fact that the divergence
function of the DDT strongly encourages the branch lengths to be small, and thus a larger variance
is required to explain the data. Our model can be combined with a heavy-tailed observation model
to produce densities with heavier tails; see the rightmost panel of Figure 6.
Figure 8: A comparison of our method to the DDT and DPM, using predictive log likelihood on test data. Plots
show performance over time, except the DPM which shows the result after convergence. (left) the comparison
on a p = 200 version of the St. Jude?s Leukemia dataset. The ?TMC - k? runs are with k fixed throughout
the run. (middle) the comparison on the p = 1000 version of the Leukemia dataset (right) comparison on a
N = 1400, p = 200 bag of visual words dataset.
4.2 Gene Clustering

We applied our model to the St. Jude's Leukemia dataset [22], which has N_train = 215 datapoints
and N_test = 112, and was preprocessed⁵ to have p = 1000 dimensions. We preprocessed the data so that
each dimension had unit variance. Associated with each datapoint is one of 6 classifications of
leukemia, or a 7th class to which no diagnosis was attributed. We applied our method to the full
dataset to see if it could recover these classes. Figure 7 shows the posterior tree sampled after about
28 Gibbs passes (about 10 minutes). We also compared our method against the DDT and DPM on
these models' abilities to predict test data on a p = 200 subset of the p = 1000 dataset, as well as on
the p = 1000 dataset; see Figure 8. On the p = 200 dataset, both the DDT and the TMC outperform
the DPM, with the TMC performing slightly worse. We attribute this difference in performance to
our model's weaker prior on the branch lengths, which causes our model to overfit slightly; if we
preset the diffusion variance of our model to a value somewhat larger than the data variance, our
performance improves. In the p = 1000 dataset, the same phenomenon is observed.
4.3 Computer Vision Features

We also cluster visual bag-of-words features collected from bird images from Visipedia [21]. We
worked on a dataset of size N = 1400 and N_test = 1412, where each observation belongs to one of
200 classes of birds; see Figure 8. Again our method does better than the DPM, yet not as well as the DDT.
Fixing the variance does improve the performance of our algorithm, but not enough to improve over
the DDT.
5 Conclusion

We introduced a new prior for use in NPB hierarchical clustering, one that can be used in a variety
of ways to define a generative model for data. By marginalizing out the times of the coalescent, we
achieve a prior from which data can be either generated directly via a graphical model living on
trees, or by a CTMC living on a distribution over times for the branch lengths, in the style of the
coalescent and DDT. However, unlike the coalescent and DDT, in our model the times are generated
conditioned on the tree structure, giving potential for more interesting models or more efficient
inference. The simplicity of the prior allows for efficient Gibbs-style inference, and we provide an
example model and demonstrate that it can achieve similar performance to that of the DDT. However,
to achieve that performance the diffusion variance must be set in advance, suggesting that alternative
distributions over the branch lengths may provide better performance than the one explored here.

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No.
0914783, 0928427, 1018433, 1216045.

⁵ We simply took the 1000 dimensions with the highest variance.
References
[1] R. P. Adams, Z. Ghahramani, and M. I. Jordan. Tree-structured stick breaking for hierarchical data. Advances in Neural Information Processing Systems, 23:19-27, 2010.
[2] D. Aldous. Probability distributions on cladograms. IMA Volumes in Mathematics and its Applications, 1995.
[3] David Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems, volume 16. MIT Press, Cambridge, MA, 2004.
[4] A. Drummond and A. Rambaut. BEAST: Bayesian evolutionary analysis by sampling trees. BMC Evolutionary Biology, 7(1):214, 2007.
[5] T. S. Ferguson. Bayesian density estimation by mixtures of normal distributions. Recent Advances in Statistics, pages 287-303, 1983.
[6] D. Görür, L. Boyles, and M. Welling. Scalable inference on Kingman's coalescent using pair similarity. In Proceedings of AISTATS, 2012.
[7] D. Görür and Y. W. Teh. An efficient sequential Monte Carlo algorithm for coalescent clustering. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 521-528, 2009.
[8] Katherine Heller and Zoubin Ghahramani. Bayesian hierarchical clustering. In Proceedings of ICML, volume 22, 2005.
[9] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235-248, 1982.
[10] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27-43, 1982.
[11] D. A. Knowles and Z. Ghahramani. Pitman-Yor diffusion trees. arXiv preprint arXiv:1106.2494, 2011.
[12] D. A. Knowles, J. Van Gael, and Z. Ghahramani. Message passing algorithms for Dirichlet diffusion trees. In Proceedings of the 28th Annual International Conference on Machine Learning, 2011.
[13] A. Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates. The Annals of Statistics, 12(1):351-357, 1984.
[14] Peter McCullagh, Jim Pitman, and Matthias Winkel. Gibbs fragmentation trees. Bernoulli, 14(4):988-1002, November 2008.
[15] R. Neal. Software for flexible Bayesian modeling and Markov chain sampling. See http://www.cs.toronto.edu/radford/fbm.software.html, 2003.
[16] R. M. Neal. Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics, 7:619-629, 2003.
[17] A. Rodriguez, D. B. Dunson, and A. E. Gelfand. The nested Dirichlet process. Journal of the American Statistical Association, 103(483):1131-1154, 2008.
[18] R. Salakhutdinov, J. Tenenbaum, and A. Torralba. Learning to learn with compound HD models. In Advances in Neural Information Processing Systems 21, 2012.
[19] J. Steinhardt and Z. Ghahramani. Flexible martingale priors for deep hierarchies. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 43, pages 61-62, 2012.
[20] Y. W. Teh, H. Daumé III, and D. M. Roy. Bayesian agglomerative clustering with coalescents. In Advances in Neural Information Processing Systems, volume 20, 2008.
[21] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[22] E. J. Yeoh, M. E. Ross, S. A. Shurtleff, W. K. Williams, D. Patel, R. Mahfouz, F. G. Behm, S. C. Raimondi, M. V. Relling, A. Patel, et al. Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling. Cancer Cell, 1(2):133-143, 2002.
4,183 | 4,787 |
Discriminatively Trained Sparse Code Gradients
for Contour Detection
Xiaofeng Ren and Liefeng Bo
Intel Science and Technology Center for Pervasive Computing, Intel Labs
Seattle, WA 98195, USA
{xiaofeng.ren,liefeng.bo}@intel.com
Abstract
Finding contours in natural images is a fundamental problem that serves as the
basis of many tasks such as image segmentation and object recognition. At the
core of contour detection technologies is a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb)
operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure
contrast using patch representations automatically learned through sparse coding.
We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely,
Sparse Code Gradients effectively learn how to measure local contrasts and find
contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74
(up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt
to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using
depth and depth+color, as verified on the NYU Depth Dataset.
1 Introduction
Contour detection is a fundamental problem in vision. Accurately finding both object boundaries and
interior contours has far reaching implications for many vision tasks including segmentation, recognition and scene understanding. High-quality image segmentation has increasingly been relying on
contour analysis, such as in the widely used system of Global Pb [2]. Contours and segmentations
have also seen extensive uses in shape matching and object recognition [8, 9].
Accurately finding contours in natural images is a challenging problem and has been extensively
studied. With the availability of datasets with human-marked groundtruth contours, a variety of
approaches have been proposed and evaluated (see a summary in [2]), such as learning to classify [17, 20, 16], contour grouping [23, 31, 12], multi-scale features [21, 2], and hierarchical region
analysis [2]. Most of these approaches have one thing in common [17, 23, 31, 21, 12, 2]: they are
built on top of a set of gradient features [17] measuring local contrast of oriented discs, using chi-square distances of histograms of color and textons. Despite various efforts to use generic image
features [5] or learn them [16], these hand-designed gradients are still widely used after a decade
and support top-ranking algorithms on the Berkeley benchmarks [2].
In this work, we demonstrate that contour detection can be vastly improved by replacing the hand-designed Pb gradients of [17] with rich representations that are automatically learned from data.
We use sparse coding, in particularly Orthogonal Matching Pursuit [18] and K-SVD [1], to learn
such representations on patches. Instead of a direct classification of patches [16], the sparse codes
on the pixels are pooled over multi-scale half-discs for each orientation, in the spirit of the Pb
[Figure 1 diagram: an image patch (gray, ab) and an optional depth patch (depth, surface normal) are encoded by local sparse coding into per-pixel sparse codes; multi-scale pooling, oriented gradients, and power transforms then feed per-orientation linear SVMs that output RGB-(D) contours.]
Figure 1: We combine sparse coding and oriented gradients for contour analysis on color as well as
depth images. Sparse coding automatically learns a rich representation of patches from data. With
multi-scale pooling, oriented gradients efficiently capture local contrast and lead to much more
accurate contour detection than those using hand-designed features including Global Pb (gPb) [2].
gradients, before being classified with a linear SVM. The SVM outputs are then smoothed and non-max suppressed over orientations, as commonly done, to produce the final contours (see Fig. 1).
Our sparse code gradients (SCG) are much more effective in capturing local contour contrast than
existing features. By only changing local features and keeping the smoothing and globalization parts
fixed, we improve the F-measure on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb), a substantial step toward human-level accuracy (see the precision-recall curves in Fig. 4). Large improvements in accuracy are also observed on other datasets including MSRC2 and PASCAL2008. Moreover, our approach is built on unsupervised feature learning and can directly apply to novel sensor
data such as RGB-D images from Kinect-style depth cameras. Using the NYU Depth dataset [27],
we verify that our SCG approach combines the strengths of color and depth contour detection and
outperforms an adaptation of gPb to RGB-D by a large margin.
2 Related Work
Contour detection has a long history in computer vision as a fundamental building block. Modern
approaches to contour detection are evaluated on datasets of natural images against human-marked
groundtruth. The Pb work of Martin et al. [17] combined a set of gradient features, using brightness, color and textons, to outperform the Canny edge detector on the Berkeley Benchmark (BSDS).
Multi-scale versions of Pb were developed and found beneficial [21, 2]. Building on top of the Pb
gradients, many approaches studied the globalization aspects, i.e. moving beyond local classification and enforcing consistency and continuity of contours. Ren et al. developed CRF models on
superpixels to learn junction types [23]. Zhu et al. used circular embedding to enforce orderings
of edgels [31]. The gPb work of Arbelaez et al. computed gradients on eigenvectors of the affinity
graph and combined them with local cues [2]. In addition to Pb gradients, Dollar et al. [5] learned
boosted trees on generic features such as gradients and Haar wavelets, Kokkinos used SIFT features
on edgels [12], and Prasad et al. [20] used raw pixels in class-specific settings. One closely related
work was the discriminative sparse models of Mairal et al [16], which used K-SVD to represent
multi-scale patches and had moderate success on the BSDS. A major difference of our work is the
use of oriented gradients: compared to directly classifying a patch, measuring contrast between
oriented half-discs is a much easier problem and can be effectively learned.
Sparse coding represents a signal by reconstructing it using a small set of basis functions. It has
seen wide uses in vision, for example for faces [28] and recognition [29]. Similar to deep network
approaches [11, 14], recent works tried to avoid feature engineering and employed sparse coding of
image patches to learn features from ?scratch?, for texture analysis [15] and object recognition [30,
3]. In particular, Orthogonal Matching Pursuit [18] is a greedy algorithm that incrementally finds
sparse codes, and K-SVD is also efficient and popular for dictionary learning. Closely related to our
work but on the different problem of recognition, Bo et al. used matching pursuit and K-SVD to
learn features in a coding hierarchy [3] and are extending their approach to RGB-D data [4].
Thanks to the mass production of Kinect, active RGB-D cameras became affordable and were
quickly adopted in vision research and applications. The Kinect pose estimation of Shotton et.
al. used random forests to learn from a huge amount of data [25]. Henry et. al. used RGB-D cameras to scan large environments into 3D models [10]. RGB-D data were also studied in the context
of object recognition [13] and scene labeling [27, 22]. In-depth studies of contour and segmentation
problems for depth data are much in need given the fast growing interests in RGB-D perception.
3 Contour Detection using Sparse Code Gradients
We start by examining the processing pipeline of Global Pb (gPb) [2], a highly influential and
widely used system for contour detection. The gPb contour detection has two stages: local contrast
estimation at multiple scales, and globalization of the local cues using spectral grouping. The core
of the approach lies within its use of local cues in oriented gradients. Originally developed in
[17], this set of features use relatively simple pixel representations (histograms of brightness, color
and textons) and similarity functions (chi-square distance, manually chosen), compared to recent
advances in using rich representations for high-level recognition (e.g. [11, 29, 30, 3]).
We set out to show that both the pixel representation and the aggregation of pixel information in local
neighborhoods can be much improved and, to a large extent, learned from and adapted to input data.
For pixel representation, in Section 3.1 we show how to use Orthogonal Matching Pursuit [18] and
K-SVD [1], efficient sparse coding and dictionary learning algorithms that readily apply to low-level
vision, to extract sparse codes at every pixel. This sparse coding approach can be viewed as similar in spirit to the use of filterbanks but avoids manual choices and thus directly applies to the RGB-D data from Kinect. We show learned dictionaries for a number of channels that exhibit different
in spirit to the use of filterbanks but avoids manual choices and thus directly applies to the RGBD data from Kinect. We show learned dictionaries for a number of channels that exhibit different
characteristics: grayscale/luminance, chromaticity (ab), depth, and surface normal.
In Section 3.2 we show how the pixel-level sparse codes can be integrated through multi-scale pooling into a rich representation of oriented local neighborhoods. By computing oriented gradients
on this high dimensional representation and using a double power transform to code the features
for linear classification, we show a linear SVM can be efficiently and effectively trained for each
orientation to classify contour vs non-contour, yielding local contrast estimates that are much more
accurate than the hand-designed features in gPb.
3.1 Local Sparse Representation of RGB-(D) Patches
K-SVD and Orthogonal Matching Pursuit. K-SVD [1] is a popular dictionary learning algorithm
that generalizes K-Means and learns dictionaries of codewords from unsupervised data. Given a set
of image patches Y = [y_1, · · · , y_n], K-SVD jointly finds a dictionary D = [d_1, · · · , d_m] and an
associated sparse code matrix X = [x_1, · · · , x_n] by minimizing the reconstruction error

min_{D,X} ‖Y − DX‖_F^2   s.t.  ∀i, ‖x_i‖_0 ≤ K;  ∀j, ‖d_j‖_2 = 1        (1)

where ‖·‖_F denotes the Frobenius norm, x_i are the columns of X, the zero-norm ‖·‖_0 counts the
non-zero entries in the sparse code x_i, and K is a predefined sparsity level (number of non-zero entries). This optimization can be solved in an alternating manner. Given the dictionary D, optimizing
the sparse code matrix X can be decoupled to sub-problems, each solved with Orthogonal Matching
Pursuit (OMP) [18], a greedy algorithm for finding sparse codes. Given the codes X, the dictionary
D and its associated sparse coefficients are updated sequentially by singular value decomposition.
For our purpose of representing local patches, the dictionary D has a small size (we use 75 for 5x5
patches) and does not require a lot of sample patches, and it can be learned in a matter of minutes.
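To make the alternating scheme above concrete, the following is a minimal NumPy sketch, not the authors' implementation: omp greedily selects up to K atoms per signal, and the dictionary step shown is a simple least-squares update rather than the atom-by-atom SVD update of true K-SVD; all names (omp, learn_dictionary) are our own.

import numpy as np

def omp(D, y, K):
    # Greedy Orthogonal Matching Pursuit: approximate y with at most K atoms of D.
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(K):
        k = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # re-fit on support
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def learn_dictionary(Y, m=75, K=2, n_iter=10, seed=0):
    # Alternate sparse coding (OMP) with a simplified least-squares dictionary update.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], m))
    D /= np.linalg.norm(D, axis=0)                # unit-norm atoms, as required in (1)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, K) for y in Y.T])
        D = Y @ np.linalg.pinv(X)                 # simplified update (not full K-SVD)
        D /= np.linalg.norm(D, axis=0) + 1e-12    # renormalize columns
    return D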
Once the dictionary D is learned, we again use the Orthogonal Matching Pursuit (OMP) algorithm
to compute sparse codes at every pixel. This can be efficiently done with convolution and a batch
version of the OMP algorithm [24]. For a typical BSDS image of resolution 321x481, the sparse
code extraction is efficient and takes 1–2 seconds.
Sparse Representation of RGB-D Data. One advantage of unsupervised dictionary learning is
that it readily applies to novel sensor data, such as the color and depth frames from a Kinect-style
RGB-D camera. We learn K-SVD dictionaries up to four channels of color and depth: grayscale
for luminance, chromaticity ab for color in the Lab space, depth (distance to camera) and surface
normal (3-dim). The learned dictionaries are visualized in Fig. 2. These dictionaries are interesting
(a) Grayscale
(b) Chromaticity (ab)
(c) Depth
(d) Surface normal
Figure 2: K-SVD dictionaries learned for four different channels: grayscale and chromaticity (in
ab) for an RGB image (a,b), and depth and surface normal for a depth image (c,d). We use a fixed
dictionary size of 75 on 5x5 patches. The ab channel is visualized using a constant luminance of 50.
The 3-dimensional surface normal (xyz) is visualized in RGB (i.e. blue for frontal-parallel surfaces).
to look at and qualitatively distinctive: for example, the surface normal codewords tend to be more
smooth due to flat surfaces, the depth codewords are also more smooth but with speckles, and the
chromaticity codewords respect the opponent color pairs. The channels are coded separately.
3.2 Coding Multi-Scale Neighborhoods for Measuring Contrast
Multi-Scale Pooling over Oriented Half-Discs. Over decades of research on contour detection and
related topics, a number of fundamental observations have been made, repeatedly: (1) contrast is
the key to differentiate contour vs non-contour; (2) orientation is important for respecting contour
continuity; and (3) multi-scale is useful. We do not wish to throw out these principles. Instead, we
seek to adopt these principles for our case of high dimensional representations with sparse codes.
Each pixel is presented with sparse codes extracted from a small patch (5-by-5) around it. To aggregate pixel information, we use oriented half-discs as used in gPb (see an illustration in Fig. 1). Each
orientation is processed separately. For each orientation, at each pixel p and scale s, we define two
half-discs (rectangles) N^a and N^b of size s-by-(2s+1), on both sides of p, rotated to that orientation. For each half-disc N, we use average pooling on non-zero entries (i.e. a hybrid of average and
max pooling) to generate its representation

F(N) = [ ∑_{i∈N} |x_{i1}| / ∑_{i∈N} I_{|x_{i1}|>0} , · · · , ∑_{i∈N} |x_{im}| / ∑_{i∈N} I_{|x_{im}|>0} ]        (2)

where x_{ij} is the j-th entry of the sparse code x_i, and I is the indicator function of whether x_{ij} is nonzero. We rotate the image (after sparse coding) and use integral images for fast computations (on both |x_{ij}| and I_{|x_{ij}|>0}), whose costs are independent of the size of N.
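As an illustration of (2), here is a small NumPy sketch (our own code, not the authors') that pools sparse-code magnitudes over one axis-aligned rectangular half-disc; the rotation of the code map and the integral-image speedup described above are omitted.

import numpy as np

def pool_half_disc(codes, rows, cols):
    # Average-pool |sparse codes| over their non-zero entries, per Eq. (2).
    # codes: (H, W, m) array of per-pixel sparse codes; rows, cols: slices defining N.
    patch = np.abs(codes[rows, cols])         # |x_i| over the region, shape (h, w, m)
    sums = patch.sum(axis=(0, 1))             # summed magnitude per codeword
    counts = (patch > 0).sum(axis=(0, 1))     # number of non-zero entries per codeword
    return sums / np.maximum(counts, 1)       # average over non-zeros only

# toy usage: a 5x5 region on a random code map with dictionary size m = 75
codes = np.random.default_rng(0).standard_normal((20, 20, 75))
F = pool_half_disc(codes, slice(0, 5), slice(0, 5))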
For two oriented half-discs N_s^a and N_s^b at a scale s, we compute a difference (gradient) vector D

D(N_s^a, N_s^b) = |F(N_s^a) − F(N_s^b)|        (3)

where | · | is an element-wise absolute value operation. We divide D(N_s^a, N_s^b) by their norms
‖F(N_s^a)‖ + ‖F(N_s^b)‖ + ε, where ε is a positive number. Since the magnitude of sparse codes varies
over a wide range due to local variations in illumination as well as occlusion, this step makes the
appearance features robust to such variations and increases their discriminative power, as commonly
done in both contour detection and object recognition. This value is not hard to set, and we find a
value of ε = 0.5 is better than, for instance, ε = 0.
At this stage, one could train a classifier on D for each scale to convert it to a scalar value of
contrast, which would resemble the chi-square distance function in gPb. Instead, we find that it is
much better to avoid doing so separately at each scale, but combining multi-scale features in a joint
representation, so as to allow interactions both between codewords and between scales. That is, our
final representation of the contrast at a pixel p is the concatenation of sparse codes pooled at all the
scales s ∈ {1, · · · , S} (we use S = 4):

D_p = [ D(N_1^a, N_1^b), · · · , D(N_S^a, N_S^b); F(N_1^a ∪ N_1^b), · · · , F(N_S^a ∪ N_S^b) ]        (4)

In addition to the difference D, we also include a union term F(N_s^a ∪ N_s^b), which captures the appearance of the whole disc (union of the two half-discs) and is normalized by ‖F(N_s^a)‖ + ‖F(N_s^b)‖ + ε.
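A sketch of how (3) and (4) could assemble the per-pixel descriptor from pooled features, reusing pool_half_disc from the sketch above; the packaging of the inputs and the names are our own assumptions.

import numpy as np

def contrast_terms(Fa, Fb, Fu, eps=0.5):
    # Eq. (3) plus the union term: normalized difference and union at one scale.
    norm = np.linalg.norm(Fa) + np.linalg.norm(Fb) + eps
    return np.abs(Fa - Fb) / norm, Fu / norm

def multiscale_descriptor(per_scale, eps=0.5):
    # Eq. (4): concatenate difference and union terms over all S scales.
    # per_scale: list of (F(N_s^a), F(N_s^b), F(N_s^a U N_s^b)) tuples.
    parts = []
    for Fa, Fb, Fu in per_scale:
        D, U = contrast_terms(Fa, Fb, Fu, eps)
        parts.extend([D, U])
    return np.concatenate(parts)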
Double Power Transform and Linear Classifiers. The concatenated feature D_p (non-negative)
provides multi-scale contrast information for classifying whether p is a contour location for a particular orientation. As D_p is high dimensional (1200 and above in our experiments) and we need to do
it at every pixel and every orientation, we prefer using linear SVMs for both efficient testing as well
as training. Directly learning a linear function on D_p, however, does not work very well. Instead,
we apply a double power transformation to make the features more suitable for linear SVMs
D_p = [ D_p^{α_1}, D_p^{α_2} ]        (5)

where 0 < α_1 < α_2 < 1. Empirically, we find that the double power transform works much better
than either no transform or a single power transform α, as sometimes done in other classification
contexts. Perronnin et al. [19] provided an intuition for why a power transform helps classification,
which "re-normalizes" the distribution of the features into a more Gaussian form. One plausible
intuition for a double power transform is that the optimal exponent α may be different across feature
dimensions. By putting two power transforms of D_p together, we allow the classifier to pick its
linear combination, different for each dimension, during the stage of supervised training.
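The transform in (5) is simple to write down; a sketch using the exponents α1 = 0.25 and α2 = 0.75 reported in Section 4 (the function name is ours):

import numpy as np

def double_power(Dp, a1=0.25, a2=0.75):
    # Eq. (5): concatenate two element-wise power transforms of the (non-negative)
    # descriptor, letting a linear SVM mix the two exponents per dimension.
    return np.concatenate([Dp ** a1, Dp ** a2])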
From Local Contrast to Global Contours. We intentionally only change the local contrast estimation in gPb and keep the other steps fixed. These steps include: (1) the Savitzky-Golay filter
to smooth responses and find peak locations; (2) non-max suppression over orientations; and (3)
optionally, we apply the globalization step in gPb that computes a spectral gradient from the local
gradients and then linearly combines the spectral gradient with the local ones. A sigmoid transform
step is needed to convert the SVM outputs on D_p before computing spectral gradients.
4 Experiments
We use the evaluation framework of, and extensively compare to, the publicly available Global
Pb (gPb) system [2], widely used as the state of the art for contour detection.¹ All the results
reported on gPb are from running the gPb contour detection and evaluation codes (with default
parameters), and accuracies are verified against the published results in [2]. The gPb evaluation
includes a number of criteria, including precision-recall (P/R) curves from contour matching (Fig. 4),
F-measures computed from P/R (Table 1,2,3) with a fixed contour threshold (ODS) or per-image
thresholds (OIS), as well as average precisions (AP) from the P/R curves.
Benchmark Datasets. The main dataset we use is the BSDS500 benchmark [2], an extension of the
original BSDS300 benchmark and commonly used for contour evaluation. It includes 500 natural
images of roughly resolution 321x481, including 200 for training, 100 for validation, and 200 for
testing. We conduct both color and grayscale experiments (where we convert the BSDS500 images
to grayscale and retain the groundtruth). In addition, we also use the MSRC2 and PASCAL2008
segmentation datasets [26, 6], as done in the gPb work [2]. The MSRC2 dataset has 591 images of
resolution 200x300; we randomly choose half for training and half for testing. The PASCAL2008
dataset includes 1023 images in its training and validation sets, roughly of resolution 350x500. We
randomly choose half for training and half for testing.
For RGB-D contour detection, we use the NYU Depth dataset (v2) [27], which includes 1449 pairs
of color and depth frames of resolution 480x640, with groundtruth semantic regions. We choose
60% images for training and 40% for testing, as in its scene labeling setup. The Kinect images are
of lower quality than BSDS, and we resize the frames to 240x320 in our experiments.
Training Sparse Code Gradients. Given sparse codes from K-SVD and Orthogonal Matching Pursuit, we train the Sparse Code Gradients classifiers, one linear SVM per orientation, from sampled
locations. For positive data, we sample groundtruth contour locations and estimate the orientations
at these locations using groundtruth. For negative data, locations and orientations are random. We
subtract the mean from the patches in each data channel. For BSDS500, we typically have 1.5 to 2
¹ In this work we focus on contour detection and do not address how to derive segmentations from contours.
[Figure 3 plots: average precision vs (a) pooling disc size in pixels, comparing single-scale vs accumulated scales; (b) dictionary size, per orientation channel (horizontal, 45-deg, vertical, and 135-deg edges); (c) sparsity level, for gray, color (ab), and gray+color.]
Figure 3: Analysis of our sparse code gradients, using average precision of classification on sampled
boundaries. (a) The effect of single-scale vs multi-scale pooling (accumulated from the smallest).
(b) Accuracy increasing with dictionary size, for four orientation channels. (c) The effect of the
sparsity level K, which exhibits different behavior for grayscale and chromaticity.
                     BSDS500
                     ODS    OIS    AP
local   gPb (gray)   .67    .69    .68
        SCG (gray)   .69    .71    .71
        gPb (color)  .70    .72    .71
        SCG (color)  .72    .74    .75
global  gPb (gray)   .69    .71    .67
        SCG (gray)   .71    .73    .74
        gPb (color)  .71    .74    .72
        SCG (color)  .74    .76    .77

Table 1: F-measure evaluation on the BSDS500 benchmark [2], comparing to gPb on grayscale and color images, both for local contour detection as well as for global detection (i.e. combined with the spectral gradient analysis in [2]).

[Figure 4 plot: precision vs recall on BSDS500; legend: gPb (gray) F=0.69, gPb (color) F=0.71, SCG (gray) F=0.71, SCG (color) F=0.74.]

Figure 4: Precision-recall curves of SCG vs gPb on BSDS500, for grayscale and color images. We make a substantial step beyond the current state of the art toward reaching human-level accuracy (green dot).
million data points. We use 4 spatial scales, at half-disc sizes 2, 4, 7, 25. For a dictionary size of 75
and 4 scales, the feature length for one data channel is 1200. For full RGB-D data, the dimension is
4800. For BSDS500, we train only using the 200 training images. We modify liblinear [7] to take
dense matrices (features are dense after pooling) and single-precision floats.
Looking under the Hood. We empirically analyze a number of settings in our Sparse Code Gradients. In particular, we want to understand how the choices in the local sparse coding affect contour
classification. Fig. 3 shows the effects of multi-scale pooling, dictionary size, and sparsity level
(K). The numbers reported are intermediate results, namely the mean of average precision of four
oriented gradient classifiers (0, 45, 90, 135 degrees) on sampled locations (grayscale unless otherwise
noted, on validation). As a reference, the average precision of gPb on this task is 0.878.
For multi-scale pooling, the single best scale for the half-disc filter is about 4x8, consistent with
the settings in gPb. For accumulated scales (using all the scales from the smallest up to the current
level), the accuracy continues to increase and does not seem to be saturated, suggesting the use of
larger scales. The dictionary size has a minor impact, and there is a small (yet observable) benefit to
use dictionaries larger than 75, particularly for diagonal orientations (45- and 135-deg). The sparsity
level K is a more intriguing issue. In Fig. 3(c), we see that for grayscale only, K = 1 (normalized
nearest neighbor) does quite well; on the other hand, color needs a larger K, possibly because ab is
a nonlinear space. When combining grayscale and color, it seems that we want K to be at least 3. It
also varies with orientation: horizontal and vertical edges require a smaller K than diagonal edges.
(If using K = 1, our final F-measure on BSDS500 is 0.730.)
We also empirically evaluate the double power transform vs single power transform vs no transform.
With no transform, the average precision is 0.865. With a single power transform, the best choice of
the exponent is around 0.4, with average precision 0.884. A double power transform (with exponents
        MSRC2              PASCAL2008
        ODS   OIS   AP     ODS   OIS   AP
gPb     .37   .39   .22    .34   .38   .20
SCG     .43   .43   .33    .37   .41   .27

Table 2: F-measure evaluation comparing our SCG approach to gPb on two additional image datasets with contour groundtruth: MSRC2 [26] and PASCAL2008 [6].

               RGB-D (NYU v2)
               ODS   OIS   AP
gPb (color)    .51   .52   .37
SCG (color)    .55   .57   .46
gPb (depth)    .44   .46   .28
SCG (depth)    .53   .54   .45
gPb (RGB-D)    .53   .54   .40
SCG (RGB-D)    .62   .63   .54

Table 3: F-measure evaluation on RGB-D contour detection using the NYU dataset (v2) [27]. We compare to gPb using color image only, depth only, as well as color+depth.
Figure 5: Examples from the BSDS500 dataset [2]. (Top) Image; (Middle) gPb output; (Bottom)
SCG output (this work). Our SCG operator learns to preserve fine details (e.g. windmills, faces, fish
fins) while at the same time achieving higher precision on large-scale contours (e.g. back of zebras).
(Contours are shown in double width for the sake of visualization.)
0.25 and 0.75, which can be computed through sqrt) improves the average precision to 0.900, which
translates to a large improvement in contour detection accuracy.
Image Benchmarking Results. In Table 1 and Fig. 4 we show the precision-recall of our Sparse
Code Gradients vs gPb on the BSDS500 benchmark. We conduct four sets of experiments, using
color or grayscale images, with or without the globalization component (for which we use exactly
the same setup as in gPb). Using Sparse Code Gradients leads to a significant improvement in
accuracy in all four cases. The local version of our SCG operator, i.e. only using local contrast, is
already better (F = 0.72) than gPb with globalization (F = 0.71). The full version, local SCG plus
spectral gradient (computed from local SCG), reaches an F-measure of 0.739, a large step forward
from gPb, as seen in the precision-recall curves in Fig. 4. On BSDS300, our F-measure is 0.715.
We observe that SCG seems to pick up fine-scale details much better than gPb, hence the much
higher recall rate, while maintaining higher precision over the entire range. This can be seen in the
examples shown in Fig. 5. While our scale range is similar to that of gPb, the multi-scale pooling
scheme allows the flexibility of learning the balance of scales separately for each code word, which
may help in detecting the details. The supplemental material contains more comparison examples.
In Table 2 we show the benchmarking results for two additional datasets, MSRC2 and PASCAL2008. Again we observe large improvements in accuracy, in spite of the somewhat different
natures of the scenes in these datasets. The improvement on MSRC2 is much larger, partly because
the images are smaller, hence the contours are smaller in scale and may be over-smoothed in gPb.
As for computational cost, using integral images, local SCG takes ~100 seconds to compute on a
single-thread Intel Core i5-2500 CPU on a BSDS image. It is slower than but comparable to the
highly optimized multi-thread C++ implementation of gPb (~60 seconds).
7
Figure 6: Examples of RGB-D contour detection on the NYU dataset (v2) [27]. The five panels
are: input image, input depth, image-only contours, depth-only contours, and color+depth contours.
Color is good at picking up details such as photos on the wall, and depth is useful where color is
uniform (e.g. corner of a room, row 1) or illumination is poor (e.g. chair, row 2).
RGB-D Contour Detection. We use the second version of the NYU Depth Dataset [27], which
has higher quality groundtruth than the first version. Median filtering is applied to remove double
contours (boundaries from two adjacent regions) within 3 pixels. For RGB-D baseline, we use a
simple adaptation of gPb: the depth values are in meters and used directly as a grayscale image
in gPb gradient computation. We use a linear combination to put (soft) color and depth gradients
together in gPb before non-max suppression, with the weight set from validation.
Table 3 lists the precision-recall evaluations of SCG vs gPb for RGB-D contour detection. All
the SCG settings (such as scales and dictionary sizes) are kept the same as for BSDS. SCG again
outperforms gPb in all the cases. In particular, we are much better for depth-only contours, for
which gPb is not designed. Our approach learns the low-level representations of depth data fully
automatically and does not require any manual tweaking. We also achieve a much larger boost by
combining color and depth, demonstrating that color and depth channels contain complementary
information and are both critical for RGB-D contour detection. Qualitatively, it is easy to see that
RGB-D combines the strengths of color and depth and is a promising direction for contour and
segmentation tasks and indoor scene analysis in general [22]. Fig. 6 shows a few examples of RGBD contours from our SCG operator. There are plenty of such cases where color alone or depth alone
would fail to extract contours for meaningful parts of the scenes, and color+depth would succeed.
5 Discussions
In this work we successfully showed how to learn and code local representations to extract contours
in natural images. Our approach combined the proven concept of oriented gradients with powerful
representations that are automatically learned through sparse coding. Sparse Code Gradients (SCG)
performed significantly better than hand-designed features that were in use for a decade, and pushed
contour detection much closer to human-level accuracy as illustrated on the BSDS500 benchmark.
Compared to hand-designed features (e.g. Global Pb [2]), we maintain the high dimensional representation from pooling oriented neighborhoods and do not collapse them prematurely (such as
computing chi-square distance at each scale). This passes a richer set of information into learning contour classification, where a double power transform effectively codes the features for linear
SVMs. Compared to previous learning approaches (e.g. discriminative dictionaries in [16]), our
uses of multi-scale pooling and oriented gradients lead to much higher classification accuracies.
Our work opens up future possibilities for learning contour detection and segmentation. As we illustrated, there is a lot of information locally that is waiting to be extracted, and a learning approach
such as sparse coding provides a principled way to do so, where rich representations can be automatically constructed and adapted. This is particularly important for novel sensor data such as RGB-D,
for which we have less understanding but increasingly more need.
References
[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. PAMI, 33(5):898–916, 2011.
[3] L. Bo, X. Ren, and D. Fox. Hierarchical matching pursuit for image classification: Architecture and fast algorithms. In Advances in Neural Information Processing Systems 24, 2011.
[4] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In International Symposium on Experimental Robotics (ISER), 2012.
[5] P. Dollar, Z. Tu, and S. Belongie. Supervised learning of edges and object boundaries. In CVPR, volume 2, pages 1964–1971, 2006.
[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2008 (VOC2008). http://www.pascal-network.org/challenges/VOC/voc2008/.
[7] R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[8] V. Ferrari, T. Tuytelaars, and L. V. Gool. Object detection by contour segment networks. In ECCV, pages 14–28, 2006.
[9] C. Gu, J. Lim, P. Arbeláez, and J. Malik. Recognition using regions. In CVPR, pages 1030–1037, 2009.
[10] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments. In International Symposium on Experimental Robotics (ISER), 2010.
[11] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[12] I. Kokkinos. Highly accurate boundary detection and grouping. In CVPR, pages 2520–2527, 2010.
[13] K. Lai, L. Bo, X. Ren, and D. Fox. A large-scale hierarchical multi-view RGB-D object dataset. In ICRA, pages 1817–1824, 2011.
[14] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, pages 609–616, 2009.
[15] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Discriminative learned dictionaries for local image analysis. In CVPR, pages 1–8, 2008.
[16] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative sparse image models for class-specific edge detection and image interpretation. In ECCV, pages 43–56, 2008.
[17] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using brightness and texture. In Advances in Neural Information Processing Systems 15, 2002.
[18] Y. Pati, R. Rezaiifar, and P. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, pages 40–44, 1993.
[19] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In ECCV, pages 143–156, 2010.
[20] M. Prasad, A. Zisserman, A. Fitzgibbon, M. Kumar, and P. Torr. Learning class-specific edges for object detection and segmentation. Computer Vision, Graphics and Image Processing, pages 94–105, 2006.
[21] X. Ren. Multi-scale improves boundary detection in natural images. In ECCV, pages 533–545, 2008.
[22] X. Ren, L. Bo, and D. Fox. RGB-(D) scene labeling: features and algorithms. In CVPR, pages 2759–2766, 2012.
[23] X. Ren, C. Fowlkes, and J. Malik. Cue integration in figure/ground labeling. In Advances in Neural Information Processing Systems 18, 2005.
[24] R. Rubinstein, M. Zibulevsky, and M. Elad. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. Technical report, CS Technion, 2008.
[25] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from single depth images. In CVPR, 2011.
[26] J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV, 2006.
[27] N. Silberman and R. Fergus. Indoor scene segmentation using a structured light sensor. In IEEE Workshop on 3D Representation and Recognition (3dRR), 2011.
[28] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Trans. PAMI, 31(2):210–227, 2009.
[29] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In CVPR, pages 1794–1801, 2009.
[30] K. Yu, Y. Lin, and J. Lafferty. Learning image representations from the pixel level via hierarchical sparse coding. In CVPR, pages 1713–1720, 2011.
[31] Q. Zhu, G. Song, and J. Shi. Untangling cycles for contour grouping. In ICCV, 2007.
|
|
4,184 | 4,788 |
Joint Modeling of a Matrix with Associated Text
via Latent Binary Features
Lawrence Carin
Duke University
[email protected]
XianXing Zhang
Duke University
[email protected]
Abstract
A new methodology is developed for joint analysis of a matrix and accompanying
documents, with the documents associated with the matrix rows/columns. The
documents are modeled with a focused topic model, inferring interpretable latent
binary features for each document. A new matrix decomposition is developed,
with latent binary features associated with the rows/columns, and with imposition
of a low-rank constraint. The matrix decomposition and topic model are coupled
by sharing the latent binary feature vectors associated with each. The model is
applied to roll-call data, with the associated documents defined by the legislation.
Advantages of the proposed model are demonstrated for prediction of votes on
a new piece of legislation, based only on the observed text of legislation. The
coupling of the text and legislation is also shown to yield insight into the properties
of the matrix decomposition for roll-call data.
1 Introduction
The analysis of legislative roll-call data provides an interesting setting for recent developments in
the joint analysis of matrices and text [23, 8]. While the roll-call data matrix is typically binary,
the modeling framework is general, in that it may be readily extended to categorical, integer or
real observations. The problem is made interesting because, in addition to the matrix of votes, we
have access to the text of the legislation (e.g., characteristic of the columns of the matrix, with each
column representing a piece of legislation and each row a legislator). While roll-call data provides
an interesting proving ground, the basic methodologies are applicable to any setting for which one
is interested in analysis of matrices, and there is text associated with the rows or columns (e.g., the
text may correspond to content on a website; each column of the matrix may represent a website,
and each row an individual, with the matrix representing number of visits).
The analysis of roll-call data is of significant interest to political scientists [15, 6]. In most such
research the binary data are typically analyzed with a probit or logistic link function, and the underlying real matrix is assumed to have rank one. Each legislator and piece of legislation exists at a
point along this one dimension, which is interpreted as characterizing a (one-dimensional) political
philosophy (e.g., from "conservative" to "liberal").
Roll-call data analysis have principally been interested in inferring the position of legislators in
the one-dimensional latent space, with this dictated in part by the fact that the ability to perform
prediction is limited. As in much matrix-completion research [17, 18], one typically can only infer
votes that are missing at random. It is not possible to predict the votes of legislators on a new piece
of legislation (for which, for example, an entire column of votes is missing). This has motivated the
joint analysis of roll-call votes and the associated legislation [23, 8]: by modeling the latent space
of the text legislation with a topic model, and making connections between topics and the latent
space of the matrix decomposition, one may infer votes of an entire missing column of the matrix,
assuming access to the text associated with that new legislation.
While the research in [23, 8] showed the potential of joint text-matrix analysis, there were several
open questions that motivated this paper. In [23, 8] a latent Dirichlet allocation (LDA) [5] topic model
was employed for the text. It has been demonstrated that LDA yields inferior perplexity scores when
compared to modern Bayesian topic models, such as the focused topic model (FTM) [24]. Another
significant issue with [23, 8] concerns how the topic (text) and matrix models are coupled. In [23, 8]
the frequency with which a given topic is utilized in the text legislation is used to infer the associated
matrix parameters (e.g., to infer the latent feature vector associated with the respective column of
the matrix). This is undesirable, because the frequency with which a topic is used in the document
is characteristic of the style of writing: there may be a topic that is only mentioned briefly in the
document, but that is critical to the outcome of the vote, while other topics may not impact the vote
but are discussed frequently in the legislation. We also wish to move beyond the rank-one matrix
assumption in [15, 6, 8].
Motivated by these limitations, in this paper the FTM is employed to model the text of legislation,
with each piece of legislation characterized by a latent binary vector that defines the sparse set of
associated topics. A new probabilistic low-rank matrix decomposition is developed for the votes,
utilizing latent binary features; this leverages the merits of what were previously two distinct lines
of matrix factorization methods [13, 17]. Unlike previous approaches, the rank is not fixed a priori
but inferred adaptively, with theoretical justifications. For a piece of legislation, the latent binary
feature vectors for the FTM and matrix decomposition are shared, yielding a new means of jointly
modeling text and matrices. This linkage between text and matrices is innovative as: (i) it is based
on whether a topic is relevant to a document/legislation, not based on the frequency with which the
topic is used in the document (i.e., not based on the style of writing); (ii) it enables interpretation of
the underlying latent binary features [13, 9] based upon available text data. The rest of the paper is
organized as follows. Section 2 first reviews the focused topic model, then introduces a new low-rank matrix decomposition method, and the joint model of the two. Section 3 discusses posterior
inference. In Section 4 quantitative results are presented for prediction of columns of roll-call votes
based on the associated text legislation, and the joint model is demonstrated qualitatively to infer
meaning/insight for the characteristics of legislation and voting patterns, and Section 5 concludes.
2 Model and Analysis
2.1 Focused topic modeling
Focused topic models (FTM) [24] were developed to address a limitation of related models based
on the hierarchical Dirichlet process (HDP) [21]: the HDP shares a set of ?global? topics across
all documents, and each topic is in general manifested with non-zero probability in each document.
This property of HDP tends to yield less "focused" or descriptive topics. It is desirable to share
a set of topics across all documents, but with the additional constraint that a given document only
utilizes a small subset of the topics; this tends to yield more descriptive/focused topics, characteristic
of detailed properties of the documents. An FTM is manifested as a compound linkage of the Indian
buffet process (IBP) [10] and the Dirichlet process (DP). Each document draws latent binary features
from an IBP to select a finite subset of atoms/topics from the DP. In the model details, the DP is
represented in terms of a normalized gamma process [7] with weighting by the binary feature vector,
constituting a document-specific topic distribution in which only a subset of topics are manifested
with non-zero probability.
The key components of the FTM are summarized as follows [24]:
b_{jt} | π_t ~ Bernoulli(b_{jt} | π_t),   π_t = ∏_{l=1}^{t} ν_l,   ν_t | α_r ~ Beta(ν_t | α_r, 1)
θ_j | {b_{j:}, φ} ~ Dirichlet(θ_j | b_{j:} ∘ φ),   φ_t | γ ~ Gamma(φ_t | γ, 1)        (1)
where b_{jt}¹ ∈ {0, 1} indicates if document j uses topic t, which is modeled as drawn from an IBP
parameterized by α_r under the stick-breaking construction [20], as shown in the first line of (1).
φ = {φ_t}_{t=1}^{K_r} represents the relative mass on K_r topics (K_r could be infinite in principle); φ is
shared across all documents, analogous to the "top layer" of the HDP. θ_j is the topic distribution
for the jth document, and the expression b_{j:} ∘ φ denotes the pointwise vector product between
¹ Throughout this paper, the notation b_{ij} denotes the entry located at the i-th row and j-th column of matrix B; b_{j:} and b_{:k} denote the j-th row and k-th column of B, respectively.
b_{j:} and φ, thereby selecting a subset of topics for document j (those for which the corresponding
components of b_{j:} are non-zero). The rest of the FTM is constructed similar to LDA [5], where for
each token n in document j, a topic indicator is drawn as z_{jn} | θ_j ~ Mult(z_{jn} | 1, θ_j). Conditional
on z_{jn} and the topics {β_k}_{k=1}^{K_r}, a word is drawn as w_{jn} | z_{jn}, {β_k}_{k=1}^{K_r} ~ Mult(w_{jn} | 1, β_{z_{jn}}), where
β_k | η ~ Dirichlet(β_k | η).
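For illustration, a minimal generative sketch of (1) and the token-level draws, with a fixed truncation K_r and our own variable names; this is a toy sampler, not the authors' inference code.

import numpy as np
rng = np.random.default_rng(0)

K_r, V, alpha_r, gamma, eta = 20, 1000, 2.0, 1.0, 0.1   # assumed toy settings

nu = rng.beta(alpha_r, 1.0, size=K_r)                   # stick-breaking fractions
pi = np.cumprod(nu)                                     # pi_t = prod_{l<=t} nu_l
phi = rng.gamma(gamma, 1.0, size=K_r)                   # relative topic masses
beta = rng.dirichlet(np.full(V, eta), size=K_r)         # topics over the vocabulary

def generate_document(n_tokens):
    b = rng.random(K_r) < pi                 # b_jt ~ Bernoulli(pi_t): topics used
    if not b.any():
        b[int(np.argmax(pi))] = True         # toy safeguard: keep at least one topic
    theta = rng.dirichlet(b * phi + 1e-12)   # focused topic proportions
    z = rng.choice(K_r, size=n_tokens, p=theta)          # token-level topic indicators
    w = np.array([rng.choice(V, p=beta[k]) for k in z])  # words
    return b, w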
Although in (1) b_{j:} is mainly designed to map the global prevalence of topics across the corpus,
φ, to a within-document proportion of topic usage, θ_j, the latent features b_{j:} are informative in their
own right, as they indicate which subset of topics is relevant to a given document. The document-dependent topic usage b_{j:} may be more important than θ_j when characterizing the meaning of a
document: θ_j specifies the frequency with which each of the selected topics is utilized in document
j (this is related to writing style, verbosity or parsimony, and less related to meaning); it may be
more important to just know what underlying topics are used in the document, characterized by b_{j:}.
We therefore make the linkage between documents and an associated matrix via the b_{j:}, not based
on θ_j (where [23, 8] base the document-matrix linkage via θ_j or its empirical estimate).
2.2 Matrix factorization with binary latent factors and a low-rank assumption
Binary matrix factorization (BMF) [13, 14] is a general framework in which a real latent matrix $\mathbf{X} \in \mathbb{R}^{P \times N}$ is decomposed as $\mathbf{X} = \mathbf{L}\mathbf{H}\mathbf{R}^T$, where $\mathbf{L} \in \{0,1\}^{P \times K_l}$ and $\mathbf{R} \in \{0,1\}^{N \times K_r}$ are binary, and $\mathbf{H} \in \mathbb{R}^{K_l \times K_r}$ is real. The rows of $\mathbf{L}$ and $\mathbf{R}$ are modeled via IBPs, parameterized by $\alpha_l$ and $\alpha_r$ respectively, and $K_l$ and $K_r$ are the truncation levels for the IBPs, which again can be infinite in principle. The observed matrix is $\mathbf{Y}$, which may be real, binary, or categorical [12]. The observations are modeled in an element-wise fashion: $y_{ij} = f(x_{ij})$. We focus on binary observed matrices, $\mathbf{Y} \in \{0,1\}^{P \times N}$, and utilize $f(\cdot)$ as a probit model [2]:

$y_{ij} = \begin{cases} 1 & \text{if } \hat{x}_{ij} \geq 0 \\ 0 & \text{if } \hat{x}_{ij} < 0 \end{cases}$   (2)

with $\hat{x}_{ij} = x_{ij} + \epsilon_{ij}$, where $\epsilon_{ij} \sim \mathcal{N}(0, 1)$.
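The probit link in (2) is equivalent to $P(y_{ij} = 1 \mid x_{ij}) = \Phi(x_{ij})$, with $\Phi$ the standard normal CDF, since thresholding $x_{ij} + \epsilon_{ij}$ at zero marginalizes the Gaussian noise. A small hedged sketch (illustrative only, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 4))                          # latent real matrix X
y = (x + rng.normal(size=x.shape) >= 0).astype(int)  # (2): threshold at zero
p_one = norm.cdf(x)                                  # implied P(y_ij = 1 | x_ij)
```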
We generalize the BMF framework by imposing that $\mathbf{H}$ is low-rank. Specifically, we impose the rank-1 expansion $\mathbf{H} = \sum_{k=1}^{K_c} \mathbf{u}_{:k}\mathbf{v}_{:k}^T$, where $\mathbf{u}_{:k}$ and $\mathbf{v}_{:k}$ are column vectors (thus their outer product is a rank-1 matrix), each of them modeled here by a Gaussian distribution:

$\mathbf{u}_{:k} \sim \mathcal{N}(\mathbf{u}_{:k} \mid \mathbf{0}, \mathbf{I}_{K_l}), \quad \mathbf{v}_{:k} \sim \mathcal{N}(\mathbf{v}_{:k} \mid \mathbf{0}, \mathbf{I}_{K_r})$   (3)

and $K_c$ is the number of such rank-1 matrices, with $K_c < \min(K_l, K_r)$; i.e., $\mathbf{H}$ is low-rank.

To motivate this model, consider the representation $\mathbf{H} = \sum_{k=1}^{K_c} \mathbf{u}_{:k}\mathbf{v}_{:k}^T$ in the decomposition $\mathbf{X} = \mathbf{L}\mathbf{H}\mathbf{R}^T$, which implies $\mathbf{X} = \sum_{k=1}^{K_c} (\mathbf{L}\mathbf{u}_{:k})(\mathbf{R}\mathbf{v}_{:k})^T$. Therefore, we may also express $\mathbf{X} = \mathbf{\Phi}\mathbf{\Psi}^T$, with $\mathbf{\Phi} \in \mathbb{R}^{P \times K_c}$ and $\mathbf{\Psi} \in \mathbb{R}^{N \times K_c}$; the $k$th column of $\mathbf{\Phi}$ is defined by $\mathbf{L}\mathbf{u}_{:k}$ and the $k$th column of $\mathbf{\Psi}$ by $\mathbf{R}\mathbf{v}_{:k}$. Consequently, the low-rank assumption for $\mathbf{H}$ yields a low-rank model $\mathbf{X} = \mathbf{\Phi}\mathbf{\Psi}^T$, precisely as in [17, 18]. Thus the definition of $\mathbf{\Phi}$ and $\mathbf{\Psi}$ via the binary matrices $\mathbf{L}$ and $\mathbf{R}$ and the linkage matrix $\mathbf{H}$ merges two previously distinct lines of matrix factorization methods. In the context of the application considered here, the decomposition $\mathbf{X} = \mathbf{L}\mathbf{H}\mathbf{R}^T$ will prove convenient, as we may share the binary matrices $\mathbf{L}$ or $\mathbf{R}$ among the topic usage of available documents. The binary features in $\mathbf{L}$ and $\mathbf{R}$ are therefore characteristic of the presence/absence of underlying topics, or related latent processes, and the matrix $\mathbf{H}$ provides the mapping of how these binary features map to observed data.
However, how to specify $K_c$ remains an open question for the above low-rank construction. As a contribution of this paper, we provide a new means of imposing a low-rank model within the prior. We model the "significance" of each rank-1 term in the expansion explicitly, using a stochastic process $\{s_k\}_{k=1}^{K_c}$; therefore $\mathbf{H}$ can be decomposed as $\mathbf{H} = \sum_{k=1}^{K_c} s_k \mathbf{u}_{:k}\mathbf{v}_{:k}^T$, where $K_c$ can be infinite in principle. As a result, the hierarchical representation for the latent matrix $\mathbf{X}$ in the probit model can be summarized as:

$\hat{x}_{ij} \mid \mathbf{l}_{i:}, \mathbf{r}_{j:}, \{\mathbf{u}_{:k}, \mathbf{v}_{:k}, s_k\}_{k=1}^{K_c} \sim \mathcal{N}\left(\hat{x}_{ij} \,\Big|\, \sum_{k=1}^{K_c} s_k (\mathbf{l}_{i:}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k}),\; 1\right)$   (4)
Note that $s_k$ in (4) is similar in spirit to a singular value in the SVD. Intuitively, we wish $|s_k|$ to decrease quickly as the index $k$ increases, so that the rank-1 matrices with large indices have negligible impact on (4); $K_c$ therefore plays a role similar to the truncation level in the stick-breaking constructions for the DP [11] and IBP [20]. To achieve this end, we model each $s_k$ as a Gaussian random variable with a conjugate multiplicative gamma process (MGP) placed on its precision parameter:

$s_k \mid \tau_k \sim \mathcal{N}(s_k \mid 0, \tau_k^{-1}), \quad \tau_k = \prod_{l=1}^{k} \delta_l, \quad \delta_l \mid \alpha_c \sim \mathrm{Gamma}(\delta_l \mid \alpha_c, 1)$   (5)
The MGP was originally proposed in [3] for learning sparse factor models and further extended to tree-structured sparse factor models [26] and the change-point stick-breaking process [25]; one of its properties is that it increasingly shrinks $s_k$ towards zero as the index $k$ increases. Next we make the above intuition rigorous. Theorem 1 below formally states that if $s_k$ is modeled by the MGP as in (5), the rank-1 expansion in (4) converges as $K_c \to \infty$.
Theorem 1. When $\alpha_c > 1$, the sequence $\sum_{k=1}^{K_c} s_k (\mathbf{l}_{i:}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k})^T$ converges in $\ell_2$ as $K_c \to \infty$.
Although in the MGP $K_c$ is unbounded [3], for computational considerations we would like to truncate it to a finite value $K_c \ll \max(P, N)$ without much loss of information. As justification, the following theoretical bound is obtained, in a manner similar to its counterparts for the DP [11].

Lemma 1. Denoting $M_{ij}^{K_c} = \sum_{k=K_c+1}^{\infty} s_k (\mathbf{l}_{i:}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k})^T$, then $\forall \epsilon > 0$ we have $p\{(M_{ij}^{K_c})^2 > \epsilon\} < \frac{ab(1 - 1/\alpha_c)^{K_c}}{\epsilon}$, where $a = \max_k E(\mathbf{l}_{i:}\mathbf{u}_{:k})^2$ and $b = \max_k E(\mathbf{r}_{j:}\mathbf{v}_{:k})^2$.
Lemma 1 states that, when $\alpha_c > 1$, the approximation error introduced by the truncation level $K_c$ decays exponentially fast to 0 as $K_c \to \infty$. In Section 3 an MCMC method is developed to choose $K_c$ adaptively at each iteration, which relieves us of fixing it a priori. The proofs of Theorem 1 and Lemma 1 can be found in the Supplemental Material.
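The shrinkage behavior behind Theorem 1 and Lemma 1 is easy to visualize by forward-sampling the MGP prior (5). Below is a minimal sketch; the truncation level and seed are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
K_c, alpha_c = 20, 3.0
delta = rng.gamma(alpha_c, 1.0, size=K_c)   # delta_l ~ Gamma(alpha_c, 1)
tau = np.cumprod(delta)                     # tau_k = prod_{l<=k} delta_l
s = rng.normal(0.0, 1.0 / np.sqrt(tau))     # s_k ~ N(0, 1/tau_k)
print(np.abs(s).round(4))                   # |s_k| tends to decay with k
```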
2.3 Joint learning of FTM and BMF
Via the FTM and BMF frameworks of the previous subsections, each piece of legislation $j$ is represented by two latent binary feature vectors, $b_{j:}$ and $r_{j:}$. To jointly model the matrix of votes with the associated text of legislation, a natural choice is to impose $b_{j:} = r_{j:}$. As a result, the full joint model can be specified by equations (1)-(5), with $b_{jt}$ in (1) replaced by $r_{jt}$. Note that the joint model links
the topics characteristic of the text, to the latent binary features characteristic of legislation in the
matrix decomposition; such linkage leverages the statistical strength of the two data sources across
the latent variables of the joint model during posterior inference. A graphical representation of the
joint model can be found in the Supplemental Material.
In the context of the model for $\mathbf{Y} = f(\mathbf{X})$, with $\mathbf{X} = \mathbf{L}\mathbf{H}\mathbf{R}^T$, if one were to learn $\mathbf{L}$ and $\mathbf{H}$ based upon available training data, then a new legislation $\mathbf{y}_{:N+1}$ could be predicted if we had access to $\mathbf{r}_{:N+1}$. Via the construction above, not only do we gain a predictive advantage, because the new legislation's latent binary features $\mathbf{r}_{:N+1}$ can be obtained from modeling its document as in (1), but the model also provides powerful interpretative insights. Specifically, the topics inferred
from the documents may be used to interpret the latent binary features associated with the matrix
factorization. These advantages will be demonstrated through experiments on legislative roll-call
data in Section 4.
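As a concrete illustration of this prediction step, the sketch below scores a new bill from one posterior sample; it assumes $\mathbf{L}$, $\mathbf{H}$ and the text-inferred binary features are already available, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def predict_votes(L, H, r_new):
    """P(y_{i,new} = 1) for each legislator i, under one posterior sample.

    L: (P, K_l) binary legislator features; H: (K_l, K_r) linkage matrix;
    r_new: (K_r,) binary topic-usage features inferred from the bill's text.
    """
    x_new = L @ H @ r_new      # latent scores x_{:,new} = L H r_new
    return norm.cdf(x_new)     # probit link from (2)
```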
2.4 Related work
The ideal point topic model (IPTM) was developed in [8], where the supervised latent Dirichlet allocation (sLDA) [4] model was used to link empirical topic-usage frequencies to the latent factors via regression. In that work the dimension of the latent factors was set to 1, e.g., fixing $K_c = 1$ in our nomenclature. In [23] the authors proposed to jointly analyze the voting matrix and the associated text through a mixture model, where each legislation's latent feature factor is clustered to a mixture component coupled with that legislation's document topic distribution $\theta$. Note that in their case each piece of legislation can only belong to one cluster, while in our case the latent binary features for each document can be effectively treated as being grouped into multiple clusters [13] (a mixed-membership model, manifested in terms of the binary feature vectors). Similar research linking collaborative filtering and topic models can also be found in web content recommendation [1], movie recommendation [19], and scientific paper recommendation [22]. None of these methods makes use of the binary indicators as the characterization of associated documents; they instead perform the linking via the topic distribution $\theta$ and the latent (real) features in different ways.
3 Posterior Inference
We use Gibbs sampling for posterior inference over the latent variables; only the sampling equations that are unique to this model are discussed here. The rest are similar to those in [24, 13]. In the following we use $p(\cdot \mid -)$ to denote the conditional posterior of one variable given all others.
Sampling $\{\mathbf{v}_{:k}, \mathbf{u}_{:k}\}_{k=1:K_c}$. Based on (3) and (4) the conditional posterior of $\mathbf{v}_{:k}$ can be written as $p(\mathbf{v}_{:k} \mid -) \propto \prod_{j=1}^{N} \mathcal{N}(\hat{\mathbf{x}}_{:j} \mid \sum_{k=1}^{K_c} s_k (\mathbf{L}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k}), 1)\,\mathcal{N}(\mathbf{v}_{:k} \mid \mathbf{0}, \mathbf{I}_{K_r})$. It can be shown that $p(\mathbf{v}_{:k} \mid -) = \mathcal{N}(\mathbf{v}_{:k} \mid \boldsymbol{\mu}_{v_{:k}}, \boldsymbol{\Sigma}_{v_{:k}})$, with mean $\boldsymbol{\mu}_{v_{:k}} = s_k \boldsymbol{\Sigma}_{v_{:k}} \sum_{j=1}^{N} (\mathbf{L}\mathbf{u}_{:k}\mathbf{r}_{j:})^T \hat{\mathbf{x}}_{:j}^{-k}$ and covariance matrix $\boldsymbol{\Sigma}_{v_{:k}} = [\mathbf{I}_{K_r} + s_k^2 \sum_{j=1}^{N} (\mathbf{L}\mathbf{u}_{:k}\mathbf{r}_{j:})^T (\mathbf{L}\mathbf{u}_{:k}\mathbf{r}_{j:})]^{-1}$, where $\hat{\mathbf{x}}_{:j}^{-k} = \hat{\mathbf{x}}_{:j} - \mathbf{L}\mathbf{H}\mathbf{r}_{j:}^T + s_k \mathbf{L}\mathbf{u}_{:k}(\mathbf{r}_{j:}\mathbf{v}_{:k})$. By repeating the above procedure, $p(\mathbf{u}_{:k} \mid -)$ can be derived similarly.
Sampling $\{s_k\}_{k=1:K_c}$. Based on (4) and (5) the conditional posterior of $s_k$ can be written as $p(s_k \mid -) \propto \prod_{j=1}^{N} \mathcal{N}(\hat{\mathbf{x}}_{:j} \mid \sum_{k=1}^{K_c} s_k (\mathbf{L}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k}), 1)\,\mathcal{N}(s_k \mid 0, \tau_k^{-1})$. It can be shown that $p(s_k \mid -) = \mathcal{N}(s_k \mid \mu_{s_k}, \sigma_{s_k}^2)$, with mean $\mu_{s_k} = \sigma_{s_k}^2 \sum_{j=1}^{N} ((\mathbf{L}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k}))^T \hat{\mathbf{x}}_{:j}^{-k}$ and variance $\sigma_{s_k}^2 = 1 / (\tau_k + \sum_{j=1}^{N} ((\mathbf{L}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k}))^T ((\mathbf{L}\mathbf{u}_{:k})(\mathbf{r}_{j:}\mathbf{v}_{:k})))$.
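A sketch of this conditional update for $s_k$ in code (a paraphrase of the formulas above, not released code; array shapes and names are our assumptions):

```python
import numpy as np

def sample_s_k(rng, L, u_k, R_bin, v_k, xhat_mk, tau_k):
    """One Gibbs draw of s_k.

    L: (P, K_l); u_k: (K_l,); R_bin: (N, K_r); v_k: (K_r,);
    xhat_mk: (P, N) residual matrix with columns xhat_{:j}^{-k};
    tau_k: MGP precision of s_k.
    """
    a = L @ u_k                      # (P,)  = L u_{:k}
    c = R_bin @ v_k                  # (N,)  c_j = r_{j:} v_{:k}
    F = np.outer(a, c)               # F[:, j] = (L u_{:k})(r_{j:} v_{:k})
    var = 1.0 / (tau_k + np.sum(F * F))        # sigma_{s_k}^2
    mean = var * np.sum(F * xhat_mk)           # mu_{s_k}
    return rng.normal(mean, np.sqrt(var))
```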
Sampling $\{\delta_k, \tau_k\}_{k=1:K_c}$. Based on (5), given a fixed truncation level $K_c$, $\delta_k$ can be sampled directly from its posterior distribution: $p(\delta_k \mid -) = \mathrm{Gamma}\left(\delta_k \,\Big|\, \alpha_c + \frac{K_c - k + 1}{2},\; 1 + \frac{1}{2}\sum_{l=k}^{K_c} \tau_l^{(k)} s_l^2\right)$, where $\tau_l^{(k)} = \prod_{t=1, t \neq k}^{l} \delta_t$. $\tau_k$ can then be reconstructed from $\delta_{1:k}$ as in (5).
Sampling $\{r_{jt}\}_{j=1:N, t=1:K_r}$. Similar to the derivation in [24], $p(r_{jt} = 1 \mid -) = 1$ if $N_{jt} > 0$, where $N_{jt}$ denotes the number of times document $j$ uses topic $t$. When $N_{jt} = 0$, based on (1) and (4) the conditional posterior of $r_{jt}$ can be written as $p(r_{jt} = 1 \mid -) \propto \pi_t \exp\{-\frac{1}{2}[(\mathbf{L}\mathbf{h}_{t:}^T)^T(\mathbf{L}\mathbf{h}_{t:}^T) - 2(\mathbf{L}\mathbf{h}_{t:}^T)^T \hat{\mathbf{x}}_{:j}]\}$ and $p(r_{jt} = 0 \mid -) \propto 1 - \pi_t$, where $\mathbf{h}_{t:}$ represents the $t$th row of $\mathbf{H} = \sum_{k=1}^{K_c} s_k \mathbf{u}_{:k}\mathbf{v}_{:k}^T$. $\{l_{it}\}_{i=1:P, t=1:K_l}$ is sampled as described in [13].
Adaptive sampler for the MGP. The above Gibbs sampler needs a predefined truncation level $K_c$. In [3, 26] the authors proposed an adaptive sampler that tunes $K_c$ as the sampler progresses, with convergence of the chain guaranteed [16]. Specifically, the adaptation procedure is triggered with probability $p(t) = \exp(z_0 + z_1 t)$ at the $t$th iteration, with $z_0, z_1$ chosen so that adaptation occurs frequently at the beginning of the chain but decays exponentially fast. When adaptation is triggered at the $t$th iteration, let $q_\zeta(t) = \{k : \max_{i,j} |s_k (\mathbf{L}\mathbf{u}_{:k}\mathbf{v}_{:k}^T\mathbf{R}^T)_{ij}| \leq \zeta\}$ denote the indices of the rank-1 matrices whose maximum-magnitude entry is below some pre-defined threshold $\zeta$; these have a negligible contribution at the $t$th iteration, so they are deleted and $K_c$ decreases. On the other hand, if $q_\zeta(t)$ is empty then more rank-1 matrices may be needed; in this case we increase $K_c$ by one and draw $\mathbf{u}_{:K_c}, \mathbf{v}_{:K_c}$ from their respective prior distributions.
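One adaptation step might look as follows; the trigger schedule and threshold follow the description above, while the schedule constants and the prior draw for a new component are illustrative assumptions.

```python
import numpy as np

def adapt_Kc(rng, t, s, U, V, L, R, zeta=0.05, z0=-1.0, z1=-5e-4):
    """Adaptively prune or grow the rank-1 components; s: (Kc,),
    U: (K_l, Kc), V: (K_r, Kc), L: (P, K_l), R: (N, K_r)."""
    if rng.random() >= np.exp(z0 + z1 * t):    # trigger w.p. exp(z0 + z1 t)
        return s, U, V
    peak = np.array([np.max(np.abs(s[k] * np.outer(L @ U[:, k], R @ V[:, k])))
                     for k in range(len(s))])
    keep = np.where(peak > zeta)[0]            # complement of q_zeta(t)
    if len(keep) < len(s):                     # delete negligible components
        return s[keep], U[:, keep], V[:, keep]
    # q_zeta(t) is empty: grow by one component drawn from the prior
    s_new = np.append(s, rng.normal())
    U_new = np.hstack([U, rng.normal(size=(U.shape[0], 1))])
    V_new = np.hstack([V, rng.normal(size=(V.shape[0], 1))])
    return s_new, U_new, V_new
```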
4 Experimental Results

4.1 Experiment setting
We have performed joint matrix and text analysis on the House of Representatives (House), sessions 106-111²; we model each session's roll-call votes separately as a binary matrix $\mathbf{Y}$. Entry $y_{ij} = 1$ denotes that the $i$th legislator's response to legislation $j$ is either "Yea" or "Yes", and $y_{ij} = 0$ denotes that the corresponding response is either "Nay" or "No". The data are preprocessed in the same way as described in [8]. We recommend setting the IBP hyperparameters $\alpha_l = \alpha_r = 1$, the MGP hyperparameter $\alpha_c = 3$, the FTM hyperparameter $\gamma = 5$ and the topic model hyperparameter $\beta = 0.01$. We also considered using a random-walk MH algorithm with non-informative gamma priors to infer those hyperparameters, as described in [24, 3], and the Markov chain manifested similar mixing performance. The truncation level $K_c$ in the MGP is not fixed, but inferred by the adaptive sampler, with threshold parameter $\zeta$ set to 0.05 (it is recommended to be set small for most applications). In the study below, for each model we run 5000 iterations of the Gibbs sampler, with the first 1000 iterations discarded as burn-in, and 400 samples are collected, taking every tenth iteration afterwards, to perform Bayesian estimates of the objects of interest.

² These data are available from thomas.loc.gov
4.2 Predicting random missing votes
In this section we study the classical problem of estimating the values of matrix data that are missing uniformly at random (in-matrix missing votes), without the use of associated documents. We compare the model proposed in (4) to the probabilistic matrix factorization (PMF) found in [17, 18]. This is done by decomposing the latent matrix $\mathbf{X} = \mathbf{\Phi}\mathbf{\Psi}^T$, where each row of $\mathbf{\Phi}$ and $\mathbf{\Psi}$ is drawn from a Gaussian distribution with mean and covariance matrix modeled by a Gaussian-Wishart distribution. To study the behavior of the proposed MGP prior in (5), we (i) vary the number of columns (rank) $K_c$ in $\mathbf{\Phi}$ and $\mathbf{\Psi}$ as a free parameter, and call this model PMF; and (ii) incorporate the MGP into the decomposition $\mathbf{X} = \mathbf{\Phi}\mathbf{S}\mathbf{\Psi}^T$, where $\mathbf{S} \in \mathbb{R}^{K_c \times K_c}$ is a diagonal matrix with each diagonal element specified as $s_k$. The model in (ii) is called PMF+MGP. Additionally, to check if the low-rank assumption detailed in Section 2.2 is effective for BMF, we also compare the performance of the BMF model originally proposed in [13], which we term BMF-Original.

We compared these models on predicting values missing uniformly at random, with different percentages (90%, 95%, 99%) of missingness. This study has been done on the House data from the 106th to 111th sessions; however, to conserve space we only summarize the experimental results on the 110th House data, in Figure 1; similar results are observed across all sessions. In Figure 1 each panel corresponds to a certain percentage of missingness; the horizontal axis is the number of columns (rank), which varies as a free parameter of PMF, while the vertical axis is the prediction accuracy. The MGP is observed to be generally effective in modeling the rank across all three panels, and the low-rank assumption is critical to good performance for the BMF. When the percentage of missingness is relatively low, e.g., 90% or 95%, PMF performs better than BMF; however, when the percentage of missingness is high, e.g., 99%, the BMF (with the low-rank assumption) is very competitive with PMF. This is probably because of the way BMF encourages the sharing of statistical strength among all rows and columns via the matrix $\mathbf{H}$, as described in [13], which is most effective when data are scarce.
4.3 Predicting new bills based on text
We study the predictive power of the proposed model when the legislative roll-call votes and the associated bill documents are modeled jointly, as described in Section 2.3. We compare our proposed model with the IPTM in [8], where the authors fixed the rank to $K_c = 1$; we term this model IPTM($K_c = 1$). In [8] the authors suggested that fixing the rank to one might be over-restrictive, so we also propose to model the rank in the ideal point model using the MGP, in a similar way to how this was done for the PMF model, and call this model IPTM. We also compare our model with that in [23], where the authors proposed to combine the factor analysis model and topic model via a compounded mixture model, with all sessions of roll-call data modeled jointly via a Markov process. Since our main goal is to predict new bills, not to model the matrices dynamically, in the following experiments we remove the Markov process and model each session of House data separately; we call this model FATM. In [23] the authors proposed to use a beta-Bernoulli distributed binary variable $b_k$ to model whether the $k$th rank-1 matrix is used in the matrix decomposition. When performing posterior inference we find that $b_k$ tends to be easily trapped in local maxima, while with the MGP, which models the significance of usage (rather than the binary usage) of each $k$th rank-1 matrix via $s_k$, smoother estimates and better mixing were observed.
For each session the bills are partitioned into six folds, and we iteratively remove a fold and train the model with the remaining folds; predictions are then performed on the bills in the removed fold. The experimental results are summarized in Figure 2. Note that since $r_{j:}$ is modeled via the stick-breaking construction of the IBP as in (1), the total number of latent binary features $K_r$ is unbounded, and we face the risk that the latent binary features important for explaining the votes $\mathbf{Y}$ and those important for explaining the associated text are learned separately. This may lead to the undesirable consequence that the latent features learned from text are not discriminative when predicting a new piece of legislation. To reduce this risk, in practice we could either set $\alpha_r$ such that it strongly favors fewer latent binary features, or we can truncate the stick-breaking construction at a pre-defined level $K_r$. For a clearer comparison with other models, where the number of topics is fixed, we choose the second approach and let $K_r$ vary as the maximum number of possible topics.
Across all sessions IPTM consistently performs better than its counterpart with $K_c = 1$; this again demonstrates the effectiveness of the MGP in modeling the rank.
[Figure 1 appears here: three panels for 90%, 95%, and 99% missingness, with accuracy curves for BMF-Original, Proposed, PMF, and PMF+MGP; inferred ranks $K_c$ are annotated on the Proposed and PMF+MGP curves.]
Figure 1: Comparison of prediction accuracy for votes missing uniformly at random, for the 110th House data. Each panel corresponds to a different percentage of missingness; within each panel the vertical axis represents accuracy and the horizontal axis represents the rank set for PMF. For PMF+MGP and our proposed method, the inferred rank $K_c$ is shown for the most-probable collection sample.
[Figure 2 appears here: six panels for the 106th-111th House sessions, with accuracy curves for Proposed, FATM, IPTM, and IPTM($K_c$ = 1).]
Figure 2: Prediction accuracy for held-out legislation across the 106th-111th House data; prediction of an entire column of missing votes based on text. In each panel the vertical axis represents accuracy and the horizontal axis represents the number of topics used by each model. Results are averaged across six folds; the variances are too small to be visible.
Although there is no significant advantage of our proposed model when the truncation on the number of topics $K_r$ (horizontal axis) is small (e.g., 30-50), over-fitting is observed for all models except ours: as we increase the number of topics, the performance of the other models drops significantly (vertical axis). Across all six sessions, the best quantitative results are obtained by the proposed model when $K_r > 100$.
4.4 Latent binary feature interpretation
In this study we partition all the bills into two groups: (i) bills for which there is near-unanimous agreement, with "Yea" or "Yes" votes above 90%; (ii) contentious bills, with the percentage of "Yea" or "Yes" votes below 60%. By linking the inferred binary latent features to the topics for those two groups, we can gain insight into the characteristics of legislation and voting patterns, e.g., what influenced a near-unanimous yes vote, and what led to more contention. Figure 3 compares the latent feature usage patterns of those two groups; the horizontal axis represents the latent features, where we set $K_r = 100$ for illustration purposes, and the vertical axis is the aggregated frequency with which a feature/topic is used by all the bills in each of those two groups. The frequencies are normalized within each group for easy interpretation. For each group, we select three discriminative features: ones heavily used in one group but rarely used in the other (these selected features are highlighted in blue/red). For example, in the left panel the features highlighted in blue are widely used by bills in the left group, but rarely used by bills in the right group.
[Figure 3 appears here: two panels of binary feature usage frequencies, for near-unanimously agreed bills (left) and highly debated bills (right), with discriminative features labeled Topic 22, 31, 38, 62, 73, and 83.]
Figure 3: Comparison of the frequencies of binary feature usage between two groups of bills. Left: near-unanimous affirmative bills (e.g., bills with more than 90% of votes received as "Yes" or "Yea"). Right: contentious bills (e.g., bills with less than 60% of votes received as "Yes" or "Yea"). Data from the 110th House, with $K_r = 100$. The vertical axis represents the normalized frequency of feature/topic usage within the corresponding group. The six most discriminative features/topics (labeled in the figure) are shown in Table 1.
Table 1: Six discriminative topics for unanimously agreed / highly debated bills, learned from the 110th House of Representatives, with the ten most probable words shown. (R) and (B) mark the topics depicted in Figure 3 in red and blue, respectively.
Topic 22 (B): CHILDREN, CHILD, YOUTH, PORNOGRAPHY, INTERNET, FATHER, FAMILY, PARENT, SCHOOL, EMERGENCY
Topic 31 (R): CONCURRENT RESOLUTION, ADJOURN, MAJORITY LEADER, DESIGNEE, AVIATION, RECESS, MINORITY LEADER, FEBRUARY, MOTION OFFER, STAND
Topic 38 (R): TAX, CORPORATION, TAXABLE, CREDIT, PENALTY, REVENUE, TAXPAYER, SPECIAL, FILE, SUBSTITUTE
Topic 62 (B): PEOPLE, WORLD, HOME, SANITATION, WATER, INTERNATIONAL, SOUTHERN, COMPENSATION, ASSOCIATION, ECONOMIC
Topic 73 (B): NATION, ATTACK, TERRORIST, PEOPLE, SEPTEMBER, VOLUNTEER, CITIZEN, PAKISTAN, LEGITIMATE, FUTURE
Topic 83 (R): CLAUSE, PRINT, WAIVE, SUBSTITUTE, COMMITTEE AMENDMENT, READ, DEBATE, OFFER, DIVIDE AND CONTROL, MOTION
As observed from Figure 3, the learned binary features are discriminative, as the usage patterns of the two groups are quite different.

We also study the interpretation of those latent features by linking them to the topics inferred from the texts. As an example, the six highlighted features are linked to their corresponding topics and shown in Table 1, with the ten most probable words within each topic. We can read from Table 1 that the unanimously agreed bills are highly likely to be related to topics about the education of youth (Topic 22) or the prevention of terrorism (Topic 73), while the bills from the contentious group tend to be more related to making amendments to an existing piece of legislation (Topic 83) or discussing taxation (Topic 38). Note that, compared to conventional topic modeling, these inferred topics are not only informative about the semantic meaning of the bills, but also discriminative in predicting the outcome of the bills.
5 Conclusion
A new methodology has been developed for the joint analysis of a matrix with associated text, based on sharing latent binary features modeled via the Indian buffet process. The model has been demonstrated on the analysis of voting data from the US House of Representatives. Imposition of a low-rank representation for the latent real matrix has proven important, with this done in a new manner via the multiplicative gamma process. Encouraging quantitative results are demonstrated, and the model has also been shown to yield interesting insights into the meaning of the latent features. The sharing of latent binary features provides a general joint learning framework for Indian buffet process based models [9], of which the focused topic model and binary matrix factorization are two examples; exploring other possibilities in different scenarios could be an interesting direction.
Acknowledgements
The authors would like to thank anonymous reviewers for providing useful comments. The research
reported here was supported by ARO, DOE, NGA, ONR, and DARPA (under the MSEE program).
References
[1] D. Agarwal and B. Chen. fLDA: matrix factorization through latent Dirichlet allocation. In
WSDM, 2010.
[2] J. H. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 1993.
[3] A. Bhattacharya and D. B. Dunson. Sparse Bayesian infinite factor models. Biometrika, 2011.
[4] D. M. Blei and Jon D. McAuliffe. Supervised topic models. In Advances in Neural Information
Processing Systems, 2007.
[5] D. M. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 2003.
[6] J. Clinton, S. Jackman, and D. Rivers. The statistical analysis of roll call data. Am. Political
Sc. Review, 2004.
[7] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics,
1973.
[8] S. Gerrish and D.M. Blei. Predicting legislative roll calls from text. In ICML, 2011.
[9] T. L. Griffiths and Z. Ghahramani. The indian buffet process: An introduction and review.
Journal of Machine Learning Research, 12:1185?1224, 2011.
[10] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process.
In Advances in Neural Information Processing Systems, 2005.
[11] H. Ishwaran and L.F. James. Gibbs sampling methods for stick-breaking priors. J. American
Statistical Association, 2001.
[12] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[13] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent
factors. In Advances in Neural Information Processing Systems. 2007.
[14] K. Miller, T. Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. In Advances in Neural Information Processing Systems, 2009.
[15] K.T. Poole. Recent developments in analytical models of voting in the U.S. congress. Am.
Political Sc. Review, 1988.
[16] G. O. Roberts and J. S. Rosenthal. Coupling and ergodicity of adaptive MCMC. Journal of
Applied Probability, 2007.
[17] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural
Information Processing Systems, 2007.
[18] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain
Monte Carlo. In ICML, 2008.
[19] H. Shan and A. Banerjee. Generalized probabilistic matrix factorizations for collaborative
filtering. In ICDM, 2010.
[20] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet
process. In AISTATS, 2007.
[21] Y. W. Teh, M. I. Jordan, Matthew J. Beal, and D. M. Blei. Hierarchical Dirichlet processes.
Journal of the American Statistical Association, 2006.
[22] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles.
In KDD, 2011.
[23] E. Wang, D. Liu, J. Silva, D. B. Dunson, and L. Carin. Joint analysis of time-evolving binary
matrices and associated documents. In Advances in Neural Information Processing Systems,
2010.
[24] S. Williamson, C. Wang, K. A. Heller, and D. M. Blei. The IBP compound Dirichlet process
and its application to focused topic modeling. In ICML, 2010.
[25] X. Zhang, D. Dunson, and L. Carin. Hierarchical topic modeling for analysis of time-evolving
personal choices. In Advances in Neural Information Processing Systems 24. 2011.
[26] X. Zhang, D. Dunson, and L. Carin. Tree-structured infinite sparse factor model. In ICML,
2011.
4,185 | 4,789 |
Proper losses for learning from partial labels
Jesús Cid-Sueiro
Department of Signal Theory and Communications
Universidad Carlos III de Madrid
Leganés-Madrid, 28911 Spain
[email protected]
Abstract
This paper discusses the problem of calibrating posterior class probabilities from
partially labelled data. Each instance is assumed to be labelled as belonging to
one of several candidate categories, at most one of them being true. We generalize
the concept of proper loss to this scenario, we establish a necessary and sufficient
condition for a loss function to be proper, and we show a direct procedure to
construct a proper loss for partial labels from a conventional proper loss. The
problem can be characterized by the mixing probability matrix relating the true
class of the data and the observed labels. The full knowledge of this matrix is not
required, and losses can be constructed that are proper for a wide set of mixing
probability matrices.
1 Introduction
The problem of learning multiple classes from data with imprecise label information has attracted recent attention in the literature. It arises in many different applications; Cour [1] cites some of them: picture collections containing several faces per image and a caption that only specifies who is
them: picture collections containing several faces per image and a caption that only specifies who is
in the picture but not which name matches which face, or video collections with labels taken from
annotations.
In a partially labelled data set, each instance is assigned to a set of candidate categories, at most only
one of them true. The problem is closely related to learning from noisy labels, which is common
in human-labelled databases with multiple annotators [2] [3], medical imaging, crowdsourcing, etc.
Other related problems can be interpreted as particular forms of partial labelling: semisupervised
learning, or hierarchical classification in databases where some instances could be labelled with
respect to parent categories only. It is also a particular case of the more general problems of learning
from soft labels [4] or learning from measurements [5].
Several algorithms have been proposed to deal with partial labelling [1] [2] [6] [7] [8]. Though some theoretical work has addressed the consistency of algorithms [1] and the information provided by uncertain data [8], little effort has been devoted to analyzing the conditions under which the true class can be inferred from partial labels.
under which the true class can be inferred from partial labels.
In this paper we address the problem of estimating posterior class probabilities from partially labelled data. In particular, we obtain general conditions under which the posterior probability of the
true class given the observation can be estimated from training data with ambiguous class labels. To
do so, we generalize the concept of proper losses to losses that are functions of ambiguous labels,
and show that the capability to estimate posterior class probabilities using a given loss depends on
the probability matrix relating the ambiguous labels with the true class of the data. Each generalized proper loss can be characterized by the set (a convex polytope) of all admissible probability
matrices. Analyzing the structure of these losses is one of the main goals of this paper. Up to our
knowledge, the design of proper losses for learning from imperfect labels has not been addressed in
the area of Statistical Learning.
The paper is organized as follows: Sec. 2 formulates the problem discussed in the paper, Sec. 3
generalizes proper losses to scenarios with ambiguous labels, Sec. 4 proposes a procedure to design
proper losses for wide sets of mixing matrices, Sec. 5 discusses estimation errors and Sec. 6 states
some conclusions.
2 Formulation

2.1 Notation
Vectors are written in boldface, matrices in boldface capitals, and sets in calligraphic letters. For any integer $n$, $\mathbf{e}_i^n$ is an $n$-dimensional unit vector with all components equal to zero apart from the $i$-th, which is equal to one, and $\mathbf{1}_n$ is an $n$-dimensional all-ones vector. Superindex $T$ denotes transposition. We will use $\ell(\cdot)$ to denote a loss based on partial labels, and $\tilde{\ell}(\cdot)$ for losses based on true labels. The simplex of $n$-dimensional probability vectors is $\mathcal{P}_n = \{\mathbf{p} \in [0,1]^n : \sum_{i=0}^{n-1} p_i = 1\}$, and the set of all left-stochastic matrices is $\mathcal{M} = \{\mathbf{M} \in [0,1]^{d \times c} : \mathbf{M}^T \mathbf{1}_d = \mathbf{1}_c\}$. The number of classes is $c$, and the number of possible partial label vectors is $d \leq 2^c$.
2.2 Learning from partial labels
Let $\mathcal{X}$ be a sample set, $\mathcal{Y} = \{\mathbf{e}_j^c, j = 0, 1, \ldots, c-1\}$ a set of labels, and $\mathcal{Z} \subset \{0,1\}^c$ a set of partial labels. Sample $(\mathbf{x}, \mathbf{z}) \in \mathcal{X} \times \mathcal{Z}$ is drawn from an unknown distribution $P$.

Partial label vector $\mathbf{z} \in \mathcal{Z}$ is a noisy version of the true label $\mathbf{y} \in \mathcal{Y}$. Several authors [1] [6] [7] [8] assume that the true label is always present in $\mathbf{z}$, i.e., $z_j = 1$ when $y_j = 1$, but this assumption is not required in our setting, which admits noisy label scenarios (as, for instance, in [2]). Without loss of generality, we assume that $\mathcal{Z}$ contains only partial labels with nonzero probability (i.e., $P\{\mathbf{z} = \mathbf{b}\} > 0$ for any $\mathbf{b} \in \mathcal{Z}$).

In general, we model the relationship between $\mathbf{z}$ and $\mathbf{y}$ through an arbitrary $d \times c$ conditional mixing probability matrix $\mathbf{M}(\mathbf{x})$ with components

$m_{ij}(\mathbf{x}) = P\{\mathbf{z} = \mathbf{b}_i \mid y_j = 1, \mathbf{x}\}$   (1)

where $\mathbf{b}_i \in \mathcal{Z}$ is the $i$-th element of $\mathcal{Z}$ for some arbitrary ordering.

Note that, in general, the mixing matrix could depend on $\mathbf{x}$, though a constant mixing matrix [2] [6] [7] [8] is a common assumption, as is the statistical independence of the incorrect labels [6] [7] [8]. In this paper we do not impose these assumptions.

The goal is to infer $\mathbf{y}$ given $\mathbf{x}$ without knowing the model $P$. To do so, a set of partially labelled samples, $S = \{(\mathbf{x}_k, \mathbf{z}_k), k = 1, \ldots, K\}$, is available. The true labels $\mathbf{y}_k$ are not observed.
We will illustrate different partial label scenarios with a 3-class problem. Each column of $\mathbf{M}^T$ below corresponds to a label pattern $(z_0, z_1, z_2)$ following the ordering $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(1,1,0)$, $(1,0,1)$, $(0,1,1)$, $(1,1,1)$ (e.g., the first column contains $P\{\mathbf{z} = (0,0,0)^T \mid y_j = 1\}$, for $j = 0, 1, 2$).

A. Supervised learning:
$\mathbf{M}^T = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$

B. Single noisy labels:
$\mathbf{M}^T = \begin{pmatrix} 0 & 1-\beta & \beta/2 & \beta/2 & 0 & 0 & 0 & 0 \\ 0 & \beta/2 & 1-\beta & \beta/2 & 0 & 0 & 0 & 0 \\ 0 & \beta/2 & \beta/2 & 1-\beta & 0 & 0 & 0 & 0 \end{pmatrix}$

C. Semisupervised learning:
$\mathbf{M}^T = \begin{pmatrix} \alpha & 1-\alpha & 0 & 0 & 0 & 0 & 0 & 0 \\ \alpha & 0 & 1-\alpha & 0 & 0 & 0 & 0 & 0 \\ \alpha & 0 & 0 & 1-\alpha & 0 & 0 & 0 & 0 \end{pmatrix}$

D. True label with independent noisy labels:
$\mathbf{M}^T = \begin{pmatrix} 0 & 1-\gamma-\gamma^2 & 0 & 0 & \gamma/2 & \gamma/2 & 0 & \gamma^2 \\ 0 & 0 & 1-\gamma-\gamma^2 & 0 & \gamma/2 & 0 & \gamma/2 & \gamma^2 \\ 0 & 0 & 0 & 1-\gamma-\gamma^2 & 0 & \gamma/2 & \gamma/2 & \gamma^2 \end{pmatrix}$

E. Two labels, one of them true:
$\mathbf{M}^T = \begin{pmatrix} 0 & 1-\beta & 0 & 0 & \beta/2 & \beta/2 & 0 & 0 \\ 0 & 0 & 1-\beta & 0 & \beta/2 & 0 & \beta/2 & 0 \\ 0 & 0 & 0 & 1-\beta & 0 & \beta/2 & \beta/2 & 0 \end{pmatrix}$
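As a quick sanity check, scenario E can be written down numerically; the sketch below (pattern ordering as above; the code and the function name are ours) verifies that the columns of $\mathbf{M}$ sum to one, i.e., $\mathbf{M}^T\mathbf{1}_d = \mathbf{1}_c$:

```python
import numpy as np

def scenario_E(beta):
    # rows of M^T, over the eight patterns ordered as above
    MT = np.array([
        [0, 1 - beta, 0,        0,        beta / 2, beta / 2, 0,        0],
        [0, 0,        1 - beta, 0,        beta / 2, 0,        beta / 2, 0],
        [0, 0,        0,        1 - beta, 0,        beta / 2, beta / 2, 0],
    ])
    return MT.T                            # M is d x c

M = scenario_E(0.3)
assert np.allclose(M.sum(axis=0), 1.0)     # left-stochastic: columns sum to 1
```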
The question that motivates our work is the following: knowing $\mathbf{M}$ (i.e., knowing the scenario and the values of the parameters $\alpha$, $\beta$ and $\gamma$), we can estimate accurate posterior class probabilities from partially labelled data in all these cases; however, is it possible if $\alpha$, $\beta$ and $\gamma$ are unknown? We will see that the answer is negative for scenarios B, C and D, but positive for E. In the positive case, no information is lost by the partial label process for infinite sample sizes. In the negative case, some performance is lost as a consequence of the mixing process, and this persists even for infinite sample sizes.¹
2.3 Inference through partial label probabilities
If the mixing matrix is known, a conceptually simple strategy to solve the partial label problem consists of estimating the posterior partial label probabilities, using them to estimate the posterior class probabilities, and then predicting $\mathbf{y}$. Since

$P\{\mathbf{z} = \mathbf{b}_i \mid \mathbf{x}\} = \sum_{j=0}^{c-1} m_{ij}(\mathbf{x}) P\{y_j = 1 \mid \mathbf{x}\},$   (2)

we can define vectors $\mathbf{p}(\mathbf{x})$ and $\boldsymbol{\eta}(\mathbf{x})$ with components $p_i = P\{\mathbf{z} = \mathbf{b}_i \mid \mathbf{x}\}$ and $\eta_j = P\{y_j = 1 \mid \mathbf{x}\}$, to write (2) as $\mathbf{p}(\mathbf{x}) = \mathbf{M}(\mathbf{x})\boldsymbol{\eta}(\mathbf{x})$ and, thus,

$\boldsymbol{\eta}(\mathbf{x}) = \mathbf{M}^+(\mathbf{x})\mathbf{p}(\mathbf{x})$   (3)

where $\mathbf{M}^+(\mathbf{x}) = (\mathbf{M}^T(\mathbf{x})\mathbf{M}(\mathbf{x}))^{-1}\mathbf{M}^T(\mathbf{x})$ is the left inverse (pseudoinverse) of $\mathbf{M}(\mathbf{x})$.
Thus, a first condition to estimate $\boldsymbol{\eta}$ from $\mathbf{p}$ given $\mathbf{M}$ is that the conditional mixing matrix has a left inverse (i.e., the columns of $\mathbf{M}(\mathbf{x})$ are linearly independent).

There are some trivial cases where the mixing matrix has no pseudoinverse (for instance, if $P\{\mathbf{z} \mid \mathbf{y}, \mathbf{x}\} = P\{\mathbf{z} \mid \mathbf{x}\}$, all rows in $\mathbf{M}(\mathbf{x})$ are equal, and $\mathbf{M}^T(\mathbf{x})\mathbf{M}(\mathbf{x})$ is a rank-1 matrix, which has no inverse), but these are degenerate cases of no practical interest. From a practical point of view, the application of (3) raises two major problems: (1) when the model $P$ is unknown, even knowing $\mathbf{M}$, estimating $\mathbf{p}$ from data may be infeasible for $d$ close to $2^c$ and a large number of classes (furthermore, posterior probability estimates will not be accurate if the sample size is small), and (2) $\mathbf{M}(\mathbf{x})$ is generally unknown, and cannot be estimated from the partially labelled set $S$.

The solution adopted in this paper for the first problem consists of estimating $\boldsymbol{\eta}$ from data without estimating $\mathbf{p}$; this is discussed in the next section. The second problem is discussed in Section 4.
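When $\mathbf{M}$ is known and has full column rank, (3) is a one-line computation. A toy sketch, using the scenario-E matrix built above (the posterior values are our arbitrary choice):

```python
import numpy as np

eta = np.array([0.45, 0.15, 0.40])     # true posteriors (arbitrary example)
p = M @ eta                            # (2): partial-label probabilities
eta_hat = np.linalg.pinv(M) @ p        # (3): eta = M^+ p
assert np.allclose(eta_hat, eta)
```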
3 Loss functions for posterior probability estimation
The estimation of posterior probabilities from labelled data is a well known problem in statistics and machine learning, and has received some recent attention in the machine learning literature [9] [10]. In order to estimate posteriors from labelled data, a loss function $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})$ is required such that $\boldsymbol{\eta}$ is a member of $\arg\min_{\hat{\boldsymbol{\eta}}} E_{\mathbf{y}}\{\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})\}$. Losses satisfying this property are said to be Fisher consistent and are known as proper scoring rules. A loss is strictly proper if $\boldsymbol{\eta}$ is the only member of this set. A loss is regular if it is finite for any $\mathbf{y}$, except possibly that $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}}) = \infty$ if $y_j = 1$ and $\hat{\eta}_j = 0$. Proper scoring rules can be characterized by the Savage's representation [11] [12]:

Theorem 3.1. A regular scoring rule $\tilde{\ell} : \mathcal{Y} \times \mathcal{P}_c \to \mathbb{R}$ is (strictly) proper if and only if

$\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}}) = h(\hat{\boldsymbol{\eta}}) + \mathbf{g}(\hat{\boldsymbol{\eta}})^T(\mathbf{y} - \hat{\boldsymbol{\eta}})$   (4)

where $h$ is a (strictly) concave function and $\mathbf{g}(\hat{\boldsymbol{\eta}})$ is a supergradient of $h$ at the point $\hat{\boldsymbol{\eta}}$, for all $\hat{\boldsymbol{\eta}} \in \mathcal{P}_c$. (Recall that $\mathbf{g}$ is a supergradient of $h$ at $\hat{\boldsymbol{\eta}}$ if $h(\boldsymbol{\eta}) \leq h(\hat{\boldsymbol{\eta}}) + \mathbf{g}^T(\boldsymbol{\eta} - \hat{\boldsymbol{\eta}})$.)

¹ If the sample size is large (in particular for scenarios C and D), one could think of simply ignoring the samples with imperfect labels and training the classifier with the samples whose class is known. However, in general, there is some bias in this process, which eventually can degrade performance.
In order to deal with partial labels, we generalize proper losses as follows.

Definition. Let $\mathbf{y}$ and $\mathbf{z}$ be random vectors taking values in $\mathcal{Y}$ and $\mathcal{Z}$, respectively. A scoring rule $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is proper to estimate $\boldsymbol{\eta}$ (with components $\eta_j = P\{y_j = 1\}$) from $\mathbf{z}$ if

$\boldsymbol{\eta} \in \arg\min_{\hat{\boldsymbol{\eta}}} E_{\mathbf{z}}\{\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})\}.$   (5)

It is strictly proper if $\boldsymbol{\eta}$ is the only member of this set.
This generalized family of proper scoring rules can be characterized by the following.

Theorem 3.2. Scoring rule $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is (strictly) proper to estimate $\boldsymbol{\eta}$ from $\mathbf{z}$ if and only if the equivalent loss

$\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}}) = \mathbf{y}^T \mathbf{M}^T \mathbf{l}(\hat{\boldsymbol{\eta}}),$   (6)

where $\mathbf{l}(\hat{\boldsymbol{\eta}})$ is a vector with components $\ell_i(\hat{\boldsymbol{\eta}}) = \ell(\mathbf{b}_i, \hat{\boldsymbol{\eta}})$ and $\mathbf{b}_i$ is the $i$-th element in $\mathcal{Z}$ (according to some arbitrary ordering), is (strictly) proper.
Proof. The proof is straightforward by noting that the expected loss can be expressed as

$E_{\mathbf{z}}\{\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})\} = \sum_{i=0}^{d-1} P\{\mathbf{z} = \mathbf{b}_i\}\ell_i(\hat{\boldsymbol{\eta}}) = \sum_{i=0}^{d-1}\sum_{j=0}^{c-1} m_{ij}\eta_j\ell_i(\hat{\boldsymbol{\eta}}) = \boldsymbol{\eta}^T\mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = E_{\mathbf{y}}\{\mathbf{y}^T\mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}})\} = E_{\mathbf{y}}\{\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})\}$   (7)

Therefore, $\arg\min_{\hat{\boldsymbol{\eta}}} E_{\mathbf{z}}\{\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})\} = \arg\min_{\hat{\boldsymbol{\eta}}} E_{\mathbf{y}}\{\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})\}$ and, thus, $\ell$ is (strictly) proper with respect to $\mathbf{y}$ iff $\tilde{\ell}$ is (strictly) proper.

Note that, defining the vector $\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$ with components $\tilde{\ell}_j(\hat{\boldsymbol{\eta}}) = \tilde{\ell}(\mathbf{e}_j^c, \hat{\boldsymbol{\eta}})$, we can write

$\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}) = \mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}})$   (8)

We will use this vector representation of losses extensively in the following.
Theorem 3.2 states that the proper character of a loss for estimating $\boldsymbol{\eta}$ from $\mathbf{z}$ depends on $\mathbf{M}$. For this reason, in the following we will say that $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is M-proper if it is proper to estimate $\boldsymbol{\eta}$ from $\mathbf{z}$.

4 Proper losses for sets of mixing matrices
Eq. (8) may be useful to check whether a given loss is M-proper. However, note that, since the matrix $\mathbf{M}^T$ is $c \times d$, it has no left inverse, and we cannot take $\mathbf{M}^T$ out of the left side of (8) to compute $\ell$ from $\tilde{\ell}$. For any given $\mathbf{M}$ and any given equivalent loss $\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$, there is an uncountable number of losses $\mathbf{l}(\hat{\boldsymbol{\eta}})$ satisfying (8).
Example. Let $\tilde{\ell}$ be an arbitrary proper loss for a 3-class problem. The losses

$\ell(\mathbf{z}, \hat{\boldsymbol{\eta}}) = (z_0 - z_1 z_2)\tilde{\ell}_0(\hat{\boldsymbol{\eta}}) + (z_1 - z_0 z_2)\tilde{\ell}_1(\hat{\boldsymbol{\eta}}) + (z_2 - z_0 z_1)\tilde{\ell}_2(\hat{\boldsymbol{\eta}})$   (9)

$\ell^0(\mathbf{z}, \hat{\boldsymbol{\eta}}) = z_0\tilde{\ell}_0(\hat{\boldsymbol{\eta}}) + z_1\tilde{\ell}_1(\hat{\boldsymbol{\eta}}) + z_2\tilde{\ell}_2(\hat{\boldsymbol{\eta}})$   (10)

are M-proper for the mixing matrix $\mathbf{M}$ given by

$m_{ij} = \begin{cases} 1 & \text{if } \mathbf{b}_i = \mathbf{e}_j^c \\ 0 & \text{otherwise} \end{cases}$   (11)

Note that $\mathbf{M}$ corresponds to a situation where samples are perfectly labelled, and $\mathbf{z}$ contains perfect information about $\mathbf{y}$ (in fact, $\mathbf{z} = \mathbf{y}$ with probability one). Also, for any $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$, there are different mixing matrices such that the equivalent loss is the same.
Example. The loss given by (9) is M-proper for the mixing matrix $\mathbf{M}$ in (11), and it is also N-proper for $\mathbf{N}$ with components

$n_{ij} = \begin{cases} 1/2 & \text{if } \mathbf{b}_i = \mathbf{e}_j^c + \mathbf{e}_k^c \text{ for some } k \neq j \\ 0 & \text{otherwise} \end{cases}$   (12)

Matrix $\mathbf{N}$ corresponds to a situation where label $\mathbf{z}$ contains the true class and another noisy component taken at random from the other classes.
In general, if $\mathbf{l}(\hat{\boldsymbol{\eta}})$ is M-proper and N-proper with equivalent loss $\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$, then it is also Q-proper with the same equivalent loss, for any $\mathbf{Q}$ of the form

$\mathbf{Q} = \mathbf{M}(\mathbf{I} - \mathbf{D}) + \mathbf{N}\mathbf{D}$   (13)

where $\mathbf{D}$ is a diagonal nonnegative matrix (note that $\mathbf{Q}$ is a probability matrix, because $\mathbf{Q}^T\mathbf{1}_d = \mathbf{1}_c$). This is because

$\mathbf{Q}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = (\mathbf{I} - \mathbf{D})\mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) + \mathbf{D}\mathbf{N}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = (\mathbf{I} - \mathbf{D})\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}) + \mathbf{D}\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}) = \tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$   (14)

More generally, for arbitrary non-diagonal matrices $\mathbf{D}$, provided that $\mathbf{Q}$ is a probability matrix, $\mathbf{l}(\hat{\boldsymbol{\eta}})$ is Q-proper.
Example. Assuming diagonal $\mathbf{D}$, if $\mathbf{M}$ and $\mathbf{N}$ are the mixing matrices defined in (11) and (12), respectively, the loss (9) is Q-proper for any mixing matrix $\mathbf{Q}$ of the form (13). This corresponds to a matrix with components

$q_{ij} = \begin{cases} d_{jj} & \text{if } \mathbf{b}_i = \mathbf{e}_j^c \\ (1 - d_{jj})/2 & \text{if } \mathbf{b}_i = \mathbf{e}_j^c + \mathbf{e}_k^c \text{ for some } k \neq j \\ 0 & \text{otherwise} \end{cases}$   (15)

That is, the loss in (9) is proper for any situation where the label $\mathbf{z}$ contains the true class and possibly another class taken at random, and the probability that the true label is corrupted may be class-dependent.
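This claim is easy to check numerically: for any $\hat{\boldsymbol{\eta}}$ and any diagonal $\mathbf{D}$, the vector of losses (9) should satisfy $\mathbf{Q}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = \tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$, as in (8). A hedged sketch; the pattern ordering and the $d_{jj}$ values are our choices.

```python
import numpy as np
from itertools import product

# the six patterns with one or two active labels, c = 3
patterns = [b for b in product([0, 1], repeat=3) if sum(b) in (1, 2)]
# coefficients of lt_j in l_i(eta_hat), read off from (9)
C = np.array([[b[0] - b[1] * b[2], b[1] - b[0] * b[2], b[2] - b[0] * b[1]]
              for b in patterns])

def Q_from_15(d):                      # d = (d_00, d_11, d_22)
    Q = np.zeros((len(patterns), 3))
    for i, b in enumerate(patterns):
        for j in range(3):
            if b[j] == 1:
                Q[i, j] = d[j] if sum(b) == 1 else (1 - d[j]) / 2
    return Q

rng = np.random.default_rng(3)
eta_hat = rng.dirichlet(np.ones(3))
lt = -np.log(eta_hat)                  # cross entropy as the proper loss
l = C @ lt                             # loss vector over the six patterns
assert np.allclose(Q_from_15([0.7, 0.5, 0.9]).T @ l, lt)   # eq. (8)
```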
4.1 Building proper losses from ambiguity sets
The ambiguity in $\mathbf{M}$ for a given loss $\mathbf{l}(\hat{\boldsymbol{\eta}})$ can be used to deal with the second problem mentioned in Sec. 2.3: in general, the mixing matrix may be unknown or, even if it is known, it may depend on the observation $\mathbf{x}$. Thus, we need a procedure to design losses that are proper for a wide family of mixing matrices. In general, given a set of mixing matrices $\mathcal{Q}$, we will say that $\ell$ is Q-proper if it is M-proper for any $\mathbf{M} \in \mathcal{Q}$.

The following result provides a way to construct a proper loss $\ell$ for partial labels from a given conventional proper loss $\tilde{\ell}$.

Theorem 4.1. For $0 \leq j \leq c-1$, let $\mathcal{V}_j = \{\mathbf{v}_i^j \in \mathcal{P}_d, 1 \leq i \leq n_j\}$ be a set of $n_j > 0$ probability vectors of dimension $d$, such that $\sum_{j=0}^{c-1} n_j = d$ and $\mathrm{span}(\cup_{j=0}^{c-1}\mathcal{V}_j) = \mathbb{R}^d$, and let $\mathcal{Q} = \{\mathbf{M} \in \mathcal{M} : \mathbf{M}\mathbf{e}_j^c \in \mathrm{span}(\mathcal{V}_j) \cap \mathcal{P}_d\}$.

Then, for any (strictly) proper loss $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})$, there exists a loss $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ which is (strictly) Q-proper.
Proof. The proof is constructive. Let $\mathbf{V}$ be a $d \times d$ matrix whose columns are the elements of $\cup_{j=0}^{c-1}\mathcal{V}_j$, which is invertible since $\mathrm{span}(\cup_{j=0}^{c-1}\mathcal{V}_j) = \mathbb{R}^d$. Let $\mathbf{c}(\hat{\boldsymbol{\eta}})$ be a $d \times 1$ vector such that $c_i(\hat{\boldsymbol{\eta}}) = \tilde{\ell}_j(\hat{\boldsymbol{\eta}})$ if $\mathbf{V}\mathbf{e}_i^d \in \mathcal{V}_j$ (i.e., if the $i$-th column of $\mathbf{V}$ belongs to $\mathcal{V}_j$).

Let $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ be the loss defined by the vector $\mathbf{l}(\hat{\boldsymbol{\eta}}) = (\mathbf{V}^T)^{-1}\mathbf{c}(\hat{\boldsymbol{\eta}})$.

Consider the set $\mathcal{R} = \{\mathbf{M} \in \mathcal{M} : \mathbf{M}\mathbf{e}_j^c \in \mathcal{V}_j \text{ for all } j\}$ (which is not empty because $n_j > 0$). Since the columns of any $\mathbf{M} \in \mathcal{R}$ are also columns of $\mathbf{V}$, then $\mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = \tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$ and, thus, $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is M-proper. Therefore, it is also proper for any affine combination of matrices in $\mathcal{R}$ inside $\mathcal{P}_d$. But $\mathrm{span}(\mathcal{R}) \cap \mathcal{P}_d = \mathcal{Q}$. Thus, $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is M-proper for all $\mathbf{M} \in \mathcal{Q}$ (i.e., it is Q-proper).
Theorem 4.1 shows that we can construct proper losses for learning from partial labels by specifying the points of the sets $\mathcal{V}_j$, $j = 0, \ldots, c-1$. Each of these sets defines an ambiguity set $\mathcal{A}_j = \mathrm{span}(\mathcal{V}_j) \cap \mathcal{P}_d$, which represents all admissible conditional distributions for $P(\mathbf{z} \mid y_j = 1)$. If the columns of the true mixing matrix $\mathbf{M}$ are members of the ambiguity sets, the resulting loss can be used to estimate posterior class probabilities from the observed partial labels.

Thus, a general procedure to design a loss function for learning from partial labels is:

1. Select a proper loss $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})$.
2. Define the ambiguity sets by choosing, for each class $j$, a set $\mathcal{V}_j$ of $n_j$ linearly independent basis vectors. The whole set of $d$ basis vectors must be linearly independent.
3. Construct the matrix $\mathbf{V}$ whose columns comprise all the basis vectors.
4. Construct the binary matrix $\mathbf{U}$ with $u_{ij} = 1$ if the $i$-th column of $\mathbf{V}$ is in $\mathcal{V}_j$, and $u_{ij} = 0$ otherwise.
5. Compute the desired proper loss vector as

$\mathbf{l}(\hat{\boldsymbol{\eta}}) = (\mathbf{V}^T)^{-1}\mathbf{U}\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$   (16)
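The five steps can be carried out mechanically. The sketch below applies them to the 3-class example, with the ambiguity-set bases chosen so that the construction reproduces the loss in (9); the pattern ordering (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1) and the basis choice are ours.

```python
import numpy as np

d, c = 6, 3
# Step 2: two basis vectors per class, as distributions over the 6 patterns;
# columns 0-1 belong to class 0, 2-3 to class 1, 4-5 to class 2.
V = np.array([[1, 0.0, 0, 0.0, 0, 0.0],
              [0, 0.0, 1, 0.0, 0, 0.0],
              [0, 0.0, 0, 0.0, 1, 0.0],
              [0, 0.5, 0, 0.5, 0, 0.0],
              [0, 0.5, 0, 0.0, 0, 0.5],
              [0, 0.0, 0, 0.5, 0, 0.5]])
# Step 4: U (d x c) marks the class of each column of V
U = np.zeros((d, c))
U[[0, 1], 0] = 1
U[[2, 3], 1] = 1
U[[4, 5], 2] = 1

def loss_vector(lt):                       # Step 5, eq. (16)
    return np.linalg.solve(V.T, U @ lt)    # l = (V^T)^{-1} U lt

lt = np.array([1.0, 2.0, 3.0])             # any equivalent-loss values
coeffs_9 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, -1], [1, -1, 1], [-1, 1, 1]])
assert np.allclose(loss_vector(lt), coeffs_9 @ lt)   # recovers loss (9)
```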
Since the ambiguity set $\mathcal{A}_j$ is the intersection of an $n_j$-dimensional linear subspace with the $d$-dimensional probability simplex, it is an $(n_j - 1)$-dimensional convex polytope whose vertices lie in distinct $(n_j - 1)$-faces of $\mathcal{P}_d$. These vertices must have a set of at least $n_j - 1$ zero components which cannot be a set of zeros in any other vertex. This has two consequences: (1) we can define the ambiguity sets from these vertices, and (2) the choice is not unique, because the number of vertices can be higher than $n_j - 1$.
If the proper loss $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})$ is non-degenerate, $\mathcal{Q}$ contains all mixing matrices for which the loss is proper:

Theorem 4.2. Let us assume that, if $\mathbf{a}^T\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}) = 0$ for any $\hat{\boldsymbol{\eta}}$, then $\mathbf{a} = \mathbf{0}$. Under the conditions of Theorem 4.1, for any $\mathbf{M} \in \mathcal{M} \setminus \mathcal{Q}$, $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is not M-proper.
Proof. Since the columns of $\mathbf{V}$ are in the ambiguity sets and form a basis of $\mathbb{R}^d$, $\mathrm{span}(\cup_{j=0}^{c-1}\mathcal{A}_j) = \mathbb{R}^d$. Thus, the $n$-th column of any arbitrary $\mathbf{M}$ can be represented as $\mathbf{m}_n = \sum_{j=1}^{c}\alpha_{n,j}\mathbf{w}_j$ for some $\mathbf{w}_j \in \mathcal{A}_j$ and some coefficients $\alpha_{nj}$. If $\mathbf{M} \notin \mathcal{Q}$, $\alpha_{nj} \neq 0$ for some $j \neq n$ and some $n$. Then $\mathbf{m}_n^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = \sum_{j=1}^{c}\alpha_{nj}\tilde{\ell}_j(\hat{\boldsymbol{\eta}})$, which cannot be equal to $\tilde{\ell}_n(\hat{\boldsymbol{\eta}})$ for all $\hat{\boldsymbol{\eta}}$. Therefore, $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is not M-proper.
4.2 Virtual labels
The analysis above shows a procedure to construct proper losses from ambiguity sets. The main result of this section is to show that (16) is actually a universal representation, in the sense that any proper loss can be represented in this form, and we generalize the Savage's representation by providing an explicit formula for Q-proper losses.

Theorem 4.3. Scoring rule $\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})$ is (strictly) Q-proper for some matrix set $\mathcal{Q}$ with equivalent loss $\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})$ if and only if

$\ell(\mathbf{z}, \hat{\boldsymbol{\eta}}) = h(\hat{\boldsymbol{\eta}}) + \mathbf{g}(\hat{\boldsymbol{\eta}})^T(\mathbf{U}^T\mathbf{V}^{-1}\mathbf{z} - \hat{\boldsymbol{\eta}})$   (17)

where $h$ is the (strictly) concave function from the Savage's representation for $\tilde{\ell}$, $\mathbf{g}(\hat{\boldsymbol{\eta}})$ is a supergradient of $h$, $\mathbf{V}$ is a $d \times d$ non-singular matrix, and $\mathbf{U}$ is a binary matrix with only one unit value in each row.

Moreover, the ambiguity set of class $j$ is $\mathcal{A}_j = \mathrm{span}(\mathcal{V}_j)$, where $\mathcal{V}_j$ is the set of all columns of $\mathbf{V}$ such that $u_{ij} = 1$.

Proof. See the Appendix.

Comparing (4) with (17), the effect of imperfect labelling becomes clear: the unknown true label $\mathbf{y}$ is replaced by a virtual label $\tilde{\mathbf{y}} = \mathbf{U}^T\mathbf{V}^{-1}\mathbf{z}$, which is a linear combination of the partial labels.
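Continuing the construction sketch above (same $\mathbf{V}$ and $\mathbf{U}$), the virtual labels can be tabulated for every pattern; e.g., the pattern (1,1,0) maps to $\tilde{\mathbf{y}} = (1, 1, -1)^T$, matching the coefficients of loss (9).

```python
import numpy as np

W = U.T @ np.linalg.inv(V)            # c x d map from z to virtual label
y_tilde = W @ np.eye(6)               # column i holds U^T V^{-1} e_i
assert np.allclose(y_tilde[:, 3], [1, 1, -1])   # pattern (1,1,0)
```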
4.3 Admissible scenarios
The previous analysis shows that, in order to calibrate posterior probabilities from partial labels in scenarios where the mixing matrix is known, two conditions are required: (1) the columns of any admissible mixing matrix must be contained in the ambiguity sets, and (2) the bases of all ambiguity sets must be linearly independent. It is not difficult to see that the parametric matrices in scenarios B, C and D defined in Section 2.2 cannot be generated using a set of bases satisfying these constraints. On the contrary, scenario E is admissible, as we have shown in the example in Section 4.
5 Estimation Errors
If the true mixing matrix $\mathbf{M}$ is not in $\mathcal{Q}$, a Q-proper loss may fail to estimate $\boldsymbol{\eta}$. The consequences of this can be analyzed using the expected loss, given by

$L(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}) = E\{\ell(\mathbf{z}, \hat{\boldsymbol{\eta}})\} = \boldsymbol{\eta}^T\mathbf{M}^T\mathbf{l}(\hat{\boldsymbol{\eta}}) = \boldsymbol{\eta}^T\mathbf{M}^T(\mathbf{V}^T)^{-1}\mathbf{U}\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$   (18)

If $\mathbf{M} \in \mathcal{Q}$, then $L(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}) = \boldsymbol{\eta}^T\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$. However, if $\mathbf{M} \notin \mathcal{Q}$, then we can decompose $\mathbf{M} = \mathbf{M}_{\mathcal{Q}} + \mathbf{N}$, where $\mathbf{M}_{\mathcal{Q}}$ is the orthogonal projection of $\mathbf{M}$ onto $\mathcal{Q}$. Then

$L(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}) = \boldsymbol{\eta}^T\mathbf{N}^T(\mathbf{V}^T)^{-1}\mathbf{U}\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}) + \boldsymbol{\eta}^T\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}})$   (19)
Example. The effect of a bad choice of the ambiguity set can be illustrated using the loss in (9) in two cases: $\tilde{\ell}_j(\hat{\boldsymbol{\eta}}) = \|\mathbf{e}_j^c - \hat{\boldsymbol{\eta}}\|^2$ (the square error) and $\tilde{\ell}_j(\hat{\boldsymbol{\eta}}) = -\ln(\hat{\eta}_j)$ (the cross entropy). As we have discussed before, loss (9) is proper for any scenario where the label $\mathbf{z}$ contains the true class and possibly another class taken at random. Let us assume that the true mixing matrix is

$\mathbf{M}^T = \begin{pmatrix} 0.5 & 0 & 0 & 0.4 & 0.1 & 0 \\ 0 & 0.5 & 0 & 0.3 & 0 & 0.2 \\ 0 & 0 & 0.6 & 0 & 0.2 & 0.2 \end{pmatrix}$   (20)
(where each column of $\mathbf{M}^T$ corresponds to a label vector $(z_0, z_1, z_2)$ following the ordering $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(1,1,0)$, $(1,0,1)$, $(0,1,1)$). Fig. 1 shows the expected loss in (19) for the square error (left) and the cross entropy (right), as a function of $\hat{\boldsymbol{\eta}}$ over the probability simplex $\mathcal{P}_3$, for $\boldsymbol{\eta} = (0.45, 0.15, 0.4)^T$. Since $\mathbf{M} \notin \mathcal{Q}$, the estimated posterior minimizing the expected loss, $\hat{\boldsymbol{\eta}}^*$ (which is unique because both losses are strictly proper), does not coincide with the true posterior.
Figure 1: Square loss (left) and cross entropy (right) over the probability simplex, as a function of $\hat{\boldsymbol{\eta}}$, for $\boldsymbol{\eta} = (0.45, 0.15, 0.4)^T$.
It is important to note that the minimum $\hat{\boldsymbol{\eta}}^*$ does not depend on the choice of the cost and, thus, the estimation error is invariant to the choice of the strictly proper loss (though this may not be true when $\boldsymbol{\eta}$ is estimated from an empirical distribution). This is because, using (19) and noting that the expected proper loss is

$\tilde{L}(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}) = E_{\mathbf{y}}\{\tilde{\ell}(\mathbf{y}, \hat{\boldsymbol{\eta}})\} = \boldsymbol{\eta}^T\tilde{\mathbf{l}}(\hat{\boldsymbol{\eta}}),$   (21)

we have

$L(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}) = \tilde{L}(\mathbf{U}^T\mathbf{V}^{-1}\mathbf{M}\boldsymbol{\eta}, \hat{\boldsymbol{\eta}}).$   (22)

Since (22) is minimized at $\hat{\boldsymbol{\eta}} = \mathbf{U}^T\mathbf{V}^{-1}\mathbf{M}\boldsymbol{\eta}$, the estimation error is

$\|\boldsymbol{\eta} - \hat{\boldsymbol{\eta}}^*\|^2 = \|(\mathbf{I} - \mathbf{U}^T\mathbf{V}^{-1}\mathbf{M})\boldsymbol{\eta}\|^2,$   (23)

which is independent of the particular choice of the equivalent loss.
If $\tilde{\ell}$ is proper but not strictly proper, the minimum may not be unique. For instance, for the 0-1 loss, any $\hat{\boldsymbol{\eta}}$ providing the same decisions as $\boldsymbol{\eta}$ is a minimum of $\tilde{L}(\boldsymbol{\eta}, \hat{\boldsymbol{\eta}})$. Therefore, those values of $\boldsymbol{\eta}$ with $\boldsymbol{\eta}$ and $\mathbf{U}^T\mathbf{V}^{-1}\mathbf{M}\boldsymbol{\eta}$ in the same decision region are not influenced by a bad choice of the ambiguity set. Unfortunately, since the set of decision boundary points is not linear (but piecewise linear), one can always find points $\boldsymbol{\eta}$ that are affected by this choice. Therefore, a wrong choice of the ambiguity set always changes the decision boundary. Summarizing, the ambiguity set for probability estimation is not larger than that for classification.
6 Conclusions
In this paper we have generalized proper losses to deal with scenarios with partial labels. Proper losses based on partial labels can be designed to cope with different mixing matrices. We have also generalized Savage's representation of proper losses to obtain an explicit expression for proper losses as a function of a concave generator.
Appendix: Proof of Theorem 4.3

Let us assume that ℓ(z, η̂) is (strictly) Q-proper for some matrix set Q with equivalent loss ℓ̃(y, η̂). Let Q_j be the set of the j-th rows of all matrices in Q, and take A_j = span(Q_j) ∩ P_d. Then any vector m ∈ A_j is an affine combination of vectors in Q_j and, thus, m^T l(η̂) = ℓ̃_j(η̂). Therefore, if span(Q_i) has dimension n_i, we can take a basis V_i ⊂ Q_i of n_i linearly independent vectors such that A_i = span(Q_i) ∩ P_d.

By construction l(η̂) = (V^T)^{-1} U ℓ̃(η̂). Combining this equation with the Savage's representation in (4), we get

    ℓ(z, η̂) = z^T l(η̂) = z^T (V^T)^{-1} U (h(η̂) 1_c + (I − η̂ 1_c^T)^T g(η̂))
             = h(η̂) z^T 1_d + z^T (V^T)^{-1} (U − 1_d η̂^T) g(η̂)
             = h(η̂) + g(η̂)^T (U^T V^{-1} z − η̂)                                  (24)

which is the desired result.
Now, let us assume that (17) is true. Then

    l(η̂) = h(η̂) 1_d + ((V^T)^{-1} U − 1_d η̂^T) g(η̂).                            (25)

For any matrix M ∈ M such that M^T e_{c_j} ∈ A_j, we have

    M^T l(η̂) = h(η̂) M^T 1_d + (M^T (V^T)^{-1} U − M^T 1_d η̂^T) g(η̂)             (26)

If M ∈ Q, then we can express each column, j, of M as a convex combination of the columns in V with u_{ji} = 1; thus M = VΛ for some matrix Λ with the coefficients of the convex combination at the corresponding positions of unit values in U. Then M^T (V^T)^{-1} U = Λ^T U = I. Using this in (26), we get

    M^T l(η̂) = h(η̂) 1_c + (I_c − 1_c η̂^T) g(η̂) = ℓ̃(η̂).                         (27)

Applying Theorem 3.2, the proof is complete.
Acknowledgments

This work was partially funded by project TEC2011-22480 from the Spanish Ministry of Science and Innovation, project PRI-PIBIN-2011-1266, and by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. Thanks to Raúl Santos-Rodríguez and Darío García-García for their constructive comments about this manuscript.
References
[1] T. Cour, B. Sapp, and B. Taskar, "Learning from partial labels," Journal of Machine Learning Research, vol. 12, pp. 1225-1261, 2011.
[2] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy, "Learning from crowds," Journal of Machine Learning Research, vol. 99, pp. 1297-1322, August 2010.
[3] V. S. Sheng, F. Provost, and P. G. Ipeirotis, "Get another label? improving data quality and data mining using multiple, noisy labelers," in Procs. of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, ser. KDD '08. New York, NY, USA: ACM, 2008, pp. 614-622.
[4] E. Côme, L. Oukhellou, T. Denœux, and P. Aknin, "Mixture model estimation with soft labels," in Soft Methods for Handling Variability and Imprecision, ser. Advances in Soft Computing, D. Dubois, M. Lubiano, H. Prade, M. Gil, P. Grzegorzewski, and O. Hryniewicz, Eds. Springer Berlin / Heidelberg, 2008, vol. 48, pp. 165-174.
[5] P. Liang, M. Jordan, and D. Klein, "Learning from measurements in exponential families," in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 641-648.
[6] R. Jin and Z. Ghahramani, "Learning with multiple labels," Advances in Neural Information Processing Systems, vol. 15, pp. 897-904, 2002.
[7] C. Ambroise, T. Denoeux, G. Govaert, and P. Smets, "Learning from an imprecise teacher: probabilistic and evidential approaches," in Applied Stochastic Models and Data Analysis, 2001, vol. 1, pp. 100-105.
[8] Y. Grandvalet and Y. Bengio, "Semi-supervised learning by entropy minimization," 2005.
[9] M. Reid and B. Williamson, "Information, divergence and risk for binary experiments," Journal of Machine Learning Research, vol. 12, pp. 731-817, 2011.
[10] H. Masnadi-Shirazi and N. Vasconcelos, "Risk minimization, probability elicitation, and cost-sensitive SVMs," in Proceedings of the International Conference on Machine Learning, 2010, pp. 204-213.
[11] L. Savage, "Elicitation of personal probabilities and expectations," Journal of the American Statistical Association, pp. 783-801, 1971.
[12] T. Gneiting and A. Raftery, "Strictly proper scoring rules, prediction, and estimation," Journal of the American Statistical Association, vol. 102, no. 477, pp. 359-378, 2007.
Stationarity of Synaptic Coupling Strength Between
Neurons with Nonstationary Discharge Properties
Mark R. Sydorenko and Eric D. Young
Dept. of Biomedical Engineering & Center for Hearing Sciences
The Johns Hopkins School of Medicine
720 Rutland Avenue
Baltimore, Maryland 21205
Abstract
Based on a general non-stationary point process model, we computed estimates of
the synaptic coupling strength (efficacy) as a function of time after stimulus onset
between an inhibitory interneuron and its target postsynaptic cell in the feline dorsal
cochlear nucleus. The data consist of spike trains from pairs of neurons responding
to brief tone bursts recorded in vivo. Our results suggest that the synaptic efficacy is
non-stationary. Further, synaptic efficacy is shown to be inversely and
approximately linearly related to average presynaptic spike rate. A second-order
analysis suggests that the latter result is not due to non-linear interactions. Synaptic
efficacy is less strongly correlated with postsynaptic rate and the correlation is not
consistent across neural pairs.
1 INTRODUCTION
The aim of this study was to investigate the dynamic properties of the inhibitory effect of type II neurons on type IV neurons in the cat dorsal cochlear nucleus (DCN). Type IV cells are the principal (output) cells of the DCN and type II cells are inhibitory interneurons (Voigt & Young 1990). In particular, we examined the stationarity of the efficacy of inhibition of neural activity in a type IV neuron by individual action potentials (APs) in a type II neuron. Synaptic efficacy, or effectiveness, is defined as the average number of postsynaptic (type IV) APs eliminated per presynaptic (type II) AP.
This study was motivated by the observation that post-stimulus time histograms of type IV
neurons often show gradual recovery ("buildup") from inhibition (Rhode et al. 1983; Young
& Brownell 1976) which could arise through a weakening of inhibitory input over time.
Correlograms of pairs of DCN units using long duration stimuli are reported to display
inhibitory features (Voigt & Young 1980; Voigt & Young 1990) whereas correlograms using
short stimuli are reported to show excitatory features (Gochin et al. 1989). This difference
might result from nonstationarity of synaptic coupling. Finally, pharmacological results
(Caspary et al. 1984) and current source-density analysis of DCN responses to electrical
stimulation (Manis & Brownell 1983) suggest that this synapse may fatigue with activity.
Synaptic efficacy was investigated by analyzing the statistical relationship of spike trains
recorded simultaneously from pairs of neurons in vivo. We adopt a first order (linear) nonstationary point process model that does not impose a priori restrictions on the presynaptic
process's distribution. Using this model, estimators of the postsynaptic impulse response to a
presynaptic spike were derived using martingale theory and a method of moments approach.
To study stationarity of synaptic efficacy, independent estimates of the impulse response
were derived over a series of brief time windows spanning the stimulus duration. Average
pre- and postsynaptic rate were computed for each window, as well. In this report, we
summarize the results of analyzing the dependence of synaptic efficacy (derived from the
impulse response estimates) on post-stimulus onset time, presynaptic average rate,
postsynaptic average rate, and presynaptic interspike interval.
2 METHODS
2.1 DATA COLLECTION
Data were collected from unanesthetized cats that had been decerebrated at the level of the
superior colliculus. We used a posterior approach to expose the DCN that did not require
aspiration of brain tissue nor disruption of the local blood supply. Recordings were made
using two platinum-iridium electrodes.
The electrodes were advanced independently until a type II unit was isolated on one electrode
and a type IV unit was isolated on the other electrode. Only pairs of units with best
frequencies (BFs) within 20% were studied. The data consist of responses of the two units to
500-4000 repetitions of a 100-1500 millisecond tone. The frequency of the tone was at the
type II BF and the tone level was high enough to elicit activity in the type II unit for the
duration of the presentation, but low enough not to inhibit the activity of the type IV unit
(usually 5-10 dB above the type II threshold). Driven discharge rates of the two units ranged
from 15 to 350 spikes per second. A silent recovery period at least four times longer than the
tone burst duration followed each stimulus presentation.
2.3 DATA ANALYSIS
The stimulus duration is divided into 3 to 9 overlapping or non-overlapping time windows ('a'
thru 'k' in figure 1). A separate impulse response estimate, presynaptic rate, and postsynaptic
rate computation is made using only those type II and type IV spikes that fall within each
window. The effectiveness of synaptic coupling during each window is calculated from the
area bounded by the impulse response feature and the abscissa (shaded area in figure 1). The
effectiveness measure has units of number of spikes.
The synaptic impulse response is estimated using a non-stationary method of moments algorithm. The estimation algorithm is based on the model depicted in figure 2. The thick gray line encircles elements belonging to the postsynaptic (type IV) cell. The neural network surrounding the postsynaptic cell is modelled as a J-dimensional multivariate counting process. Each element of the J-dimensional counting process is an input to the postsynaptic cell. One of these input elements is the presynaptic (type II) cell under observation. The input processes modulate the postsynaptic cell's instantaneous rate function, λ_j(t). Roughly speaking, λ_j(t) is the conditional firing probability of neuron j given the history of the input events up to time t.
[Figure 1: Analysis of Non-stationary Synaptic Coupling. Type II and type IV PST histograms (rate in spikes/sec vs. post-stimulus time) with overlapping analysis windows 'a', 'b', ...; for each window an impulse response estimate K̂1_2(t) is computed, whose shaded area gives the effectiveness measure.]
[Figure 2: Model schematic. A set of input counting processes N1, ..., Nj, Nj+1, ..., NJ (one of which is the observed presynaptic cell) drives the postsynaptic cell's rate function.]
The transformation K describes how the input processes influence λ_j(t). We model this transformation as a linear sum of an intrinsic rate component and the contribution of all the presynaptic processes:
    λ_j(t) = K0_j(t) + Σ_{k=1}^{J} ∫ K1_{jk}(t, u) dN_k(u)                        (1)
where K0 describes the intrinsic rate and the K1 describe the impulse response of the
postsynaptic cell in response to an input
event. The output of the postsynaptic neuron
is modeled as the integral of this rate
function plus a mean-zero noise process, the
innovation martingale (Bremaud 1981):
    N_j(t) = ∫_{T0}^{t} λ_j(u) du + M_j(t).                                       (2)
An algorithm for estimating the first order kernel, K1, was derived without assuming anything about the distribution of the presynaptic process and without assuming stationary first or second order product densities (i.e., without assuming stationary rate or stationary
auto-correlation). One or more such assumptions have been made in previous method of
moments based algorithms for estimating neural interactions (Chornoboy et al. 1988 describe
a maximum likelihood approach that does not require these assumptions).
Since K1 is assumed to be stationary during the windowed interval (figure 1) while the process product densities are non-stationary (see PSTHs in figure 1), K1 is an average of separate estimates of K1 computed at each point in time during the windowed interval:
".KliAt
. . (Il) = -1
".... (Il Il)
KliAti, tj
nil
L
~-tf=t'\ tfeI
(3)
where K̂1 inside the summation is an estimate of the impulse response of neuron i at time t_i^Δ to a spike from neuron j at time t_j^Δ (times are relative to stimulus onset); the digitization bin width Δ (= 0.3 msec in our case) determines the location of the discrete time points as well as the number of separate kernel estimates, n_Δ, within the windowed interval, I. The time dependent kernel, K̂1_{ij}(·,·), is computed by deconvolving the effects of the presynaptic process distribution, described by f̂_{jj} below, from the estimate of the cross-cumulant density, q̂_{ij}:
    K̂1_{ij}(t_i^Δ, t_j^Δ) = Σ_{v^Δ} q̂_{ij}(v^Δ, t_j^Δ) f̂_{jj}^{-1}(t_i^Δ − v^Δ, t_j^Δ) Δ      (4)

where:

    q̂_{ij}(u^Δ, v^Δ) = p̂_{ij}(u^Δ, v^Δ) − p̂_i(u^Δ) p̂_j(v^Δ),                                  (5)
    f̂_{jj}(u^Δ, v^Δ) = q̂_{jj}(u^Δ, v^Δ) + δ(u^Δ − v^Δ) p̂_j(v^Δ),                               (6)
    f̂_{jj}^{-1}(u^Δ, v^Δ) = F^{-1}[ 1 / F[ f̂_{jj}(u^Δ, v^Δ) ] ],                                (7)
    p̂_j(t_j^Δ) = #{spike in neuron j during [t_j^Δ, t_j^Δ + Δ)} / (#{trials} Δ),                 (8)
    p̂_{ij}(t_i^Δ, t_j^Δ) = #{spike in i during [t_i^Δ − Δ/2, t_i^Δ + Δ/2) and spike in j during
                           [t_j^Δ − Δ/2, t_j^Δ + Δ/2)} / (#{trials} Δ²)                           (9)
(9)
where Be-> is the dirac delta function; ~and .r 1 are the DFf and inverse DFf, respectively;
and #{.} is the number of members in the set described inside the braces. If the presynaptic
process is Poisson distributed, expression (4) simplifles to:
K ..(t~
11J
I,
t~) = qj{t~, tf)
J
Il)
pj tj
,., (
(to)
Under mild (physiologically justiflable) conditions, the estimator given by (3) converges in
quadratic mean and yields an asymptotically unbiased estimate of the true impulse response
function (in the general, (4), and Poisson presynaptic process, (10), cases).
3
RESULTS
Figure 3 displays estimates of synaptic impulse response functions computed using tradi tional
cross-correlation analysis and compares them to estimates computed using the method of
moments algorithms described above. (We use the deflnition of cross-correlation given by
Voigt & Young 1990; equivalent to the function given by dividing expression (10) by
Stationarity of Synaptic Coupling Strength Between Neurons
expression (9) after averaging across all tj-) Figure 3A compares estimates computed from
the responses of a real type II and type IV unit during the flrst 15 milliseconds of stimulation
(where nonstationarity is greatest). Note that the cross-correlation estimate is distorted due to
the nonstationarity of the underlying processes. This distortion leads to an overestimation of
the effectiveness measure (shaded area) as compared to that yielded by the method of
moments algorithm below. Figure 3B compares estimates computed using a simulated data
set where the presynaptic neuron had regular (non-Poisson) discharge properties. Note the
characteristic ringing pattern in the cross-correlation estimate as well as the larger feature
amplitude in the non-Poisson method of moments estimate.
[Figure 3: (A) Cross-correlogram (top) and method-of-moments estimate (bottom) for a real type II / type IV pair; (B) the same comparison for simulated data with a regular (non-Poisson) presynaptic process. Horizontal axes in milliseconds.]
Results from one analysis of eight different type II / type IV pairs are shown in figure 4. For each pair, the effectiveness and the presynaptic (type II) average rate during each window are plotted and fit with a least squares line. Similar analyses were performed for effectiveness versus postsynaptic rate and for effectiveness versus post-stimulus-onset time. The number of pairs showing a positive or negative correlation of effectiveness with each parameter are tallied in table 1. The last column shows the average correlation coefficient of the lines fit to the eight sets of data. Note that: synaptic efficacy tends to increase with time; there is no consistent relationship between synaptic efficacy and postsynaptic rate; there is a strong inverse and linear correlation between synaptic efficacy and presynaptic rate in 7 out of 8 pairs.

If the data appearing in figure 4 had been plotted as effectiveness versus average interspike interval (reciprocal of average rate) of the presynaptic neuron, the result would suggest that synaptic efficacy increases with average inter-spike interval. This result would be consistent with the interpretation that the effectiveness of an input event is suppressed by the occurrence of an input event immediately before it. The linear model initially used to analyze these data neglects the possibility of such second order effects.
Table 1: Summary of Results

                                               NUMBER OF       NUMBER OF       AVERAGE LINEAR
                                               PAIRS WITH      PAIRS WITH      REGRESSION
                                               POSITIVE SLOPE  NEGATIVE SLOPE  CORRELATION COEFFICIENT
  Effectiveness vs. post-stimulus onset time        7/8             1/8              0.83
  Effectiveness vs. average postsynaptic rate       5/8             3/8              0.72
  Effectiveness vs. average presynaptic rate        1/8             7/8              0.89
[Figure 4: Effectiveness vs. average type II rate (spikes/sec) for the eight pairs, each fit with a least-squares line.]

[Figure 5: Effectiveness vs. type II inter-spike interval (millisec) from the second-order analysis.]
We used a modification of the analysis described in the methods to investigate second order effects. Rather than window small segments of the stimulus duration as in figure 1, the entire duration was used in this analysis. Impulse response estimates were constructed conditional on presynaptic interspike interval. For example, the first estimate was constructed using presynaptic events occurring after a 1 ms interspike interval, the second estimate was based on events after a 2 ms interval, and so on.
The results of the second order analysis are shown in figure 5. Note that there is no systematic relationship between conditioning interspike interval and effectiveness. In fact, lines fitted to these points tend to be horizontal, suggesting that there are no significant second order effects under these experimental conditions.
Our results suggest that synaptic efficacy is inversely and roughly linearly related to average presynaptic rate. We have attempted to understand the mechanism of the observed decrease in efficacy in terms of a model that assumes stationary synaptic coupling mechanisms. The model was designed to address the following hypothesis: Could the decrease in synaptic efficacy at high input rates be due to an increase in the likelihood of driving the stochastic intensity below zero, and, hence, decreasing the apparent efficacy of the input due to clipping?
The answer was pursued by attempting to reproduce the data collected for the 3 best type II /
type IV pairs in our data set. Real data recorded from the presynaptic unit are used as input
to these models. The parameters of the models were adjusted so that the first moment of the
output process had the same quantitative trajectory as that seen in the real postsynaptic unit.
The simulated data were analyzed by the same algorithms used to analyze the real data. Our
goal was to compare the simulated results with the real results. If the simulated data showed
the same inverse relationship between presynaptic rate and synaptic efficacy as the real data,
it would suggest that the phenomenon is due to non-linear clipping by the postsynaptic unit.
The simulation algorithm was based on the model described in figure 2 and equation (1) but
with the following modifications:
• The experimentally determined type IV PST profile was substituted for K0 (this term represents the average combined influence of all extrinsic inputs to the type IV cell plus the intrinsic spontaneous rate).
• An impulse response function estimated from the data was substituted for K1 (this kernel is stationary in the simulation model).
• The convolution of the experimentally determined type II spikes with the first-order kernel was used to perturb the output cell's stochastic intensity:
    λ1(t) = MAX[ 0, P1(t) + Σ_{u_i : dN2(u_i) = 1} K1_{12}(t − u_i) ]

  where: dN2(t) = real type II cell spike record, and
         P1(t) = PST profile of real type IV cell.

• The output process was simulated as a non-homogeneous Poisson process with λ1(t) as its parameter. This process was modified by a 0.5 msec absolute dead time.
• The simulated data were analyzed in the same manner as the real data.
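A minimal sketch of this simulation loop is given below, assuming discretized time and treating the PST profile and first-order kernel as given arrays; all names are ours, not the authors' implementation.

```python
import numpy as np

def simulate_type_iv(p1, k1_12, type_ii_spikes, dt=1e-4, dead_time=0.5e-3, rng=None):
    """Rectified, non-homogeneous Poisson output with absolute dead time.

    p1: baseline intensity P1(t) on a grid with step dt (spikes/sec).
    k1_12: first-order kernel sampled on the same grid (lag 0 first).
    type_ii_spikes: 0/1 array of presynaptic spikes on the grid.
    """
    rng = rng or np.random.default_rng(0)
    drive = np.convolve(type_ii_spikes, k1_12)[: len(p1)]
    lam = np.maximum(0.0, p1 + drive)           # rectified intensity, as in the text
    out = np.zeros(len(p1), dtype=int)
    refractory_until = -1.0
    for n, rate in enumerate(lam):
        t = n * dt
        if t >= refractory_until and rng.random() < rate * dt:
            out[n] = 1
            refractory_until = t + dead_time    # 0.5 msec absolute dead time
    return out
```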
The dependence of synaptic efficacy on presynaptic rate in the simulated data was compared
to the corresponding real data. In 1 out of the 3 cases, we observed an inverse relationship
between input rate and efficacy despite the use of a stationary first order kernel in the
simulation. The similarity between the real and simulated results for this one case suggests
that the mechanism may be purely statistical rather than physiological (e.g., not presynaptic
depletion or postsynaptic desensitization). The other 2 simulations did not yield a strong
dependence of effectiveness on input rate and, hence, failed to mimic the experimental
results. In these two cases, the results suggest that the mechanism is not due solely to
clipping, but involves some additional, possibly physiological, mechanisms.
4 CONCLUSIONS
(1) The amount of inhibition imparted to type IV units by individual presynaptic type II unit action potentials (expressed as the expected number of type IV spikes eliminated per type II spike) is inversely and roughly linearly related to the average rate of the type II unit.
(2) There is no evidence for second order synaptic effects at the type II spike rates tested. In
other words, the inhibitory effect of two successive type II spikes is simply the linear
sum of the inhibition imparted by each individual spike.
(3) There is no consistent relationship between type II I type IV synaptic efficacy and
postsynaptic (type IV) rate.
(4) Simulations, in some cases, suggest that the inverse relationship between presynaptic rate
and effectiveness may be reproduced using a simple statistical model of neural
interaction.
(5) We found no evidence that would explain the discrepancy between Voigt and Young's
results and Gochin's results in the DCN. Gochin observed correlogram features
consistent with monosynaptic excitatory connections within the DCN when short tone
bursts were used as stimuli. We did not observe excitatory features between any unit
pairs using short tone bursts.
Acknowledgements
Dr. Alan Karr assisted in developing Eqns. 1-10. E. Nelken provided helpful comments.
Research supported by NIH grant DC00115.
References
Bremaud, P. (1981). Point Processes and Queues: Martingale Dynamics. New York, Springer-Verlag.
Caspary, D.M., Rybak, L.P., et al. (1984). "Baclofen reduces tone-evoked activity of cochlear nucleus neurons." Hear Res. 13: 113-22.
Chornoboy, E.S., Schramm, L.P., et al. (1988). "Maximum likelihood identification of neural point process systems." Biol Cybern. 59: 265-75.
Gochin, P.M., Kaltenbach, J.A., et al. (1989). "Coordinated activity of neuron pairs in anesthetized rat dorsal cochlear nucleus." Brain Res. 497: 1-11.
Manis, P.B. & Brownell, W.E. (1983). "Synaptic organization of eighth nerve afferents to cat dorsal cochlear nucleus." J Neurophysiol. 50: 1156-81.
Rhode, W.S., Smith, P.H., et al. (1983). "Physiological response properties of cells labeled intracellularly with horseradish peroxidase in cat dorsal cochlear nucleus." J Comp Neurol. 213: 426-47.
Voigt, H.F. & Young, E.D. (1980). "Evidence of inhibitory interactions between neurons in dorsal cochlear nucleus." J Neurophys. 44: 76-96.
Voigt, H.F. & Young, E.D. (1990). "Cross-correlation analysis of inhibitory interactions in the Dorsal Cochlear Nucleus." J Neurophys. 54: 1590-1610.
Young, E.D. & Brownell, W.E. (1976). "Responses to tones and noise of single cells in dorsal cochlear nucleus of unanesthetized cats." J Neurophys. 39: 282-300.
Coupling Nonparametric Mixtures via
Latent Dirichlet Processes
John Fisher
MIT CSAIL
[email protected]
Dahua Lin
MIT CSAIL
[email protected]
Abstract
Mixture distributions are often used to model complex data. In this paper, we develop a new method that jointly estimates mixture models over multiple data sets
by exploiting the statistical dependencies between them. Specifically, we introduce a set of latent Dirichlet processes as sources of component models (atoms),
and for each data set, we construct a nonparametric mixture model by combining
sub-sampled versions of the latent DPs. Each mixture model may acquire atoms
from different latent DPs, while each atom may be shared by multiple mixtures.
This multi-to-multi association distinguishes the proposed method from previous
ones that require the model structure to be a tree or a chain, allowing more flexible
designs. We also derive a sampling algorithm that jointly infers the model parameters and present experiments on both document analysis and image modeling.
1 Introduction
Mixture distributions have been widely used for statistical modeling of complex data. Classical
formulations specify the number of components a priori, leading to difficulties in situations where
the number is either unknown or hard to estimate in advance. Bayesian nonparametric models,
notably those based on Dirichlet processes (DPs) [14, 16], have emerged as an important method to
address this issue. The basic idea of DP mixture models is to use a sample of a DP, which is itself a
distribution over a countably infinite set, as the prior for component parameters.
One significant assumption underlying a DP mixture model is that observations are infinitely exchangeable. This assumption does not hold in the cases with multiple groups of data, where samples in different groups are generally not exchangeable. Among various approaches to this issue,
hierarchical Dirichlet processes (HDPs) [20], which organize DPs into a tree with parents acting as
the base measure for children, is one of the most popular. HDPs have been extended in a variety of
ways. Kim and Smyth [9] incorporated group-specific random perturbations, allowing component
parameters to vary across different groups. Ren et al. [17] proposed dynamic HDPs, which combine
the DP at a previous time step with a new one at the current time step.
Other methods have also been developed. MacEachern [13] proposed a DDP model that allows parameters to vary following a stochastic process. Griffin and Steel [6] proposed the order-based DDP,
where atoms can be weighted differently via the permutation of the Beta variables for stick-breaking.
Chung and Dunson [3] carried this approach further, using local predictors to select subsets of atoms.
Recently, the connections between Poisson, Gamma, and Dirichlet processes have been exploited.
Rao and Teh [15] proposed the spatially normalized Gamma process, where a set of dependent DPs
can be derived by normalizing restricted projections of an auxiliary Gamma process over overlapping sub-regions. Lin et al [12] proposed a new construction of dependent DPs, which supports
dynamic evolution of a DP through operations on the underlying Poisson processes.
Our primary goal here is to describe multiple groups of data through coupled mixture models. Sharing statistical properties across different groups allows for more reliable model estimation, especially
when the observed samples in each group are limited or noisy. From a probabilistic standpoint, this
framework can be obtained by devising a joint stochastic process that generates DPs with mutual
dependency. Particularly, it is desirable to have a design that satisfies three properties: (1) Sharing
of mixture components (atoms) between groups. (2) The marginal distribution of atoms for each
group remains a DP. (3) Flexible configuration of inter-group dependencies. For example, the prior
weight of a common atom can vary across groups.
Achieving these goals simultaneously is nontrivial. Whereas several existing constructions [3, 6, 12,
15] meet the first two properties, they impose restrictions on the model structure (e.g. the groups need
to be arranged into a tree or a chain). We present a new framework to address this issue. Specifically,
we express mixture models for each group as a stochastic combination over a set of latent DPs. The
multi-to-multi association between data groups and latent DPs provides much greater flexibility to
model configurations, as opposed to prior work (we provide a detailed comparison in section 3.2).
We also derive an MCMC sampling method to infer model parameters from grouped observations.
2 Background
We provide a review of Dirichlet processes in order to lay the theoretical foundations of the method
described herein. We also discuss the related construction of dependent DPs proposed by [12], which
exploits the connection between Poisson and Dirichlet processes to support various operations.
A Dirichlet process, denoted by DP(αB), is a distribution over probability measures, which is characterized by a concentration parameter α and a base measure B over an underlying space Ω. Each sample path D ~ DP(αB) is itself a distribution over Ω. Sethuraman [18] showed that D is almost surely discrete (with countably infinite support), and can be expressed as
    D = Σ_{k=1}^{∞} π_k δ_{φ_k},  with  π_k = v_k Π_{l=1}^{k−1} (1 − v_l),  v_k ~ Beta(1, α).     (1)
This is known as the stick breaking representation of a DP. This discrete nature makes a DP particularly suited to serve as a prior for component parameters in mixture models.
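For concreteness, a truncated stick-breaking sampler corresponding to Eq.(1) could look as follows; the truncation level and all names are our own choices, not part of the paper.

```python
import numpy as np

def sample_dp(alpha, base_sampler, trunc=200, rng=None):
    """Draw a truncated stick-breaking approximation of D ~ DP(alpha * B).

    base_sampler(rng) draws one atom location from the base measure B.
    Returns (weights, atoms); weights sum to ~1 for large trunc.
    """
    rng = rng or np.random.default_rng(0)
    v = rng.beta(1.0, alpha, size=trunc)                        # v_k ~ Beta(1, alpha)
    pi = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))    # stick lengths pi_k
    atoms = np.array([base_sampler(rng) for _ in range(trunc)])
    return pi, atoms

weights, atoms = sample_dp(2.0, lambda rng: rng.normal(0, 1))
```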
Generally, in a DP mixture model, each data sample x_i is considered to be generated from a component model with parameter θ_i, denoted by G(θ_i). The component parameters are samples from D, which is itself a realization of a DP. The formulation is given below

    D ~ DP(αB),  θ_i ~ D,  x_i ~ G(θ_i),  for i = 1, . . . , n.                                    (2)
As D is an infinite series, it is infeasible to instantiate D. As such, the Chinese restaurant process, given by Eq. 3, is often used to directly sample the component parameters, with D integrated out.

    p(θ_i | θ_{/i}) = Σ_{k=1}^{K_{/i}} [ m_{/i}(k) / (α + n − 1) ] δ_{φ_k} + [ α / (α + n − 1) ] B.    (3)
Here, θ_{/i} denotes all component parameters except θ_i, K_{/i} denotes the number of distinct atoms among them, and m_{/i}(k) denotes the number of occurrences of the atom φ_k. When x_i is given, the likelihood to generate x_i conditioned on θ_i can be incorporated, resulting in a modulated sampling scheme described below. Let f(x_i; θ) denote the likelihood to generate x_i w.r.t. G(θ), and f(x_i; B) denote the marginal likelihood w.r.t. the parameter prior B. Then, with a probability proportional to m_{/i}(k) f(x_i; φ_k), we set θ_i = φ_k, and with a probability proportional to α f(x_i; B), we draw a new atom from B(·|x_i), which is the posterior parameter distribution given x_i.
Recently, Lin et al. [12] proposed a new construction of DPs based on the connections between
Poisson, Gamma, and Dirichlet processes. The construction provides three operations to derive new
DPs depending on existing ones, which we will use to develop the coupled DP model. Here, we
provide a brief review of these operations.
(1) Superposition. Let D_k ~ DP(α_k B_k) for k = 1, . . . , K be independent DPs and (c_1, . . . , c_K) ~ Dir(α_1, . . . , α_K). Then the stochastic convex combination of these DPs as below remains a DP:

    c_1 D_1 + · · · + c_K D_K ~ DP(α_1 B_1 + · · · + α_K B_K).                                      (4)
[Figures 1 and 2: graphical model diagrams (latent DPs H_s with concentrations α_s, base measure B, group mixtures D_t, inheritance probabilities q_ts and indicators r_tk, labels z_ti, samples x_ti); captions below.]
Figure 1: This shows the graphical model
of the coupled DP formulation on a case
with four groups and two latent DPs. Each
mixture model Dt inherits atoms from Hs
with a probability qts , resulting in Eq.(7).
Figure 2: The reformulated model for Gibbs sampling contains latent DPs, groups of data, and atoms. Each sample x_ti is attached a label z_ti that assigns it an atom φ_{z_ti}. To generate z_ti, we draw a latent DP (from Mult(c_t)) and choose a label therefrom. In sampling, H_s is integrated out, resulting in mutual dependency between the z_ti, as in the Chinese restaurant process.
(2) Sub-sampling. Let D = Σ_{k=1}^{∞} π_k δ_{φ_k} ~ DP(αB). One obtains a new DP by sub-sampling D via independent Bernoulli trials. Given a sub-sampling probability q, one draws a binary value r_k with Pr(r_k = 1) = q for each atom φ_k to decide whether to retain it, resulting in a DP as

    S_q(D) := Σ_{k: r_k=1} π'_k δ_{φ_k} ~ DP(αqB).                                                  (5)

Here, S_q denotes the sub-sampling operation (with probability q), and π'_k is the re-normalized coefficient for φ_k, which is given by π'_k = π_k / Σ_k r_k π_k.

(3) Transition. Given D = Σ_{k=1}^{∞} π_k δ_{φ_k} ~ DP(αB), perturbing the locations of each atom following a probabilistic transition kernel T also yields a new DP, given by T(D) := Σ_{k=1}^{∞} π_k δ_{T(φ_k)}.
While these operations were originally developed to evolve a DP along a Markov chain, we show in
the next section that they can also be utilized to construct models with different structures.
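Applied to a truncated stick-breaking representation, the sub-sampling and superposition operations amount to a few lines each. The sketch below reuses sample_dp from the previous snippet; function names are ours, and it assumes at least one atom survives each sub-sample.

```python
import numpy as np

def subsample_dp(pi, atoms, q, rng=None):
    """S_q(D): keep each atom with probability q and renormalize (Eq. (5))."""
    rng = rng or np.random.default_rng(1)
    keep = rng.random(len(pi)) < q
    pi_kept = pi[keep]
    return pi_kept / pi_kept.sum(), atoms[keep]

def superpose_dps(dps, alphas, rng=None):
    """c_1 D_1 + ... + c_K D_K with (c_1,...,c_K) ~ Dir(alpha_1,...,alpha_K) (Eq. (4)).

    dps: list of (weights, atoms) pairs; alphas: array of concentrations.
    """
    rng = rng or np.random.default_rng(2)
    c = rng.dirichlet(alphas)
    pi = np.concatenate([ck * pik for ck, (pik, _) in zip(c, dps)])
    atoms = np.concatenate([a for _, a in dps])
    return pi, atoms
```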
3 Coupled Nonparametric Mixture Models
Our primary goal is to develop a joint formulation over group-wise DP mixture models where components are shared across different groups and the weights and parameters of shared components
vary across groups. We propose a new construction illustrated in Figure 1. Suppose there are M
groups of data, each with a mixture model. They are coupled by ML latent DPs. The generative
formulation is then described as follows: First, generate ML latent DPs independently, as
    H_s ~ DP(α_s B),  for s = 1, . . . , M_L.                                                       (6)

Second, generate M dependent DPs, each for a group of data, by combining the sub-sampled versions of the latent DPs through stochastic convex combination. For each t = 1, . . . , M,

    D_t = Σ_{s=1}^{M_L} c_ts S_{q_ts}(H_s),  with  (c_t1, . . . , c_tM_L) ~ Dir(α_1 q_t1, . . . , α_{M_L} q_{tM_L}).    (7)
Intuitively, for each group of data (say the t-th), we choose a subset of atoms from each latent source and bring them together to generate D_t. Here, q_ts is the prior probability that an atom in H_s will be inherited by D_t. Note that this formulation can be further extended into D_t = Σ_s c_ts T_t(S_{q_ts}(H_s)). Here, T_t is a probabilistic transition kernel. Using the transition operation, this extension allows parameters to vary across different groups. Particularly, the atom parameter would be an adapted version from T_t(φ_k, ·) instead of φ_k itself, when the atom φ_k is inherited by D_t.
Third, generate the component parameters and data samples in the standard way, as

    θ_{t,i} | D_t ~ D_t,  and  x_{t,i} | θ_{t,i} ~ G(θ_{t,i}),  for i = 1, . . . , n_t,  t = 1, . . . , M.    (8)

Here, x_{t,i} is the i-th data sample in the t-th group, and θ_{t,i} is the associated atom parameter.
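Combining the pieces, a truncated sketch of the generative process in Eqs.(6)-(8) is given below; the helper names are hypothetical, reusing sample_dp and subsample_dp from the earlier snippets, and it assumes the q-values are large enough that each sub-sample retains at least one atom.

```python
import numpy as np

def generate_coupled_mixtures(q, alphas, base_sampler, rng=None, trunc=200):
    """Generate group mixtures D_1..D_M from M_L latent DPs per Eqs. (6)-(7).

    q: (M, M_L) array of inheritance probabilities q_ts.
    alphas: length-M_L array of latent-DP concentrations.
    """
    rng = rng or np.random.default_rng(0)
    M, M_L = q.shape
    latents = [sample_dp(a, base_sampler, trunc, rng) for a in alphas]    # Eq. (6)
    groups = []
    for t in range(M):
        subs = [subsample_dp(pi, atoms, q[t, s], rng)
                for s, (pi, atoms) in enumerate(latents)]
        c = rng.dirichlet(alphas * q[t])                                   # Eq. (7)
        pi_t = np.concatenate([c[s] * subs[s][0] for s in range(M_L)])
        atoms_t = np.concatenate([subs[s][1] for s in range(M_L)])
        groups.append((pi_t, atoms_t))
    return groups
```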
3.1 Theoretical Analysis
The following theorems (proofs provided in supplementary material) demonstrate that, as a result of
the construction above, the marginal distribution of Dt is a DP:
Theorem 1. The stochastic process D_t given by Eq.(7) has D_t ~ DP(α_t B), with α_t = Σ_{s=1}^{M_L} α_s q_ts.

We also show that they are dependent, with the covariance given by the theorem below.

Theorem 2. Let t_1 ≠ t_2 and U be a measurable subset of Ω; then

    Cov(D_{t1}(U), D_{t2}(U)) = (1 / (α_{t1} α_{t2})) Σ_{s=1}^{M_L} [ (α_s q_{t1,s} q_{t2,s})² / (α_s q_{t1,s} q_{t2,s} + 1) ] B(U)(1 − B(U)).    (9)
It can be seen that the hyper-parameters influence the model characteristics in different ways. The
inheritance probabilities (i.e. the q-values) control how closely the models are coupled. Two models
are strongly coupled, if there exists a subset of latent DPs, from which both inherit atoms with high
probabilities, while their coupling is much weaker if the associated q-values are set differently. The
latent concentration parameters (i.e. the values of α_s) control how frequently new atoms are created. Generally, higher values of α_s lead to more atoms being associated with the data, resulting in finer clusters. Another important factor is M_L, the number of latent DPs. A large number of latent DPs
provides fine-grained control of the model configuration at the cost of increased complexity.
3.2 Comparison with Other Models
We review related approaches and discuss their differences with the one proposed here. Similar
to this work, HDPs [20] model grouped data. Such models must be arranged into a tree, i.e. each
child can only have one parent. Our model allows the mixture model for each group to inherit from
multiple sources, making it applicable to more general contexts.
It is worth emphasizing that enabling inheritance from multiple parents is not just a straightforward extension, as it entails both theoretical and practical challenges: First, to combine atoms from
multiple DPs while guaranteeing that the resultant process remains a DP requires careful design of
the formulation (e.g. the combination coefficients should be from a Dirichlet distribution, and each
parent DP should be properly sub-sampled). Second, the sampling procedure has to determine the
source of each atom, which, again, is nontrivial and needs special algorithmic design (see section 4)
to maintain the detailed balance.
SNΓP [15] defines a gamma process G over an extended space. For each group t, a DP D_t is derived through normalized restriction of G into a measurable subset. The DPs derived on overlapped subsets are dependent. Though motivated differently, this construction can be reduced to a formulation in the form D_t = Σ_{j∈R_t} c_tj H_j, where R_t is the subset of latent DPs used for D_t. Compared to Eq.(7), we can see that it is essentially a special case of the present construction without sub-sampling (i.e. all q-values equal 1). Consequently, the combination coefficients have to satisfy (c_tj)_{j∈R_t} ~ Dir((α_j)_{j∈R_t}), implying that the relative weights of two latent sources are restricted to be the same in all groups that inherit from both. In contrast, the approach here allows the weights of latent DPs to vary across groups. Also, SNΓP doesn't allow atom parameters to vary across groups.
4 Sampling Algorithm
This section introduces a Gibbs sampling algorithm to jointly estimate the mixture models of multiple groups. Overall, this algorithm is an extension to the Chinese restaurant process, with several
new aspects: (1) The conditional probability of a label depends on the total number of samples associated with its atom over the entire corpus (instead of that within a specific group). Note that it also differs
from HDP, where such probabilities depend on the number of associated tables. (2) Each group
maintains a distribution over the latent DPs to choose from, which reflects the different contributions of these sources. (3) It leverages the sub-sampling operation to explicitly control the model
complexity. In particular, each group maintains indicators of whether particular atoms are inherited,
and as a consequence, the ones that are deemed irrelevant are put out of scope. (4) As there are multiple latent DPs, for each atom, there is uncertainty about where it comes from. We have a specific
step that takes this into account, which allows reassigning an atom to different sources.
We first set up the notations. Recall that there are M groups of data, and ML latent DPs to link
between them. The observations in the t-th group are x_t1, . . . , x_tn_t. We use φ_k to denote an atom. Note here that the index k is a globally unique identifier of the atom, which would not be changed during atom relocation. Since an atom may correspond to multiple data samples, instead of instantiating the parameter θ_ti for each data sample x_ti, we attach to x_ti an indicator z_ti that associates the sample to a particular atom. This is equivalent to setting θ_ti = φ_{z_ti}. To facilitate the sampling process, for each atom φ_k, we maintain an indicator s_k specifying the latent DP that contains it, and a set of counters {m_tk}, where m_tk equals the number of associated data samples in the t-th group. We also maintain a set I_s for H_s (the s-th latent DP), which contains the indices of all atoms therein.
The model in Eq.(7) and (8) can then be reformulated, as shown in Fig 2. It consists of four steps: (1) Generate latent DPs: for each s = 1, . . . , M_L, we draw H_s ~ DP(α_s B). (2) Generate the combination coefficients: for each group t, we draw (c_t1, . . . , c_tM_L) ~ Dir(α_1 q_t1, . . . , α_{M_L} q_{tM_L}), which gives the group-specific prior over the sources for the t-th group. (3) Decide inheritance: for each atom φ_k, we draw a binary variable r_tk with Pr(r_tk = 1) = q_{t,s_k} to indicate whether φ_k is inherited by the t-th group. Here s_k is the index of the latent DP which φ_k is from. (4) Generate data: to generate x_ti, we first choose a latent DP by drawing u ~ Mult(c_t1, . . . , c_tM_L), then draw an atom from H_u, using it to produce x_ti. Based on this formulation, we derive the following Gibbs sampling steps to update the atom parameters and other hidden variables.
(1) Update labels. Recall that each data sample x_ti is associated with a label variable z_ti that indicates the atom accounting for x_ti. To draw z_ti, we first have to choose a particular latent DP as the source (we denote the index of this DP by u_ti). Let z_{/ti} denote all labels except z_ti, and r_t denote the inheritance indicators. Then, we get the likelihood of x_ti (with H_s integrated out) as

    p(x_ti | u_ti = s, r_t, z_{/ti}) = (1 / (w_{st/i} + q_ts α_s)) ( Σ_{k∈I_s: r_tk=1} m_{·k/ti} f(x_ti; φ_k) + q_ts α_s f(x_ti; B) ).    (10)

Here, m_{·k/ti} is the total number of samples associated with φ_k in all groups (except for x_ti), w_{st/i} = Σ_{k∈I_s: r_tk=1} m_{·k/ti}, f(x_ti; φ_k) is the pdf at x_ti w.r.t. φ_k, and f(x_ti; B) = ∫_Ω f(x_ti; θ) B(θ) dθ. Derivations of this and other formulas for sampling are in the supplemental document. Hence,

    p(u_ti = s | others) ∝ p(u_ti = s | c_t) p(x_ti | u_ti = s, z_{/ti}) = c_ts p(x_ti | u_ti = s, z_{/ti}).    (11)
Here, c_t = (c_t1, . . . , c_tM_L) is the group-specific prior over latent sources. Once a latent DP is chosen (using the formula above), we can then draw a particular atom. This is similar to the Chinese restaurant process: with a probability proportional to m_{·k/ti} f(x_ti; φ_k), we set z_ti = k, and with a probability proportional to q_ts α_s f(x_ti; B), we draw a new atom from B(·|x_ti). Only the atoms that are contained in H_s and have r_tk = 1 (inherited by D_t) can be drawn at this step.

We have to modify relevant quantities accordingly, such as m_tk, w_s, and I_s, when a label z_ti is changed. Moreover, when a new atom φ_k is created, it will be initially assigned to the latent DP that generates it (i.e. setting s_k = u_ti).
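As an illustration of how (10)-(11) translate into a sampler, consider the sketch below; the `state` dictionary and all names are hypothetical stand-ins for the bookkeeping described above, not the authors' implementation.

```python
import numpy as np

def sample_label(x, t, state, f, f_marg, rng):
    """One Gibbs draw of (u_ti, z_ti) following Eqs. (10)-(11).

    `state` holds: M_L, c[t][s], alpha[s], q[t][s], I[s] (atom indices per
    latent DP), r[(t, k)], m[k] (corpus-wide counts excluding the current
    sample), and atom parameters phi[k].
    """
    probs, cached = [], []
    for s in range(state["M_L"]):
        ks = [k for k in state["I"][s] if state["r"][(t, k)] == 1]
        w = sum(state["m"][k] for k in ks)
        terms = [state["m"][k] * f(x, state["phi"][k]) for k in ks]
        new_term = state["q"][t][s] * state["alpha"][s] * f_marg(x)
        lik = (sum(terms) + new_term) / (w + state["q"][t][s] * state["alpha"][s])  # Eq. (10)
        probs.append(state["c"][t][s] * lik)                                        # Eq. (11)
        cached.append((ks, terms, new_term))
    probs = np.asarray(probs)
    s = rng.choice(state["M_L"], p=probs / probs.sum())
    ks, terms, new_term = cached[s]
    p_atom = np.asarray(terms + [new_term])
    pick = rng.choice(len(p_atom), p=p_atom / p_atom.sum())
    return s, (ks[pick] if pick < len(ks) else "new")   # "new": draw from B(.|x)
```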
(2) Update inheritance indicators. If an atom φ_k is associated with some data in the t-th group, then we know for sure that it is inherited by D_t, and thus we can set r_tk = 1. However, if φ_k is not observed, it doesn't imply r_tk = 0. For such an atom (suppose it is from H_s), we have

    Pr(r_tk = 1 | others) / Pr(r_tk = 0 | others)
        = [q_ts p(z_t | r_tk = 1, others)] / [(1 − q_ts) p(z_t | r_tk = 0, others)]
        = [q_ts / (1 − q_ts)] [ψ(β_{s/t}, n_t) / ψ(β_{s/t} + m_{·k/t}, n_t)].                      (12)

Here, β_{s/t} = q_ts α_s + Σ_{k'∈I_s−{k}} m_{·k'/t}, and m_{·k'/t} is the number of samples associated with k' in all other groups (excluding the ones in the t-th group). ψ is a function defined by ψ(β, n) = Π_{i=0}^{n−1} (β + i) = Γ(β + n)/Γ(β). Intuitively, when m_{·k} is large (indicating that φ_k appears frequently in other groups) or n_t is large, φ_k is likely to appear in the t-th group if it is inherited. Under such circumstances, if φ_k is not seen, then it is probably not inherited.
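Numerically, the ratio in (12) is best handled in log space via log ψ(β, n) = log Γ(β + n) − log Γ(β); a small sketch (names are ours):

```python
import numpy as np
from scipy.special import gammaln

def inherit_log_odds(q_ts, beta, m_k, n_t):
    """Log-odds of r_tk = 1 vs. r_tk = 0 from Eq. (12)."""
    log_psi = lambda b: gammaln(b + n_t) - gammaln(b)
    return np.log(q_ts) - np.log1p(-q_ts) + log_psi(beta) - log_psi(beta + m_k)

# Usage: Pr(r_tk = 1 | others) = 1 / (1 + exp(-inherit_log_odds(...)))
```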
[Figure 3: model structures (HDP / S-LDP, SNΓP, M-LDP).]
[Figure 4: The results on NIPS data obtained with training sets of different sizes; train/test perplexity vs. number of training docs for HDP, S-LDP, SNGP, and M-LDP.]
[Figure 5: The results on NIPS data using M-LDP, with different σ values; perplexity vs. σ for training sets of 400, 800, and 1200 docs.]
(3) Update combination coefficients. The coefficients c_t = (c_t1, . . . , c_tM_L) reflect the relative contribution of each latent DP to the t-th group. c_t follows a Dirichlet distribution a priori (see Eq.(7)). Given z_t, the labels of all samples in the t-th group, we have

    c_t | z_t ~ Dir( α_1 q_t1 + Σ_{k∈I_1} m_tk,  . . . ,  α_{M_L} q_{tM_L} + Σ_{k∈I_{M_L}} m_tk ).   (13)

Here, Σ_{k∈I_s} m_tk is the total number of samples in the t-th group that are associated with H_s.
(4) Update atom parameters. Given all the labels, we can update the atoms by re-drawing their parameters from the posterior distributions. Let X_k denote the set of all data samples associated with the k-th atom; then we can draw φ_k ~ B(·|X_k), where B(·|X_k) denotes the posterior distribution conditioned on X_k, with the pdf given by B(θ|X_k) ∝ B(θ) Π_{x∈X_k} f(x; θ).
(5) Reassign atoms. In this model, each atom is almost surely from a unique latent DP (i.e. it never comes from two distinct sources). This leads to an important question: how do we assign atoms to latent DPs? Initially, an atom is assigned to the latent DP from which it is generated. This is not necessarily optimal. Here, we treat the assignment of each atom as a variable. Consider an atom φ_k, with s_k indicating its corresponding source DP. Then, we have

    p(s_k = j | others) ∝ Π_{t: r_tk=1} q_tj · Π_{t: r_tk=0} (1 − q_tj).                            (14)

When an atom φ_k that was in H_s is reassigned to H_{s'}, we have to move the index k from I_s to I_{s'}.
5 Experiments
The framework developed in this paper provides a generic tool to model grouped data. In this section, we present experiments on two applications: document analysis and scene modeling. The
primary goal is to demonstrate the key distinctions between the proposed approach and other nonparametric methods, and study how the new design influences empirical performance.
5.1 Document Analysis
Topic models [1, 2, 7, 20] have been widely used for statistical analysis of documents. In general, a
topic model comprises a set of topics, each associated with a multinomial distribution, from which
words can be independently generated. Here, we formulate a Coupled Topic Model by extending
LDA [2] to model multiple groups of documents. Specifically, it associates the t-th group with a
mixture of topics, characterized by a DP sample $D_t$. Given this, the words in a document are generated independently, each from a topic drawn from $D_t$. To exploit the statistical dependency between groups, we further introduce a set of latent DPs to link these mixtures, as described
above. The NIPS (1-17) database [5], which contains 2484 papers published from 1987 to 2003, is
used in our experiments. We clean the data by removing the words that occur fewer than 10 times
over the corpus and those that appear in more than 2000 papers, resulting in a reduced vocabulary
comprised of 11729 words. The data are divided into 17 groups, one for each year.
We perform experiments on several configurations, with different ways of connecting latent sources and data groups, as illustrated in Figure 3. (1) Single Latent DP (S-LDP): there is only one latent DP connected to all groups, with q-values set to 0.5. Though its structure is similar to that of HDP, the formulation is actually different: HDP generates group-specific mixtures by using the latent DP as the base measure, while our model involves explicit sub-sampling. (2) Multi Latent DP (M-LDP): there are two types of latent DPs, local and global ones. The local latent DPs are introduced to help share statistical strength among groups that are close to each other, so as to capture the intuition that papers published in consecutive years are more likely to share topics than those published in distant years. The inheritance probability from a local latent DP $H_s$ to $D_t$ is set as $q_{ts} = \exp(-|t - s|/\sigma)$. Also, recognizing that some topics may be shared across the entire corpus, we introduce a global latent DP, from which every group inherits atoms with the same probability, which allows distant groups to be connected. This design illustrates the flexibility of the proposed framework and
how one can leverage this flexibility to address practical needs.
For comparison, we also consider another setting of q-values under the M-LDP structure: we set $q_{ts} = I(|t - s| \le \sigma)$, that is, $D_t$ and $H_s$ are connected, with $q_{ts} = 1$, only when $|t - s| \le \sigma$. Under this special setting, the formulation reduces to SNΓP [15]. We also test HDP following exactly the settings given in [20]: concentration parameters $\alpha_0 \sim \mathrm{Gamma}(0.1, 0.1)$ and $\gamma \sim \mathrm{Gamma}(5, 0.1)$. Other design parameters are set as below. We place a weak prior over $\beta_s$ for each latent DP, as $\beta_s \sim \mathrm{Gamma}(0.1, 0.1)$, and periodically update its value. The base distribution $B$ is assumed to be $\mathrm{Dir}(\mathbf{1})$, which is a uniform distribution over the probability simplex.
The first experiment is to compare different methods on training sets of various sizes. We divide
all papers into two disjoint halves, for training and testing respectively. In each test, models are estimated on a subset of a specific size randomly chosen from the training corpus. The learned models are then tested on both the training subset and the held-out testing set, so as to study the gap between empirical and generalization performance, which is measured in terms of perplexity.
From Figure 4, we observe: (1) In general, as the training set size increases, the perplexity evaluated on the training set increases and that on the testing set decreases. However, this convergence is faster when local coupling is used (e.g., in SNΓP and M-LDP). This suggests that sharing statistical strength through local latent DPs improves the reliability of the estimation, especially when the training data are limited. (2) Even when the training set size is increased to 1200, the methods using local coupling still yield lower perplexity than the others. This is partly ascribed to the model structure. For example, papers published in consecutive years tend to share many topics; however, the topics may not be as similar when comparing recently published papers to those from a decade ago. A set of local latent DPs may capture such relations more effectively than a single global one. (3) The proposed method under the M-LDP setting outperforms the other methods, including SNΓP. In M-LDP, the contribution of $H_s$ to $D_t$ decreases gracefully as $|t - s|$ increases. This encourages each latent DP to be locally focused, while allowing the atoms therein to be shared across the entire corpus. This is enabled through the use of explicit sub-sampling. SNΓP, instead, provides no mechanism to vary the contributions of the latent DPs, and has to put a hard limit on their spans to achieve locality. Whereas this issue could be addressed through multiple levels of latent nodes with different spans, doing so would increase the complexity, and thus the risk of overfitting.
For M-LDP, recall that we set $q_{ts} = \exp(-|t - s|/\sigma)$. Here, $\sigma$ is an important design parameter that controls the range of local coupling. The results acquired with different $\sigma$ values are shown in Figure 5. Optimal performance is attained when the choice of $\sigma$ balances the need to share atoms against the desire to keep the latent DPs locally focused. Generally, the optimal $\sigma$ depends on the data. When the training set is limited, one may increase its value to enlarge the coupling range.
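For concreteness, the two coupling schedules contrasted here differ only in the shape of $q_{ts}$; a minimal sketch (the symbol $\sigma$ follows the text above):

    import numpy as np

    def q_exponential(t, s, sigma):
        # M-LDP: inheritance probability decays gracefully with group distance
        return np.exp(-abs(t - s) / sigma)

    def q_hard_window(t, s, sigma):
        # SNGP-style special case: all-or-nothing inheritance within a window
        return 1.0 if abs(t - s) <= sigma else 0.0

The exponential schedule lets distant groups retain a small but nonzero chance of sharing a latent DP, which is exactly the flexibility the hard window lacks.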
5.2 Scene Modeling
[Figure 6: Example images in all eight categories selected for the experiment (mountain snowy, hill, boardwalk, swamp, water cascade, ocean, coast, sky).]

[Figure 7: Results on SUN data with training sets of different sizes: perplexity vs. number of training images for HDP, S-LDP, SNΓP, and M-LDP, on both train and test sets.]

Scene modeling is an important task in computer vision. Among various approaches, topic models that build upon bag-of-features image representations [4, 11, 21] have become increasingly popular and are widely used for statistical modeling of visual scenes. Along this trend, Dirichlet processes
have also been employed to discover visual topics from observed scenes [10, 19].
We apply the proposed method to jointly model the topics in multiple scene categories. Rather than
pursuing the optimal scene model, here we primarily aim at comparing different nonparametric methods in mixture model estimation under a reasonable setting. We choose a subset of the SUN database [22]. The selected set comprises eight outdoor categories: mountain snowy, hill, boardwalk, swamp, water cascade, ocean, coast, and sky. The number of images in each category ranges from 50 to 100. Figure 6 shows some example images. We can see that some categories are similar (e.g., ocean and coast, boardwalk and swamp), while others are largely different. To
derive the image representation, PCA-SIFT [8] descriptors are densely extracted from each training
image, and then pooled together and quantized using K-means into 512 visual words. In this way,
each image can be represented as a histogram of 512 bins.
All methods mentioned above are compared. For M-LDP, we introduce a global latent DP to capture
common topics, with q-values set uniformly to 0.5, and a set of local latent DPs, each for a category.
The prior probability of inheriting from the corresponding latent DP is 1.0, and that from other local
DPs is 0.2. Whereas no prior knowledge about the similarity between categories is assumed, the
latent DPs incorporated in this way still provide a mechanism for local coupling. For SNΓP, we use
28 latent DPs, each connected to a pair of categories. Again, we divide the data into two disjoint
halves, respectively for training and testing, and evaluate the performance in terms of perplexity. The
results are shown in Figure 7, where we can observe trends similar to those that we have seen on the
NIPS data: local coupling helps model estimation, and our model under the M-LDP setting further
reduces the perplexity (from 37 to 31, as compared to SNΓP). This is due to the more flexible way
to configure local coupling that allows the weights of latent DPs to vary.
6 Conclusion
We have presented a principled approach to modeling grouped data, where mixture models for different groups are coupled via a set of latent DPs. The proposed framework allows each mixture
model to inherit from multiple latent DPs, and each latent DP to contribute differently to different groups, thus providing great flexibility for model design. The experiments on both document
analysis and image modeling have clearly demonstrated the utility of such flexibility. In particular, the proposed method makes it possible to explore various modeling choices, e.g., the use of latent
DPs with different connection patterns, substantially improving the effectiveness of the estimated
models. While q-values are treated as design parameters, it should be possible to extend this framework to incorporate prior models over these and other parameters. Such extensions should lead to
constructions with richer structure capable of addressing more complex problems.
Acknowledgements
This research was partially supported by the Office of Naval Research Multidisciplinary Research
Initiative (MURI) program, award N000141110688 and by DARPA award FA8650-11-1-7154.
References
[1] David Blei and John Lafferty. Correlated topic models. In Proc. of NIPS'06, 2006.
[2] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] Yeonseung Chung and David B. Dunson. The local Dirichlet process. Annals of the Institute of Statistical Mathematics, 63(1):59-80, 2009.
[4] Li Fei-Fei. A Bayesian hierarchical model for learning natural scene categories. In Proc. of CVPR'05, 2005.
[5] Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. Euclidean embedding of co-occurrence data. JMLR, 8, 2007.
[6] J. E. Griffin and M. F. J. Steel. Order-based dependent Dirichlet processes. Journal of the American Statistical Association, 101(473):179-194, March 2006.
[7] Thomas Hofmann. Probabilistic latent semantic indexing. In Proc. of ACM SIGIR'99, 1999.
[8] Yan Ke and Rahul Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. In Proc. of CVPR'04, 2004.
[9] Seyoung Kim and Padhraic Smyth. Hierarchical Dirichlet processes with random effects. In Proc. of NIPS'06, 2006.
[10] Jyri J. Kivinen, Erik B. Sudderth, and Michael I. Jordan. Learning multiscale representations of natural scenes using Dirichlet processes. In Proc. of CVPR'07, 2007.
[11] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proc. of CVPR'06, 2006.
[12] Dahua Lin, Eric Grimson, and John Fisher. Construction of dependent Dirichlet processes based on Poisson processes. In Advances of NIPS'10, 2010.
[13] Steven N. MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science, 1999.
[14] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[15] Vinayak Rao and Yee Whye Teh. Spatial normalized Gamma processes. In Proc. of NIPS'09, 2009.
[16] Carl Edward Rasmussen. The infinite Gaussian mixture model. In Proc. of NIPS'00, 2000.
[17] Lu Ren, David B. Dunson, and Lawrence Carin. The dynamic hierarchical Dirichlet process. In Proc. of ICML'08, New York, NY, USA, 2008. ACM Press.
[18] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4(2):639-650, 1994.
[19] Erik B. Sudderth, Antonio Torralba, William Freeman, and Alan Willsky. Describing visual scenes using transformed Dirichlet processes. In Proc. of NIPS'05, 2005.
[20] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[21] Chang Wang, David Blei, and Fei-Fei Li. Simultaneous image classification and annotation. In Proc. of CVPR'09, 2009.
[22] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proc. of CVPR'10, 2010.
4,188 | 4,791 |
Projection Retrieval for Classification
Artur Dubrawski
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Madalina Fiterau
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
In many applications, classification systems often require human intervention in
the loop. In such cases the decision process must be transparent and comprehensible, while simultaneously requiring minimal assumptions on the underlying data distributions. To tackle this problem, we formulate an axis-aligned subspace-finding task under the assumption that query-specific information dictates the complementary use of the subspaces. We develop a regression-based approach called
RECIP that efficiently solves this problem by finding projections that minimize a
nonparametric conditional entropy estimator. Experiments show that the method
is accurate in identifying the informative projections of the dataset, picking the
correct views to classify query points, and facilitates visual evaluation by users.
1 Introduction and problem statement
In the domain of predictive analytics, many applications which keep human users in the loop require
the use of simple classification models. Often, it is required that a test point be 'explained' (classified) using a simple low-dimensional projection of the original feature space. This is a Projection
Retrieval for Classification problem (PRC). The interaction with the user proceeds as follows: the
user provides the system a query point; the system searches for a projection in which the point can
be accurately classified; the system displays the classification result as well as an illustration of how
the classification decision was reached in the selected projection.
Solving the PRC problem is relevant in many practical applications. For instance, consider a nuclear
threat detection system installed at a border check point. Vehicles crossing the border are scanned
with sensors so that a large array of measurements of radioactivity and secondary contextual information is being collected. These observations are fed into a classification system that determines
whether the scanned vehicle may carry a threat. Given the potentially devastating consequences of
a false negative, a border control agent is requested to validate the prediction and decide whether
to submit the vehicle for a costly further inspection. With the positive classification rate of the system under strict bounds because of limitations in the control process, the risk of false negatives is
increased. Despite its crucial role, human intervention should only be withheld for cases in which
there are reasons to doubt the validity of classification. In order for a user to attest the validity of a
decision, the user must have a good understanding of the classification process, which happens more
readily when the classifier only uses the original dataset features rather than combinations of them,
and when the discrimination models are low-dimensional.
In this context, we aim to learn a set of classifiers in low-dimensional subspaces and a decision
function which selects the subspace under which a test point is to be classified. Assume we are given a dataset $\{(x_1, y_1), \ldots, (x_n, y_n)\} \in \mathcal{X}^n \times \{0,1\}^n$ and a class of discriminators $\mathcal{H}$. The model will contain a set $\Pi$ of subspaces of $\mathcal{X}$, with $\Pi \subseteq \tilde{\Pi}$, where $\tilde{\Pi}$ is the set of all axis-aligned subspaces of the original feature space (the power set of the features). To each projection $\pi_i \in \Pi$ corresponds one discriminator from a given hypothesis space, $h_i \in \mathcal{H}$. The model will also contain a selection function $g : \mathcal{X} \to \Pi \times \mathcal{H}$, which yields, for a query point $x$, the projection/discriminator pair with which this point will be classified. The notation $\pi(x)$ refers to the projection of the point $x$ onto the subspace $\pi$, while $h(\pi(x))$ represents the predicted label for $x$. Formally, we describe the model class as
$$\mathcal{M}_d = \Big\{\, \Pi = \{\pi : \pi \in \tilde{\Pi},\ \dim(\pi) \le d\},\quad H = \{h_i : h_i \in \mathcal{H},\ h_i : \pi_i \to \mathcal{Y},\ \forall i = 1 \ldots |\Pi|\},\quad g \in \{f : \mathcal{X} \to \{1 \ldots |\Pi|\}\} \,\Big\},$$
where $\dim(\pi)$ denotes the dimensionality of the subspace determined by the projection $\pi$. Note that only projections up to size $d$ will be considered, where $d$ is a parameter specific to the application. The set $H$ contains one discriminator from the hypothesis class $\mathcal{H}$ for each projection.
Intuitively, the aim is to minimize the expected classification error over $\mathcal{M}_d$; however, a notable modification is that the projection and, implicitly, the discriminator are chosen according to the data point that needs to be classified. Given a query $x$ in the space $\mathcal{X}$, $g(x)$ will yield the subspace $\pi_{g(x)}$ onto which the query is projected and the discriminator $h_{g(x)}$ for it. Distinct test points can be handled using different combinations of subspaces and discriminators. We consider models that minimize 0/1 loss. Hence, the PRC problem can be stated as follows:
$$M^* = \arg\min_{M \in \mathcal{M}_d} \mathbb{E}_{\mathcal{X}, \mathcal{Y}}\left[\, y \ne h_{g(x)}(\pi_{g(x)}(x)) \,\right]$$
There are limitations to the type of selection function $g$ that can be learned. A simple example for which $g$ can be recovered is a set of signal readings $x$ for which, if one of the readings $x_i$ exceeds a threshold $t_i$, the label can be predicted just based on $x_i$. A more complex one is a dataset containing regulatory variables, that is, for $x_i$ in the interval $[a_k, b_k]$ the label only depends on $(x_k^1 \ldots x_k^{n_k})$; datasets that fall into the latter category fulfill what we call the Subspace-Separability Assumption.
This paper proposes an algorithm called RECIP that solves the PRC problem for a class of nonparametric classifiers. We evaluate the method on artificial data to show that indeed it correctly identifies
the underlying structure for data satisfying the Subspace-Separability Assumption. We show some
case studies to illustrate how RECIP offers insight into applications requiring human intervention.
The use of dimensionality reduction techniques is a common preprocessing step in applications
where the use of simplified classification models is preferable. Methods that learn linear combinations of features, such as Linear Discriminant Analysis, are not quite appropriate for the task considered here, since we prefer to natively rely on the dimensions available in the original feature space.
Feature selection methods, such as the lasso, are suitable for identifying sets of relevant features, but do not consider interactions between them. Our work better fits the areas of class-dependent feature selection and context-specific classification, closely connected to the concept of Transductive
Learning [6]. Other context-sensitive methods are Lazy and Data-Dependent Decision Trees ([5] and [10], respectively). In Ting et al. [14], the Feating submodel selection relies on simple attribute splits
followed by fitting local predictors, though the algorithm itself is substantially different. Obozinski
et al. present a subspace selection method in the context of multitask learning [11]. Gu et al. propose a joint method for feature selection and subspace learning [7]; however, their classification model is not particularly query specific. Alternatively, algorithms that replace complex or unintelligible models with user-friendly equivalents have been proposed [3, 2, 1, 8]. Algorithms specifically designed to yield understandable models are precious few. Here we note a rule learning method
described in [12], even though the resulting rules can make visualization difficult, while itemset
mining [9] is not specifically designed for classification. Unlike those approaches, our method is
designed to retrieve subsets of the feature space designed for use in a way that is complementary to
the basic task at hand (classification) while providing query-specific information.
2 Recovering informative projections with RECIP
To solve PRC, we need means by which to ascertain which projections are useful in terms of discriminating data from the two classes. Since our model allows the use of distinct projections depending
on the query point, it is expected that each projection would potentially benefit different areas of the
feature space. $A(\pi)$ refers to the area of the feature space where the projection $\pi$ is selected:
$$A(\pi) = \{x \in \mathcal{X} : \pi_{g(x)} = \pi\}.$$

The objective becomes

$$\min_{M \in \mathcal{M}_d} \mathbb{E}_{(\mathcal{X}, \mathcal{Y})}\left[y \ne h_{g(x)}(\pi_{g(x)}(x))\right] \;=\; \min_{M \in \mathcal{M}_d} \sum_{\pi \in \Pi} p(A(\pi))\, \mathbb{E}\left[\, y \ne h_{g(x)}(\pi_{g(x)}(x)) \,\middle|\, x \in A(\pi) \right].$$
The expected classification error over $A(\pi)$ is linked to the conditional entropy of $Y|X$. Fano's inequality provides a lower bound on the error, while Feder and Merhav [4] derive a tight upper
bound on the minimal error probability in terms of the entropy. This means that conditional entropy
characterizes the potential of a subset of the feature space to separate data, which is more generic
than simply quantifying classification accuracy for a specific discriminator.
In view of this connection between classification accuracy and entropy, we adapt the objective to:
X
min
p(A(?))H(Y |?(X); X ? A(?))
(1)
M ?Md
???
The method we propose optimizes an empirical analog of (1) which we develop below and for which
we will need the following result.
Proposition 2.1. Given a continuous variable $X \in \mathcal{X}$ and a binary variable $Y$, where $X$ is sampled from the mixture model $f(x) = p(y=0) f_0(x) + p(y=1) f_1(x) = p_0 f_0(x) + p_1 f_1(x)$, then $H(Y|X) = -p_0 \log p_0 - p_1 \log p_1 - D_{KL}(f_0 \| f) - D_{KL}(f_1 \| f)$.
Next, we will use the nonparametric estimator presented in [13] for the Tsallis $\alpha$-divergence. Given samples $U_i \sim \mathcal{U}$, with $i = 1, \ldots, n$, and $V_j \sim \mathcal{V}$, with $j = 1, \ldots, m$, the divergence is estimated as follows:
$$\hat{T}_\alpha(\mathcal{U} \| \mathcal{V}) = \frac{1}{1-\alpha}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{(n-1)\,\nu_k(U_i, \mathcal{U} \setminus u_i)^d}{m\,\nu_k(U_i, \mathcal{V})^d}\right)^{1-\alpha} B(k, \alpha) - 1\right], \quad (2)$$
where $d$ is the dimensionality of the variables $U$ and $V$, and $\nu_k(z, Z)$ represents the distance from $z$ to its $k$-th nearest neighbor in the set of points $Z$. For $\alpha \to 1$ and $n \to \infty$, $\hat{T}_\alpha(u \| v) \to D_{KL}(u \| v)$.
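A direct sketch of this estimator based on k-nearest-neighbor distances; the bias-correction constant $B(k, \alpha)$ is passed in by the caller since its closed form is not restated here:

    import numpy as np
    from scipy.spatial import cKDTree

    def tsallis_divergence(U, V, k=5, alpha=0.99, B=1.0):
        # Estimate T_alpha(U || V) from samples U (n x d) and V (m x d), per Eq. (2)
        n, d = U.shape
        m = V.shape[0]
        # nu_k(U_i, U \ u_i): query the (k+1)-th neighbor so the point itself is excluded
        rho = cKDTree(U).query(U, k=[k + 1])[0][:, 0]
        # nu_k(U_i, V): k-th nearest neighbor of U_i among the V samples
        nu = cKDTree(V).query(U, k=[k])[0][:, 0]
        ratios = ((n - 1) * rho**d) / (m * nu**d)
        return (np.mean(ratios**(1 - alpha)) * B - 1.0) / (1.0 - alpha)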
2.1 Local estimators of entropy
We will now plug (2) into the formula obtained by Proposition 2.1 to estimate the quantity (1). We use the notation $X_0$ to represent the $n_0$ samples from $X$ which have labels $Y$ equal to 0, and $X_1$ to represent the $n_1$ samples from $X$ which have labels set to 1. Also, $X_{y(x)}$ represents the set of samples that have labels equal to the label of $x$, and $X_{\neg y(x)}$ the data that have labels opposite to the label of $x$.
$$\hat{H}(Y|X; X \in A) = -H(p_0) - H(p_1) - \hat{T}_{\alpha \to 1}(f_0^x \| f^x) - \hat{T}_{\alpha \to 1}(f_1^x \| f^x) + C$$

$$\hat{H}(Y|X; X \in A) \approx \frac{1}{n_0}\sum_{i=1}^{n_0} I[x_i \in A]\left(\frac{(n_0-1)\,\nu_k(x_i, X_0 \setminus x_i)^d}{n\,\nu_k(x_i, X \setminus x_i)^d}\right)^{1-\alpha} + \frac{1}{n_1}\sum_{i=1}^{n_1} I[x_i \in A]\left(\frac{(n_1-1)\,\nu_k(x_i, X_1 \setminus x_i)^d}{n\,\nu_k(x_i, X \setminus x_i)^d}\right)^{1-\alpha}$$
$$\quad - \frac{1}{n_0}\sum_{i=1}^{n_0} I[x_i \in A]\left(\frac{(n_0-1)\,\nu_k(x_i, X_0 \setminus x_i)^d}{n_1\,\nu_k(x_i, X_1 \setminus x_i)^d}\right)^{1-\alpha} + \frac{1}{n_1}\sum_{i=1}^{n_1} I[x_i \in A]\left(\frac{(n_1-1)\,\nu_k(x_i, X_1 \setminus x_i)^d}{n_0\,\nu_k(x_i, X_0 \setminus x_i)^d}\right)^{1-\alpha}$$
$$\approx \frac{1}{n}\sum_{i=1}^{n} I[x_i \in A]\left(\frac{(n-1)\,\nu_k(x_i, X_{y(x_i)} \setminus x_i)^d}{n\,\nu_k(x_i, X_{\neg y(x_i)} \setminus x_i)^d}\right)^{1-\alpha}$$

The estimator for the entropy of the data that is classified with projection $\pi$ is as follows:
$$\hat{H}(Y \mid \pi(X);\, X \in A(\pi)) \approx \frac{1}{n}\sum_{i=1}^{n} I[x_i \in A(\pi)]\left(\frac{(n-1)\,\nu_k(\pi(x_i), \pi(X_{y(x_i)}) \setminus \pi(x_i))^d}{n\,\nu_k(\pi(x_i), \pi(X_{\neg y(x_i)} \setminus x_i))^d}\right)^{1-\alpha} \quad (3)$$
From (3), and using the fact that $I[x_i \in A(\pi)] = I[\pi_{g(x_i)} = \pi]$, for which we use the notation $I[g(x_i) \to \pi]$, we estimate the objective as
$$\min_{M \in \mathcal{M}_d} \sum_{\pi \in \Pi} \frac{1}{n}\sum_{i=1}^{n} I[g(x_i) \to \pi]\left(\frac{(n-1)\,\nu_k(\pi(x_i), \pi(X_{y(x_i)}) \setminus \pi(x_i))^d}{n\,\nu_k(\pi(x_i), \pi(X_{\neg y(x_i)} \setminus x_i))^d}\right)^{1-\alpha} \quad (4)$$
Therefore, the contribution of each data point to the objective corresponds to a distance ratio on the projection $\pi^*$ where the class of the point is obtained with the highest confidence (the data is separable in the neighborhood of the point). We start by computing the distance-based metric of each point on each projection of size up to $d$; there are $d^\Pi$ such projections.
This procedure yields an extended set of features $Z$, which we name local entropy estimates:
$$Z_{ij} = \left(\frac{\nu_k(\pi_j(x_i), \pi_j(X_{y(x_i)}) \setminus \pi_j(x_i))}{\nu_k(\pi_j(x_i), \pi_j(X_{\neg y(x_i)}) \setminus \pi_j(x_i))}\right)^{d(1-\alpha)}, \quad \alpha \to 1,\; j \in \{1 \ldots d^\Pi\}. \quad (5)$$

For each training data point, we compute the best distance ratio amid all the projections, which is simply $T_i = \min_{j \in [d^\Pi]} Z_{ij}$.
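The construction of $Z$ and $T$ can be sketched as below; enumerating all axis-aligned projections up to size $d$ is exponential in $d$, which is why a small $d$ is assumed, and the per-point k-NN queries assume each class has more than $k$ samples:

    import numpy as np
    from itertools import combinations
    from scipy.spatial import cKDTree

    def knn_dist(P, Q, k):
        # distance from each row of P to its k-th nearest neighbor among the rows of Q
        return cKDTree(Q).query(P, k=[k])[0][:, 0]

    def local_entropy_estimates(X, y, d=2, k=5, alpha=0.99):
        # Z[i, j] of Eq. (5) for every axis-aligned projection of size <= d, and T_i = min_j Z[i, j]
        n, p = X.shape
        projections = [list(c) for r in range(1, d + 1)
                       for c in combinations(range(p), r)]
        Z = np.empty((n, len(projections)))
        for j, cols in enumerate(projections):
            Xp = X[:, cols]
            for i in range(n):
                others = np.arange(n) != i
                same = Xp[others & (y == y[i])]   # same-class samples, x_i excluded
                opp = Xp[others & (y != y[i])]    # opposite-class samples
                num = knn_dist(Xp[i:i + 1], same, k)[0]
                den = knn_dist(Xp[i:i + 1], opp, k)[0]
                Z[i, j] = (num / den) ** (len(cols) * (1 - alpha))
        return Z, Z.min(axis=1), projections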
The objective can then be further rewritten as a function of the entropy estimates:
$$\min_{M \in \mathcal{M}_d} \sum_{i=1}^{n} \sum_{\pi_j \in \Pi} I[g(x_i) \to \pi_j]\, Z_{ij} \quad (6)$$

From the definition of $T$, it is also clear that
$$\min_{M \in \mathcal{M}_d} \sum_{i=1}^{n} \sum_{\pi_j \in \Pi} I[g(x_i) \to \pi_j]\, Z_{ij} \;\ge\; \sum_{i=1}^{n} T_i. \quad (7)$$

2.2 Projection selection as a combinatorial problem
Considering form (6) of the objective, and given that the estimates Zij are constants, depending only
on the training set, the projection retrieval problem is reduced to finding g for all training points,
which will implicitly select the projection set of the model. Naturally, one might assume the best-performing classification model is the one containing all the axis-aligned subspaces. This model
achieves the lower bound (7) for the training set. However, the larger the set of projections, the more
values the function g takes, and thus the problem of selecting the correct projection becomes more
difficult. It becomes apparent that the number of projections should be somehow restricted to allow
interpretability. Assuming a hard threshold of at most $t$ projections, the optimization (6) becomes an entry selection problem over the matrix $Z$, where one value must be picked from each row under a limitation on the number of columns that can be used. This problem cannot be solved exactly in polynomial time. Instead, it can be formulated as an optimization problem under $\ell_1$ constraints.
2.3 Projection retrieval through regularized regression
To transform projection retrieval into a regression problem, we consider $T$, the minimum obtainable
value of the entropy estimator for each point, as the output which the method needs to predict. Each
row i of the parameter matrix B represents the degrees to which the entropy estimates on each
projection contribute to the entropy estimator of point xi . Thus, the sum over each row of B is 1,
and the regularization penalty applies to the number of non-zero columns in B.
$$\min_B \; \|T - (Z \odot B)\, J_{|\Pi|,1}\|_2^2 \;+\; \lambda \sum_{i=1}^{d^\Pi} [B^i \ne 0] \quad (8)$$
$$\text{subject to} \quad |B_k|_{\ell_1} = 1, \quad k = 1, \ldots, n,$$
where $(Z \odot B)_{ij} = Z_{ij} B_{ij}$ and $J$ is a matrix of ones.
The problem with this optimization is that it is not convex. A typical work-around for this issue is to use the convex relaxation of $[B^i \ne 0]$, that is, the $\ell_1$ norm. This would transform the penalized term to $\sum_{i=1}^{d^\Pi} |B^i|_{\ell_1}$. However, $\sum_{i=1}^{d^\Pi} |B^i|_{\ell_1} = \sum_{k=1}^{n} |B_k|_{\ell_1} = n$, so this penalty really has no effect. An alternative mechanism to encourage the non-zero elements of $B$ to populate a small number of columns is to add a penalty term of the form $B\delta$, where $\delta$ is a $d^\Pi$-sized column vector with each element representing the penalty for a column of $B$. With no prior information about which subspaces are more informative, $\delta$ starts as an all-ones vector. An initial value for $B$ is obtained through the optimization (8). Since our goal is to handle data using a small number of projections, $\delta$ is then updated such that its value is lower for the denser columns of $B$. This update resembles the re-weighting in the adaptive lasso. The matrix $B$ itself is updated, and this 2-step process continues until convergence of $\delta$. Once $\delta$ converges, the projections corresponding to the non-zero columns of $B$ are added to the model. The procedure is shown in Algorithm 1.
Algorithm 1: RECIP
  $\delta = [1 \ldots 1]$
  repeat
    $B = \arg\min_B \|T - (Z \odot B)J\|_2^2 + \lambda |B\delta|_{\ell_1}$
      subject to $|B_k|_{\ell_1} = 1$, $k = 1 \ldots n$
    $\gamma_i = |B^i|_{\ell_1}$, $i = 1 \ldots d^\Pi$ (update the differential penalty)
    $\delta = 1 - \gamma / |\gamma|_{\ell_1}$
  until $\delta$ converges
  return $\Pi = \{\pi_i : |B^i|_{\ell_1} > 0,\ i = 1 \ldots d^\Pi\}$
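The loop alternates a constrained, penalized least-squares step with the penalty update; the sketch below approximates the inner step with projected gradient descent on the row-simplex constraints, which is our simplification rather than the authors' solver:

    import numpy as np

    def recip_select(Z, T, lam=0.1, outer=20, inner=200, lr=0.01):
        n, d_pi = Z.shape
        delta = np.ones(d_pi)                        # differential column penalty
        B = np.full((n, d_pi), 1.0 / d_pi)           # each row on the probability simplex
        for _ in range(outer):
            for _ in range(inner):                   # approximate the arg min over B
                resid = (Z * B).sum(axis=1) - T
                grad = resid[:, None] * Z + lam * delta[None, :]
                B = np.maximum(B - lr * grad, 0.0)
                B /= B.sum(axis=1, keepdims=True) + 1e-12  # re-project rows, |B_k|_1 = 1
            gamma = B.sum(axis=0)                    # column mass |B^i|_1 (B is nonnegative)
            delta = 1.0 - gamma / gamma.sum()        # denser columns get a lighter penalty
        return np.flatnonzero(B.sum(axis=0) > 1e-6)  # indices of retained projections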
2.4 Lasso for projection selection
We will compare our algorithm to lasso regularization that ranks the projections in terms of their potential for data separability. We write this as an $\ell_1$-penalized optimization on the extended feature set $Z$, with the objective $T$: $\min_\beta |T - Z\beta|^2 + \lambda|\beta|_{\ell_1}$. The lasso penalty on the coefficient vector encourages sparsity. For a high enough $\lambda$, the sparsity pattern in $\beta$ is indicative of the usefulness of the projections. The lasso on entropy contributions was not found to perform well, as it is not query specific and will find one projection for all the data. We improved it by allowing it to iteratively find projections; this robust version offers increased performance by re-weighting the data, thus focusing on different subsets of it. Although better than running lasso on the entropy contributions, the robust lasso does not match RECIP's performance, as the projections are selected gradually rather than jointly. Running the standard lasso on the original design matrix yields a set of relevant variables, and it is not immediately clear how the solution would translate to the desired class.
2.5 The selection function
Once the projections are selected, the second stage of the algorithm deals with assigning the projection with which to classify a particular query point. An immediate way of selecting the correct
projection starts by computing the local entropy estimator for each subspace with each class assignment. Then, we may select the label/subspace combination that minimizes the empirical entropy.
$$(i^*, y^*) = \arg\min_{i, y} \left(\frac{\nu_k(\pi_i(x), \pi_i(X_y))}{\nu_k(\pi_i(x), \pi_i(X_{\neg y}))}\right)^{\dim(\pi_i)(1-\alpha)}, \quad i = 1 \ldots d^\Pi, \; \alpha \to 1. \quad (9)$$
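At query time, Eq. (9) amounts to comparing same-class and opposite-class neighbor distances for each candidate (projection, label) pair; a sketch using the knn_dist helper from the sketch above:

    import numpy as np

    def classify_query(x, X, y, projections, k=5, alpha=0.99):
        # Return (index of selected projection, predicted label) per Eq. (9)
        best_score, best = np.inf, None
        for i, cols in enumerate(projections):
            xp = x[cols][None, :]
            for label in (0, 1):
                num = knn_dist(xp, X[y == label][:, cols], k)[0]
                den = knn_dist(xp, X[y != label][:, cols], k)[0]
                score = (num / den) ** (len(cols) * (1 - alpha))
                if score < best_score:
                    best_score, best = score, (i, label)
        return best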
3 Experimental results
In this section we illustrate the capability of RECIP to retrieve informative projections of data and
their use in support of interpreting results of classification. First, we analyze how well RECIP can
identify subspaces in synthetic data whose distribution obeys the subspace separability assumption
(3.1). As a point of reference, we also present classification accuracy results (3.2) for both the
synthetic data and a few real-world sets. This is to quantify the extent of the trade-off between
fidelity of attainable classifiers and desired informativeness of the projections chosen by RECIP. We
expect RECIP's classification performance to be slightly, but not substantially, worse when compared
to relevant classification algorithms trained to maximize classification accuracy. Finally, we present
a few examples (3.3) of informative projections recovered from real-world data and their utility in
explaining to human users the decision processes applied to query points.
A set of artificial data used in our experiments contains $q$ batches of data points, each of them made classifiable with high accuracy using one of the available 2-dimensional subspaces $(x_k^1, x_k^2)$ with $k \in \{1 \ldots q\}$. The data in batch $k$ also have the property that $x_k^1 > t_k$. This is done such that the group a point belongs to can be detected from $x_k^1$; thus $x_k^1$ is a regulatory variable. We control the amount of
noise added to thusly created synthetic data by varying the proportion of noisy data points in each
batch. The results below are for datasets with 7 features each, with number of batches q ranging
between 1 and 7. We kept the number of features specifically low in order to prevent excessive
variation between any two sets generated this way, and to enable computing meaningful estimates
of the expectation and variance of performance, while enabling creation of complicated data in
which synthetic patterns may substantially overlap (using 7 features and 7 2-dimensional patterns
implies that dimensions of at least 4 of the patterns will overlap). We implemented our method
to be scalable to the size and dimensionality of data and although for brevity we do not include a
discussion of this topic here, we have successfully run RECIP against data with 100 features.
The parameter $\alpha$ is set to a value close to 1, because the Tsallis divergence converges to the KL divergence as $\alpha$ approaches 1. For the experiments on real-world data, $d$ was set to $n$ (all projections were considered). For the artificial data experiments, we report results for $d = 2$, as they do not change significantly for $d \ge 2$ because this data was synthesized to contain bidimensional informative projections. In general, if $d$ is too low, the correct full set of projections will not be found, but it may be recovered partially. If $d$ is chosen too high, there is a risk that a given selected projection $p$ will contain irrelevant features compared to the true projection $p'$. However, this situation only occurs if the noise introduced by these features in the estimators makes the entropy contributions on $p$ and $p'$ statistically indistinguishable for a large subset of the data. The users will choose $d$ according to the
desired/acceptable complexity of the resulting model. If the results are to be visually interpreted by
a human, values of 2 or 3 are reasonable for d.
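For reference, data satisfying the Subspace-Separability Assumption as described in this section can be synthesized roughly as follows (threshold values, feature assignment, and the noise mechanism are our choices, not the exact generator used for the experiments):

    import numpy as np

    def make_batches(q=3, p=7, n_per=200, noise=0.05, seed=0):
        # q batches; batch k is separable on a 2-D subspace and satisfies x_k^1 > t_k
        rng = np.random.default_rng(seed)
        X = rng.uniform(0.0, 1.0, size=(q * n_per, p))
        y = np.empty(q * n_per, dtype=int)
        for k in range(q):
            rows = slice(k * n_per, (k + 1) * n_per)
            c1, c2 = (2 * k) % p, (2 * k + 1) % p      # batch k's informative feature pair
            X[rows, c1] = rng.uniform(0.5, 1.0, n_per) # regulatory variable: x_k^1 > t_k = 0.5
            y[rows] = (X[rows, c2] > 0.5).astype(int)  # label decided inside the subspace
        flip = rng.random(q * n_per) < noise           # proportion of noisy points
        y[flip] = 1 - y[flip]
        return X, y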
3.1 Recovering informative projections
Table 1 shows how well RECIP recovers the q subspaces corresponding to the synthesized batches of
data. We measure precision (proportion of the recovered projections that are known to be informative), and recall (proportion of known informative projections that are recovered by the algorithm).
In Table 1, rows correspond to the number of distinct synthetic batches injected in the data, $q$, and subsequent columns correspond to increasing amounts of noise in the data. We note that the observed
precision is nearly perfect: the algorithm makes only 2 mistakes over the entire set of experiments,
and those occur for highly noisy setups. The recall is nearly perfect as long as there is little overlap
among the dimensions, that is when the injections do not interfere with each other. As the number
of projections increases, the chances for overlap among the affected features also increase, which
makes the data more confusing, resulting in a gradual drop of recall until only about 3 or 4 of the
7 known to be informative subspaces can be recovered. We have also used lasso as described in
2.4 in an attempt to recover projections. This setup only manages to recover one of the informative
subspaces, regardless of how the regularization parameter is tuned.
3.2 Classification accuracy
Table 2 shows the classification accuracy of RECIP, obtained using synthetic data. As expected, the
observed performance is initially high when there are few known informative projections in data and
it decreases as noise and ambiguity of the injected patterns increase.
Most types of ensemble learners would use a voting scheme to arrive at the final classification of a
testing sample, rather than use a model selection scheme. For this reason, we have also compared
predictive accuracy revealed by RECIP against a method based on majority voting among multiple
candidate subspaces. Table 4 shows that the accuracy of this technique is lower than the accuracy of
RECIP, regardless of whether the informative projections are recovered by the algorithm or assumed
to be known a priori. This confirms the intuition that a selection-based approach can be more
effective than voting for data which satisfies the subspace separability assumption.
For reference, we have also classified the synthetic data using K-Nearest-Neighbors algorithm using
all available features at once. The results of that experiment are shown in Table 5. Since RECIP uses
neighbor information, K-NN is conceptually the closest among the popular alternatives. Compared
to RECIP, K-NN performs worse when there are fewer synthetic patterns injected in the data to form informative projections. This is because some of the features K-NN then uses are noisy. As more features
become informative, the K-NN accuracy improves. This example shows the benefit of a selective
approach to feature space and using a subset of the most explanatory projections to support not only
explanatory analyses but also classification tasks in such circumstances.
3.3 RECIP case studies using real-world data
Table 3 summarizes the RECIP and K-NN performance on UCI datasets. We also test the methods using the Cell dataset, containing a set of measurements, such as the area and perimeter, of biological cells, with separate labels marking treated cells and control cells. In Vowel data, the
nearest-neighbor approach works exceptionally well, even outperforming random forests (0.94 accuracy), which is an indication that all features are jointly relevant. For d lower than the number
of features, RECIP picks projections of only one feature, but if there is no such limitation, RECIP
picks the space of all the features as informative.
Table 1: Projection recovery for artificial datasets with 1...7 informative features and noise levels 0...0.2, in terms of mean and variance of Precision and Recall. Mean/variance obtained for each setting by repeating the experiment with datasets with different informative projections.

PRECISION
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           1       1       1       0.9286  0.9286     |  0       0       0       0.0306  0.0306
2           1       1       1       1       1          |  0       0       0       0       0
3           1       1       1       1       1          |  0       0       0       0       0
4           1       1       1       1       1          |  0       0       0       0       0
5           1       1       1       1       1          |  0       0       0       0       0
6           1       1       1       1       1          |  0       0       0       0       0
7           1       1       1       1       1          |  0       0       0       0       0

RECALL
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           1       1       1       1       1          |  0       0       0       0       0
2           1       1       1       1       1          |  0       0       0       0       0
3           1       1       0.9524  0.9524  1          |  0       0       0.0136  0.0136  0
4           0.9643  0.9643  0.9643  0.9643  0.9286     |  0.0077  0.0077  0.0077  0.0077  0.0128
5           0.7714  0.7429  0.8286  0.8571  0.7714     |  0.0163  0.0196  0.0049  0.0082  0.0278
6           0.6429  0.6905  0.6905  0.6905  0.6905     |  0.0113  0.0113  0.0272  0.0113  0.0113
7           0.6327  0.5918  0.5918  0.5714  0.551      |  0.0225  0.02    0.0258  0.0233  0.02

Table 2: RECIP Classification Accuracy on Artificial Data

CLASSIFICATION ACCURACY
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           0.9751  0.9731  0.9686  0.9543  0.9420     |  0.0000  0.0000  0.0000  0.0008  0.0007
2           0.9333  0.9297  0.9227  0.9067  0.8946     |  0.0001  0.0001  0.0001  0.0001  0.0001
3           0.9053  0.8967  0.8764  0.8640  0.8618     |  0.0004  0.0005  0.0016  0.0028  0.0007
4           0.8725  0.8685  0.8589  0.8454  0.8187     |  0.0020  0.0020  0.0019  0.0025  0.0032
5           0.8113  0.8009  0.8105  0.8105  0.7782     |  0.0042  0.0044  0.0033  0.0036  0.0044
6           0.7655  0.7739  0.7669  0.7632  0.7511     |  0.0025  0.0021  0.0026  0.0025  0.0027
7           0.7534  0.7399  0.7347  0.7278  0.7205     |  0.0034  0.0040  0.0042  0.0042  0.0045

CLASSIFICATION ACCURACY - KNOWN PROJECTIONS
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           0.9751  0.9731  0.9686  0.9637  0.9514     |  0.0000  0.0000  0.0000  0.0001  0.0000
2           0.9333  0.9297  0.9227  0.9067  0.8946     |  0.0001  0.0001  0.0001  0.0001  0.0001
3           0.9053  0.8967  0.8914  0.8777  0.8618     |  0.0004  0.0005  0.0005  0.0007  0.0007
4           0.8820  0.8781  0.8657  0.8541  0.8331     |  0.0011  0.0011  0.0014  0.0014  0.0020
5           0.8714  0.8641  0.8523  0.8429  0.8209     |  0.0015  0.0015  0.0018  0.0019  0.0023
6           0.8566  0.8497  0.8377  0.8285  0.8074     |  0.0014  0.0015  0.0016  0.0023  0.0021
7           0.8429  0.8371  0.8256  0.8122  0.7988     |  0.0015  0.0018  0.0018  0.0021  0.0020
Table 3: Accuracy of K-NN and RECIP

Dataset              KNN     RECIP
Breast Cancer Wis    0.8415  0.8275
Breast Tissue        1.0000  1.0000
Cell                 0.7072  0.7640
MiniBOONE*           0.7896  0.7396
Spam                 0.7680  0.7680
Vowel                0.9839  0.9839

In Spam data, the two most informative projections are 'Capital Length Total' (CLT)/'Capital Length Longest' (CLL) and CLT/'Frequency of word your' (FWY). Figure 1 shows these two projections, with the dots representing training points. The red dots represent points labeled as spam while the blue ones are non-spam. The circles are query points that have been assigned to be classified with the projection in which they are plotted. The green circles are correctly classified points, while the magenta circles (far fewer) are the incorrectly classified ones. Not only does the importance of text in capital letters make sense for a spam filtering dataset, but the points that select those projections are almost flawlessly classified. Additionally, assuming the user would need to attest the validity of classification for the first plot, he/she would have no trouble seeing that the circled data points are located in a region predominantly populated with examples of spam, so any non-spam entry appears suspicious. Both of the magenta-colored cases fall into this category, and they can therefore be flagged for further investigation.
[Figure 1: Spam Dataset Selected Subspaces. Two panels titled 'Informative Projection for the Spam dataset': Capital Run Length Longest vs. Capital Run Length Total, and Frequency of word 'your' vs. Capital Run Length Total.]
Table 4: Classification accuracy using RECIP-learned projections (or known projections, in the lower section) within a voting model instead of a selection model

CLASSIFICATION ACCURACY - VOTING ENSEMBLE
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           0.9751  0.9731  0.9686  0.9317  0.9226     |  0.0000  0.0000  0.0000  0.0070  0.0053
2           0.7360  0.7354  0.7331  0.7303  0.7257     |  0.0002  0.0002  0.0001  0.0002  0.0001
3           0.7290  0.7266  0.7163  0.7166  0.7212     |  0.0002  0.0002  0.0008  0.0006  0.0002
4           0.6934  0.6931  0.6932  0.6904  0.6867     |  0.0008  0.0008  0.0008  0.0008  0.0009
5           0.6715  0.6602  0.6745  0.6688  0.6581     |  0.0013  0.0014  0.0013  0.0014  0.0013
6           0.6410  0.6541  0.6460  0.6529  0.6512     |  0.0008  0.0007  0.0010  0.0006  0.0005
7           0.6392  0.6342  0.6268  0.6251  0.6294     |  0.0009  0.0011  0.0012  0.0012  0.0012

CLASSIFICATION ACCURACY - VOTING ENSEMBLE, KNOWN PROJECTIONS
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           0.9751  0.9731  0.9686  0.9637  0.9514     |  0.0000  0.0000  0.0000  0.0001  0.0000
2           0.7360  0.7354  0.7331  0.7303  0.7257     |  0.0002  0.0002  0.0001  0.0002  0.0001
3           0.7409  0.7385  0.7390  0.7353  0.7325     |  0.0010  0.0012  0.0010  0.0011  0.0010
4           0.7110  0.7109  0.7083  0.7067  0.7035     |  0.0041  0.0041  0.0042  0.0042  0.0043
5           0.7077  0.7070  0.7050  0.7034  0.7008     |  0.0015  0.0015  0.0015  0.0016  0.0016
6           0.6816  0.6807  0.6801  0.6790  0.6747     |  0.0008  0.0008  0.0008  0.0008  0.0009
7           0.6787  0.6783  0.6772  0.6767  0.6722     |  0.0008  0.0009  0.0009  0.0008  0.0008
Table 5: Classification accuracy for artificial data with the K-Nearest Neighbors method

CLASSIFICATION ACCURACY - KNN
            Mean                                       |  Variance
q \ noise   0       0.02    0.05    0.1     0.2        |  0       0.02    0.05    0.1     0.2
1           0.7909  0.7843  0.7747  0.7652  0.7412     |  0.0002  0.0002  0.0002  0.0002  0.0002
2           0.7940  0.7911  0.7861  0.7790  0.7655     |  0.0001  0.0001  0.0001  0.0001  0.0001
3           0.7964  0.7939  0.7901  0.7854  0.7756     |  0.0000  0.0001  0.0001  0.0000  0.0000
4           0.7990  0.7972  0.7942  0.7904  0.7828     |  0.0001  0.0001  0.0001  0.0001  0.0001
5           0.8038  0.8024  0.8002  0.7970  0.7905     |  0.0001  0.0001  0.0001  0.0001  0.0001
6           0.8043  0.8032  0.8015  0.7987  0.7930     |  0.0001  0.0001  0.0001  0.0001  0.0001
7           0.8054  0.8044  0.8028  0.8004  0.7955     |  0.0001  0.0001  0.0001  0.0001  0.0001
4 Conclusion

This paper considers the problem of Projection Retrieval for Classification (PRC). It is relevant in applications where the decision process must be easy to understand in order to enable human interpretation
of the results. We have developed a principled, regression-based algorithm designed to recover small
sets of low-dimensional subspaces that support interpretability. It optimizes the selection using individual data-point-specific entropy estimators. In this context, the proposed algorithm follows the
idea of transductive learning, and the role of the resulting projections bears resemblance to high confidence regions known in conformal prediction models. Empirical results obtained using simulated
and real-world data show the effectiveness of our method in finding informative projections that
enable accurate classification while maintaining transparency of the underlying decision process.
Acknowledgments
This material is based upon work supported by the NSF, under Grant No. IIS-0911032.
References
[1] Mark W. Craven and Jude W. Shavlik. Extracting tree-structured representations of trained networks. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 24-30. The MIT Press, 1996.
[2] Pedro Domingos. Knowledge discovery via multiple models. Intelligent Data Analysis, 2:187-202, 1998.
[3] Eulanda M. Dos Santos, Robert Sabourin, and Patrick Maupin. A dynamic overproduce-and-choose strategy for the selection of classifier ensembles. Pattern Recognition, 41:2993-3009, October 2008.
[4] M. Feder and N. Merhav. Relations between entropy and error probability. IEEE Transactions on Information Theory, 40(1):259-266, January 1994.
[5] Jerome H. Friedman, Ron Kohavi, and Yeogirl Yun. Lazy decision trees, 1996.
[6] A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In Uncertainty in Artificial Intelligence, pages 148-155. Morgan Kaufmann, 1998.
[7] Quanquan Gu, Zhenhui Li, and Jiawei Han. Joint feature selection and subspace learning, 2011.
[8] Bing Liu, Minqing Hu, and Wynne Hsu. Intuitive representation of decision trees using general rules and exceptions. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI-2000), July 30 - August 3, 2000, pages 615-620, 2000.
[9] Michael Mampaey, Nikolaj Tatti, and Jilles Vreeken. Tell me what I need to know: succinctly summarizing data with itemsets. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11, pages 573-581, New York, NY, USA, 2011. ACM.
[10] Mario Marchand and Marina Sokolova. Learning with decision lists of data-dependent features. Journal of Machine Learning Research, 6, 2005.
[11] Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231-252, April 2010.
[12] Michael J. Pazzani, Subramani Mani, and W. Rodman Shankle. Beyond concise and colorful: Learning intelligible rules, 1997.
[13] B. Poczos and J. Schneider. On the estimation of alpha-divergences. AISTATS, 2011.
[14] Kai Ting, Jonathan Wells, Swee Tan, Shyh Teng, and Geoffrey Webb. Feature-subspace aggregating: ensembles for stable and unstable learners. Machine Learning, 82:375-397, 2011. doi:10.1007/s10994-010-5224-5.
|
4,189 | 4,792 |
Fusion with Diffusion for Robust Visual Tracking
Yu Zhou1*, Xiang Bai1, Wenyu Liu1, Longin Jan Latecki2
1
Dept. of Electronics and Information Engineering, Huazhong Univ. of Science and Technology, P. R. China
2
Dept. of Computer and Information Sciences, Temple Univ., Philadelphia, USA
{zhouyu.hust,xiang.bai}@gmail.com,[email protected],[email protected]
Abstract
A weighted graph is used as an underlying structure of many algorithms like semisupervised learning and spectral clustering. If the edge weights are determined by a single similarity measure, then it is hard, if not impossible, to capture all relevant aspects of similarity. In particular, in
the case of visual object matching it is beneficial to integrate different similarity
measures that focus on different visual representations.
In this paper, a novel approach to integrate multiple similarity measures is proposed. First, pairs of similarity measures are combined with a diffusion process on
their tensor product graph (TPG). Hence the diffused similarity of each pair of objects becomes a function of joint diffusion of the two original similarities, which
in turn depends on the neighborhood structure of the TPG. We call this process
Fusion with Diffusion (FD). However, a higher order graph like the TPG usually
means a significant increase in time complexity. This is not the case in the proposed
approach. A key feature of our approach is that the time complexity of the diffusion on the TPG is the same as the diffusion process on each of the original
graphs. Moreover, it is not necessary to explicitly construct the TPG in our framework. Finally all diffused pairs of similarity measures are combined as a weighted
sum. We demonstrate the advantages of the proposed approach on the task of
visual tracking, where different aspects of the appearance similarity between the
target object in frame $t-1$ and target object candidates in frame $t$ are integrated. The obtained method is tested on several challenging video sequences and the
experimental results show that it outperforms state-of-the-art tracking methods.
1 Introduction
The considered problem has a simple formulation: Given are multiple similarities between the same
set of n data points, each similarity can be represented as a weighted graph. The goal is to combine
them to a single similarity measure that best reflects the underlying data manifold. Since the set of
nodes is the same, it is easy to combine the graphs into a single weighted multigraph, where there
are multiple edges between the same pair of vertices representing different similarities. Then our
task can be stated as finding a mapping from the multigraph to a weighted simple graph whose edge
weights best represent the similarity of the data points. Of course, this formulation is not precise,
since generally the data manifold is unknown, and hence it is hard to quantify the "best". However,
it is possible to evaluate the quality of the combination experimentally in many applications, e.g.,
the tracking performance considered in this paper.
There are many possible solutions to the considered problem. One of the most obvious ones is a
weighted linear combination of the similarities. However, this solution does not consider the similarity dependencies of different data points. The proposed approach aims to utilize the neighborhood
structure of the multigraph in the mapping to the weighted simple graph.
*Part of this work was done while the author was visiting Temple University.
Given two different similarity measures, we first construct their Tensor Product Graph (TPG). Then
we jointly diffuse both similarities with a diffusion process on TPG. However, while the original
graphs representing the two measures have $n$ nodes, their TPG has $n^2$ nodes, which significantly
increases the time complexity of the diffusion on TPG. To address this problem, we introduce an
iterative algorithm that operates on the original graphs and prove that it is equivalent to the diffusion
on TPG. We call this process Fusion with Diffusion (FD). FD is a generalization of the approach in [26], where only a single similarity measure is considered. While the diffusion process on TPG in [26] is used to enhance a single similarity measure, our approach aims at combining two different similarity measures so that they enhance and constrain each other.
Although algorithmically very different, our motivation is similar to co-training style algorithms in
[5, 23, 24] where multiple cues are fused in an iterative learning process. The proposed approach is
also related to the semi-supervised learning in [6, 7, 21, 28, 29]. For the online tracking task, we only
have the label information from the current frame, which can be regarded as the labeled data, and
the label information in the next frame is unavailable, which can be regarded as unlabeled data. In
this context, FD jointly propagates two similarities of the unlabeled data to the labeled data. The
obtained new diffused similarity can then be interpreted as the label probability over the unlabeled
data. Hence from the point of view of visual tracking, but in the spirit of semi-supervised learning,
our approach utilizes the unlabeled data from the next frame for improved visual similarity to the
labeled data representing the tracked objects.
Visual tracking is an important issue in computer vision and has many practical applications. The
challenges in designing a tracking system are often caused by shape deformation, occlusion, viewpoint variations, and background clutter. Different strategies have been proposed to obtain robust tracking systems. In [8, 12, 14, 16, 25, 27], a matching based strategy is utilized: a discriminative appearance model of the target is extracted from the current frame, and the optimal target is estimated based on the distance/similarity between the appearance model and the candidates in the hypothesis
set. Classification based strategies are introduced in [1, 2, 3, 4, 10, 11]; the tracking task is transformed into a foreground and background binary classification problem in this framework. [15, 20] try to
combine both of those two frameworks. In this paper, we focus on improving the distance/similarity
measure to improve the matching based tracking strategy. Our motivation is similar to [12], where
metric learning is proposed to improve the distance measure. However, different from [12], multiple
cues are fused to improve the similarity in our approach. Moreover, the information from the forthcoming frame is also used to improve the similarity. This leads to more stable tracking performance
than in [12].
Fusion of multiple cues seems to be an effective way to improve the tracking performance. In [13],
multiple feature fusion is implemented based on sampling the state space. In [20], the tracking task
is formulated as the combination of different trackers, three different trackers are combined into a
cascade. Different from those methods, we combine different similarities into a single similarity
measure, which makes our method a more general for integrating various appearance models.
In summary, we propose a novel framework for integration of multiple similarity measures into a
single consistent similarity measure, where the similarity of each pair of data points depends on their
similarity to other data points. We demonstrate its superior performance on a challenging task of
tracking by visual matching.
2 Problem Formulation
The problem of matching based visual tracking boils down to the following simple formulation. Given the target in frame $I_{t-1}$, which can be represented as an image patch $I_1$ enclosing the target, and the set of candidate target patches in frame $I_t$, $C = \{I_n \mid n = 2, \ldots, N\}$, the goal is to determine which patch in $C$ corresponds to the target in frame $I_{t-1}$. Of course, one can make this setting more complicated, e.g., by considering more frames, but we consider this simple formulation in this paper. The candidate set $C$ is determined by the motion model, which is particularly simple in our setting. The size of all the image patches is fixed and the candidate set is composed of patches in frame $I_t$ inside a search radius $r$, i.e., $\|c(I_n) - c(I_1)\| < r$, where $c$ is the 2-D coordinate of the center position of the image patch.
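To make the motion model concrete, the candidate centers can be enumerated directly; a minimal sketch (the function name and the stride parameter are ours, not the paper's):

```python
import numpy as np

def candidate_centers(c1, frame_shape, r, stride=1):
    """Collect candidate patch centers in frame t within radius r of the
    previous target center c1 = (row, col), per the motion model above."""
    rows, cols = frame_shape[:2]
    centers = []
    for i in range(0, rows, stride):
        for j in range(0, cols, stride):
            if np.hypot(i - c1[0], j - c1[1]) < r:
                centers.append((i, j))
    return centers
```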
Let $S$ be a similarity measure defined on the set of the image patches $V = \{I_1\} \cup C$, i.e., $S$ is a function from $V \times V$ into positive real numbers. Then our tracking goal can be formally stated as
$$I^* = \arg\max_{X \in C} S(I_1, X) \quad (1)$$
meaning that the patch in C with most similar appearance to patch I1 is selected as the target location
in frame t.
Since the appearance of the target object changes, e.g., due to motion and lighting changes, a single similarity measure is often not sufficient to identify the target in the next frame. Therefore, we consider a set of similarity measures $\mathcal{S} = \{S_1, \ldots, S_Q\}$, each $S_\alpha$ defined on $V \times V$ for $\alpha = 1, \ldots, Q$. For example, in our experimental results, each image patch is represented with three histograms based on three different features, HOG [9], LBP [18], and Haar-like features [4], which lead to three different similarity measures. In other words, each pair of patches can be compared with respect to three different appearance features.

We can interpret each similarity measure $S_\alpha$ as the affinity matrix of a graph $G_\alpha$ whose vertex set is $V$, i.e., $S_\alpha$ is an $N \times N$ matrix with positive entries, where $N$ is the cardinality of $V$. Then we can combine the graphs $G_\alpha$ into a single multigraph whose edge weights correspond to different similarity measures $S_\alpha$.
However, in order to solve Eq. (1), we need a single similarity measure $S$. Hence we face the question of how to combine the measures in $\mathcal{S}$ into a single similarity measure. We propose a two stage approach to answer this question. First, we combine pairs of similarity measures $S_\alpha$ and $S_\beta$ into a single measure $P^*_{\alpha,\beta}$, which is a matrix of size $N \times N$. $P^*_{\alpha,\beta}$ is defined in Section 3 and it is obtained with the proposed process called fusion with diffusion.

In the second stage we combine all $P^*_{\alpha,\beta}$ for $\alpha, \beta = 1, \ldots, Q$ into a single similarity measure $S$ defined as a weighted matrix sum
$$S = \sum_{\alpha,\beta} \mu_\alpha \mu_\beta P^*_{\alpha,\beta} \quad (2)$$
where $\mu_\alpha$ and $\mu_\beta$ are positive weights associated with measures $S_\alpha$ and $S_\beta$, defined in Section 5.
We also observe that, in contrast to many tracking by matching methods, the combined measure $S$ is not only a function of similarities between $I_1$ and the candidate patches in $C$, but also of similarities of patches in $C$ to each other.
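As a sketch of this two-stage combination, assuming the diffused matrices $P^*_{\alpha,\beta}$ (Section 3) and the weights $\mu_\alpha$ (Section 5) have already been computed for all $Q^2$ measure pairs:

```python
import numpy as np

def combine_and_track(P_star, mu):
    """P_star[(a, b)]: N x N diffused similarity for measure pair (a, b);
    mu[a]: weight of measure a.  Index 0 corresponds to the target patch I_1,
    indices 1..N-1 to the candidates in C."""
    Q = len(mu)
    S = sum(mu[a] * mu[b] * P_star[(a, b)]
            for a in range(Q) for b in range(Q))       # Eq. (2)
    best = 1 + int(np.argmax(S[0, 1:]))                # Eq. (1), exclude I_1
    return S, best
```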
3 Fusion with Diffusion
3.1 Single Graph on Consecutive Frames
Given a single graph $G_\alpha = (V, S_\alpha)$, a reversible Markov chain on $V$ can be constructed with the transition probability defined as
$$P_\alpha(i, j) = S_\alpha(i, j) / D_i \quad (3)$$
where $D_i = \sum_{j=1}^{N} S_\alpha(i, j)$ is the degree of each vertex. Then the transition probability $P_\alpha(i, j)$ inherits the positivity-preserving property $\sum_{j=1}^{N} P_\alpha(i, j) = 1$, $i = 1, \ldots, N$.

The graph $G_\alpha$ is a fully connected graph in many applications. To reduce the influence of noisy points, i.e., cluttered background patches in tracking, a local transition probability is used:
$$(P_{k,\alpha})(i, j) = \begin{cases} P_\alpha(i, j) & j \in \mathrm{kNN}(i) \\ 0 & \text{otherwise} \end{cases} \quad (4)$$
Hence the number of non-zero elements in each row is not larger than $k$, which implies $\sum_{j=1}^{N} (P_{k,\alpha})(i, j) < 1$. This inequality is important in our framework, since it guarantees the convergence of the diffusion process on the tensor product graph presented in the next section.
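Eqs. (3)-(4) amount to row-normalizing the affinity matrix and truncating each row to its $k$ largest entries; a minimal NumPy sketch, with variable names of our choosing:

```python
import numpy as np

def knn_transition(S, k):
    """Row-normalize an affinity matrix S (Eq. 3) and keep only the k nearest
    neighbors per row (Eq. 4), so each row sums to at most one."""
    P = S / S.sum(axis=1, keepdims=True)   # P(i, j) = S(i, j) / D_i
    Pk = np.zeros_like(P)
    for i in range(P.shape[0]):
        nn = np.argsort(P[i])[-k:]         # indices of the k largest entries
        Pk[i, nn] = P[i, nn]
    return Pk
```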
3.2 Tensor Product Graph of Two Similarities
Given two graphs $G_\alpha = (V, P_{k,\alpha})$ and $G_\beta = (V, P_{k,\beta})$ defined in Sec. 3.1, we can define their Tensor Product Graph (TPG) as
$$G_\alpha \otimes G_\beta = (V \times V, \mathcal{P}), \quad (5)$$
where $\mathcal{P} = P_{k,\alpha} \otimes P_{k,\beta}$ is the Kronecker product of matrices, defined as $\mathcal{P}(a, b, i, j) = P_{k,\alpha}(a, b)\, P_{k,\beta}(i, j)$. Thus, each entry of $\mathcal{P}$ relates four image patches. When $P_{k,\alpha}$ and $P_{k,\beta}$ are two $N \times N$ matrices, then $\mathcal{P}$ is an $N^2 \times N^2$ matrix. However, as we will see in the next subsection, we actually never compute $\mathcal{P}$ explicitly.
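The Kronecker-product structure is what later lets the diffusion be carried out without ever forming $\mathcal{P}$; the key identity (Eq. (16) below) can be sanity-checked numerically, here using NumPy's row-major flatten as the vec operator (our own check, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A, B, X = rng.random((3, N, N))
lhs = (A @ X @ B.T).ravel()        # vec(A X B^T), row-major flatten as vec
rhs = np.kron(A, B) @ X.ravel()    # (A kron B) vec(X)
assert np.allclose(lhs, rhs)       # this identity is why P is never needed
```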
3.3 Diffusion Process on Tensor Product Graph
We utilize a diffusion process on TPG to combine the two similarity measures $P_{k,\alpha}$ and $P_{k,\beta}$. We begin with some notation. The vec operator creates a column vector from a matrix $M$ by stacking the column vectors of $M$ below one another. More formally, $\mathrm{vec}: \mathbb{R}^{N \times N} \to \mathbb{R}^{N^2}$ is defined as $\mathrm{vec}(M)_g = (M)_{ij}$, where $i = \lfloor (g-1)/N \rfloor + 1$ and $j = g \bmod N$. The inverse operator $\mathrm{vec}^{-1}$ that maps a vector into a matrix is often called the reshape operator. We define a diagonal $N \times N$ matrix as
$$\Delta(i, i) = \begin{cases} 1 & i = 1 \\ 0 & \text{otherwise.} \end{cases} \quad (6)$$
Only the entry representing the patch $I_1$ is set to one and all other entries are set to zero in $\Delta$.

We observe that $\mathcal{P}$ is the adjacency matrix of the TPG $G_\alpha \otimes G_\beta$. We define the $q$-th iteration of the diffusion process on this graph as
$$\sum_{e=0}^{q} \mathcal{P}^e\, \mathrm{vec}(\Delta). \quad (7)$$
As shown in [26], this iterative process is guaranteed to converge to a nontrivial solution given by
$$\lim_{q \to \infty} \sum_{e=0}^{q} \mathcal{P}^e\, \mathrm{vec}(\Delta) = (I - \mathcal{P})^{-1}\, \mathrm{vec}(\Delta), \quad (8)$$
where $I$ is an identity matrix. Following [26], we define
$$P^*_{\alpha,\beta} = P^* = \mathrm{vec}^{-1}\big((I - \mathcal{P})^{-1}\, \mathrm{vec}(\Delta)\big) \quad (9)$$
We observe that our solution $P^*$ is an $N \times N$ matrix.

We call the diffusion process to compute $P^*$ a Fusion with Diffusion (FD) process, since diffusion on the TPG $G_\alpha \otimes G_\beta$ is used to fuse the two similarity measures $S_\alpha$ and $S_\beta$.

Since $\mathcal{P}$ is an $N^2 \times N^2$ matrix, the FD process on TPG as defined in Eq. (7) may be computationally too demanding. To compute $P^*$ effectively, instead of diffusing on TPG directly, we show in Section 3.4 that the FD process on TPG is equivalent to an iterative process on $N \times N$ matrices only. Consequently, instead of an $O(n^6)$ time complexity, we obtain an $O(n^3)$ complexity. Then in Section 4 we further reduce it to an efficient algorithm with time complexity $O(n^2)$, which can be used in real time tracking algorithms.
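For small $N$, Eq. (9) can be evaluated directly; the sketch below forms $\mathcal{P}$ explicitly and is only meant as a reference implementation against which the faster algorithms can be validated:

```python
import numpy as np

def fd_direct(Pka, Pkb):
    """Reference for Eq. (9): P* = vec^{-1}((I - P)^{-1} vec(Delta)).
    Only practical for small N, since it forms the N^2 x N^2 matrix."""
    N = Pka.shape[0]
    delta = np.zeros((N, N))
    delta[0, 0] = 1.0                      # Eq. (6)
    P = np.kron(Pka, Pkb)                  # adjacency of the TPG
    vec_p = np.linalg.solve(np.eye(N * N) - P, delta.ravel())
    return vec_p.reshape(N, N)
```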
3.4 Iterative Algorithm for Computing $P^*$

We define $P^{(1)} = P_{k,\alpha} P_{k,\beta}^T$ and
$$P^{(q+1)} = P_{k,\alpha}\, P^{(q)}\, P_{k,\beta}^T + \Delta. \quad (10)$$
We iterate Eq. (10) until convergence, and as we prove in Proposition 1, we obtain
$$P^* = \lim_{q \to \infty} P^{(q)} = \mathrm{vec}^{-1}\big((I - \mathcal{P})^{-1}\, \mathrm{vec}(\Delta)\big) \quad (11)$$
The iterative process in Eq. (10) is a generalization of the process introduced in [26]. Consequently, the following properties are simple extensions of the properties derived in [26]. However, we state them explicitly, since we combine two different affinity matrices, while [26] considers only a single matrix. In other words, we consider diffusion on the TPG of two different graphs, while diffusion on the TPG of a single graph with itself is considered in [26].

Proposition 1
$$\mathrm{vec}\Big(\lim_{q \to \infty} P^{(q+1)}\Big) = \lim_{q \to \infty} \sum_{e=0}^{q-1} \mathcal{P}^e\, \mathrm{vec}(\Delta) = (I - \mathcal{P})^{-1}\, \mathrm{vec}(\Delta). \quad (12)$$

Proof: Eq. (10) can be rewritten as
$$\begin{aligned}
P^{(q+1)} &= P_{k,\alpha}\, P^{(q)}\, P_{k,\beta}^T + \Delta \\
&= P_{k,\alpha} \big[P_{k,\alpha}\, P^{(q-1)}\, P_{k,\beta}^T + \Delta\big] P_{k,\beta}^T + \Delta \\
&= (P_{k,\alpha})^2\, P^{(q-1)}\, (P_{k,\beta}^T)^2 + P_{k,\alpha} \Delta P_{k,\beta}^T + \Delta \\
&= \cdots \\
&= (P_{k,\alpha})^q\, P_{k,\alpha} P_{k,\beta}^T\, (P_{k,\beta}^T)^q + (P_{k,\alpha})^{q-1} \Delta (P_{k,\beta}^T)^{q-1} + \cdots + \Delta \\
&= (P_{k,\alpha})^q\, P_{k,\alpha} P_{k,\beta}^T\, (P_{k,\beta}^T)^q + \sum_{e=0}^{q-1} (P_{k,\alpha})^e \Delta (P_{k,\beta}^T)^e \quad (13)
\end{aligned}$$

Lemma 1. $\lim_{q \to \infty} (P_{k,\alpha})^q\, P_{k,\alpha} P_{k,\beta}^T\, (P_{k,\beta}^T)^q = 0$.

Proof: It suffices to show that $(P_{k,\alpha})^q$ and $(P_{k,\beta}^T)^q$ go to 0 when $q \to \infty$. This is true if and only if every eigenvalue of $P_{k,\alpha}$ and $P_{k,\beta}$ is less than one in absolute value. Since $P_{k,\alpha}$ and $P_{k,\beta}$ have nonnegative entries, this holds if their row sums are all less than one. As described in Sec. 3.1, we have $\sum_{b=1}^{N} (P_{k,\alpha})_{a,b} < 1$ and $\sum_{j=1}^{N} (P_{k,\beta})_{i,j} < 1$.

Lemma 1 shows that the first summand in Eq. (13) converges to zero, and consequently we have
$$\lim_{q \to \infty} P^{(q+1)} = \lim_{q \to \infty} \sum_{e=0}^{q-1} (P_{k,\alpha})^e \Delta (P_{k,\beta}^T)^e. \quad (14)$$

Lemma 2. $\mathrm{vec}\big((P_{k,\alpha})^e \Delta (P_{k,\beta}^T)^e\big) = \mathcal{P}^e\, \mathrm{vec}(\Delta)$ for $e = 1, 2, \ldots$.

Proof: Our proof is by induction. Suppose $\mathcal{P}^l\, \mathrm{vec}(\Delta) = \mathrm{vec}\big((P_{k,\alpha})^l \Delta (P_{k,\beta}^T)^l\big)$ is true for $e = l$; then for $e = l + 1$ we have
$$\mathcal{P}^{l+1}\, \mathrm{vec}(\Delta) = \mathcal{P}\big(\mathcal{P}^l\, \mathrm{vec}(\Delta)\big) = \mathrm{vec}\Big(P_{k,\alpha}\, \mathrm{vec}^{-1}\big(\mathcal{P}^l\, \mathrm{vec}(\Delta)\big)\, P_{k,\beta}^T\Big) = \mathrm{vec}\Big(P_{k,\alpha} \big((P_{k,\alpha})^l \Delta (P_{k,\beta}^T)^l\big) P_{k,\beta}^T\Big) = \mathrm{vec}\big((P_{k,\alpha})^{l+1} \Delta (P_{k,\beta}^T)^{l+1}\big)$$
and the proof of Lemma 2 is complete.

By Lemma 1 and Lemma 2, we obtain that
$$\mathrm{vec}\Big(\sum_{e=0}^{q-1} (P_{k,\alpha})^e \Delta (P_{k,\beta}^T)^e\Big) = \sum_{e=0}^{q-1} \mathcal{P}^e\, \mathrm{vec}(\Delta). \quad (15)$$
The following useful identity holds for the Kronecker product [22]:
$$\mathrm{vec}\big(P_{k,\alpha} \Delta P_{k,\beta}^T\big) = (P_{k,\alpha} \otimes P_{k,\beta})\, \mathrm{vec}(\Delta) = \mathcal{P}\, \mathrm{vec}(\Delta) \quad (16)$$
Putting together (14), (15), (16), we obtain
$$\mathrm{vec}\Big(\lim_{q \to \infty} P^{(q+1)}\Big) = \mathrm{vec}\Big(\lim_{q \to \infty} \sum_{e=0}^{q-1} (P_{k,\alpha})^e \Delta (P_{k,\beta}^T)^e\Big) \quad (17)$$
$$= \lim_{q \to \infty} \sum_{e=0}^{q-1} \mathcal{P}^e\, \mathrm{vec}(\Delta) = (I - \mathcal{P})^{-1}\, \mathrm{vec}(\Delta) = \mathrm{vec}(P^*). \quad (18)$$
This proves Proposition 1.
We now show how FD could improve the original similarity measures. Suppose we have two similarity measures $S_\alpha$ and $S_\beta$. $I_1$ denotes the image patch enclosing the target in frame $t-1$. According to $S_\alpha$, there are many patches in frame $t$ that have nearly equal similarity to $I_1$, with patch $I_n$ being most similar to $I_1$, while according to $S_\beta$, $I_1$ is clearly more similar to $I_m$ in frame $t$. Then the proposed diffusion will enhance the similarity $S_\alpha(I_1, I_m)$, since it will propagate the $S_\beta$ similarity of $I_1$ to $I_m$ faster than to the other patches. In contrast, the $S_\alpha$ similarities will propagate with similar speed. Consequently, the final joint similarity $P^*$ will have $I_m$ as the most similar to $I_1$.
Algorithm 1: Iterative Fusion with Diffusion Process
Input: Two matrices $P_{k,\alpha}, P_{k,\beta} \in \mathbb{R}^{N \times N}$
Output: Diffusion result $P^* \in \mathbb{R}^{N \times N}$
1: Compute $P^* = \Delta$.
2: Compute $u_\alpha$ = first column of $P_{k,\alpha}$, $u_\beta$ = first column of $P_{k,\beta}$.
3: Compute $P^* \leftarrow P^* + u_\alpha u_\beta^T$.
4: for $i = 2, 3, \ldots$ do
5:   Compute $u_\alpha \leftarrow P_{k,\alpha} u_\alpha$
6:   Compute $u_\beta \leftarrow P_{k,\beta} u_\beta$
7:   Compute $P^* \leftarrow P^* + u_\alpha u_\beta^T$
8: end for
4 FD Algorithm
To effectively compute $P^*$, we propose an iterative algorithm that takes advantage of the structure of matrix $\Delta$. Let $u_\alpha$ be the $N \times 1$ vector containing the first column of $P_{k,\alpha}$. We write $P_{k,\alpha} = [u_\alpha \mid R]$ and $P_{k,\alpha} \Delta = [u_\alpha \mid 0]$. It follows then that $P_{k,\alpha} \Delta P_{k,\beta}^T = u_\alpha u_\beta^T$. Furthermore, if we denote $(P_{k,\alpha})^j \Delta (P_{k,\beta}^T)^j = u_{\alpha,j}\, u_{\beta,j}^T$, with $u_{\alpha,j}$ being $N \times 1$ and $u_{\beta,j}^T$ being $1 \times N$, it follows that
$$(P_{k,\alpha})^{j+1} \Delta (P_{k,\beta}^T)^{j+1} = P_{k,\alpha}\big((P_{k,\alpha})^{j} \Delta (P_{k,\beta}^T)^{j}\big) P_{k,\beta}^T = P_{k,\alpha} u_{\alpha,j} u_{\beta,j}^T P_{k,\beta}^T = (P_{k,\alpha} u_{\alpha,j})(P_{k,\beta} u_{\beta,j})^T = u_{\alpha,j+1}\, u_{\beta,j+1}^T.$$
Hence, we replaced one of the two $N \times N$ matrix products with a product between an $N \times N$ matrix and an $N \times 1$ vector, and the other with a product of an $N \times 1$ vector by a $1 \times N$ vector. This reduces the complexity of our algorithm from $O(n^3)$ to $O(n^2)$.

The final algorithm is shown in Alg. 1.
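A direct NumPy transcription of Alg. 1 might look as follows; the iteration count is left as a parameter (the experiments in Section 6 use 200), and for row-substochastic inputs the result can be checked against the reference solver of Eq. (9):

```python
import numpy as np

def fusion_with_diffusion(Pka, Pkb, n_iter=200):
    """Alg. 1: accumulate P* = sum_e (Pka^e) Delta (Pkb^T)^e using only
    matrix-vector products, i.e., O(N^2) work per iteration."""
    N = Pka.shape[0]
    P_star = np.zeros((N, N))
    P_star[0, 0] = 1.0                     # step 1: P* = Delta
    ua = Pka[:, 0].copy()                  # step 2: first columns
    ub = Pkb[:, 0].copy()
    P_star += np.outer(ua, ub)             # step 3
    for _ in range(2, n_iter + 1):         # steps 4-8
        ua, ub = Pka @ ua, Pkb @ ub
        P_star += np.outer(ua, ub)
    return P_star
```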
5 Weight Estimation
The weight $\mu_\alpha$ of measure $S_\alpha$ is proportional to how well $S_\alpha$ is able to distinguish the target $I_1$ in frame $I_{t-1}$ from the background surrounding the target. Let $\{B_h \mid h = 1, \ldots, H\}$ be a set of background patches surrounding the target $I_1$ in frame $I_{t-1}$. The weight of $S_\alpha$ is defined as
$$\mu_\alpha = \frac{1}{\frac{1}{H} \sum_{h=1}^{H} S_\alpha(I_1, B_h)} \quad (19)$$
Thus, the smaller the values of $S_\alpha$, the larger is the weight $\mu_\alpha$. The weights of all similarity measures are normalized so that $\sum_{\alpha=1}^{Q} \mu_\alpha = 1$. The weights are computed for every frame in order to accommodate appearance changes of the tracked object.
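Eq. (19) and the normalization translate into a few lines; the sketch assumes the per-measure background similarities have been collected into a $Q \times H$ array (our layout choice):

```python
import numpy as np

def measure_weights(S_bg):
    """S_bg[a, h] = S_a(I_1, B_h) for Q measures and H background patches.
    Returns weights mu normalized to sum to one, per Eq. (19)."""
    mu = 1.0 / S_bg.mean(axis=1)   # low background similarity -> high weight
    return mu / mu.sum()
```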
6 Experimental Results
We validate our tracking algorithm on eight challenging videos from [4] and [17]: Sylvester, Coke
Can, Tiger1, Cliff Bar, Coupon Book, Surfer, Tiger2, and PETS01D1. We compare our method with
six famous state-of-the-art tracking algorithms including Multiple Instance Learning tracker (MIL)
[4], Fragment tracker (Frag) [1], IVT [19], Online Adaboost tracker (OAB) [10], SemiBoost tracker
(Semi) [11], Mean-Shift (MS) tracker, and a simple weighted linear sum of multiple cues (Linear).
For the comparison methods, we run source code of Semi, Frag, MIL, IVT and OAB supplied by the
authors on the testing videos and use the parameters mentioned in their papers directly. For MS, we
implement it based on OpenCV. For Linear, we use three kinds of image features to get the affinity
and then simply calculate the average affinity and use the diffusion process mentioned in [26]. Note
that all the parameters in our algorithm were fixed for all the experiments.
In our experiments, HOG[9], LBP[18] and Haar-like[4] features are used to represent the image
patches. Hence each pair of patches is compared with three different similarities based on histograms
6
Cliff Bar
Coke Can
60
40
100
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
120
50
100
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
160
140
Center Location Error (pixel)
80
180
140
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
Center Location Error (pixel)
Center Location Error (pixel)
100
Center Location Error (pixel)
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
120
Tiger2
Tiger1
150
140
80
60
40
120
100
80
60
40
20
50
100
150
Frame #
200
250
300
0
50
100
250
300
350
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
120
100
150
100
80
60
40
200
Frame #
250
300
350
0
400
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
150
0
100
150
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
350
100
250
300
350
400
300
MS
Frag(KS)
Frag(EMD)
Frag(Chi)
IVT
Linear
Our
500
250
200
150
100
50
200
Frame #
600
400
400
300
200
100
50
20
0
0
50
PETS01D1
450
200
Center Location Error (pixel)
140
50
Surfer
250
180
160
20
0
Sylvester
Coupon Book
Center Location Error (pixel)
150
200
Frame #
0
Center Location Error (pixel)
0
0
Center Location Error (pixel)
0
20
0
0
50
100
150
200
Frame #
250
300
350
0
200
400
600
800
Frame #
1000
1200
0
1400
0
50
100
150
200
Frame #
250
300
350
0
50
400
100
150
200
250
Frame #
300
350
400
450
Figure 1: Center Location Error (CLE) versus frame number
of HOG, LBP, and Haar-like feature. For the experimental parameters, we set r = 15 pixels,
H = 300, k = 12 and the iteration number in Alg. 1 is set to 200.
To impartially and comprehensively compare our algorithm with other state-of-the-art trackers, we
used two kinds of quantitative comparisons: Average Center Location Error (ACLE) and Precision Score [4]. The results are shown in Table 1 and Table 2, respectively. Two kinds of curve evaluation methodologies are also used: the Center Location Error (CLE) curve and the Precision Plots curve¹. The
results are shown in Fig.1 and Fig.2, respectively.
Table 1: Average Center Location Error (ACLE, measured in pixels). Red color indicates best performance, blue color indicates second best, green color indicates the third best.

| Video | MS | OAB | IVT | Semi | Frag1 | Frag2 | Frag3 | MIL | Linear | Our |
|---|---|---|---|---|---|---|---|---|---|---|
| Coke Can | 43.7 | 25.0 | 37.3 | 40.5 | 69.1 | 69.0 | 34.1 | 31.9 | 16.8 | 15.4 |
| Cliff Bar | 43.8 | 34.6 | 47.1 | 57.2 | 34.7 | 34.0 | 44.8 | 14.2 | 15.0 | 6.1 |
| Tiger 1 | 45.5 | 39.8 | 50.2 | 20.9 | 39.7 | 26.7 | 31.1 | 7.6 | 23.8 | 6.9 |
| Tiger 2 | 47.6 | 13.2 | 98.5 | 39.3 | 38.6 | 38.8 | 51.9 | 20.6 | 6.5 | 5.7 |
| Coupon Book | 20.0 | 17.7 | 32.2 | 65.1 | 55.9 | 56.1 | 67.0 | 19.8 | 13.6 | 6.5 |
| Sylvester | 20.0 | 35.0 | 96.1 | 21.0 | 23.0 | 12.2 | 10.1 | 11.4 | 10.5 | 9.3 |
| Surfer | 17.0 | 13.4 | 19.0 | 9.3 | 140.1 | 139.8 | 138.6 | 7.7 | 6.5 | 5.5 |
| PETS01D1 | 18.1 | 7.1 | 241.8 | 158.9 | 6.7 | 7.2 | 9.5 | 11.7 | 245.4 | 6.0 |
Table 2: Precision Score (precision at the fixed threshold of 15). Red color indicates best performance, blue color indicates second best, green color indicates the third best.

| Video | MS | OAB | IVT | Semi | Frag1 | Frag2 | Frag3 | MIL | Linear | Our |
|---|---|---|---|---|---|---|---|---|---|---|
| Coke Can | 0.11 | 0.21 | 0.15 | 0.18 | 0.09 | 0.09 | 0.17 | 0.24 | 0.36 | 0.46 |
| Cliff Bar | 0.08 | 0.21 | 0.19 | 0.34 | 0.20 | 0.23 | 0.12 | 0.79 | 0.52 | 0.95 |
| Tiger 1 | 0.05 | 0.17 | 0.03 | 0.52 | 0.21 | 0.38 | 0.38 | 0.90 | 0.54 | 0.91 |
| Tiger 2 | 0.06 | 0.65 | 0.01 | 0.44 | 0.09 | 0.09 | 0.12 | 0.66 | 0.89 | 0.95 |
| Coupon Book | 0.16 | 0.18 | 0.21 | 0.41 | 0.39 | 0.39 | 0.39 | 0.23 | 0.53 | 1.00 |
| Sylvester | 0.46 | 0.30 | 0.06 | 0.53 | 0.72 | 0.78 | 0.81 | 0.76 | 0.86 | 0.90 |
| Surfer | 0.59 | 0.61 | 0.40 | 0.89 | 0.19 | 0.21 | 0.23 | 0.93 | 1.00 | 1.00 |
| PETS01D1 | 0.38 | 1.00 | 0.01 | 0.29 | 0.99 | 0.97 | 0.95 | 0.80 | 0.02 | 1.00 |
Comparison to matching based methods: MS, IVT, Frag, and Linear are all matching based tracking algorithms. In MS, the famous Bhattacharyya coefficient is used to measure the distance between histogram distributions; for Frag, we test it under three different measurement strategies: the Kolmogorov-Smirnov statistic, EMD, and Chi-Square distance, represented as Frag1, Frag2, Frag3 in Table 1 and Table 2, respectively. For Linear Combination, the average similarity is used and the diffusion process in [26] is used to improve the similarity measure. Our FD approach clearly outperforms the other approaches, as shown in Table 1 and Table 2. Our tracking results achieve the best performance on all the testing videos, especially for the Precision Plots shown in Table 2. Even though we set the threshold to 15, which is more challenging for all the trackers, we still get three 1.00 scores. In some videos like Sylvester and PETS01D1, Frag achieves comparable results with our method, but it works badly on other videos, which means that a specific distance measure can only work in some special cases, while our fusion framework is robust to all the challenges that appear in the videos. Our method is always better than Linear Combination, which means that fusion with diffusion can really improve the tracking performance. The stability of our method can be clearly seen in the plots of location error as a function of frame number in Fig. 1. Our tracking results are always stable, which means that we do not lose the target in the whole tracking process. This is also reflected in the fact that our precision is always better than all the other methods under different thresholds, as shown in Fig. 2.

¹More details about the meaning of Precision Plots can be found in [4].

[Figure 2: Precision Plots (precision versus center-location-error threshold) for Coke Can, Cliff Bar, Tiger1, Tiger2, Coupon Book, Sylvester, Surfer, and PETS01D1, comparing MS, Frag(KS), Frag(EMD), Frag(Chi), IVT, Linear, and Our method; plot data omitted.]
Figure 2: Precision Plots. The threshold is set to 15 in our experiments.
Comparison to classification based methods: MIL and OAB are both classification based tracking
algorithms. For OAB, on-line Adaboost is used to train the classifier for the foreground and background classification. MIL combines multiple instance learning with on-line Adaboost. Haar-like
features are used in both methods. Again our method outperforms those two methods, as can be seen in Table 1 and Table 2.
Comparison to semi-supervised learning based methods: SemiBoost combines semi-supervised
learning with on-line Adaboost. Our method is also similar to semi-supervised learning, since we build the graph model on consecutive frames, which means that both our method and SemiBoost use the information from the forthcoming frame. Our method is always better than SemiBoost, as shown
in Table 1 and Table 2.
7 Conclusions
In this paper, a novel Fusion with Diffusion process is proposed for robust visual tracking. Pairs
of similarity measures are fused into a single similarity measure with a diffusion process on the
tensor product of two graphs determined by the two similarity measures. The proposed method has
time complexity of $O(n^2)$, which makes it suitable for real time tracking. It is evaluated on several challenging videos, and it significantly outperforms a large number of state-of-the-art tracking
algorithms.
Acknowledgments
We would like to thank all the authors for releasing their source codes and testing videos, since they
made our experimental evaluation possible. This work was supported by NSF Grants IIS-0812118,
BCS-0924164, OIA-1027897, and by the National Natural Science Foundation of China (NSFC)
Grants 60903096, 61222308 and 61173120.
References
[1] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragment-based tracking using the integral histogram. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 798-805, 2006.
[2] S. Avidan. Support vector tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1064-1072, 2004.
[3] S. Avidan. Ensemble tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(2):261-271, 2007.
[4] B. Babenko, M. Yang, and S. Belongie. Robust object tracking with online multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1619-1632, 2011.
[5] X. Bai, B. Wang, C. Yao, W. Liu, and Z. Tu. Co-transduction for shape retrieval. IEEE Transactions on Image Processing, 21(5):2747-2757, 2012.
[6] X. Bai, X. Yang, L. J. Latecki, W. Liu, and Z. Tu. Learning context sensitive shape similarity by graph transduction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5):861-874, 2010.
[7] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56 (special issue on clustering):209-239, 2004.
[8] D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):564-575, 2003.
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 886-893, 2005.
[10] H. Grabner, M. Grabner, and H. Bischof. Real-time tracking via on-line boosting. In British Machine Vision Conference (BMVC), pages 47-56, 2006.
[11] H. Grabner, C. Leistner, and H. Bischof. Semi-supervised on-line boosting for robust tracking. In European Conference on Computer Vision (ECCV), pages 234-247, 2008.
[12] N. Jiang, W. Liu, and Y. Wu. Learning adaptive metric for robust visual tracking. IEEE Transactions on Image Processing, 20(8):2288-2300, 2011.
[13] J. Kwon and K. M. Lee. Visual tracking decomposition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[14] J. Lim, D. Ross, R.-S. Lin, and M.-H. Yang. Incremental learning for visual tracking. In Advances in Neural Information Processing Systems (NIPS), 2005.
[15] R. Liu, J. Cheng, and H. Lu. A robust boosting tracker with minimum error bound in a co-training framework. In IEEE International Conference on Computer Vision (ICCV), 2009.
[16] X. Mei and H. Ling. Robust visual tracking and vehicle classification via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(11):2259-2272, 2011.
[17] X. Mei, H. Ling, Y. Wu, E. Blasch, and L. Bai. Minimum error bounded efficient L1 tracker with occlusion detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[18] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002.
[19] D. Ross, J. Lim, R.-S. Lin, and M.-H. Yang. Incremental learning for robust visual tracking. International Journal of Computer Vision, 77(1):125-141, 2008.
[20] J. Santner, C. Leistner, A. Saffari, T. Pock, and H. Bischof. PROST: Parallel robust online simple tracking. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[21] K. Sinha and M. Belkin. Semi-supervised learning using sparse eigenfunction bases. In Advances in Neural Information Processing Systems (NIPS), 2009.
[22] S. Vishwanathan, N. Schraudolph, R. Kondor, and K. Borgwardt. Graph kernels. Journal of Machine Learning Research, 11(4):1201-1242, 2010.
[23] B. Wang, J. Jiang, W. Wang, Z.-H. Zhou, and Z. Tu. Unsupervised metric fusion by cross diffusion. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[24] W. Wang and Z. Zhou. A new analysis of co-training. In International Conference on Machine Learning (ICML), 2010.
[25] Y. Wu and J. Fan. Contextual flow. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[26] X. Yang and L. J. Latecki. Affinity learning on a tensor product graph with applications to shape and image retrieval. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[27] W. Zhong, H. Lu, and M.-H. Yang. Robust object tracking via sparsity-based collaborative model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[28] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Scholkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), 2004.
[29] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Department of Computer Sciences, University of Wisconsin, Madison, 2005.
4,190 | 4,793 |
Co-Regularized Hashing for Multimodal Data
Yi Zhen and Dit-Yan Yeung
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{yzhen,dyyeung}@cse.ust.hk
Abstract
Hashing-based methods provide a very promising approach to large-scale similarity search. To obtain compact hash codes, a recent trend seeks to learn the hash
functions from data automatically. In this paper, we study hash function learning
in the context of multimodal data. We propose a novel multimodal hash function
learning method, called Co-Regularized Hashing (CRH), based on a boosted co-regularization framework. The hash functions for each bit of the hash codes are
learned by solving DC (difference of convex functions) programs, while the learning for multiple bits proceeds via a boosting procedure so that the bias introduced
by the hash functions can be sequentially minimized. We empirically compare
CRH with two state-of-the-art multimodal hash function learning methods on two
publicly available data sets.
1 Introduction
Nearest neighbor search, a.k.a. similarity search, plays a fundamental role in many important applications, including document retrieval, object recognition, and near-duplicate detection. Among
the methods proposed thus far for nearest neighbor search [1], hashing-based methods [2, 3] have
attracted considerable interest in recent years. The major advantage of hashing-based methods is
that they index data using binary hash codes which enjoy not only low storage requirements but
also high computational efficiency. To preserve similarity in the data, a family of algorithms called
locality sensitive hashing (LSH) [4, 5] has been developed over the past decade. The basic idea of
LSH is to hash the data into bins so that the collision probability reflects data similarity. LSH is very
appealing in that it has theoretical guarantee and is also simple to implement. However, in practice
LSH algorithms often generate long hash codes in order to achieve acceptable performance because
the theoretical guarantee only holds asymptotically. This shortcoming can be attributed largely to
their data-independent nature which cannot capture the data characteristics very accurately in the
hash codes. Besides, in many applications, neighbors cannot be defined easily using some generic
distance or similarity measures. As such, a new research trend has emerged over the past few years
by learning the hash functions from data automatically. In the sequel, we refer to this new trend as
hash function learning (HFL).
Boosting, as one of the most popular machine learning approaches, was first applied to learning hash
functions for pose estimation [6]. Later, impressive performance for HFL using restricted Boltzmann machines was reported [7]. These two early HFL methods have been successfully applied to
content-based image retrieval in which large-scale data sets are commonly encountered [8]. A number of algorithms have been proposed since then. Spectral hashing (SH) [9] treats HFL as a special
case of manifold learning and uses an efficient algorithm based on eigenfunctions. One shortcoming of spectral hashing is in its assumption, which requires that the data be uniformly distributed.
To overcome this limitation, several methods have been proposed, including binary reconstructive
embeddings [10], shift-invariant kernel hashing [11], distribution matching [12], optimized kernel
hashing [13], and minimal loss hashing [14]. Recently, some semi-supervised hashing models have
been developed to combine both feature similarity and semantic similarity for HFL [15, 16, 17, 18].
To further improve the scalability of these methods, Liu et al. [19] presented a fast algorithm based
on anchor graphs.
Existing HFL algorithms have enjoyed wide success in challenging applications. Nevertheless, they
can only be applied to a single type of data, called unimodal data, which refers to data from a single
modality such as image, text, or audio. Nowadays, it is common to find similarity search applications
that involve multimodal data. For example, given an image of a tourist attraction as query, one
would like to retrieve some textual documents that provide more detailed information about the place
of interest. Because data from different modalities reside in different feature spaces, performing
multimodal similarity search will be made much easier and faster if the multimodal data can be
mapped into a common Hamming space. However, it is challenging to do so because data from
different modalities generally have very different representations.
As far as we know, there exist only two multimodal HFL methods. Bronstein et al. [20] made the
first attempt to learn linear hash functions using eigendecomposition and boosting, while Kumar
et al. [21] extended spectral hashing to the multiview setting and proposed a cross-view hashing
model. One major limitation of these two methods is that they both rely on eigendecomposition
operations which are computationally very demanding when the data dimensionality is high. Moreover, they consider applications for shape retrieval, image alignment, and people search which are
quite different from the multimodal retrieval applications of interest here.
In this paper, we propose a novel multimodal HFL method, called Co-Regularized Hashing (CRH),
based on a boosted co-regularization framework. For each bit of the hash codes, CRH learns a group
of hash functions, one for each modality, by minimizing a novel loss function. Although the loss
function is non-convex, it is in a special form which can be expressed as a difference of convex
functions. As a consequence, the Concave-Convex Procedure (CCCP) [22] can be applied to solve
the optimization problem iteratively. We use a stochastic sub-gradient method, which converges
very fast, in each CCCP iteration to find a local optimum. After learning the hash functions for one
bit, CRH proceeds to learn more bits via a boosting procedure such that the bias introduced by the
hash functions can be sequentially minimized.
In the next section, we present the CRH method in detail. Extensive empirical study using two data
sets is reported in Section 3. Finally, Section 4 concludes the paper.
2 Co-Regularized Hashing
We use boldface lowercase letters and calligraphic letters to denote vectors and sets, respectively.
For a vector $x$, $x^T$ denotes its transpose and $\|x\|$ its $\ell_2$ norm.
2.1 Objective Function
Suppose that there are two sets of data points from two modalities,¹ e.g., $\{x_i \in \mathcal{X}\}_{i=1}^{I}$ for a set of $I$ images from some feature space $\mathcal{X}$ and $\{y_j \in \mathcal{Y}\}_{j=1}^{J}$ for a set of $J$ textual documents from another feature space $\mathcal{Y}$. We also have a set of $N$ inter-modality point pairs $\Omega = \{(x_{a_1}, y_{b_1}), (x_{a_2}, y_{b_2}), \ldots, (x_{a_N}, y_{b_N})\}$, where, for the $n$th pair, $a_n$ and $b_n$ are indices of the points in $\mathcal{X}$ and $\mathcal{Y}$, respectively. We further assume that each pair has a label $s_n = 1$ if $x_{a_n}$ and $y_{b_n}$ are similar and $s_n = 0$ otherwise. The notion of inter-modality similarity varies from application to application. For example, if an image includes a tiger and a textual document is a research paper on tigers, they should be labeled as similar. On the other hand, it is highly unlikely to label the image as similar to a textual document on basketball.
For each bit of the hash codes, we define two linear hash functions as follows:
$$f(x) = \mathrm{sgn}(w_x^T x) \quad \text{and} \quad g(y) = \mathrm{sgn}(w_y^T y),$$
where $\mathrm{sgn}(\cdot)$ denotes the sign function, and $w_x$ and $w_y$ are projection vectors which, ideally, should map similar points to the same hash bin and dissimilar points to different bins. Our goal is to achieve HFL by learning $w_x$ and $w_y$ from the multimodal data.
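Once learned, the projections index data by sign thresholding; a minimal sketch, stacking the $K$ per-bit projection vectors into a matrix (our notation):

```python
import numpy as np

def hash_codes(X, W):
    """X: n x d data matrix; W: d x K matrix whose k-th column is the learned
    projection for bit k.  Returns n x K binary codes via sign thresholding."""
    return (X @ W >= 0).astype(np.uint8)
```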
¹For simplicity of our presentation, we focus on the bimodal case here and leave the discussion on extension to more than two modalities to Section 2.4.
To achieve this goal, we propose to minimize the following objective function w.r.t. (with respect to) $w_x$ and $w_y$:
$$O = \frac{1}{I}\sum_{i=1}^{I} \ell^x_i + \frac{1}{J}\sum_{j=1}^{J} \ell^y_j + \gamma \sum_{n=1}^{N} \omega_n \ell^*_n + \frac{\lambda_x}{2}\|w_x\|^2 + \frac{\lambda_y}{2}\|w_y\|^2, \quad (1)$$
where $\ell^x_i$ and $\ell^y_j$ are intra-modality loss terms for modalities $\mathcal{X}$ and $\mathcal{Y}$, respectively. In this work, we define them as:
$$\ell^x_i = \big[1 - f(x_i)(w_x^T x_i)\big]_+ = \big[1 - |w_x^T x_i|\big]_+, \qquad \ell^y_j = \big[1 - g(y_j)(w_y^T y_j)\big]_+ = \big[1 - |w_y^T y_j|\big]_+,$$
where $[a]_+$ is equal to $a$ if $a \ge 0$ and $0$ otherwise. We note that the intra-modality loss terms are similar to the hinge loss in the (linear) support vector machine but have quite different meaning. Conceptually, we want the projected values to be far away from 0 and hence expect the hash functions learned to have good generalization ability [16]. For the inter-modality loss term $\ell^*_n$, we associate with each point pair a weight $\omega_n$, with $\sum_{n=1}^{N} \omega_n = 1$, to normalize the loss as well as compute the bias of the hash functions. In this paper, we define $\ell^*_n$ as
$$\ell^*_n = s_n d_n^2 + (1 - s_n)\,\tau(d_n),$$
where $d_n = w_x^T x_{a_n} - w_y^T y_{b_n}$ and $\tau(d)$ is called the smoothly clipped inverted squared deviation (SCISD) function. The loss function so defined requires that similar inter-modality points, i.e., $s_n = 1$, have small distance after projection, and dissimilar ones, i.e., $s_n = 0$, have large distance. With these two kinds of loss terms, we expect that the learned hash functions can enjoy the large-margin property while effectively preserving the inter-modality similarity.

The SCISD function was first proposed in [23]. It can be defined as follows:
$$\tau(d) = \begin{cases} -\frac{1}{2}d^2 + \frac{a\lambda^2}{2} & \text{if } |d| \le \lambda \\ \frac{d^2 - 2a\lambda|d| + a^2\lambda^2}{2(a-1)} & \text{if } \lambda < |d| \le a\lambda \\ 0 & \text{if } a\lambda < |d|, \end{cases}$$
where $a$ and $\lambda$ are two user-specified parameters. The SCISD function penalizes projection vectors that result in small distance between dissimilar points after projection. A more important property is that it can be expressed as a difference of two convex functions. Specifically, we can express $\tau(d) = \tau_1(d) - \tau_2(d)$ where
$$\tau_1(d) = \begin{cases} 0 & \text{if } |d| \le \lambda \\ \frac{a d^2 - 2a\lambda|d| + a\lambda^2}{2(a-1)} & \text{if } \lambda < |d| \le a\lambda \\ \frac{1}{2}d^2 - \frac{a\lambda^2}{2} & \text{if } a\lambda < |d| \end{cases} \qquad \text{and} \qquad \tau_2(d) = \frac{1}{2}d^2 - \frac{a\lambda^2}{2}.$$
2.2 Optimization
Though the objective function (1) is nonconvex w.r.t. $w_x$ and $w_y$, we can optimize it w.r.t. $w_x$ and $w_y$ in an alternating manner. Take $w_x$ for example; we remove the irrelevant terms and get the following objective:
$$\frac{1}{I}\sum_{i=1}^{I} \ell^x_i + \frac{\lambda_x}{2}\|w_x\|^2 + \gamma \sum_{n=1}^{N} \omega_n \ell^*_n, \quad (2)$$
where
$$\ell^x_i = \begin{cases} 0 & \text{if } |w_x^T x_i| \ge 1 \\ 1 - w_x^T x_i & \text{if } 0 \le w_x^T x_i < 1 \\ 1 + w_x^T x_i & \text{if } -1 < w_x^T x_i < 0. \end{cases}$$
It is easy to realize that the objective function (2) can be expressed as a difference of two convex functions in different cases. As a consequence, we can use CCCP to solve the nonconvex optimization problem iteratively, with each iteration minimizing a convex upper bound of the original objective function.
Briefly speaking, given an objective function $f_0(x) - g_0(x)$ where both $f_0$ and $g_0$ are convex, CCCP works iteratively as follows. The variable $x$ is first randomly initialized to $x^{(0)}$. At the $t$th iteration, CCCP minimizes the following convex upper bound of $f_0(x) - g_0(x)$ at location $x^{(t)}$:
$$f_0(x) - \Big(g_0(x^{(t)}) + \nabla_x g_0(x^{(t)})\,(x - x^{(t)})\Big),$$
where $\nabla_x g_0(x^{(t)})$ is the first derivative of $g_0(x)$ at $x^{(t)}$. This optimization problem can be solved using any convex optimization solver to obtain $x^{(t+1)}$. Given an initial value $x^{(0)}$, the solution sequence $\{x^{(t)}\}$ found by CCCP is guaranteed to reach a local minimum or a saddle point.
For our problem, the optimization problem at the $t$th iteration minimizes the following upper bound of Equation (2) w.r.t. $w_x$:
$$O_x = \gamma \sum_{n=1}^{N} \omega_n \big(s_n d_n^2 + (1 - s_n)\zeta^x_n\big) + \frac{\lambda_x}{2}\|w_x\|^2 + \frac{1}{I}\sum_{i=1}^{I} \ell^x_i, \quad (3)$$
where $\zeta^x_n = \tau_1(d_n) - \tau_2(d_n^{(t)}) - d_n^{(t)}\, x_{a_n}^T (w_x - w_x^{(t)})$, $d_n^{(t)} = (w_x^{(t)})^T x_{a_n} - w_y^T y_{b_n}$, and $w_x^{(t)}$ is the value of $w_x$ at the $t$th iteration.
To find a locally optimal solution to problem (3), we can use any gradient-based method. In this work, we develop a stochastic sub-gradient solver based on Pegasos [24], which is known to be one of the fastest solvers for margin-based classifiers. Specifically, we randomly select $k$ points from each modality and $l$ point pairs to evaluate the sub-gradient at each iteration.

The key step of our method is to evaluate the sub-gradient of objective function (3) w.r.t. $w_x$, which can be computed as
$$\frac{\partial O_x}{\partial w_x} = 2\gamma \sum_{n=1}^{N} \omega_n s_n d_n x_{a_n} + \gamma \sum_{n=1}^{N} \omega_n \pi^x_n + \lambda_x w_x - \frac{1}{I}\sum_{i=1}^{I} \theta^x_i, \quad (4)$$
where $\pi^x_n = (1 - s_n)\big(\frac{\partial \tau_1}{\partial d_n} - d_n^{(t)}\big)\, x_{a_n}$,
$$\frac{\partial \tau_1}{\partial d_n} = \begin{cases} 0 & \text{if } |d_n| \le \lambda \\ \frac{a(d_n - \lambda\, \mathrm{sgn}(d_n))}{a-1} & \text{if } \lambda < |d_n| \le a\lambda \\ d_n & \text{if } a\lambda < |d_n| \end{cases} \qquad \text{and} \qquad \theta^x_i = \begin{cases} 0 & \text{if } |w_x^T x_i| \ge 1 \\ \mathrm{sgn}(w_x^T x_i)\, x_i & \text{if } |w_x^T x_i| < 1. \end{cases}$$
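A single stochastic step of this solver might be sketched as below; it reuses dtau1 from the SCISD sketch above, the Pegasos-style step size $\eta_t = 1/(\lambda_x t)$ is an assumption following [24], and the sampled sums are left unrescaled, as in a plain sub-gradient sketch:

```python
import numpy as np

def subgrad_step_wx(wx, wy, wx_t, X, Y, pairs, s, omega,
                    gamma, lam_x, a, lam, t, rng, k=1, l=500):
    """One stochastic sub-gradient step on O_x, following Eqs. (3)-(4).
    X: I x d_x features, Y: J x d_y features, pairs: (a_n, b_n) index pairs,
    s: pair labels, omega: pair weights, wx_t: CCCP anchor w_x^(t)."""
    I = X.shape[0]
    grad = lam_x * wx                                  # regularizer term
    for n in rng.choice(len(pairs), size=l):           # sampled point pairs
        an, bn = pairs[n]
        dn = wx @ X[an] - wy @ Y[bn]
        dn_t = wx_t @ X[an] - wy @ Y[bn]               # d_n at the CCCP anchor
        pair_grad = (2.0 * s[n] * dn
                     + (1 - s[n]) * (dtau1(dn, a, lam) - dn_t)) * X[an]
        grad += gamma * omega[n] * pair_grad
    for i in rng.choice(I, size=k):                    # sampled data points
        m = wx @ X[i]
        if abs(m) < 1:                                 # theta^x_i of Eq. (4)
            grad -= np.sign(m) * X[i] / I
    eta = 1.0 / (lam_x * t)                            # assumed step size [24]
    return wx - eta * grad
```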
Similarly, the objective function for the optimization problem w.r.t. $w_y$ at the $t$th CCCP iteration is:
$$O_y = \gamma \sum_{n=1}^{N} \omega_n \big(s_n d_n^2 + (1 - s_n)\zeta^y_n\big) + \frac{\lambda_y}{2}\|w_y\|^2 + \frac{1}{J}\sum_{j=1}^{J} \ell^y_j, \quad (5)$$
where $\zeta^y_n = \tau_1(d_n) - \tau_2(d_n^{(t)}) + d_n^{(t)}\, y_{b_n}^T (w_y - w_y^{(t)})$, $d_n^{(t)} = w_x^T x_{a_n} - (w_y^{(t)})^T y_{b_n}$, $w_y^{(t)}$ is the value of $w_y$ at the $t$th iteration, and
$$\ell^y_j = \begin{cases} 0 & \text{if } |w_y^T y_j| \ge 1 \\ 1 - w_y^T y_j & \text{if } 0 \le w_y^T y_j < 1 \\ 1 + w_y^T y_j & \text{if } -1 < w_y^T y_j < 0. \end{cases}$$
The corresponding sub-gradient is given by
$$\frac{\partial O_y}{\partial w_y} = -2\gamma \sum_{n=1}^{N} \omega_n s_n d_n y_{b_n} - \gamma \sum_{n=1}^{N} \omega_n \pi^y_n + \lambda_y w_y - \frac{1}{J}\sum_{j=1}^{J} \theta^y_j, \quad (6)$$
where $\pi^y_n = (1 - s_n)\big(\frac{\partial \tau_1}{\partial d_n} - d_n^{(t)}\big)\, y_{b_n}$ and
$$\theta^y_j = \begin{cases} 0 & \text{if } |w_y^T y_j| \ge 1 \\ \mathrm{sgn}(w_y^T y_j)\, y_j & \text{if } |w_y^T y_j| < 1. \end{cases}$$
2.3 Algorithm
So far we have only discussed how to learn the hash functions for one bit of the hash codes. To learn
the hash functions for multiple bits, one could repeat the same procedure and treat the learning for
each bit independently. However, as reported in previous studies [15, 19], it is very important to take
into consideration the relationships between different bits in HFL. In other words, to learn compact
hash codes, we should coordinate the learning of hash functions for different bits.
To this end, we take the standard AdaBoost [25] approach to learn multiple bits sequentially. Intuitively, this approach allows learning of the hash functions in later stages to be aware of the bias
introduced by their antecedents. The overall algorithm of CRH is summarized in Algorithm 1.
Algorithm 1 Co-Regularized Hashing
Input:
    X, Y: multimodal data
    Θ: the set of inter-modality point pairs
    K: code length
    λx, λy, γ: regularization parameters
    a, λ: parameters for the SCISD function
Output:
    wx^(k), k = 1, . . . , K: projection vectors for X
    wy^(k), k = 1, . . . , K: projection vectors for Y
Procedure:
    Initialize ω_n^(1) = 1/N, ∀n ∈ {1, 2, . . . , N}.
    for k = 1 to K do
        repeat
            Optimize Equation (3) to get wx^(k);
            Optimize Equation (5) to get wy^(k);
        until convergence.
        Compute the error of the current hash functions:
            ε_k = Σ_{n=1}^{N} ω_n^(k) I[s_n ≠ h_n],
        where I[a] = 1 if a is true and I[a] = 0 otherwise, and
            h_n = 1 if f(x_{a_n}) = g(y_{b_n}), and h_n = 0 otherwise.
        Set β_k = ε_k / (1 − ε_k).
        Update the weight of each point pair:
            ω_n^(k+1) = ω_n^(k) β_k^{1 − I[s_n ≠ h_n]}.
    end for
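A compact sketch of this outer boosting loop follows; learn_bit stands for the alternating CCCP optimization of Eqs. (3) and (5), and predict_bit for checking whether the two learned hash functions agree on a pair. The names and the final weight normalization are illustrative additions, not part of Algorithm 1.

import numpy as np

def crh_boost(learn_bit, predict_bit, s, K):
    # s: array of pair labels s_n in {0, 1}; returns one (wx, wy) per bit.
    # Assumes 0 < epsilon_k < 1 at every round, as in the sketch above.
    N = len(s)
    omega = np.full(N, 1.0 / N)
    bits = []
    for k in range(K):
        wx, wy = learn_bit(omega)                 # alternate Eqs. (3)/(5) to convergence
        h = np.array([predict_bit(wx, wy, n) for n in range(N)])
        miss = (s != h).astype(float)             # I[s_n != h_n]
        eps = float(np.dot(omega, miss))          # weighted error epsilon_k
        beta = eps / (1.0 - eps)
        omega = omega * beta ** (1.0 - miss)      # down-weight pairs handled correctly
        omega = omega / omega.sum()               # renormalize (an added convenience)
        bits.append((wx, wy))
    return bits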
In the following, we briefly analyze the time complexity of Algorithm 1 for one bit. The first computationally expensive part of the algorithm is to evaluate the sub-gradients. The time complexity is
O((k + l)d), where d is the data dimensionality, and k and l are the numbers of random points and
random pairs, respectively, for the stochastic sub-gradient solver. In our experiments, we set k = 1
and l = 500. We notice that further increasing the two numbers brings no significant performance
improvement. We leave the theoretical study of the impact of k and l to our future work. Another
major computational cost comes from updating the weights of the inter-modality point pairs. The
time complexity is O(dN ), where N is the number of inter-modality point pairs.
To summarize, our algorithm scales linearly with the number of inter-modality point pairs and the
data dimensionality. In practice, the number of inter-modality point pairs is usually small, making
our algorithm very efficient.
2.4 Extensions
We briefly discuss two possible extensions of CRH in this subsection. First, we note that it is easy
to extend CRH to learn nonlinear hash functions via the kernel trick [26]. Specifically, according to
the generalized representer theorem [27], we can represent the projection vectors wx and wy as
    wx = Σ_{i=1}^{I} α_i φx(x_i)   and   wy = Σ_{j=1}^{J} β_j φy(y_j),

where φx(·) and φy(·) are kernel-induced feature maps for modalities X and Y, respectively. Then
the objective function (1) can be expressed in kernel form and kernel-based hash functions can be
learned by minimizing a new but very similar objective function.
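Under such an expansion, hashing a new point requires only kernel evaluations. A minimal sketch, assuming the bit is taken as the sign of the kernelized projection (the function and variable names are illustrative):

def hash_bit_kernel(alpha, X_train, x_new, kx):
    # wx^T phi_x(x_new) = sum_i alpha_i * kx(x_i, x_new), by the expansion above.
    proj = sum(a * kx(xi, x_new) for a, xi in zip(alpha, X_train))
    return 1 if proj >= 0 else 0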
Another possible extension is to make CRH support more than two modalities. Taking a new modality Z for example, we need to incorporate into Equation (1) the following terms: loss and regularization terms for Z, and all pairwise loss terms involving Z and other modalities, e.g., X and Y.
For both extensions, it is straightforward to adapt the algorithm presented above to solve the new
optimization problems.
2.5 Discussions
CRH is closely related to a recent multimodal metric learning method called Multiview Neighborhood Preserving Projections (Multi-NPP) [23], because CRH uses a loss function for inter-modality
point pairs which is similar to Multi-NPP. However, CRH is a general framework and other loss
functions for inter-modality point pairs can also be adopted. The two methods have at least three
significant differences. First, our focus is on HFL while Multi-NPP is on metric learning through
embedding. Second, in addition to the inter-modality loss term, the objective function in CRH includes two intra-modality loss terms for large margin HFL while Multi-NPP only has a loss term for
the inter-modality point pairs. Third, CRH uses boosting to sequentially learn the hash functions but
Multi-NPP does not take this aspect into consideration.
As discussed briefly in [23], one may first use Multi-NPP to map multimodal data into a common
real space and then apply any unimodal HFL method for multimodal hashing. However, this naive
two-stage approach has some limitations. First, both stages can introduce information loss which
impairs the quality of the hash functions learned. Second, a two-stage approach generally needs
more computational resources. These two limitations can be overcome by using a one-stage method
such as CRH.
3 Experiments
3.1 Experimental Settings
In our experiments, we compare CRH with two state-of-the-art multimodal hashing methods,
namely, Cross-Modal Similarity Sensitive Hashing (CMSSH) [20]² and Cross-View Hashing
(CVH) [21],³ for two cross-modal retrieval tasks: (1) image query vs. text database; (2) text query
vs. image database. The goal of each retrieval task is to find from the text (image) database the
nearest neighbors for the image (text) query.
We use two benchmark data sets which are, to the best of our knowledge, the largest fully paired
and labeled multimodal data sets. We further divide each data set into a database set and a query
set. To train the models, we randomly select a group of documents from the database set to form the
training set. Moreover, we randomly select 0.1% of the point pairs from the training set. For fair
comparison, all models are trained on the same training set and the experiments are repeated with 5
independent training sets.
The mean average precision (mAP) is used as the performance measure. To compute the
mAP, we first evaluate the average precision (AP) of a set of R retrieved documents by
AP = (1/L) Σ_{r=1}^{R} P(r) δ(r), where L is the number of true neighbors in the retrieved set,
P(r) denotes the precision of the top r retrieved documents, and δ(r) = 1 if the r-th retrieved
document is a true neighbor and δ(r) = 0 otherwise. The mAP is then computed by averaging the AP values over all
the queries in the query set. The larger the mAP, the better the performance. In the experiments, we
set R = 50. Besides, we also report the precision and recall within a fixed Hamming radius.
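For concreteness, one direct way to compute these AP and mAP values is sketched below (an illustration of the formula above, not the authors' evaluation code):

import numpy as np

def average_precision(relevant, R=50):
    # relevant[r] is True iff the (r+1)-th retrieved document is a true neighbor.
    rel = np.asarray(relevant[:R], dtype=float)
    L = rel.sum()
    if L == 0:
        return 0.0
    precision_at_r = np.cumsum(rel) / np.arange(1, len(rel) + 1)   # P(r)
    return float((precision_at_r * rel).sum() / L)                 # (1/L) sum P(r) delta(r)

def mean_average_precision(all_relevant, R=50):
    # Average the AP values over all queries in the query set.
    return float(np.mean([average_precision(r, R) for r in all_relevant]))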
We use cross-validation to choose the parameters for CRH and find that the model performance is
only mildly sensitive to the parameters. As a result, in all experiments, we set ?x = 0.01, ?y =
0.01, ? = 1000, a = 3.7, and ? = 1/a. Besides, unless specified otherwise, we fix the training set
size to 2,000 and the code length K to 24.
3.2 Results on Wiki
The Wiki data set, generated from Wikipedia featured articles, consists of 2,866 image-text pairs.⁴
In each pair, the text is an article describing some events or people and the image is closely related to
the content of the article. The images are represented by 128-dimensional SIFT [28] feature vectors,
while the text articles are represented by the probability distributions over 10 topics learned by a
latent Dirichlet allocation (LDA) model [29]. Each pair is labeled with one of 10 semantic classes.
We simply use these class labels to identify the neighbors. Moreover, we use 80% of the data as the
database set and the remaining 20% to form the query set.

² We used the implementation generously provided by the authors.
³ We implemented the method ourselves because the code is not publicly available.
⁴ http://www.svcl.ucsd.edu/projects/crossmodal/
The mAP values of the three methods are reported in Table 1. We can see that CRH outperforms
CVH and CMSSH under all settings and CVH performs slightly better than CMSSH. We note that
CMSSH ignores the intra-modality relational information and CVH simply treats each bit independently. Hence the performance difference is expected.
Table 1: mAP comparison on Wiki

Task                          | Method | K = 24          | K = 48          | K = 64
Image Query vs. Text Database | CRH    | 0.2537 ± 0.0206 | 0.2399 ± 0.0185 | 0.2392 ± 0.0131
                              | CVH    | 0.2043 ± 0.0150 | 0.1788 ± 0.0149 | 0.1732 ± 0.0072
                              | CMSSH  | 0.1965 ± 0.0123 | 0.1780 ± 0.0080 | 0.1624 ± 0.0073
Text Query vs. Image Database | CRH    | 0.2896 ± 0.0214 | 0.2882 ± 0.0261 | 0.2989 ± 0.0293
                              | CVH    | 0.2714 ± 0.0164 | 0.2304 ± 0.0104 | 0.2156 ± 0.0202
                              | CMSSH  | 0.2179 ± 0.0161 | 0.2094 ± 0.0072 | 0.2040 ± 0.0135
We further compare the three methods on several aspects in Figure 1. We first vary the size of the
training set in subfigures 1(a) and 1(d). Although CVH performs the best when the training set
is small, its performance is gradually surpassed by CRH as the size increases. We then plot the
precision-recall curves and recall curves for all three methods in the remaining subfigures. It is clear
that CRH outperforms its two counterparts by a large margin.
Figure 1: Results on Wiki. [Plots: panels (a)/(d) show precision within Hamming radius 2 against
training set size; panels (b)/(e) show precision-recall curves; panels (c)/(f) show recall against the
number of retrieved points. The top row is the image query vs. text database task and the bottom
row is the text query vs. image database task; each panel compares CRH, CVH, and CMSSH.]
3.3 Results on Flickr
The Flickr data set consists of 186,577 image-tag pairs pruned from the NUS data set⁵ [30] by
keeping the pairs that belong to one of the 10 largest classes. The images are represented by 500-dimensional SIFT vectors. To obtain more compact representations of the tags, we perform PCA
on the original tag occurrence features and obtain 1000-dimensional feature vectors. Each pair is
annotated by at least one of 10 semantic labels, and two points are defined as neighbors if they share
at least one label. We use 99% of the data as the database set and the remaining 1% to form the
query set.
⁵ http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm
The mAP values of the three methods are reported in Table 2. In the task of image query vs. text
database, CRH performs comparably to CMSSH, which is better than CVH. However, in the other
task, CRH achieves the best performance.
Table 2: mAP comparison on Flickr

Task                          | Method | K = 24          | K = 48          | K = 64
Image Query vs. Text Database | CRH    | 0.5259 ± 0.0094 | 0.4990 ± 0.0075 | 0.4929 ± 0.0064
                              | CVH    | 0.4717 ± 0.0035 | 0.4515 ± 0.0041 | 0.4471 ± 0.0023
                              | CMSSH  | 0.5287 ± 0.0123 | 0.5098 ± 0.0141 | 0.4911 ± 0.0220
Text Query vs. Image Database | CRH    | 0.5364 ± 0.0021 | 0.5185 ± 0.0050 | 0.5064 ± 0.0055
                              | CVH    | 0.4598 ± 0.0020 | 0.4519 ± 0.0029 | 0.4477 ± 0.0058
                              | CMSSH  | 0.5029 ± 0.0321 | 0.4815 ± 0.0101 | 0.4660 ± 0.0298
Similar to the previous subsection, we have conducted a group of experiments to compare the three
methods on several aspects and report the results in Figure 2. The results for varying the size of
the training set are plotted in subfigures 2(a) and 2(d). As more training data are used, CRH always
performs better but the performance of CVH and CMSSH has high variance. The precision-recall
curves and recall curves are shown in the remaining subfigures. Similar to the results on Wiki, CRH
performs the best. However, the performance gap is smaller here.
Figure 2: Results on Flickr. [Plots: panels (a)/(d) show precision within Hamming radius 2 against
training set size; panels (b)/(e) show precision-recall curves; panels (c)/(f) show recall against the
number of retrieved points. The top row is the image query vs. text database task and the bottom
row is the text query vs. image database task; each panel compares CRH, CVH, and CMSSH.]
4 Conclusions
In this paper, we have presented a novel method for multimodal hash function learning based on a
boosted co-regularization framework. Because the objective function of the optimization problem is
in the form of a difference of convex functions, we can devise an efficient learning algorithm based
on CCCP and a stochastic sub-gradient method. Comparative studies based on two benchmark data
sets show that CRH outperforms two state-of-the-art multimodal hashing methods.
To take this work further, we would like to conduct theoretical analysis of CRH and apply it to
some other tasks such as multimodal medical image alignment. Another possible research issue is
to develop more efficient optimization algorithms to further improve the scalability of CRH.
Acknowledgement
This research has been supported by General Research Fund 621310 from the Research Grants
Council of Hong Kong.
References
[1] Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk, editors. Nearest-Neighbor Methods in Learning
and Vision: Theory and Practice. MIT Press, March 2006.
[2] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, 1998.
[3] Moses Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[4] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in
high dimensions. Communications of the ACM, 51(1):117–122, 2008.
[5] Brian Kulis and Kristen Grauman. Kernelized locality-sensitive hashing for scalable image search. In
ICCV, 2009.
[6] Gregory Shakhnarovich, Paul Viola, and Trevor Darrell. Fast pose estimation with parameter-sensitive
hashing. In ICCV, 2003.
[7] Ruslan Salakhutdinov and Geoffrey E. Hinton. Semantic hashing. In SIGIR Workshop on Information
Retrieval and Applications of Graphical Models, 2007.
[8] Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recognition.
In CVPR, 2008.
[9] Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS 21, 2008.
[10] Brian Kulis and Trevor Darrell. Learning to hash with binary reconstructive embeddings. In NIPS 22,
2009.
[11] Maxim Raginsky and Svetlana Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In
NIPS 22, 2009.
[12] Ruei-Sung Lin, David A. Ross, and Jay Yagnik. SPEC hashing: Similarity preserving algorithm for
entropy-based coding. In CVPR, 2010.
[13] Junfeng He, Wei Liu, and Shih-Fu Chang. Scalable similarity search with optimized kernel hashing. In
KDD, 2010.
[14] Mohammad Norouzi and David J. Fleet. Minimal loss hashing for compact binary codes. In ICML, 2011.
[15] Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Semi-supervised hashing for scalable image retrieval. In
CVPR, 2010.
[16] Yadong Mu, Jialie Shen, and Shuicheng Yan. Weakly-supervised hashing in kernel space. In CVPR, 2010.
[17] Dan Zhang, Fei Wang, and Luo Si. Composite hashing with multiple information sources. In SIGIR,
2011.
[18] Jingkuan Song, Yi Yang, Zi Huang, Heng Tao Shen, and Richang Hong. Multiple feature hashing for
real-time large scale near-duplicate video retrieval. In ACM MM, 2011.
[19] Wei Liu, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Hashing with graphs. In ICML, 2011.
[20] Michael M. Bronstein, Alexander M. Bronstein, Fabrice Michel, and Nikos Paragios. Data fusion through
cross-modality metric learning using similarity-sensitive hashing. In CVPR, 2010.
[21] Shaishav Kumar and Raghavendra Udupa. Learning hash functions for cross-view similarity search. In
IJCAI, 2011.
[22] A. L. Yuille and Anand Rangarajan. The concave-convex procedure (CCCP). In NIPS 14, 2001.
[23] Novi Quadrianto and Christoph H. Lampert. Learning multi-view neighborhood preserving projections.
In ICML, 2011.
[24] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated sub-gradient solver
for SVM. In ICML, 2007.
[25] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[26] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University
Press, 2004.
[27] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT,
2001.
[28] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of
Computer Vision, 60(2):91–110, 2004.
[29] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine
Learning Research, 3:993–1022, 2003.
[30] Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yan-Tao Zheng. NUS-WIDE:
A real-world web image database from National University of Singapore. In CIVR, 2009.
4,191 | 4,794 |
Max-Margin Structured Output Regression for
Spatio-Temporal Action Localization
Du Tran and Junsong Yuan
School of Electrical and Electronic Engineering
Nanyang Technological University, Singapore
[email protected], [email protected]
Abstract
Structured output learning has been successfully applied to object localization,
where the mapping between an image and an object bounding box can be well
captured. Its extension to action localization in videos, however, is much more
challenging, because we need to predict the locations of the action patterns both
spatially and temporally, i.e., identifying a sequence of bounding boxes that track
the action in video. The problem becomes intractable due to the exponentially
large size of the structured video space where actions could occur. We propose
a novel structured learning approach for spatio-temporal action localization. The
mapping between a video and a spatio-temporal action trajectory is learned. The
intractable inference and learning problems are addressed by leveraging an efficient Max-Path search method, thus making it feasible to optimize the model over
the whole structured space. Experiments on two challenging benchmark datasets
show that our proposed method outperforms the state-of-the-art methods.
1 Introduction
Blaschko and Lampert have recently shown that object localization can be approached as a structured
regression problem [2]. Instead of modeling object localization as a binary classification and treating
every bounding box independently, their method trains a discriminant function directly for predicting
the bounding boxes of objects located in images. Compared with conventional sliding-window based
approach, it considers the correlations among the output variables and avoids an exhaustive search
of the subwindows for object detection.
Motivated by the successful application of structured regression in object localization [2], it is natural to ask if we can perform structured regression for action localization in videos. Although
this idea looks plausible, the extension from object localization to action localization is non-trivial.
Different from object localization, where a visual object can be well localized by a 2-dimensional
(2D) subwindow, human actions cannot be tightly bounded in such a similar way, i.e., using a 3-dimensional (3D) subvolume. Although many current methods for action detection are based on this
3D subvolume assumption [6, 9, 20, 29], and search for video subvolumes to detect actions, such
an assumption is only reasonable for "static" actions, where the subjects do not move globally, e.g.,
pick-up or kiss. For "dynamic" actions, where the subjects can move globally, e.g., walk, run, or
dive, the subvolume constraint is no longer suitable. Thus, a more accurate localization scheme that
can track the actions is required for localizing dynamic actions in videos. For example, one can localize an action by a 2D bounding box in each frame, and track it as the action moves across different
frames. This localization structured output generates a smooth spatio-temporal path of connected
2-D bounding boxes. Such a spatio-temporal path can tightly bound the actions in the video space
and provides a more accurate spatio-temporal localization of actions.
[Figure 1 schematic: (a) a 2D bounding box localizing an object in an image; (b) a 3D subvolume in
a video; (c) a spatio-temporal path of bounding boxes through the video space.]
Figure 1: Complexities of object and action localization: a) Object localization is of O(n⁴). b)
Action localization by subvolume search is of O(n⁶). c) Spatio-temporal action localization in a
much larger search space.
However, as the video space is much larger than the image space, spatio-temporal action localization
has a much larger structured space compared with object localization. For a video with size w ×
h × n, the search space for 3D subvolumes and 2D subwindows is only O(w²h²n²) and O(w²h²),
respectively (Figure 1). However, the search space for possible spatio-temporal paths in the video
space is exponential, O(whnkⁿ) [23], if we do not know the start and end points of the path (k is
the number of incoming edges per node). Any one of these paths can be a candidate for spatio-temporal action localization, thus an exhaustive search is infeasible. This huge structured space
keeps structured learning approaches from being practical to spatio-temporal action localization due
to intractable inferences.
This paper proposes a new approach for spatio-temporal action localization which mainly addresses
the above mentioned problems. Instead of using the 3D subvolume localization scheme, we precisely
locate and track the action by finding an optimal spatio-temporal path to detect and localize actions.
The mapping between a video and a spatio-temporal action trajectory is learned. By leveraging
an efficient Max-Path search method [23], the intractable inference and learning problems can be
addressed, thus makes our approach practical and effective although the structured space is very
large. Being solved as structured learning problem, our method can well exploit the correlations
between local dependent video features, and therefore optimizes the structured output. Experiments
on two challenging benchmark datasets show that our method significantly outperforms the state-ofthe-art methods.
1.1 Related work
Human action detection is traditionally approached by spatio-temporal video volume matching using
different features: space-time orientation [6], volumetric [9], action MACH [20], HOG3D [10]. The
sliding window scheme is then applied to locate actions, which is ineffective and time-consuming.
Different matching, learning models have also been introduced. Boiman and Irani proposed ensembles of patches to detect irregularities in images and videos [3]. Hu et al used multiple-instance
learning to detect actions [8]. Mahadevan et al used mixtures of dynamic textures to detect anomaly
events [15]. Le et al used deep learning to learn unsupervised features for recognizing human activities [14]. Niebles et al used a probabilistic latent semantic analysis model for recognizing actions
[17]. Yao et al trained probabilistic non-linear latent variable models to track complex activities
[28]. Yuan et al extended the branch-and-bound subwindow search [11] to subvolume search for
action detection [29]. Recently, Tran and Yuan relaxed the 3D bounding box constraint for detecting
and localizing medium and long video events [23]. Despite the improvements over 3D subvolume
based approaches, this method did not fully utilize the correlations between local part detectors as
they were independently trained.
Max-margin structured output learning [19, 21, 24] was recently proposed and demonstrated its
success in many applications. One of its attractive features is that although the structured space
can be very large, whenever inference is tractable, learning is also tractable. Finley and Joachims
further showed that overgenerating (e.g. relaxations) algorithms have theoretic advantages over
undergenerating (e.g. greedy) methods when exact inference is intractable [7]. Various structured
learning based approaches were proposed to solve computer vision problems including pedestrian
detection [22], object detection [2, 25], object segmentation [1], facial action unit detection [16],
human interaction recognition [18], group activity recognition [13], and human pose parsing [27].
More recently, Lan et al used a latent SVM to jointly detect and recognize actions in videos [12].
Among these works, Lan et al is most similar to ours. However, this method requires a reliable
human detector in both inference and learning, thus it is not applicable to "dynamic" actions where
the human poses vary significantly. Moreover, because it uses HOG3D [26], it only detects
actions in a sparse subset of frames where the interest points are present.
2 Spatio-Temporal Action Localization as Structured Output Regression
Given a video x with the size of w × h × m, where w × h is the frame size and m is its length, to
localize actions one needs to predict a structured object y, which is a smooth spatio-temporal path in
the video space. We denote a path y = {(l, t, r, b)_{i=1..m}}, where (l, t, r, b)_i are respectively the left,
top, right, and bottom of the rectangle that bounds the action in the i-th frame. These values of (l, t, r, b)
are all set to zeros when there is no action in this frame. Because of the spatio-temporal smoothness
constraint, the boxes in y are necessarily smoothed over the spatio-temporal video space. Let us
denote X ⊂ [0, 255]^{3whm} as the set of color videos, and Y ⊂ R^{4m} as the set of all smooth
spatio-temporal paths in the video space. The problem of spatio-temporal action localization becomes
learning a structured prediction function f : X → Y.
2.1 Structured Output Learning
Let {x1, . . . , xn} ∈ X be the training videos, and {y1, . . . , yn} ∈ Y be their corresponding annotated ground truths. We formulate the action localization problem using the structured learning
framework presented in [24]. Instead of searching for f, we learn a discriminant function F : X × Y → R.
F is a compatibility function which measures how well the localization y suits the given
input video x. If the model utilizes a parameter set w, then we denote F(x, y; w) = ⟨w, Ψ(x, y)⟩,
which is a family of functions parameterized by w, and Ψ(x, y) is a joint kernel feature map which
represents spatio-temporal features of y given x.
Once F is trained, meaning the optimal parameter w* is determined, the final prediction y* can be
obtained by maximizing F over Y for a specific input x:

    y* = f(x; w*) = argmax_{y∈Y} F(x, y; w*) = argmax_{y∈Y} ⟨w*, Ψ(x, y)⟩   (1)

The optimal parameter set w* is selected by solving the convex optimization problem in Eq. 2:

    min_{w,ξ} (1/2)‖w‖² + C Σ_{i=1}^{n} ξ_i
    s.t. ⟨w, Ψ(x_i, y_i) − Ψ(x_i, y)⟩ ≥ Δ(y_i, y) − ξ_i,  ∀i, ∀y ∈ Y\{y_i},
         ξ_i ≥ 0,  ∀i.   (2)
Eq. 2 optimizes w such that the score of the true structure y_i of x_i will be larger than that of any other
structure y by a margin which is rescaled by the loss Δ(y_i, y). The loss function will be defined
in Section 2.3. This optimization is similar to the traditional support vector machine (SVM) formulation except for two differences. First, the number of constraints is much larger due to the huge
size of the structure space Y. Second, the margins are rescaled differently by the constraint's loss
Δ(y_i, y). Because of the large number of constraints, the problem in Eq. 2 cannot be solved directly
although it is a convex problem. Alternatively, one can solve the above problem by the cutting plane
algorithm [24] or subgradient methods [19, 21]. We use the cutting plane algorithm to solve this
learning problem. The algorithm starts with a random parameter w and an empty constraint set. At
each round, it searches for the most violated constraint and adds it to the constraint set. This step
searches for the y that maximizes the violation value ξ_i (Eq. 3). When a new constraint is found, the
optimization is applied to update w. The process is repeated until no more constraints are added. This
algorithm is proven to converge [24], normally within a small number of constraints due to the
sparsity of the structured space.

    ξ_i ≥ Δ(y_i, y) + ⟨w, Ψ(x_i, y)⟩ − ⟨w, Ψ(x_i, y_i)⟩,  ∀y ∈ Y\{y_i}   (3)
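As a rough illustration, the cutting-plane loop just described can be sketched as follows; train_qp re-solves Eq. 2 restricted to the current working set (returning an initial w when the set is empty), and most_violated solves Eq. 3 (in this paper, via the Max-Path search of Section 3). All names are illustrative.

def cutting_plane(train_qp, most_violated, data, eps=1e-3, max_rounds=100):
    w, constraints = None, []
    for _ in range(max_rounds):
        w = train_qp(constraints)          # solve Eq. 2 over the working set
        added = 0
        for (x, y_true) in data:
            # y_hat maximizes Delta(y_true, y) + <w, Psi(x, y)>; violation is
            # that value minus <w, Psi(x, y_true)>, i.e. the xi of Eq. 3.
            y_hat, violation = most_violated(w, x, y_true)
            if violation > eps:
                constraints.append((x, y_true, y_hat))
                added += 1
        if added == 0:                     # no violated constraint remains
            break
    return w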
2.2 The Joint Kernel Feature Map for Action Localization
Let us denote x|y as the video portion cut out from x by the path y, namely the stack of images
cropped by the bounding boxes b_{1..m} of y. We also denote ψ(b_i) ∈ R^k as a feature map for a 2D
box b_i. It is worth noting that ψ(b_i) can be represented by either local features (e.g. local interest
points) or global features (e.g. HOG, HOF) of the whole box b_i. We thus have a feature map for x|y
as Ψ(x, y), which is also a vector in R^k:

    Ψ(x, y) = (1/m) Σ_{i=1}^{m} ψ(b_i)   (4)
Finally, the decision function of our structured prediction is now formed as in Eq. 5:

    F(x, y; w) = ⟨w, Ψ(x, y)⟩ = (1/m) Σ_{i=1}^{m} ⟨w, ψ(b_i)⟩.   (5)

2.3 Loss Function
We define a hinge loss function Δ : Y × Y → [0, 1] for evaluating the loss induced by a predicted
structure ŷ compared with a true structure label y. We denote y = {b_{i=1..m}}, where b_i = (l, t, r, b)_i
is the ground truth box of the i-th frame. Similarly, we denote ŷ = {b̂_{i=1..m}} the predicted structure.
The loss function is defined as follows:

    Δ(y, ŷ) = (1/m) Σ_{i=1}^{m} δ(b_i, b̂_i),   (6)

    δ(b, b̂) = { 1 − Area(b ∩ b̂)/Area(b ∪ b̂),   if l_b = l_b̂ = 1
              { 1 − (1/2)(l_b l_b̂ + 1),          otherwise,   (7)

    l_b = { −1,   if b = (0, 0, 0, 0)
          { 1,    otherwise.   (8)
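The per-frame loss is thus an overlap-over-union penalty when both frames contain the action, and a 0/1-type penalty otherwise. A direct transcription of Eqs. (6)-(8) follows (boxes given as (l, t, r, b) tuples; a sketch, not the authors' code):

def box_loss(b, b_hat):
    def label(box):                                   # l_b of Eq. (8)
        return -1 if box == (0, 0, 0, 0) else 1
    def area(box):
        return max(0, box[2] - box[0]) * max(0, box[3] - box[1])
    lb, lb_hat = label(b), label(b_hat)
    if lb == 1 and lb_hat == 1:
        il, it = max(b[0], b_hat[0]), max(b[1], b_hat[1])
        ir, ib = min(b[2], b_hat[2]), min(b[3], b_hat[3])
        inter = max(0, ir - il) * max(0, ib - it)
        union = area(b) + area(b_hat) - inter
        return 1.0 - inter / union if union > 0 else 1.0
    return 1.0 - 0.5 * (lb * lb_hat + 1)              # 0 if both empty, 1 if they disagree

def path_loss(y, y_hat):
    # Eq. (6): average the per-frame losses along the two paths.
    return sum(box_loss(b, bh) for b, bh in zip(y, y_hat)) / len(y)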
3 Inference and Learning
We need a feasible way to perform the inference in Eq. 1 during testing which can be rewritten as
in Eq. 9.
    y* = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩ = argmax_{y∈Y} (1/m) Σ_{i=1}^{m} ⟨w, ψ(b_i)⟩.   (9)

During training, we need to search for the most violated constraints by maximizing the right hand
side of Eq. 3, which is equivalent to Eq. 10. From now on, we denote ŷ for y_i in Eq. 2 because the
example index i is no longer important.

    max_{y∈Y} { Δ(y, ŷ) + ⟨w, Ψ(x, y)⟩ }   (10)
    = max_{y∈Y} { (1/m) Σ_{i=1}^{m} δ(b_i, b̂_i) + (1/m) Σ_{i=1}^{m} ⟨w, ψ(b_i)⟩ }   (11)
    = (1/m) max_{y∈Y} Σ_{i=1}^{m} { δ(b_i, b̂_i) + ⟨w, ψ(b_i)⟩ }   (12)
To solve Eq. 9 and Eq. 12, one needs to search for a smooth path y* in the spatio-temporal video
space Y which gives the maximum total score. Both of the above equations are difficult due to the
large size of Y, e.g. the exponential number of possible spatio-temporal paths in Y (see supplemental
material). We now show that both problems in Eq. 9 and Eq. 12 can be reduced to the Max-Path search
problem and solved by [23] efficiently. The Max-Path algorithm [23] was proposed to detect dynamic
video events. It is guaranteed to obtain the best spatio-temporal path in the video space provided that
the local windows' scores can be precomputed. The algorithm takes a 3D trellis of local windows'
scores as input, and outputs the best path with the maximum total score. In testing, the trellis's
local scores are ⟨w, ψ(b_i)⟩, where b_i is the local window. These values are easily evaluated given w
and a feature map ψ. In training, the values of the trellis are δ(b_i, b̂_i) + ⟨w, ψ(b_i)⟩, which are also
computable given the parameter w, feature map ψ, and ground truth b̂_i. After the trellis is constructed,
the Max-Path algorithm is employed to find the best path, therefore we can identify the smoothed
spatio-temporal path y* that maximizes Eq. 9 and Eq. 12.
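While the full Max-Path algorithm of [23] searches over 2D boxes in the video volume, the dynamic program behind it can be conveyed by a simplified 1D stand-in, sketched below, where score[t, s] plays the role of the trellis entries just described and k encodes the smoothness constraint (illustrative code, not the algorithm of [23]):

import numpy as np

def max_path_score(score, k=1):
    # Best total score of a smooth path through a (frames x states) trellis;
    # a path may start/end at any frame, and the state may change by at most
    # k between consecutive frames.
    T, S = score.shape
    best_end = score[0].copy()              # best path ending at (0, s)
    best = best_end.max()
    for t in range(1, T):
        prev = best_end
        best_end = score[t].copy()          # a path may also start at frame t
        for s in range(S):
            lo, hi = max(0, s - k), min(S, s + k + 1)
            best_end[s] = max(best_end[s], prev[lo:hi].max() + score[t, s])
        best = max(best, best_end.max())
    return best

In testing the trellis entries are ⟨w, ψ(b_i)⟩, and in training they are δ(b_i, b̂_i) + ⟨w, ψ(b_i)⟩, exactly as derived above.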
3.1 Constraint Enforcement
Let us consider one constraint in Eq. 2; here we ignore the index i of the example for simplicity and
use ŷ as the ground truth for example x. We also denote y = b_{1..m} and ŷ = b̂_{1..m}:

    ⟨w, Ψ(x, ŷ)⟩ − ⟨w, Ψ(x, y)⟩ ≥ Δ(ŷ, y) − ξ,  ∀y ∈ Y\{ŷ}   (13)
    ⟺ (1/m) Σ_{i=1}^{m} ⟨w, ψ(b̂_i)⟩ − (1/m) Σ_{i=1}^{m} ⟨w, ψ(b_i)⟩ ≥ (1/m) Σ_{i=1}^{m} δ(b_i, b̂_i) − ξ,  ∀y ∈ Y\{ŷ}   (14)
    ⟺ Σ_{i=1}^{m} ⟨w, ψ(b̂_i)⟩ − Σ_{i=1}^{m} ⟨w, ψ(b_i)⟩ ≥ Σ_{i=1}^{m} δ(b_i, b̂_i) − mξ,  ∀y ∈ Y\{ŷ}   (15)

The constraint in Eq. 15 can be split into the m constraints in Eq. 16, which are harder; therefore
satisfying these m constraints will lead to satisfying the constraint of Eq. 15:

    ⟨w, ψ(b̂_i)⟩ − ⟨w, ψ(b_i)⟩ ≥ δ(b_i, b̂_i) − ξ,  ∀i ∈ [1..m], ∀y ∈ Y\{ŷ}   (16)
In training, instead of solving Eq. 2 with the constraints in Eq. 13, we solve it with the set of
constraints as in Eq. 16. The problem is harder because of the tighter constraints. However, the
important benefit of using such enforcements is that instead of comparing features of two different
spatio-temporal paths y and ŷ, one can compare the features of the individual box pairs (b_i, b̂_i) of those
two paths. This constraint enforcement helps the training algorithm avoid comparing features
of two paths of different lengths, which is unstable due to feature normalization.
4 Experimental Setup
Datasets: we conduct experiments on two datasets: UCF-Sport [20] and Oxford-TV [18]. The UCF-Sport dataset consists of 150 video sequences of 10 different action classes. We use the same split
as in [12] for training and testing. On this dataset, we detect three different actions: horse-riding,
running, and diving. We choose those actions because they have different levels of body movements.
Horse riding is relatively rigid; running is more deformable; while diving is extremely deforming
in terms of articulated body movements. Oxford-TV dataset consists of 300 videos taken from real
TV programs. It has 4 classes of actions: hand-shake, high-five, hug, kiss, and a set of 100 negative
videos. As used in [18], this dataset is divided into two equal subsets. We use set 1 for training
and set 2 for testing. We perform the task of kiss detection and localization on this dataset. Kissing
actions are more challenging compared with other action classes in this dataset due to less motion and
appearance cues.
Features and Parameters: our algorithm needs a feature representation ψ(b) of a cropped image
b. We use a global representation for ψ(b) using Histograms of Oriented Gradients (HOG) [4] and
Histograms of Flows (HOF) [5]. The cropped image b is divided into h × v half-overlapped blocks;
each block has 2 × 2 cells. Each cell is represented by a 9-bin histogram. The feature vector's length
thus becomes h × v × 2 × 2 × 9 × 2 = 72·h·v for HOG and HOF together. (h, v) can be different for each
class due to the different shape-ratios of the actions (e.g. rectangular boxes for horse-riding and running,
square boxes for diving). More specifically, we use (7, 15) for horse-riding and running, (11, 11)
for diving, and (9, 7) for kissing. The regularization parameter C in Eq. 2 is set to 1 for all cases.
Evaluation Metrics: we quantitatively evaluate different methods in both detection and localization.
As used in [12], the video localization score is measured by averaging its frame localization scores,
which are the overlap area divided by the union area of the predicted and truth boxes. A prediction
is then considered correct if its localization score is greater than or equal to θ = 0.2. It is worth
noting that detection evaluations are applied to both positive and negative testing examples while
localization evaluations are only applied to positive ones. As a result, the detection metric measures
the reliability of the detections (precision/recall) whereas the localization metric indicates
the quality of detections, e.g. how accurate the predicted spatio-temporal paths are compared with
ground truth. More specifically, detection answers the question "Is there any action of interest in this
video?" while localization answers "Provided that there is one action instance that appears in
this video, where is it?".
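A small sketch of these two metrics (boxes as (l, t, r, b) tuples; names illustrative, not the authors' evaluation code):

def video_localization_score(pred_boxes, true_boxes):
    # Average per-frame overlap-over-union between prediction and ground truth.
    def iou(a, b):
        il, it = max(a[0], b[0]), max(a[1], b[1])
        ir, ib = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ir - il) * max(0, ib - it)
        area = lambda x: max(0, x[2] - x[0]) * max(0, x[3] - x[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0
    scores = [iou(p, t) for p, t in zip(pred_boxes, true_boxes)]
    return sum(scores) / len(scores)

def is_correct_detection(pred_boxes, true_boxes, theta=0.2):
    # A prediction counts as correct once its localization score reaches theta.
    return video_localization_score(pred_boxes, true_boxes) >= theta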
[Figure 2 plots: precision-recall detection curves for Horse-ride, Run, and Dive; the top row is
evaluated on the subset of frames given by [12] (comparing Lan et al, Tran & Yuan, and our method)
and the bottom row on all frames (comparing Tran & Yuan and our method).]
Figure 2: Action detection results on UCF-Sport: detection curves of our proposed method compared
with [12] and [23]. Upper plots are detection results evaluated on subset frames given by [12],
while lower plots are the results of all-frame evaluations. Except for diving, our proposed method
significantly improves over the other methods.
Eval. Set | Method | H-Ride | Run   | Dive  | Average
Subset    | [12]   | 21.75  | 19.60 | 42.67 | 28.01
          | [23]   | 62.19  | 50.20 | 16.41 | 42.93
          | Our    | 68.06  | 61.41 | 36.54 | 55.34
All       | [12]   | N/A    | N/A   | N/A   | N/A
          | [23]   | 63.06  | 48.09 | 22.64 | 44.60
          | Our    | 64.01  | 61.86 | 37.03 | 54.30
Table 1: Action localization results on UCF-Sport: comparisons among our proposed method,
[12], and [23]. The upper section presents results evaluated on a subset of frames given by [12],
while the lower section reports results from evaluating on all frames. Our method improves 27.33%
from [12] and 12.41% from [23] on subset evaluations and improves 9.7% from [23] on all-frame
evaluations. N/A indicates not applicable.
5 Experimental Results
UCF-Sport: we compare our method with two current approaches: Lan et al [12], Tran and Yuan
[23]. The output predictions of Lan et al are directly obtained from [12]. For [23], we train a linear
SVM detector for each action class using the same features as ours. The Max-Path algorithm is then
applied to detect the actions of interest. According to [12], its method used HOG3D [26], so that it
is only able to detect and localize actions at a sparse set of frames where the HOG3D interest points
are present. To provide a fair comparison with [12], we report two different sets of evaluations. The first
set is applied only to the subset of frames where [12] reports detections and the second set is to take
all frames into consideration.
Table 1 reports the results of action localization of different methods and action classes. On average,
our method improves 27.33% from [12] and 12.41% from [23] on subset evaluations and improves
9.7% from [23] on all-frame evaluations. Figure 2 shows detection results of different methods
on UCF-Sport dataset. Our method significantly improves over [23] for all three action classes on
both subset and all-frame evaluations. Compared with [12] on subset evaluations, our method significantly improves over [12] on horse-riding and running detection. However, [12] provides better
detection results than ours on diving detection. This better detection is because their interest-point-based sparse features are more suitable to deformable actions such as diving. For a complete presentation,
we visualize localization results of our method compared with those of [12] and [23] on a diving
sequence (Figure 3). All predicted boxes are plotted together with ground truth boxes for comparisons. It is worth noting that [12] has only predictions at a sparse set of frames, therefore blue
[Figure 3 plot: per-frame localization scores of the three methods over a 60-frame diving sequence,
with example frames 20, 28, 45, and 51 shown above the curves.]
Figure 3: Visualization of diving localization: the plots of localization scores of different methods
on a diving video sequence. Lan et al's [12] results are visualized in blue, Tran and Yuan's [23] are
green, ours are red, and ground truth are black boxes. Best viewed in color.
Figure 4: Action detection and localization on UCF-Sport: Lan et al's [12] results are visualized
in blue, Tran and Yuan's [23] are green, ours are red, and ground truth are black. Our method and
[23] can detect multiple instances of actions (two bottom left images).
squares are visualized as discrete dots while the other methods are visualized by continuous curves.
Our method (red curve) localizes the diving action much more accurately than [23] (green curve). [12]
localizes the diving action fairly well; however, it is not applicable when more accurate localizations
(e.g. all frame predictions) are required.
Oxford-TV: we compare our method with [23] on both detection and localization tasks. For detection, we report two different quantitative evaluations: the equal precision-recall (EPR) and the area
under ROC curve (AUC). For localization, besides the spatial localization (SL) metric as used in
UCF dataset experiments, we also evaluate different methods by temporal localization (TL) metric.
This metric is not applicable to UCF dataset because most action instances in UCF dataset start
and end at the first and last frame, respectively. Temporal localization is computed as the length
Method | EPR (%) | AUC  | SL (%) | TL (%)
[18]   | 32.50*  | N/A  | N/A    | N/A
[23]   | 24.14   | 0.27 | 18.46  | 40.09
Our    | 38.89   | 0.42 | 39.52  | 45.30

Table 2: Kiss detection and localization results. We improve by 14.74% in equal precision-recall
detection rate, 0.15 in area under the ROC curve, 21.06% in spatial localization, and 5.21% in
temporal localization over [23]. *The result of [18] is not directly comparable. N/A indicates not
applicable.
Figure 5: Visualization of kiss detection: our results are visualized in red; ground truths are in
green. The upper two rows show some of the correct detections while the last row shows false or
missed detections.
[Figure 6 plots: (a) precision-recall curves at θ = 0.2 (best precision/recall: Tran&Yuan 46.67/24.14,
our method 38.89/48.28); (b) precision-recall curves at θ = 0.4 (Tran&Yuan 8.82/10.34, our method
29.03/31.03); (c) ROC curves at θ = 0.2 (AUC: Tran&Yuan 0.27, our method 0.42).]
Figure 6: Kiss detection results: a) precision-recall curves with θ = 0.2; b) precision-recall curves
with θ = 0.4; c) ROC curves with θ = 0.2. Numbers inside the legends are the best precision-recall
values (a and b) and the area under the ROC curve (c).
(measured in frames) of the intersection divided by the union of detection and ground truth. Table 2
presents detection and localization results of our proposed method compared with [23]. On the
localization task, our method improves 21.06% in spatial localization and 5.21% in temporal localization
over [23]. On the detection task, using the cut-off threshold θ = 0.2, our method improves 14.74%
in equal precision-recall rate and 0.15 in area under the ROC curve over [23] (Figure 6a and 6c). One
may further ask "what if we need more accurate detections?". Interestingly, when we increase the
cut-off threshold θ to 0.4, [23] significantly drops from 24.11% to 8.82% while our method remains
at 29.03% (Figure 6b), which demonstrates that our method can simultaneously detect and localize
actions with high accuracy.
6 Conclusions
We have proposed a novel structured learning approach for spatio-temporal action localization in
videos. While most current approaches detect actions as 3D subvolumes [6, 9, 20, 29] or a
sparse subset of frames [12], our method can precisely detect and track actions in both spatial and
temporal spaces. Although [23] is also applicable to spatio-temporal action detection, this method
cannot be optimized over the large video space due to its independently trained detectors. Our approach significantly outperforms [23] thanks to the structured optimization. This improvement gap
is also consistent with the theoretic analysis in [7]. Moreover, being free from people detection
and background subtraction, our approach can efficiently handle unconstrained videos and be easily
extended to detect other spatio-temporal video patterns. Strong experimental results on two challenging benchmark datasets demonstrate that our proposed method significantly outperforms the
state-of-the-art.
Acknowledgments
The authors would like to thank Tian Lan for reproducing [12]'s results on the UCF dataset, and Minh Hoai
Nguyen for useful discussions about the cutting-plane algorithm. This work is supported in part by
the Nanyang Assistant Professorship (SUG M58040015) to Dr. Junsong Yuan.
References
[1] L. Bertelli, T. Yu, D. Vu, and S. Gokturk. Kernelized structural SVM learning for supervised object
segmentation. CVPR, 2011.
[2] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. ECCV,
2008.
[3] O. Boiman and M. Irani. Detecting irregularities in images and in video. IJCV, 2007.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[5] N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance.
ECCV, 2006.
[6] K. Derpanis, M. Sizintsev, K. Cannons, and P. Wildes. Efficient action spotting based on a spacetime
oriented structure representation. CVPR, 2010.
[7] T. Finley and T. Joachims. Training structural SVMs when exact inference is intractable. ICML, 2008.
[8] Y. Hu, L. Cao, F. Lv, S. Yan, Y. Gong, and T. S. Huang. Action detection in complex scenes with spatial
and temporal ambiguities. ICCV, 2009.
[9] Y. Ke, R. Sukthankar, and M. Hebert. Volumetric features for video event detection. IJCV, 2010.
[10] A. Klaser, M. Marszalek, and C. Schmid. A spatio-temporal descriptor based on 3d-gradients. BMVC,
2008.
[11] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound
framework for object localization. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2009.
[12] T. Lan, Y. Wang, and G. Mori. Discriminative figure-centric models for joint action localization and
recognition. ICCV, 2011.
[13] T. Lan, Y. Wang, W. Yang, and G. Mori. Beyond actions: Discriminative models for contextual group
activities. NIPS, 2010.
[14] Q. Le, W. Zou, S. Yeung, and A. Ng. Learning hierarchical spatio-temporal features for action recognition
with independent subspace analysis. CVPR, 2011.
[15] V. Mahadevan, W. Li, V. Bhalodia, and N. Vasconcelos. Anomaly detection in crowded scenes. CVPR,
2010.
[16] M. H. Nguyen, T. Simon, F. De la Torre, and J. Cohn. Action unit detection with segment-based SVMs.
CVPR, 2010.
[17] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. International Journal of Computer Vision, 2008.
[18] A. Patron-Perez, M. Marszalek, A. Zisserman, and I. Reid. High five: Recognising human interactions in
tv shows. BMVC, 2010.
[19] N. Ratliff, J. A. Bagnell, and M. Zinkevich. Subgradient methods for maximum margin structured learning. ICML 2006 Workshop on Learning in Structured Output Spaces, 2006.
[20] M. D. Rodriguez, J. Ahmed, and M. Shah. Action mach: A spatio-temporal maximum average correlation
height filter for action recognition. CVPR, 2008.
[21] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction via the extragradient method. NIPS,
2005.
[22] D. Tran and D. Forsyth. Configuration estimates improve pedestrian finding. NIPS, 2007.
[23] D. Tran and J. Yuan. Optimal spatio-temporal path discovery for video event detection. CVPR, pages
3321–3328, 2011.
[24] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables. JMLR, 2005.
[25] A. Vedaldi and A. Zisserman. Structured output regression for detection with partial truncation. NIPS,
2009.
[26] H. Wang, M. M. Ullah, A. Klaser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features
for action recognition. BMVC, 2009.
[27] Y. Wang, D. Tran, and Z. Liao. Learning hierarchical poselets for human parsing. CVPR, 2011.
[28] A. Yao, J. Gall, L. V. Gool, and R. Urtasun. Learning probabilistic non-linear latent variable models for
tracking complex activities. NIPS, 2011.
[29] J. Yuan, Z. Liu, and Y. Wu. Discriminative video pattern search for efficient action detection. IEEE Trans.
on Pattern Analysis and Machine Intelligence, 2011.
4,192 | 4,795 |
Dip-means: an incremental clustering method for
estimating the number of clusters
Argyris Kalogeratos
Department of Computer Science
University of Ioannina
Ioannina, Greece 45110
[email protected]
Aristidis Likas
Department of Computer Science
University of Ioannina
Ioannina, Greece 45110
[email protected]
Abstract
Learning the number of clusters is a key problem in data clustering. We present
dip-means, a novel robust incremental method to learn the number of data clusters
that can be used as a wrapper around any iterative clustering algorithm of k-means
family. In contrast to many popular methods which make assumptions about the
underlying cluster distributions, dip-means only assumes a fundamental cluster
property: each cluster to admit a unimodal distribution. The proposed algorithm
considers each cluster member as an individual ?viewer? and applies a univariate
statistic hypothesis test for unimodality (dip-test) on the distribution of distances
between the viewer and the cluster members. Important advantages are: i) the
unimodality test is applied on univariate distance vectors, ii) it can be directly
applied with kernel-based methods, since only the pairwise distances are involved
in the computations. Experimental results on artificial and real datasets indicate
the effectiveness of our method and its superiority over analogous approaches.
1 Introduction
Data clustering is a data analysis methodology which aims to automatically reveal the underlying
structure of data. It produces a partition of a given dataset into k groups of similar objects and as
a task is widely applicable in artificial intelligence, data mining, statistics and other information
processing fields. Although it is an NP-hard problem, various algorithms can find reasonable clusterings in polynomial time. Most clustering methods consider the number of clusters k as a required
input, and then they apply an optimization procedure to adjust the parameters of the assumed cluster
model. As a consequence, in exploratory analysis, where the data characteristics are not known in
advance, an appropriate k value must be chosen. This is a rather difficult problem, but at the same
time very fundamental in order to apply data clustering in practice.
Several algorithms have been proposed to determine a proper k value, most of which wrap around an
iterative model-based clustering framework, such as k-means or the more general Expectation-Maximization (EM). In a top-down (incremental) strategy they start with one cluster and proceed
to splitting as long as a certain criterion is satisfied. At each phase, they evaluate the clustering
produced with a fixed k and they decide whether to increase the number of clusters as follows:
Repeat until no changes occur in the model structure
1. Improve model parameters by running a conventional clustering algorithm for a fixed k value.
2. Improve model structure, usually through cluster splitting.
One of the first attempts at extending k-means in this direction was x-means [1], which uses a regularization penalty based on model complexity. To this end, the Bayesian Information Criterion (BIC)
[2] was used, and among many models the one with highest BIC is selected. This criterion works
well only in cases where there are plenty of data and well-separated spherical clusters. Alternative
selection criteria have also been examined in literature [3].
G-means [4] is another extension to k-means that uses a statistical test for the hypothesis that each
cluster has been generated from a Gaussian distribution. Since statistical tests become weaker in high
dimensions, the algorithm first projects the datapoints of a cluster on an axis of high variance and
then applies the Anderson-Darling statistic with a fixed significance level α. Clusters that are not accepted are split repeatedly until the entire assumed mixture of Gaussians is discovered. Projected
g-means (pg-means) [5] again assumes that the dataset has been generated from a Gaussian mixture,
but it tests the overall model at once and not each cluster separately. Pg-means bases on the EM
algorithm. Using a series of random linear projections, it constructs a one-dimensional projection
of the dataset and the learned model and then tests the model fitness in the projected space with
Kolmogorov-Smirnov (KS) test. The advantage of this method is the ability to discover Gaussian
clusters of various scales and different covariances, that may overlap. Bayesian k-means [6] introduces Maximization-Expectation (ME) to learn a mixture model by maximizing over hidden variables (datapoint assignments to clusters) and computing expectation over random model parameters
(centers and covariances). If the data come from a mixture of Gaussian components, this method can
be used to find the correct number of clusters and is competitive to the aforementioned approaches.
Other alternatives have also been proposed, such as gap statistic [7], self-tuning spectral clustering
[8], data spectroscopic clustering [9], and stability-based model validation [10]-[12], however they
are not closely related to the proposed method.
Our work is primarily motivated by the non-generality of the approaches in [4] and [5], as they
make Gaussianity assumptions about the underlying data distribution. As a consequence, they tend
to overfit for clusters that are uniformly distributed, or have a non-Gaussian unimodal distribution.
Additional limitations are that they are designed to handle numerical vectors only and require the
data in the original dataspace. The contribution of our work is two-fold. Firstly, we propose a statistical test for unimodality, called dip-dist, to be applied into a data subset in order to determine if
it contains a single or multiple cluster structures. Thus, we make a more general assumption about
what is an acceptable cluster. Moreover, the test involves pairwise distances or similarities and not
the original data vectors. Secondly, we propose the dip-means incremental clustering method which
is a wrapper around k-means. We experimentally show that dip-means is able to cope with datasets
containing clusters of arbitrary density distributions. Moreover, it can be easily extended in kernel
space by using the kernel k-means [13] and modifying appropriately the cluster splitting procedure.
2 Dip-dist criterion for cluster structure evaluation
In cluster analysis, the detection of multiple cluster structures in a dataset requires assumptions
about what the clusters we seek look like. The assumptions about the presence of certain data characteristics along with the tests employed for verification, considerably influence the performance of
various methods. It is highly desirable for the assumptions to be general in order not to restrict the
applicability of the method to certain types of clusters only (e.g. Gaussian). Moreover, it is of great
value for a method to be able to verify the assumed cluster hypothesis with well designed statistical
hypothesis tests that are theoretically sound, in contrast to various alternative ad hoc criteria.
We propose the novel dip-dist criterion for evaluating the cluster structure of a dataset that is based
on testing the empirical density distribution of the data for unimodality. The unimodality assumption
implies that the empirical density of an acceptable cluster should have a single mode; a region where
the density becomes maximum, while non-increasing density is observed when moving away from
the mode. There are no other underlying assumptions about the shape of a cluster and the distribution
that generated the empirically observed unimodal property. Under this assumption, it is possible to
identify clusters generated by various unimodal distributions, such as Gaussian, Student-t, etc. The
Uniform distribution can also be identified, since it is an extreme single mode case where the mode
covers all the region with non-zero density.
A convenient fact is that unimodality can be verified using powerful statistical hypothesis tests (especially for one-dimensional data), such as Silverman's method, which uses fixed-width kernel density estimates [14], or the widely used Hartigan's dip statistic [15]. As the dimensionality of
the data increases, the tests require a sufficient number of data points in order to be reliable. Thus,
although the data may be of arbitrary dimensionality, it is important to apply unimodality tests on
one-dimensional data values. Furthermore, it would be desirable, if the test could also be applied in
cases where the distance (or similarity) matrix is given and not the original datapoints.
To meet the above requirements we propose the dip-dist criterion for determining unimodality in a
set of datapoints using only their pairwise distances (or similarities). More specifically, if we consider an arbitrary datapoint as a viewer and form a vector whose components are the distances of
the viewer from all the datapoints, then the distribution of the values in this distance vector could
reveal information about the cluster structure. In presence of a single cluster, the distribution of
distances is expected to be unimodal. In the case of two distinct clusters, the distribution of distances
should exhibit two distinct modes, with each mode containing the distances to the datapoints of
each cluster. Consequently, a unimodality test on the distribution of the values of the distance
vector would provide indication about the unimodality of the cluster structure. However, there is a
dependence of the results on the selected viewer. Intuitively, viewers at the boundaries of the set
are expected to form distance vectors whose density modes are more distinct in case of more than
one clusters. To tackle the viewer selection problem, we consider all the datapoints of the set as
individual viewers and perform the unimodality test on the distance vector of each viewer. If there
exist viewers that reject unimodality (called split viewers), we conclude that the examined cluster
includes multiple cluster structures.
For testing unimodality we use Hartigan's dip test [15]. A function F(t) is unimodal with mode the region s_m = {(t_L, t_U) : t_L ≤ t_U} if it is convex in s_L = (−∞, t_L], constant in [t_L, t_U], and concave in s_U = [t_U, ∞). This implies the non-increasing probability density behavior when moving away from the mode. For bounded input functions F, G, let ρ(F, G) = max_t |F(t) − G(t)|, and let U be the class of all unimodal distributions. Then the dip statistic of a distribution function F is given by:

$$\mathrm{dip}(F) = \min_{G \in \mathcal{U}} \rho(F, G). \qquad (1)$$

In other words, the dip statistic computes the minimum among the maximum deviations observed between the cdf F and the cdfs from the class of unimodal distributions. A nice property of dip is that, if F_n is a sample distribution of n observations from F, then lim_{n→∞} dip(F_n) = dip(F). In [15] it is argued that the class of uniform distributions U is the most appropriate for the null hypothesis, since its dip values are stochastically larger than those of other unimodal distributions, such as those having exponentially decreasing tails.
Given a vector of observations f = {f_i : f_i ∈ R}_{i=1}^n, the algorithm for performing the dip test [15] is applied on the respective empirical cdf $F_n(t) = \frac{1}{n}\sum_{i=1}^{n} I(f_i \le t)$. It examines the n(n−1)/2 possible modal intervals [t_L, t_U] between the sorted n individual observations. For all these combinations it computes in O(n) time the respective greatest convex minorant and the least concave majorant curves in (min_t F_n, t_L) and (t_U, max_t F_n), respectively. Fortunately, for a given F_n, the complexity of one dip computation is O(n) [15]. The computation of the p-value for a unimodality test uses bootstrap samples and expresses the probability of dip(F_n) being less than the dip value of a cdf U_n^r of n observations sampled from the U[0,1] Uniform distribution:

$$P = \#\left[\mathrm{dip}(F_n) \le \mathrm{dip}(U_n^r)\right] / b, \quad r = 1, \ldots, b. \qquad (2)$$

The null hypothesis H_0 that F_n is unimodal is accepted at significance level α if p-value > α; otherwise H_0 is rejected in favor of the alternative hypothesis H_1, which suggests multimodality.
Let a dataset X = {x_i : x_i ∈ R^d}_{i=1}^N; then, in the present context, the dip test can be applied on any subset c, e.g. a data cluster, and more specifically on the ecdf $F_n^{(x_i)}(t) = \frac{1}{n}\sum_{x_j \in c} I(\mathrm{Dist}(x_i, x_j) \le t)$ of the distances between a reference viewer x_i of c and the n members of the set. We call the viewers that identify multimodality and vote for the set to split as split viewers. The dip-dist computation for a set c with n datapoint members is summarized as follows:
1. Compute U_n^r and the respective dip(U_n^r), r = 1, ..., b, for the Uniform sample distributions.
2. Compute F_n^{(x_i)} and dip(F_n^{(x_i)}), i = 1, ..., n, for datapoint viewers using the sorted matrix Dist.
3. Estimate the p-values P^{(x_i)}, i = 1, ..., n, based on Eq. 2 using a significance level α and compute the percentage of viewers identifying multimodality.
Since the ascending ordering of the rows of Dist, required for computing F_n^{(x_i)}, can be done once during offline preprocessing, and the same b samples of the Uniform distribution can be used for testing all viewers, the dip-dist computation for a set with n datapoints has O(bn log n + n^2) complexity.
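A minimal sketch of this computation is given below. It assumes a routine returning Hartigan's dip statistic for a 1-d sample (here imported from the Python diptest package; any equivalent implementation can be substituted), and all function and variable names are ours, not the paper's.

```python
import numpy as np
from diptest import dipstat as dip  # assumed helper: Hartigan's dip statistic
                                    # of a 1-d sample; substitute any
                                    # equivalent implementation.

def dip_dist(D, alpha=0.0, v_thd=0.01, b=1000, seed=0):
    """Dip-dist criterion on an n x n pairwise distance matrix D.

    Returns (split decision, fraction of split viewers, their dip values)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    # Step 1: dip of b Uniform[0,1] reference samples of size n (null model).
    dip_unif = np.array([dip(rng.uniform(size=n)) for _ in range(b)])
    split_dips = []
    for i in range(n):                        # Step 2: every member is a viewer
        dip_i = dip(np.sort(D[i]))            # dip of viewer i's distance vector
        p_val = np.mean(dip_i <= dip_unif)    # Step 3: bootstrap p-value (Eq. 2)
        if p_val <= alpha:                    # viewer rejects H0 (unimodality)
            split_dips.append(dip_i)
    frac = len(split_dips) / n
    return frac >= v_thd, frac, split_dips
```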
[Figure 1 panels: histograms plot frequency vs. distance, with markers for max dip and min dip. (a) dataset 1, split 71%; (b) strongest split viewer (p=0.00, dip=0.1097); (c) weakest split viewer (p=0.00, dip=0.0335); (d) dataset 2, split 24%; (e) strongest split viewer (p=0.00, dip=0.0776); (f) weakest split viewer (p=0.00, dip=0.0335); (g) dataset 3, no split; (h) density plot; (i) dataset 4, split 24%; (j) density plot.]
Figure 1: Application of dip-dist criterion on 2d synthetic data with two structures of 200 datapoints
each. The split viewers are denoted in red color. (a) One Uniform spherical and one elliptic Gaussian
structure. (b), (c) The histograms of pairwise distances of the strongest and weakest split viewer.
(d) The two structures come closer; the split viewers are reduced, and so is the dip value for the split
viewer. (g) The two structures are no longer distinguishable, as the density map in (h) shows one
mode. (i) The Uniform spherical is replaced with a structure generated from a Student-t distribution.
Figure 1 illustrates an example of applying the dip-dist criterion on synthetic data. We generated
a Uniform spherical and a Gaussian elliptic structure, and then constructed three different two-dimensional datasets by decreasing the distance between them. The dip test parameters are set to α = 0
and b=1000. The histograms in each row indicate the result of the dip test. As the structures come
closer, the number of viewers that observe multimodality decreases. Eventually, the structures form
a unimodal distribution (Figure 1(g)), which may be visually verified from the presented density
map. The fourth dataset of Figure 1(j) was created by including a structure generated by a Student-t
distribution centered at the same location where the sphere is located in Figure 1(g). The respective
density map shows clearly two modes, evidence that justifies why the dip-dist criterion determines
multimodality with 24% of the viewers suggesting the split. More generally, if the percentage of split
viewers is greater than a small threshold, e.g. 1%, we may decide that the cluster is multimodal.
3 The dip-means algorithm
Dip-means is an incremental clustering algorithm that combines three individual components. The
first is a local search clustering technique that takes as input a model of k clusters and optimizes the
model parameters. For this purpose k-means is used where the cluster models are their centroids.
The second, and most important, decides whether a data subset contains multiple cluster structures
using the dip-dist presented in Section 2. The third component is a divisive procedure (bisecting)
that, given a data subset, performs the splitting into two clusters and provides the two centers.
Dip-means methodology takes as input the dataset X and two parameters for the dip-dist criterion:
the significance level α and the percentage threshold v_thd of cluster members that should be split viewers in order to decide on a division (Algorithm 1). For the sake of generality, we assume that dip-means may start from any initial partition with k_init ≥ 1 clusters.
Algorithm 1 Dip-means(X, k_init, α, v_thd)
input: dataset X = {x_i}_{i=1}^N, the initial number of clusters k_init, a statistical significance level α for the unimodality test, percentage v_thd of split viewers required for a cluster to be considered as a split candidate.
output: the sets of cluster members C = {c_j}_{j=1}^k, the models M = {m_j}_{j=1}^k with the centroid of each c_j set.
let: score = unimodalityTest(c, α, v_thd) return a score value for the cluster c; {C, M} = kmeans(X, k) the k-means clustering, {C, M} = kmeans(X, M) when initialized with model M; {m_L, m_R} = splitCluster(c) split a cluster c and return two centers m_L, m_R.
1:  k ← k_init
2:  {C, M} ← kmeans(X, k)
3:  do while changes in cluster number occur
4:      for j = 1, ..., k                              % for each cluster j
5:          score_j ← unimodalityTest(c_j, α, v_thd)   % compute the score for the unimodality test
6:      end for
7:      if max_j(score_j) > 0                          % there exist split candidates
8:          target ← argmax_j(score_j)                 % index of cluster to be split
9:          {m_L, m_R} ← splitCluster(c_target)
10:         M ← {M − m_target, m_L, m_R}               % replace the old centroid with the two new ones
11:         {C, M} ← kmeans(X, M)                      % refine solution
12:     end if
13: end do
14: return {C, M}
In each iteration, all k clusters are examined for unimodality, the set of split viewers v_j is found, and the respective cluster c_j is characterized as a split candidate if |v_j|/n_j ≥ v_thd. In this case, a non-zero score value is assigned to each cluster being
a split candidate, while zero score is assigned to clusters that do not have sufficient split viewers.
Various alternatives can be employed in order to compute a score for a split candidate based on the
percentage of split viewers, or even the size of clusters. In our implementation, score_j of a split candidate cluster c_j is computed as the average value of the dip statistic of its split viewers:

$$\mathrm{score}_j = \begin{cases} \frac{1}{|v_j|} \sum_{x_i \in v_j} \mathrm{dip}(F_n^{(x_i)}), & |v_j|/n_j \ge v_{thd} \\ 0, & \text{otherwise.} \end{cases} \qquad (3)$$
In order to avoid the overestimation of the real number of clusters, only the candidate with maximum score is split in each iteration. A cluster is split into two clusters using a 2-means local
search approach starting from a pair of sufficiently diverse centroids m_L, m_R inside the cluster and concerning only the datapoints of that cluster. We use a simple way to set up the initial centroids: {m_L, m_R} ← {x, m − (x − m)}, where x is a cluster member selected at random and m is the cluster centroid. In this way m_L, m_R lie at equal distances from m, though in opposite directions. The 2-means procedure can be repeated starting from different m_L, m_R initializations in order to discover a good
split. A computationally more expensive alternative could be the deterministic principal direction
divisive partitioning (PDDP) [16] that splits the cluster based on the principal component. We refine
the solution at the end of each iteration using k-means, which fine-tunes the model of k+1 clusters.
The procedure terminates when no split candidates are identified among the already formed clusters.
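Putting the pieces together, a compact sketch of the dip-means loop might look as follows, reusing the dip_dist routine sketched in Section 2 and scikit-learn's KMeans as the local search component; helper names such as split_cluster are ours, and this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans  # local search component

def split_cluster(Xc, m, trials=10, rng=None):
    """Bisect one cluster: seed 2-means with {x, m - (x - m)} for a random
    member x, and keep the split with the lowest clustering error."""
    rng = rng or np.random.default_rng(0)
    best = None
    for _ in range(trials):
        x = Xc[rng.integers(len(Xc))]
        seeds = np.vstack([x, m - (x - m)])      # symmetric about centroid m
        km = KMeans(n_clusters=2, init=seeds, n_init=1).fit(Xc)
        if best is None or km.inertia_ < best.inertia_:
            best = km
    return best.cluster_centers_

def dip_means(X, k_init=1, alpha=0.0, v_thd=0.01):
    k = k_init
    M = KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_
    while True:
        labels = KMeans(n_clusters=k, init=M, n_init=1).fit(X).labels_
        scores = []
        for j in range(k):                       # score each cluster (Eq. 3)
            Xc = X[labels == j]
            D = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=2)
            is_cand, _, dips = dip_dist(D, alpha, v_thd)
            scores.append(np.mean(dips) if is_cand else 0.0)
        if max(scores) <= 0:                     # no split candidates: stop
            return KMeans(n_clusters=k, init=M, n_init=1).fit(X)
        j = int(np.argmax(scores))               # split only the top candidate
        mL, mR = split_cluster(X[labels == j], M[j])
        M = np.vstack([np.delete(M, j, axis=0), mL, mR])
        k += 1                                   # refine the k+1 model
        M = KMeans(n_clusters=k, init=M, n_init=1).fit(X).cluster_centers_
```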
The proposed dip-dist criterion uses only the pairwise distances, or similarities, between datapoints
and not the vector representations themselves. This enables its application in kernel space (via a feature map φ), provided a kernel matrix K with the N × N pairwise datapoint inner products, K_ij = φ(x_i)^T φ(x_j). Algorithm 1 can be modified appropriately for this purpose. More specifically, kernel dip-means uses kernel k-means [13] as the local search technique, which also implies that centroids cannot be computed in kernel space; thus each cluster is now described explicitly by the set of its members c_j.
In this case, since the transformed data vectors φ(x) are not available, the cluster splitting procedure
could be seeded by two arbitrary cluster members. However, we propose a more efficient approach.
As discussed in Section 2, the distribution of pairwise distances between a reference viewer and the
members of a cluster reveals information about the multimodality of data distribution in the original
space. This implies that a split of the cluster members based on their distance to a reference viewer
constitutes a reasonable split in the original space as well. To this end, we may use 2-means to split the elements of the one-dimensional similarity vector. We consider as reference split viewer the cluster member with the maximum dip value. Here, 2-means is seeded using two values located at opposite positions with respect to the distribution's mean. After convergence, the resulting two-way partition of the datapoints, derived by the partition of the corresponding similarity values to the selected reference split viewer, initializes a local search with kernel 2-means.
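A small sketch of this seeding step is given below, under the same assumptions as before; we read "two values located at opposite positions with respect to the distribution's mean" as mean ± one standard deviation of the similarity values, which is one plausible choice rather than the paper's exact recipe.

```python
import numpy as np
from sklearn.cluster import KMeans

def kernel_split_seed(K_c, dips):
    """Seed a kernel-space bisection of one cluster.

    K_c:  similarity (kernel) block of the cluster's members.
    dips: dip value of each member acting as a viewer (from dip-dist).
    """
    v = int(np.argmax(dips))                  # reference split viewer
    s = K_c[v].reshape(-1, 1)                 # 1-d similarities to viewer v
    mu, sd = s.mean(), s.std()
    seeds = np.array([[mu - sd], [mu + sd]])  # opposite sides of the mean
    labels = KMeans(n_clusters=2, init=seeds, n_init=1).fit(s).labels_
    return labels                             # initializes kernel 2-means
```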
[Figure 2 panels: five rows. (a) Single structure generated by a Student-t distribution: dip-means ke=1 (no split), x-means ke=1, g-means ke=10, pg-means ke=4. (b) Single Uniform rectangle structure: dip-means ke=1 (no split), x-means ke=2, g-means ke=2, pg-means ke=2. (c) Eight clusters of various density and shape: dip-means ke=8, x-means ke=26, g-means ke=33, pg-means ke=19. (d) Two Uniform ring structures: kernel dip-means ke=2, kernel k-means k=2. (e) Three Uniform ring structures: kernel dip-means ke=3, kernel k-means k=3.]
Figure 2: Clustering results on 2d synthetic unimodal cluster structures with 200 datapoints each (the centroids are marked). (a), (b) Single cluster structures. (c) Various structure types. Based on
the leftmost subfigure, it contains a Uniform rectangle (green), a sphere with increasing density at
its periphery (light green), two Gaussian structures (black, pink), a Uniform ellipse (blue), a triangle
denser at a corner (yellow), a Student-t (light blue), and a Uniform arbitrary shape (red). (d), (e) Nonlinearly separable ring clusters (kernel-based clustering with an RBF kernel).
4 Experiments
In our evaluation we compare the proposed dip-means method with x-means [1], g-means [4] and
pg-means [5], which are closely related to the present work. In all compared methods we start with a single cluster (k_init = 1) and i) at each iteration one cluster is selected for a bisecting split, ii) 10 split trials
are performed with 2-means initialized with the simple technique described in Section 3, and the
split with lower clustering error (the sum of squared differences between cluster centers and their
assigned datapoints) is kept, iii) the refinement is applied after each iteration on all k+1 clusters.
Hence, only the statistical test that decides whether to stop splitting differs in each case. The exception is the pg-means method, which uses EM for local search and does not rely on cluster splitting to add a new cluster. We use the method exactly as presented in [5].
Table 1: Results for synthetic datasets with fixed k* = 20 clusters with 200 datapoints in each cluster.

Case 1          d=4                              d=16                             d=32
Methods         ke        ARI       VI           ke        ARI       VI           ke        ARI       VI
dip-means       20.0±0.0  1.00±0.0  0.00±0.0     20.0±0.0  1.00±0.0  0.00±0.0     20.0±0.0  1.00±0.0  0.00±0.0
x-means         7.3±9.3   0.30±0.5  2.07±1.3     28.6±7.8  0.88±0.1  0.27±0.2     31.3±5.6  0.84±0.1  0.36±0.2
g-means         20.3±0.5  0.99±0.0  0.01±0.0     20.3±0.5  0.99±0.0  0.01±0.0     20.5±0.6  0.99±0.0  0.02±0.0
pg-means        19.2±2.5  0.90±0.1  0.16±0.2     19.0±0.9  0.95±0.1  0.07±0.1     3.2±5.1   0.09±0.2  2.62±0.9

Case 2          d=4                              d=16                             d=32
Methods         ke        ARI       VI           ke        ARI       VI           ke        ARI       VI
dip-means       20.0±0.0  0.99±0.0  0.05±0.0     20.0±0.0  0.99±0.0  0.02±0.0     20.0±0.0  0.99±0.0  0.01±0.0
x-means         24.8±39.  0.26±0.4  2.26±1.1     80.1±15.  0.75±0.1  0.75±0.2     71.6±14.  0.75±0.1  0.66±0.2
g-means         79.2±22.  0.77±0.1  0.70±0.2     105.9±30. 0.83±0.1  0.66±0.2     133.6±42. 0.83±0.1  0.72±0.2
pg-means        14.2±4.7  0.67±0.2  0.65±0.5     10.4±3.4  0.30±0.2  1.26±0.5     4.0±1.5   0.06±0.1  2.40±0.2
For the kernel-based experiments we use the necessary modifications described at the end of Section 3 and compare with kernel k-means [13]. The parameters of the dip-dist criterion are set as α = 0 for the significance level of the dip test and b = 1000 for the number of bootstraps. We consider as split candidates the clusters having at least v_thd = 1% split viewers. These values were fixed in all experiments. For both g-means and pg-means we set the significance level α = 0.001, while we use 12 random projections for the latter. In
order to compare the ground truth labeling and the grouping produced by clustering, we utilize the
Variation of Information (VI) [17] metric and the Adjusted Rand Index (ARI) [18]. Better clustering
is indicated by lower values of VI and higher for ARI.
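For concreteness, both measures can be computed as sketched below; ARI comes from scikit-learn, while VI is assembled from its standard definition VI(u, v) = H(u) + H(v) - 2 I(u, v). The helper names are ours.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, mutual_info_score

def entropy(labels):
    """Shannon entropy (in nats) of a labeling."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def variation_of_information(u, v):
    """VI(u, v) = H(u) + H(v) - 2 I(u, v); lower is better."""
    return entropy(u) + entropy(v) - 2.0 * mutual_info_score(u, v)

# Example: compare a clustering against ground-truth labels.
truth = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2])
print(adjusted_rand_score(truth, pred), variation_of_information(truth, pred))
```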
We first provide clustering results for synthetic 2d datasets in Figure 2 (ke denotes the estimated
number of clusters). In Figures 2(a), (b), we provide two indicative examples of single cluster structures. X-means decides correctly for the structure generated from a Student-t distribution, but overfits
in the Uniform rectangle case, while the other two methods overfit in both cases. In the multicluster
dataset of Figure 2(c), dip-means successfully discovers all clusters, in contrast to the other methods
that significantly overestimate. To test the kernel dip-means extension, we created two 2d synthetic datasets containing two and three Uniform ring structures and used an RBF kernel to construct
the kernel matrix K. It is clear that x-means, g-means, and pg-means are not applicable in this case.
Thus we present in Figures 2(d), 2(e) the results using kernel dip-means and also the best solution
from 50 randomly initialized runs of kernel k-means with the true number of clusters. As we may observe, dip-means estimates the true number of clusters and finds the optimal grouping of datapoints
in both cases, whereas kernel k-means fails in the three ring case. Furthermore, we created synthetic
datasets with true number k* = 20 clusters, with 200 datapoints each, in d = 4, 16, 32 dimensions with
low separation [19]. Two cases were considered: 1) Gaussian mixtures of varying eccentricity, and
2) datasets with various cluster structures, i.e. Gaussian (40%), Student-t (20%), Uniform ellipses
(20%) or Uniform rectangles (20%). For each case and dimensions, we generated 30 datasets to test
the methods. As the results in Table 1 indicate, dip-means provides excellent clustering performance
in all cases and estimates accurately the true number of clusters. Moreover, it performs remarkably
better than the other methods, especially for the datasets of Case 2.
Two real-world datasets were also used, where the provided class labels were considered as ground
truth. Handwritten Pendigits (UCI) [20] contains 16 dimensional vectors, each one representing a
digit from 0-9 written by a human subject. The data provide a training set PDtr and a testing set PDte
with 7494 and 3498 instances, respectively. We also consider two subsets that contain the digits
{0, 2, 4} (PD3tr and PD3te ) and {3, 6, 8, 9} (PD4tr and PD4te ). We do not apply any preprocessing.
Coil-100 is the second dataset [21], which contains 72 images taken from different angles for each
one of the 100 included objects. We used three subsets Coil3, Coil4, Coil5, with images from 3, 4
and 5 objects, respectively. SIFT descriptors [22] are first extracted from the greyscale images that
are finally represented by the Bag of Visual Words model using 1000 visual words. As reported in
Table 2, dip-means correctly discovers the number of clusters for the subsets of Pendigits, while
providing a reasonable underestimation ke near the optimal for the full datasets PD10tr and PD10te .
Apart from the excessive overfitting of x-means and g-means, pg-means seems to conclude with an overestimated ke. In the high-dimensional and sparse space of the considered Coil subsets, x-means and g-means provide more reasonable ke estimates, though they still overestimate.
Table 2: Clustering results for real-world data. Bold indicates best values.

                PD3te (k*=3)         PD4te (k*=4)         PD10te (k*=10)
Methods         ke    ARI    VI      ke    ARI    VI      ke    ARI    VI
dip-means       3     0.879  0.332   4     0.626  0.545   7     0.343  1.587
x-means         155   0.031  3.792   194   0.039  3.723   515   0.041  3.825
g-means         21    0.226  1.800   36    0.209  2.049   73    0.295  1.961
pg-means        4     0.835  0.359   10    0.576  0.954   13    0.447  1.660

                PD3tr (k*=3)         PD4tr (k*=4)         PD10tr (k*=10)
Methods         ke    ARI    VI      ke    ARI    VI      ke    ARI    VI
dip-means       3     0.963  0.116   4     0.522  0.841   9     0.435  1.452
x-means         288   0.018  4.378   381   0.020  4.372   942   0.024  4.387
g-means         52    0.106  2.641   58    0.143  2.464   149   0.160  2.605
pg-means        5     0.655  0.740   8     0.439  1.320   14    0.494  1.504

                Coil3 (k*=3)         Coil4 (k*=4)         Coil5 (k*=5)
Methods         ke    ARI    VI      ke    ARI    VI      ke    ARI    VI
dip-means       3     1.000  0.000   5     0.912  0.173   4     0.772  0.308
x-means         8     0.499  0.899   11    0.499  0.951   15    0.601  0.907
g-means         7     0.669  0.650   12    0.502  0.977   18    0.434  1.204
An explanation for
this behavior is that they discover smaller groups of similar images, i.e. images taken from close
angles to the same object, but fail to unify the subclusters at higher level. Note also that we did not
manage to test pg-means in Coil-100 subsets, since covariance matrices were not positive definite.
The superiority of dip-means is also indicated by the reported values for ARI and VI measures.
5 Conclusions
We have presented a novel approach for testing whether multiple cluster structures are present in a
set of data objects (e.g. a data cluster). The proposed dip-dist criterion checks for unimodality of the
empirical data density distribution, thus it is much more general compared to alternatives that test for
Gaussianity. Dip-dist uses a statistical hypothesis test, namely Hartigan's dip test, in order to verify
unimodality. If a data object of the set is considered as a viewer, then the dip test can be applied
on the one-dimensional distance (or similarity) vector with components the distances between the
viewer and the members of the same set. We exploit the idea that the observation of multimodality in
the distribution of distances indicates multimodality of the original data distribution. By considering
all the data objects of the set as individual viewers and by combining the respective results of the
test, the presence of multiple cluster structures in the set can be determined.
We have also proposed a new incremental clustering algorithm called dip-means, that incorporates
dip-dist criterion in order to decide for cluster splitting. The procedure starts with one cluster, it iteratively splits the cluster indicated by dip-dist as more probable to contain multiple cluster structures,
and terminates when no new cluster split is suggested. By taking advantage of the fact that dip-dist
utilizes only information about the distances between data objects, we have modified appropriately
the main algorithm to propose kernel dip-means which can be applied in kernel space.
The proposed method is fast, easy to implement, and works very well under a fixed parameter
setting. The reported clustering results indicate that dip-means can provide reasonable estimates
of the number of clusters, and produce meaningful clusterings in both dataset types in a variety of
artificial and real datasets. Apart from testing the method in real-world applications, there are several
ways to improve the implementation details of the method, especially the kernel-based version. We
also plan to test its effectiveness in other settings, such as online clustering of stream data.
Acknowledgments
We thank Prof. Greg Hamerly for providing his code for pg-means. The described work is supported
partially and co-financed by the European Regional Development Fund (ERDF) (2007-2013) of the
European Union and National Funds (Operational Programme "Competitiveness and Entrepreneurship" (OPCE II), ROP ATTICA), under the Action "SYNERGASIA (COOPERATION) 2009".
References
[1] D. Pelleg and A. Moore. X-means: extending k-means with efficient estimation of the number of clusters. International Conference on Machine Learning (ICML), pp. 727-734, 2000.
[2] R.E. Kass and L. Wasserman. A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. Journal of the American Statistical Association, 90(431), pp. 928-934, 1995.
[3] X. Hu and L. Xu. A comparative study of several cluster number selection criteria. In J. Liu et al. (eds.) Intelligent Data Engineering and Automated Learning, pp. 195-202, Springer, 2003.
[4] G. Hamerly and C. Elkan. Learning the k in k-means. Advances in Neural Information Processing Systems (NIPS), pp. 281-288, 2003.
[5] Y. Feng and G. Hamerly. PG-means: learning the number of clusters in data. Advances in Neural Information Processing Systems (NIPS), pp. 393-400, 2006.
[6] K. Kurihara and M. Welling. Bayesian k-means as a maximization-expectation algorithm. Neural Computation, 21(4), pp. 1145-1172, 2009.
[7] R. Tibshirani, G. Walther and T. Hastie. Estimating the number of clusters in a dataset via the Gap statistic. Journal of the Royal Statistical Society B, 63, pp. 411-423, 2001.
[8] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. Advances in Neural Information Processing Systems (NIPS), pp. 1601-1608, 2004.
[9] T. Shi, M. Belkin and B. Yu. Data Spectroscopy: eigenspaces of convolution operators and clustering. The Annals of Statistics, 37(6B), pp. 3960-3984, 2009.
[10] E. Levine and E. Domany. Resampling method for unsupervised estimation of cluster validity. Neural Computation, 13(11), pp. 2573-2593, 2001.
[11] R. Tibshirani and G. Walther. Cluster validation by prediction strength. Journal of Computational & Graphical Statistics, 14(3), pp. 511-528, 2005.
[12] T. Lange, V. Roth, M.L. Braun, and J.M. Buhmann. Stability-based validation of clustering solutions. Neural Computation, 16(6), pp. 1299-1323, 2004.
[13] I.S. Dhillon, Y. Guan and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. International Conference on Knowledge Discovery and Data Mining (SIGKDD), pp. 551-556, 2004.
[14] B.W. Silverman. Using kernel density estimates to investigate multimodality. Journal of the Royal Statistical Society B, 43(1), pp. 97-99, 1981.
[15] J.A. Hartigan and P.M. Hartigan. The dip test of unimodality. The Annals of Statistics, 13(1), pp. 70-84, 1985.
[16] D.L. Boley. Principal direction divisive partitioning. Data Mining and Knowledge Discovery, 2(4), pp. 325-344, 1998.
[17] M. Meila. Comparing clusterings: an information based distance. Journal of Multivariate Analysis, 98(5), pp. 873-895, 2007.
[18] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2(1), pp. 193-218, 1985.
[19] J.J. Verbeek, N. Vlassis, and B. Kröse. Efficient greedy learning of Gaussian mixture models. Neural Computation, 15(2), pp. 469-485, 2003.
[20] A. Asuncion and D. Newman. UCI Machine Learning Repository. University of California at Irvine, Irvine, CA, 2007. Available online: http://www.ics.uci.edu/~mlearn/MLRepository.html
[21] S.A. Nene, S.K. Nayar and H. Murase. Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, February 1996.
[22] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60, pp. 91-110, 2004.
Wavelet based multi-scale shape features on arbitrary
surfaces for cortical thickness discrimination
Won Hwa Kim??? Deepti Pachauri? Charles Hatt?
Moo K. Chung? Sterling C. Johnson?? Vikas Singh????
?
Dept. of Computer Sciences, University of Wisconsin, Madison, WI
Dept. of Biostatistics & Med. Informatics, University of Wisconsin, Madison, WI
?
Dept. of Biomedical Engineering, University of Wisconsin, Madison, WI
?
Wisconsin Alzheimer?s Disease Research Center, University of Wisconsin, Madison, WI
?
GRECC, William S. Middleton VA Hospital, Madison, WI
?
{wonhwa, pachauri}@cs.wisc.edu {hatt, mkchung}@wisc.edu
[email protected] [email protected]
Abstract
Hypothesis testing on signals defined on surfaces (such as the cortical surface) is
a fundamental component of a variety of studies in Neuroscience. The goal here
is to identify regions that exhibit changes as a function of the clinical condition
under study. As the clinical questions of interest move towards identifying very
early signs of diseases, the corresponding statistical differences at the group level
invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests
over all surface points), very few regions may survive. In contrast to hypothesis
tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local
topological context of the signal around each surface vertex. Our descriptors are
based on recent results from harmonic analysis that show how wavelet theory
extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide
strong evidence that these descriptors successfully pick up group-wise differences,
where traditional methods either fail or yield unsatisfactory results. Other than
this primary application, we show how the framework allows performing cortical
surface smoothing in the native space without mapping to a unit sphere.
1 Introduction
Cortical thickness measures the distance between the outer and inner cortical surfaces (see Fig.
1). It is an important biomarker implicated in brain development and disorders [3]. Since 2011,
more than 1000 articles (from a search on Google Scholar and/or Pubmed) tie cortical thickness to
conditions ranging from Alzheimer?s disease (AD), to Schizophrenia and Traumatic Brain injury
(TBI) [9, 14, 13]. Many of these results show how cortical thickness also correlates with brain
growth (and atrophy) during adolescence (and aging) respectively [22, 20, 7]. Given that brain
function and pathology manifest strongly as changes in the cortical thickness, the statistical analysis
of such data (to find group level differences in clinically disparate populations) plays a central role
in structural neuroimaging studies.
In typical cortical thickness studies, magnetic resonance images (MRI) are acquired for two populations: clinical and normal. A sequence of image processing steps are performed to segment the
cortical surfaces and establish vertex-to-vertex correspondence across surface meshes [15]. Then, a
group-level analysis is performed at each vertex. That is, we can ask if there are statistically significant differences in the signal between the two groups. Since there are multiple correlated statistical
tests over all voxels, a Bonferroni type multiple comparisons correction is required [4]. If many
vertices survive the correction (i.e., differences are strong enough), the analysis will reveal a set of
discriminative cortical surface regions, which may be positively or negatively correlated with the
clinical condition of interest. This procedure is well understood and routinely used in practice.
In the last five years, a significant majority of research has shifted
towards investigations focused on the pre-clinical stages of diseases [16, 23, 17]. For instance, we may be interested in identifying early signs of dementia by analyzing cortical surfaces
(e.g., by comparing subjects that carry a certain gene versus
those who do not). In this regime, the differences are weaker,
and the cortical differences may be too subtle to be detected.
In a statistically under-powered cortical thickness analysis, few
vertices may survive the multiple comparisons correction. Another aspect that makes this task challenging is that the cortical
thickness data (obtained from state of the art tools) is still inherently noisy. The standard approach for filtering cortical sur- Figure 1: Cortical thickness illusface noise is to adopt an appropriate parameterization to model tration: the outer cortical surface (in
yellow) and the inner cortical surthe signal followed by a diffusion-type smoothing [6]. The pri- face (in blue). The distance between
mary difficulty is that most (if not all) widely used parameteri- the two surfaces is the cortical thickzations operate in a spherical coordinate system using spherical ness.
harmonic (SPHARM) basis functions [6]. As a result, one must
first project the signal on the surface to a unit sphere. This ?ballooning? process introduces serious
metric distortions. Second, SPHARM parameterization usually suffers from ringing artifacts (i.e.,
Gibbs phenomena) when used to fit rapidly changing localized cortical measurements [10]. Third,
SPHARM uses global basis functions which typically requires a large number of terms in the expansion to model cortical surface signals to high fidelity. Subsequently, even if the globally-based
coefficients exhibit statistical differences, interpreting which brain regions contribute to these variations is difficult. As a result, the coefficients of the model cannot be used directly in localizing
variations in the cortical signal.
This paper is motivated by the simple observation that statistical inference on surface based signals
should be based not on a single scalar measurement but on multivariate descriptors that characterize
the topologically localized context around each point sample. This view insures against signal noise
at individual vertices, and should offer the tools to meaningfully compare the behavior of the signal
at multiple resolutions of the topological feature, across multiple subjects. The ability to perform
the analysis in a multi-resolution manner, it seems, is addressable if one makes use of wavelet-based methods (e.g., scaleograms [19]). Unfortunately, the non-regular structure of the topology
makes this problematic. In our neuroimaging application, samples are not drawn on a regular grid,
instead governed entirely by the underlying cortical surface mesh of the participant. To get around
this difficulty, we make use of some recent results from the harmonic analysis literature [8], which suggest how wavelet analysis can be extended to arbitrary weighted graphs with irregular topology. We show how these ideas can be used to derive a wavelet multi-scale descriptor for statistical
analysis of signals defined on surfaces. This framework yields rather surprising improvements in
discrimination power and promises immediate benefits for structural neuroimaging studies.
Contributions. We derive wavelet based multi-scale representations of surface based signals. Our
representation has varying levels of local support, and as a result can characterize the local context
around a vertex to varying levels of granularity. We show how this facilitates statistical analysis of
signals defined on arbitrary topologies (instead of the lattice setup used in image processing).
(i) We show how the new model significantly extends the operating range of analysis of cortical
surface signals (such as cortical thickness). At a pre-specified significance level, we can detect
a much stronger signal showing group differences that are barely detectable using existing
approaches. This is the main experimental result of this paper.
(ii) We illustrate how the procedure of smoothing of cortical surfaces (and shapes) can completely
bypass the mapping onto a sphere, since smoothing can now be performed in the native space.
2 A Brief Review of Wavelets in Signal Processing
Recall that the celebrated Fourier series representation of a periodic function is expressed via a superposition of sines and cosines, which is widely used in signal processing for representing a signal
in the frequency domain and obtaining meaningful information from it. Wavelets are conceptually
similar to the Fourier series transform, in that they can be used to extract information from many
different kinds of data, however unlike the Fourier transform which is localized in frequency only,
wavelets can be localized in both time and frequency [12] and extend frequency analysis to the notion of scale. The construction of wavelets is defined by a wavelet function ψ (called an analyzing wavelet or a mother wavelet) and a scaling function φ. Here, ψ serves as a band-pass filter and φ operates as a low-pass filter covering the low frequency components of the signal which cannot
be tackled by the band-pass filters. When the band-pass filter is transformed back by the inverse
transform and translated, it becomes a localized oscillating function with finite duration, providing
very compact (local) support in the original domain [21]. This indicates that points in the original
domain which are far apart have negligible impact on one another. Note the contrast with the Fourier series representation of a short pulse, which suffers from issues due to the nonlocal support of sin(·)
with infinite duration.
Formally, the wavelet function ψ is a function of two parameters, the scale and translation
parameters, s and a:
\[ \psi_{s,a}(x) = \frac{1}{s}\,\psi\!\left(\frac{x-a}{s}\right) \quad (1) \]
Varying scales control the dilation of the wavelet, and together with a translation parameter,
constitute the key building blocks for approximating a signal using a wavelet expansion. The function
ψ_{s,a}(x) forms a basis for the signal and can be used with other bases at different scales to
decompose a signal, similar to the Fourier transform. The wavelet transform of a signal f(x) is
defined as the inner product of the wavelet and the signal and can be represented as
\[ W_f(s,a) = \langle f, \psi \rangle = \frac{1}{s} \int f(x)\, \psi^{*}\!\left(\frac{x-a}{s}\right) dx \quad (2) \]
where W_f(s,a) is the wavelet coefficient at scale s and at location a, and ψ* denotes
the complex conjugate of ψ. Such a transform is invertible, that is,
\[ f(x) = \frac{1}{C_\psi} \iint W_f(s,a)\, \psi_{s,a}(x)\, da\, ds \quad (3) \]
where C_\psi = \int \frac{|\Psi(j\omega)|^{2}}{|\omega|}\, d\omega is called the admissibility condition constant, Ψ is the Fourier
transform of the wavelet [21], and ω is the frequency-domain variable.
As mentioned earlier, the scale parameter s controls the dilation of the basis and can be used to
produce both short and long basis functions. While short basis functions correspond to high-frequency
components and are useful to isolate signal discontinuities, longer basis functions, corresponding to
lower frequencies, are also required to obtain detailed frequency analysis. Indeed, wavelet transforms
have an infinite set of possible basis functions, unlike the single set of basis functions (sine
and cosine) in the Fourier transform. Before concluding this section, we note that while wavelet-
based analysis for image processing is a mature field, most of these results are not directly applicable
to non-uniform topologies such as those encountered in shape meshes and surfaces in Fig. 1.
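As a concrete illustration of eqs. (1)–(2), the sketch below evaluates a continuous wavelet
transform on a regular 1-D grid. The Ricker (Mexican-hat) mother wavelet, the grid, and the test
signal are illustrative choices of ours, not specifics from the text; small scales respond to the
short pulse while larger scales track the smooth oscillation.

```python
import numpy as np

def ricker(x, s):
    """Mexican-hat (Ricker) mother wavelet evaluated at x/s."""
    u = x / s
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def cwt(f, x, scales):
    """W_f(s, a) = (1/s) * integral f(x) psi((x - a)/s) dx, on a uniform grid."""
    dx = x[1] - x[0]
    W = np.zeros((len(scales), len(x)))
    for i, s in enumerate(scales):
        for j, a in enumerate(x):
            W[i, j] = np.sum(f * ricker(x - a, s)) * dx / s
    return W

x = np.linspace(0.0, 1.0, 512)
f = np.sin(2.0 * np.pi * 5.0 * x)
f[256:260] += 2.0                        # a short pulse, localized in time
W = cwt(f, x, scales=np.array([0.01, 0.05, 0.1]))
print(W.shape)                           # (3, 512): one row of coefficients per scale
```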
3 Defining Wavelets on Arbitrary Graphs
Note that the topology of a brain surface is naturally modeled as a weighted graph. However, the
application of wavelets to this setting is not straightforward, as wavelets have traditionally been
limited to the Euclidean space setting. Extending the notion of wavelets to a non-Euclidean setting,
particularly to weighted graphs, requires deriving a multi-scale representation of a function defined
on the vertices. The first bottleneck here is to come up with analogs of dilation and translation on the
graph. To address this problem, in [8], the authors introduce Diffusion Wavelets on manifolds. The
basic idea is related to some well known results from machine learning, especially the eigenmaps
framework by Belkin and Niyogi [1]. It also has a strong relationship with random walks on a
weighted graph. Briefly, a graph G = (V, E, w) with vertex set V , edge set E and symmetric edge
3
weights w has an associated random walk R. The walk R, when represented as a matrix, is conjugate
to a self adjoint matrix T , which can be interpreted as an operator associated with a diffusion process,
explaining how the random walk propagates from one node to another. Higher powers of T (given
as T t ) induce a dilation (or scaling) process on the function to which it is applied, and describes
the behavior of the diffusion at varying time scales (t). This is equivalent to iteratively performing a
random walk for a certain number of steps and collecting together random walks into representatives
[8]. Note that the orthonormalization of the columns of T induces the effect of ?compression?, and
corresponds to downsampling in the function space [5]. In fact, the powers of T are low rank (since
the spectrum of T decays), and this ties back to the compressibility behavior of classical wavelets
used in image processing applications (e.g., JPEG standard). In this way, the formalization in [8]
obtains all wavelet-specific properties including dilations, translations, and downsampling.
3.1 Constructing Wavelet Multiscale Descriptors (WMD)
Very recently, [11] showed that while the orthonormalization above is useful for iteratively obtaining compression (i.e., coarser subspaces), it complicates the construction of the transform and only
provides limited control on scale selection. These issues are critical in practice, especially when
adopting this framework for the analysis of surface meshes with ≈ 200,000 vertices and a wide
spectrum of frequencies (which can benefit from finer control over scale). The solution proposed in [11]
discards repeated application of the diffusion operator T , and instead relies on the graph Laplacian
to derive a spectral graph wavelet transform (SGWT). To do this, [11] uses a form of the wavelet
operator in the Fourier domain, and generalizes it to graphs. Particularly, SGWT takes the Fourier
transform of the graph by using the properties of the Laplacian L (since the eigenvectors of L are
analogous to the Fourier basis elements). The formalization is shown to preserve the localization
properties at fine scales as well as other wavelets specific properties. But beyond constructing the
transform, the operator-valued functions of the Laplacian are very useful to derive a powerful multiscale shape descriptor localized at different frequencies which performs very well in experiments.
For a function f (m) defined on a vertex m of a graph, interpreting f (sm) for a scaling constant s,
is not meaningful on its own. SGWT gets around this problem by operating in the dual domain ? by
taking the graph Fourier transformation. In this scenario, the spectrum of the Laplacian is analogous
to the frequency domain, where scales can be defined (seen in (6) later). This provides a multiresolution view of the signal localized at m. By analyzing the entire spectra at once, we can obtain
a handle on which scale best characterizes the signal of interest. Indeed, for graphs, this provides
a mechanism for simultaneously analyzing various local topologically-based contexts around each
vertex. And for a specific scale s, we can now construct band-pass filters g in the frequency domain
which suppress the influence of scales s′ ≠ s. When transformed back to the original domain, we
directly obtain a representation of the signal for that scale. Repeating this process for multiple scales,
the set of coefficients obtained for S scales comprises our multiscale descriptor for that vertex.
Given a mesh with N vertices, we first obtain the complete orthonormal basis χ_l and eigenvalues
λ_l, l ∈ {0, 1, · · · , N − 1} of the graph Laplacian. Using these bases, the forward and inverse graph
Fourier transformations are defined using the eigenvalues and eigenvectors of L as
\[ \hat{f}(l) = \langle \chi_l, f \rangle = \sum_{n=1}^{N} \chi_l^{*}(n)\, f(n), \qquad f(n) = \sum_{l=0}^{N-1} \hat{f}(l)\, \chi_l(n) \quad (4) \]
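A minimal sketch of the forward and inverse transforms in eq. (4) on a toy path graph; the graph
and the vertex signal are our own illustrative choices, not data from the paper.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

N = 6
W = np.zeros((N, N))
for i in range(N - 1):                 # path graph: vertex i -- vertex i+1
    W[i, i + 1] = W[i + 1, i] = 1.0

L = graph_laplacian(W)
lam, chi = np.linalg.eigh(L)           # eigenvalues lam_l, eigenvectors chi[:, l]

f = np.sin(np.linspace(0, np.pi, N))   # a signal on the vertices
f_hat = chi.T @ f                      # forward transform: f_hat(l) = <chi_l, f>
f_rec = chi @ f_hat                    # inverse transform

assert np.allclose(f, f_rec)           # orthonormal basis => exact reconstruction
```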
Using the transforms above, we construct spectral graph wavelets by applying band-pass filters at
multiple scales and localizing them with an impulse function. Since the transformed impulse function
in the frequency domain is equivalent to a unit function, the wavelet ψ localized at vertex n is
defined as
\[ \psi_{s,n}(m) = \sum_{l=0}^{N-1} g(s\lambda_l)\, \chi_l^{*}(n)\, \chi_l(m) \quad (5) \]
where m is a vertex index on the graph. The wavelet coefficients of a given function f(n) can be
easily generated from the inner product of the wavelets and the given function,
\[ W_f(s,n) = \langle \psi_{s,n}, f \rangle = \sum_{l=0}^{N-1} g(s\lambda_l)\, \hat{f}(l)\, \chi_l(n) \quad (6) \]
The coefficients obtained from the transformation yield the Wavelet Multiscale Descriptor (WMD)
as a set of wavelet coefficients at each vertex n for each scale s:
\[ \mathrm{WMD}_f(n) = \{ W_f(s,n) \mid s \in S \} \quad (7) \]
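The sketch below assembles eqs. (4)–(7) into a WMD computation via a full eigendecomposition,
which is feasible only for small graphs (Section 4 discusses the scalable Chebyshev route). The
band-pass kernel g here is a simple illustrative choice, not the kernel used in [11].

```python
import numpy as np

def wmd(W, f, scales):
    """Return an (n_vertices, n_scales) array of wavelet coefficients, eq. (6)-(7)."""
    L = np.diag(W.sum(axis=1)) - W               # graph Laplacian
    lam, chi = np.linalg.eigh(L)
    f_hat = chi.T @ f                            # graph Fourier transform of f, eq. (4)
    g = lambda x: x * np.exp(-x)                 # an illustrative band-pass kernel g
    coeffs = np.zeros((len(f), len(scales)))
    for j, s in enumerate(scales):
        # W_f(s, n) = sum_l g(s * lam_l) * f_hat(l) * chi_l(n)   -- eq. (6)
        coeffs[:, j] = chi @ (g(s * lam) * f_hat)
    return coeffs

# Toy example: a ring graph carrying a noisy signal.
N = 20
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
f = np.cos(2 * np.pi * np.arange(N) / N) + 0.1 * np.random.randn(N)
descriptors = wmd(W, f, scales=[0.5, 1.0, 2.0, 4.0])
print(descriptors.shape)                         # (20, 4): one WMD per vertex
```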
In the following sections, we make use of the multi-scale descriptor for the statistical analysis of
signals defined on surfaces (i.e., standard structured meshes). We will discuss shortly how many of
the low-level processes in obtaining wavelet coefficients can be expressed as linear algebra
primitives that can be translated onto the CUDA architecture.
4 Applications of Multiscale Shape Features
In this section, we present extensive experimental results demonstrating the applicability of the
descriptors described above. Our core application domain is Neuroimaging. In this context, we first
test if the multi-scale shape descriptors can drive significant improvements in the statistical analysis
of cortical surface measurements. Then, we use these ideas to perform smoothing of cortical surface
meshes without first projecting them onto a spherical coordinate system (the conventional approach).
4.1 Cortical Thickness Discrimination: Group Analysis for Alzheimer's disease (AD) studies
As we briefly discussed in Section 1, the identification of group differences between cortical surface
signals is based on comparing the distribution of the signal across the two groups at each vertex. This
can be done either by using the signal (cortical thickness) obtained from the segmentation directly,
or by using a spherical harmonic (SPHARM) or spherical wavelet approach to first parameterize and
then smooth the signal, followed by a vertex-wise T-test on the smoothed signal. These spherical
approaches change the domain of the data from manifolds to a sphere, introducing distortion. In
contrast, our multi-scale descriptor is well defined for characterizing the shape (and the signal) on
the native graph domain itself. We employ hypothesis testing using the original cortical thickness
and SPHARM as the two baselines for comparison when presenting our experiments below.
Data and Pre-processing. We used Magnetic Resonance (MR) images acquired as part of the
Alzheimer's Disease Neuroimaging Initiative (ADNI). Our data included brain images from 356
participants: 160 Alzheimer's disease subjects (AD) and 196 healthy controls (CN). Details of the
dataset are given in Table 1.
This dataset was pre-processed using a standard image processing pipeline, and the Freesurfer
algorithm [18] was used to segment the cortical surfaces, calculate the cortical thickness values,
and provide vertex to vertex correspondences across brain surfaces. The data was then analyzed
using our algorithm and the two baseline algorithms mentioned above.

Table 1: Demographic details and baseline cognitive status measure of the ADNI dataset

Category            | AD (mean) | AD (s.d.) | Ctrl (mean) | Ctrl (s.d.)
# of Subjects       | 160       |           | 196         |
Age                 | 75.53     | 7.41      | 76.09       | 5.13
Gender (M/F)        | 86 / 74   |           | 101 / 95    |
MMSE at Baseline    | 21.83     | 5.98      | 28.87       | 3.09
Years of Education  | 13.81     | 4.61      | 15.87       | 3.23

We constructed WMDs for each vertex on the template cortical surface at 6 different
scales, and used Hotelling's T² test for group analysis. The same procedure was repeated using
the cortical thickness measurements (from Freesurfer) and the smoothed signal obtained from
SPHARM. The resulting p-value map was corrected for multiple comparisons over all vertices using
the false discovery rate (FDR) method [2].
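For reference, here is a minimal sketch of the Benjamini-Hochberg FDR procedure [2] applied to a
vector of vertex-wise p-values; the p-values below are simulated for illustration, not taken from
the study.

```python
import numpy as np

def benjamini_hochberg(pvals, q):
    """Return a boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, n + 1) / n)    # BH line: q * k / n
    below = p[order] <= thresholds
    mask = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])          # largest k with p_(k) <= q*k/n
        mask[order[: k + 1]] = True
    return mask

pvals = np.concatenate([np.random.uniform(0, 1e-4, 50),    # simulated true effects
                        np.random.uniform(0, 1, 950)])      # simulated nulls
sig = benjamini_hochberg(pvals, q=1e-3)
print(sig.sum(), "vertices survive FDR correction")
```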
Fig. 2 summarizes the results of our analysis. The first row corresponds to group analysis using the
original cortical thickness values (CT). Here, while we see some discriminative regions, group differences are weak and statistically significant in only a small region. The second row shows results
pertaining to SPHARM, which indicate a significant improvement over the baseline, partly due to
the effect of noise filtering. Finally, the bottom row in Fig. 2 shows that performing the statistical
tests using our multi-scale descriptor gives substantially larger regions with much lower p-values.
To further investigate this behavior, we repeated these experiments by making the significance level
more conservative. These results (after FDR correction) are shown in Fig. 4. Again, we can directly
compare CT, SPHARM and WMD for a different FDR. A very conservative FDR q = 10^{-7} was
used on the uncorrected p-values from the hypothesis test, and the q-values after the correction were
projected back on the template mesh. Similar to Fig. 2, we see that relative to CT and SPHARM,
several more regions (with substantially improved q-values) are recovered using the multi-scale descriptor.
5
To quantitatively compare the behavior above, we evaluated the uncorrected p-values over all
vertices and sorted them in increasing order. Recall that any p-value below the FDR threshold is
considered significant and yields a q-value. Fig. 3 shows the sorted p-values, where blue/black
dotted lines are the FDR thresholds identifying significant vertices.
Figure 2: Normalized log scale p-values after FDR correction at q = 10^{-5}, projected back on a brain mesh
and displayed. Row 1: Original cortical thickness, Row 2: SPHARM, Row 3: Wavelet Multiscale descriptor.
As seen in Figs. 2, 3 and 5, the number of significant vertices is far larger for WMD compared to
CT and SPHARM. At the FDR 10^{-4} level, there are a total of 6943 (CT),
28789 (SPHARM) and 40548 (WMD) vertices, showing
that WMD finds 51.3% and 17.9% more discriminative
vertices than the CT and SPHARM methods. In Fig. 5, we
can see the effect of FDR correction. With FDR set to
10^{-3}, 10^{-5} and 10^{-7}, the number of vertices that survive
the correction threshold decreases to 51929, 28606 and 13226, respectively.
Finally, we evaluated the regions identified by these tests
in the context of their relevance to Alzheimer?s disease.
We found that the identified regions are those that might
be expected to be atrophic in AD. All three methods identified the anterior entorhinal cortex in the
mesial temporal lobe, but at the prespecified threshold, the WMD method was more sensitive to
changes in this location as well as in the posterior cingulate, precuneus, lateral parietal lobe,
and dorsolateral frontal lobe. These are regions that are commonly implicated in AD, and strongly
tie to known results from neuroscience.
Figure 3: Sorted p-values from the statistical analysis of sampled vertices from the left hemisphere
using cortical thickness (CT), SPHARM, and WMD, for FDR q = 10^{-3} (black) and q = 10^{-4} (blue).
Remarks. When we compare two clinically different groups of brain subjects at the opposite ends of
the disease spectrum (AD versus controls), the tests help identify which brain regions are severely
affected. Then, if the analysis of mild AD versus controls reveals the same regions, we know that the
new method is indeed picking up the legitimate regions. The ADNI dataset comprises mild (and
relatively younger) AD subjects, and the result from our method identifies regions which are known
to be affected by AD. Our experiments suggest that for a study where group differences are expected
to be weak, WMDs can facilitate identification of important variations which may be missed by the
current state of the art, and can improve the statistical power of the experiment.
6
Figure 4: Normalized log scale p-values after FDR correction on the left hemisphere with q = 10^{-7} on
cortical thickness (left column), SPHARM (middle column), and WMD (right column) respectively, showing both
inner and outer sides of the hemisphere.
Figure 5: Normalized log scale p-values showing the effect of FDR correction on the template left hemisphere
using WMD with FDR q = 10^{-3} (left column), q = 10^{-5} (middle column) and q = 10^{-7} (right column)
respectively, showing both inner and outer sides of the hemisphere.
4.2 Cortical Surface Smoothing without Sphere Mapping
Existing methods for smoothing cortical surfaces and the signal defined on it, such as spherical
harmonics, explicitly represent the cortical surface as a combination of basis functions defined over
regular Euclidean spaces. Such methods have been shown to be quite powerful, but invariably cause
information loss due to the spherical mapping. Our goal was to evaluate whether the ideas introduced
here can avoid this compromise by being able to represent (and smooth) the signal defined on any
arbitrarily shaped mesh using the basis in Section 3.1.
A small set of experiments were performed to evaluate this idea. We used wavelets of varying
scales to localize the structure of the brain mesh. An inverse wavelet transformation of the resultant
function provides the smooth estimate of the cortical surface at various scales. The same process can
be applied to the
signal defined on the surface as well. Let us rewrite (3) in terms of the graph Fourier basis as
\[ \frac{1}{C_g} \sum_l \int_0^{\infty} \frac{g^2(s\lambda_l)}{s}\, ds\; \hat{f}(l)\, \chi_l(m), \]
which sums over the entire range of scales s. Interestingly, in our
case, the set of scales directly control the spatial smoothness of the surface. In contrast, existing
methods introduce an additional smoothness parameter (e.g., σ in the case of the heat kernel). Coarser
spectral scales overlap less and smooth higher frequencies. At finer scale, the complete spectrum is
used and recovers the original surface to high fidelity. An optimal choice of scale removes noisy high
frequency variations and provides the true underlying signal. Representative examples are shown in
Fig. 6 where we illustrate the process of reconstructing the surface of a brain mesh (and the cortical
thickness signal) from coarse to finer scales.
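A minimal sketch of this native-space smoothing idea: here we simply truncate the graph Fourier
spectrum of a toy signal, so the cutoff plays the role of the scale choice discussed above. The
graph, signal, and cutoff are illustrative assumptions, not the exact kernel-weighted
reconstruction used in our pipeline.

```python
import numpy as np

def spectral_smooth(W, f, keep):
    """Reconstruct f from its `keep` lowest-frequency graph Fourier modes."""
    L = np.diag(W.sum(axis=1)) - W
    lam, chi = np.linalg.eigh(L)          # ascending eigenvalues = low to high frequency
    f_hat = chi.T @ f
    f_hat[keep:] = 0.0                    # suppress noisy high-frequency content
    return chi @ f_hat

N = 50
W = np.zeros((N, N))
for i in range(N - 1):                    # path graph standing in for a surface mesh
    W[i, i + 1] = W[i + 1, i] = 1.0
f = np.sin(np.linspace(0, 2 * np.pi, N)) + 0.3 * np.random.randn(N)
smooth = spectral_smooth(W, f, keep=8)
print(np.linalg.norm(f - smooth))         # residual = the removed high-frequency part
```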
7
The final reconstruction of the sample brain surface from the inverse transformation, using five
scales of wavelets and one scaling function, yields a total error of 2.5855 in the x coordinate,
2.2407 in the y coordinate, and 2.4594 in the z coordinate, respectively, over the entire set of
136228 vertices. The combined error of all three coordinates per vertex is 5.346 × 10^{-5}, which
is small. Qualitatively, we found that the
results compare favorably with [6, 24] but does not need a transformation to a spherical coordinate
system.
Figure 6: Structural smoothing on a brain mesh. Top row: Structural smoothing from coarse to finer scales,
Bottom row: Smoothed cortical thickness displayed on the surface.
Implementation. Processing large surface meshes with ≈ 200,000 vertices is computationally intensive. A key bottleneck is the diagonalization of the Laplacian, which can be avoided by a clever
use of a Chebyshev polynomial approximation method, as suggested by [11]. It turns out that this
procedure basically consists of n iterative sparse matrix-vector multiplications and scalar-vector
multiplications, where n is the degree of the polynomial.
With some manipulations (details in the code release),
the processes above translate nicely on to the GPU architecture. Using the cusparse and cublas libraries,
we derived a specialized procedure for computing the
wavelet transform, which makes heavy use of commodity graphics-card hardware. Fig. 7 provides a comparison
of our results to the serial MATLAB implementation and
code using the commercial Jacket toolbox, for processing
one brain with 166367 vertices over 6 wavelet scales as
a function of polynomial degree. We see that a dataset can be processed in less than 10 seconds
(even with a high polynomial order) using our implementation.
Figure 7: Running times to process a single brain dataset using native MATLAB code, Jacket, and
our own implementation.
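A rough sketch of the Chebyshev idea from [11] in NumPy/SciPy terms: approximate g(sL)f with n
sparse matrix-vector products and no eigendecomposition. This is our own illustrative
reimplementation, not the released GPU code; the kernel, polynomial degree n, and spectral bound
lmax are assumptions (lmax must upper-bound the Laplacian spectrum).

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

def chebyshev_coeffs(func, n, lmax):
    """Chebyshev expansion coefficients of func on [0, lmax]."""
    theta = np.pi * (np.arange(n) + 0.5) / n
    x = 0.5 * lmax * (np.cos(theta) + 1.0)          # map [-1, 1] -> [0, lmax]
    return np.array([2.0 / n * np.sum(func(x) * np.cos(k * theta)) for k in range(n)])

def apply_kernel(L, f, func, n=30, lmax=4.0):
    """Approximate func(L) @ f using only sparse matrix-vector products."""
    c = chebyshev_coeffs(func, n, lmax)
    a = 0.5 * lmax
    T_prev = f                                       # T_0((L - aI)/a) f
    T_cur = (L @ f) / a - f                          # T_1((L - aI)/a) f
    out = 0.5 * c[0] * T_prev + c[1] * T_cur
    for k in range(2, n):                            # three-term recurrence
        T_next = 2.0 * ((L @ T_cur) / a - T_cur) - T_prev
        out = out + c[k] * T_next
        T_prev, T_cur = T_cur, T_next
    return out

N = 1000                                             # stand-in for ~200,000 vertices
rows = np.arange(N - 1)
W = csr_matrix((np.ones(N - 1), (rows, rows + 1)), shape=(N, N))
W = W + W.T                                          # sparse path graph, unit weights
L = diags(np.asarray(W.sum(axis=1)).ravel()) - W     # graph Laplacian (spectrum < 4)
s = 1.0
coeffs = apply_kernel(L, np.random.randn(N), lambda x: s * x * np.exp(-s * x))
print(coeffs.shape)                                  # one scale's coefficients, (1000,)
```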
5 Conclusions
We showed that shape descriptors based on multi-scale representations of surface based signals are
a powerful tool for performing multivariate analysis of such data at various resolutions. Using a
large and well characterized neuroimaging dataset, we showed how the framework improves statistical power in hypothesis testing of cortical thickness signals. We expect that in many cases, this
form of analysis can detect group differences where traditional methods fail. This is the primary
experimental result of the paper. We also demonstrated how the idea is applicable to cortical
surface smoothing and yields competitive results without a spherical coordinate transformation. The
implementation will be publicly distributed as a supplement to our paper.
Acknowledgments
This research was supported by funding from NIH R01AG040396, NIH R01AG021155, NSF RI
1116584, the Wisconsin Partnership Proposal, UW ADRC, and UW ICTR (1UL1RR025011). The
authors are grateful to Lopa Mukherjee for much help in improving the presentation of this paper.
8
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[2] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, 57(1):289-300, 1995.
[3] R. Brown, N. Colter, and J. Corsellis. Postmortem evidence of structural brain changes in schizophrenia: differences in brain weight, temporal horn area, and parahippocampal gyrus compared with affective disorder. Arch Gen Psychiatry, 43:36-42, 1986.
[4] R. Cabin and R. Mitchell. To Bonferroni or not to Bonferroni: when and how are the questions. Bulletin of the Ecological Society of America, 81(3):246-248, 2000.
[5] H. Cheng, Z. Gimbutas, P. G. Martinsson, and V. Rokhlin. On the compression of low rank matrices. SIAM J. Sci. Comput., 26(4):1389-1404, 2005.
[6] M. Chung, K. Dalton, S. Li, et al. Weighted Fourier series representation and its application to quantifying the amount of gray matter. Med. Imaging, IEEE Trans. on, 26(4):566-581, 2007.
[7] M. Chung, K. Worsley, S. Robbins, et al. Deformation-based surface morphometry applied to gray matter deformation. NeuroImage, 18(2):198-213, 2003.
[8] R. Coifman and M. Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53-94, 2006.
[9] S. DeKosky and S. Scheff. Synapse loss in frontal cortex biopsies in Alzheimer's disease: Correlation with cognitive severity. Annals of Neurology, 27(5):457-464, 1990.
[10] A. Gelb. The resolution of the Gibbs phenomenon for spherical harmonics. Mathematics of Computation, 66:699-717, 1997.
[11] D. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
[12] S. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. Pattern Analysis and Machine Intelligence, IEEE Trans. on, 11(7):674-693, 1989.
[13] T. Merkley, E. Bigler, E. Wilde, et al. Short communication: Diffuse changes in cortical thickness in pediatric moderate-to-severe traumatic brain injury. Journal of Neurotrauma, 25(11):1343-1345, 2008.
[14] K. Narr, R. Bilder, A. Toga, et al. Mapping cortical thickness and gray matter concentration in first episode schizophrenia. Cerebral Cortex, 15(6):708-719, 2005.
[15] D. Pachauri, C. Hinrichs, M. Chung, et al. Topology-based kernels with application to inference problems in Alzheimer's disease. Medical Imaging, IEEE Transactions on, 30(10):1760-1770, 2011.
[16] S. Peng, J. Wuu, E. Mufson, et al. Precursor form of brain-derived neurotrophic factor and mature brain-derived neurotrophic factor are decreased in the pre-clinical stages of Alzheimer's disease. Journal of Neurochemistry, 93(6):1412-21, 2005.
[17] E. Reiman, R. Caselli, L. Yun, et al. Preclinical evidence of Alzheimer's disease in persons homozygous for the ε4 allele for apolipoprotein E. New England Journal of Medicine, 334(12):752-758, 1996.
[18] M. Reuter, H. D. Rosas, and B. Fischl. Highly accurate inverse consistent registration: A robust approach. NeuroImage, 53(4):1181-1196, 2010.
[19] O. Rioul and M. Vetterli. Wavelets and signal processing. Signal Processing Magazine, 8(4):14-38, 1991.
[20] P. Shaw, D. Greenstein, J. Lerch, et al. Intellectual ability and cortical development in children and adolescents. Nature, 440:676-679, 2006.
[21] S. Haykin and B. V. Veen. Signals and Systems, 2nd Edition. Wiley, 2005.
[22] E. Sowell, P. Thompson, C. Leonard, et al. Longitudinal mapping of cortical thickness and brain growth in normal children. The Journal of Neuroscience, 24:8223-8231, 2004.
[23] R. Sperling, P. Aisen, L. Beckett, et al. Toward defining the preclinical stages of Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's and Dementia, 7(3):280-292, 2011.
[24] P. Yu, P. Grant, Y. Qi, et al. Cortical surface shape analysis based on spherical wavelets. Med. Imaging, IEEE Trans. on, 26(4):582-597, 2007.
Stochastic Gradient Descent with
Only One Projection
Mehrdad Mahdavi†, Tianbao Yang‡, Rong Jin†, Shenghuo Zhu♮, and Jinfeng Yi†
† Dept. of Computer Science and Engineering, Michigan State University, MI, USA
‡ Machine Learning Lab, GE Global Research, CA, USA
♮ NEC Laboratories America, CA, USA
{mahdavim,rongjin,yijinfen}@msu.edu, [email protected], [email protected]
Abstract
Although many variants of stochastic gradient descent have been proposed for
large-scale convex optimization, most of them require projecting the solution at
each iteration to ensure that the obtained solution stays within the feasible domain.
For complex domains (e.g., positive semidefinite cone), the projection step can
be computationally expensive, making stochastic gradient descent unattractive for
large-scale optimization problems. We address this limitation by developing novel
stochastic optimization algorithms that do not need intermediate projections. Instead, only one projection at the last iteration is needed to obtain a feasible solution
in the given domain. Our theoretical analysis shows that with a high probability, the
proposed algorithms achieve an O(1/√T) convergence rate for general convex
optimization, and an O(ln T/T) rate for strongly convex optimization under
mild conditions about the domain and the objective function.
1 Introduction
With the increasing amount of data that is available for training, it becomes an urgent task to devise
efficient algorithms for optimization/learning problems with unprecedented sizes. Online learning
algorithms, such as the celebrated Stochastic Gradient Descent (SGD) [16, 2] and its online counterpart
Online Gradient Descent (OGD) [22], despite their slow rate of convergence compared with
batch methods, have been shown to be very effective for large scale and online learning problems, both
theoretically [16, 13] and empirically [19]. Although a large number of iterations is usually needed
to obtain a solution of desirable accuracy, the lightweight computation per iteration makes SGD
attractive for many large-scale learning problems.
To find a solution within the domain K that optimizes the given objective function f (x), SGD
computes an unbiased estimate of the gradient of f (x), and updates the solution by moving it in
the opposite direction of the estimated gradient. To ensure that the solution stays within the domain
K, SGD has to project the updated solution back into the K at every iteration. Although efficient
algorithms have been developed for projecting solutions into special domains (e.g., the simplex and the ℓ₁
ball [6, 14]); for complex domains, such as a positive semidefinite (PSD) cone in metric learning
and bounded trace norm matrices in matrix completion (more examples of complex domains can
be found in [10] and [11]), the projection step requires solving an expensive convex optimization,
leading to a high computational cost per iteration and consequently making SGD unappealing for
large-scale optimization problems over such domains. For instance, projecting a matrix into a PSD
cone requires computing the full eigen-decomposition of the matrix, whose complexity is cubic in
the size of the matrix.
The central theme of this paper is to develop an SGD-based method that does not require projection
at each iteration. This problem was first addressed in a very recent work [10], where the authors
extended the Frank-Wolfe algorithm [7] for online learning. But one main shortcoming of the algo-
rithm proposed in [10] is that it has a slower convergence rate (i.e., O(T^{-1/3})) than a standard
SGD algorithm (i.e., O(T^{-1/2})). In this work, we demonstrate that a properly modified SGD algo-
rithm can achieve the optimal convergence rate of O(T^{-1/2}) using only ONE projection for general
stochastic convex optimization problems. We further develop an SGD based algorithm for strongly
convex optimization that achieves a convergence rate of O(ln T /T ), which is only a logarithmic
factor worse than the optimal rate [9]. The key idea of both algorithms is to appropriately penalize
the intermediate solutions when they are outside the domain. With an appropriate design of the penal-
ization mechanism, the average solution x̂_T obtained by the SGD after T iterations will be very
close to the domain K, even without intermediate projections. As a result, the final feasible solution
x̃_T can be obtained by projecting x̂_T into the domain K, the only projection that is needed for the
entire algorithm.
projection free convex optimization algorithms (see [8, 12, 11] and references therein), where the
key idea is to develop appropriate updating procedures to restore the feasibility of solutions at every
iteration.
We close this section with a statement of contributions and main results made by the present work:
• We propose a stochastic gradient descent algorithm for general convex optimization that
introduces a Lagrangian multiplier to penalize the solutions outside the domain and performs
primal-dual updating. The proposed algorithm achieves the optimal convergence rate of
O(1/√T) with only one projection;
• We propose a stochastic gradient descent algorithm for strongly convex optimization that
constructs the penalty function using a smoothing technique. This algorithm attains an
O(ln T /T ) convergence rate with only one projection.
2 Related Works
Generally, the computational complexity of the projection step in SGD has seldom been taken into
account in the literature. Here, we briefly review the previous works on projection free convex optimization, which is closely related to the theme of this study. For some specific domains, efficient
algorithms have been developed to circumvent the high computational cost caused by projection
step at each iteration of gradient descent methods. The main idea is to select an appropriate direction to take from the current solution such that the next solution is guaranteed to stay within the
domain. Clarkson [5] proposed a sparse greedy approximation algorithm for convex optimization
over a simplex domain, which is a generalization of an old algorithm by Frank and Wolfe [7] (a.k.a
conditional gradient descent [3]). Zhang [21] introduced a similar sequential greedy approximation
algorithm for certain convex optimization problems over a domain given by a convex hull. Hazan [8]
devised an algorithm for approximately maximizing a concave function over a trace norm bounded
PSD cone, which only needs to compute the maximum eigenvalue and the corresponding eigenvector of a symmetric matrix. Ying et al. [20] formulated the distance metric learning problems into
eigenvalue maximization and proposed an algorithm similar to [8].
Recently, Jaggi [11] put these ideas into a general framework for convex optimization with a general convex domain. Instead of projecting the intermediate solution into a complex convex domain,
Jaggi's algorithm solves a linearized problem over the same domain. He showed that Clarkson's
algorithm, Zhang's algorithm, and Hazan's algorithm discussed above are special cases of his general
algorithm for special domains. It is important to note that all these algorithms are designed for batch
optimization, not for stochastic optimization, which is the focus of this work.
Our work is closely related to the online Frank-Wolfe (OFW) algorithm proposed in [10]. It is a
projection free online learning algorithm, built on the assumption that it is possible to efficiently
minimize a linear function over the complex domain. One main shortcoming of the OFW algorithm
is that its convergence rate for general stochastic optimization is O(T^{-1/3}), significantly slower than
that of a standard stochastic gradient descent algorithm (i.e., O(T^{-1/2})). It achieves a convergence
rate of O(T^{-1/2}) only when the objective function is smooth, which unfortunately does not hold
for many machine learning problems where either a non-smooth regularizer or a non-smooth loss
function is used. Another limitation of OFW is that it assumes that a linear optimization problem over
the domain K can be solved efficiently. Although this assumption holds for some specific domains
as discussed in [10], in many settings of practical interest it may not be true. The proposed
algorithms address these two limitations explicitly. In particular, we show how two seemingly
different modifications of SGD can be used to avoid performing expensive projections with
convergence rates similar to the original SGD method.
3 Preliminaries
Throughout this paper, we consider the following convex optimization problem:
\[ \min_{x \in K} f(x), \quad (1) \]
where K is a bounded convex domain. We assume that K can be characterized by an inequality
constraint and, without loss of generality, is bounded by the unit ball, i.e.,
\[ K = \{x \in \mathbb{R}^d : g(x) \le 0\} \subseteq B = \{x \in \mathbb{R}^d : \|x\|_2 \le 1\}, \quad (2) \]
where g(x) is a convex constraint function. We assume that K has a non-empty interior, i.e., there
exists x such that g(x) < 0, and that the optimal solution x* to (1) is in the interior of the unit ball B,
i.e., ‖x*‖₂ < 1. Note that when a domain is characterized by multiple convex constraint functions,
say g_i(x) ≤ 0, i = 1, ..., m, we can summarize them into one constraint g(x) ≤ 0 by defining
g(x) = max_{1≤i≤m} g_i(x).
To solve the optimization problem in (1), we assume that the only information available to the
algorithm is through a stochastic oracle that provides unbiased estimates of the gradient of f(x).
More precisely, let ξ₁, ..., ξ_T be a sequence of independently and identically distributed (i.i.d.)
random variables sampled from an unknown distribution P. At each iteration t, given the solution
x_t, the oracle returns ∇̃f(x_t; ξ_t), an unbiased estimate of the true gradient ∇f(x_t), i.e.,
E_{ξ_t}[∇̃f(x_t, ξ_t)] = ∇f(x_t). The goal of the learner is to find an approximate optimal solution
by making T calls to this oracle.
Before proceeding, we recall a few definitions from convex analysis [17].
Definition 1. A function f(x) is a G-Lipschitz continuous function w.r.t. a norm ‖·‖ if
\[ |f(x_1) - f(x_2)| \le G\|x_1 - x_2\|, \quad \forall x_1, x_2 \in B. \quad (3) \]
In particular, a convex function f(x) with a bounded (sub)gradient ‖∂f(x)‖_* ≤ G is G-Lipschitz
continuous, where ‖·‖_* is the dual norm to ‖·‖.
Definition 2. A convex function f(x) is β-strongly convex w.r.t. a norm ‖·‖ if there exists a constant
β > 0 (often called the modulus of strong convexity) such that, for any α ∈ [0, 1], it holds:
\[ f(\alpha x_1 + (1-\alpha) x_2) \le \alpha f(x_1) + (1-\alpha) f(x_2) - \tfrac{1}{2}\alpha(1-\alpha)\beta \|x_1 - x_2\|^2, \quad \forall x_1, x_2 \in B. \]
When f(x) is differentiable, strong convexity is equivalent to f(x_1) ≥ f(x_2) + ⟨∇f(x_2), x_1 − x_2⟩ +
(β/2)‖x_1 − x_2‖², ∀x_1, x_2 ∈ B. In the sequel, we use the standard Euclidean norm to define
Lipschitz and strongly convex functions. The stochastic gradient descent method is an iterative algorithm
and produces a sequence of solutions x_t, t = 1, ..., T, by
\[ x_{t+1} = \Pi_K\bigl(x_t - \eta_t \tilde{\nabla} f(x_t, \xi_t)\bigr), \quad (4) \]
where {η_t}_{t=1}^T is a sequence of step sizes, Π_K(x) is a projection operator that projects x into the
domain K, and ∇̃f(x, ξ_t) is an unbiased stochastic gradient of f(x), for which we further assume
a bounded gradient variance:
\[ \mathrm{E}_{\xi_t}\bigl[\exp\bigl(\|\tilde{\nabla} f(x,\xi_t) - \nabla f(x)\|_2^2/\sigma^2\bigr)\bigr] \le \exp(1). \quad (5) \]
For general convex optimization, stochastic gradient descent methods can obtain an O(1/√T) con-
vergence rate in expectation or with high probability, provided (5) holds [16]. As we mentioned in the
Introduction section, SGD methods are computationally efficient only when the projection ?K (x)
can be carried out efficiently. The objective of this work is to develop computationally efficient
stochastic optimization algorithms that are able to yield the same performance guarantee as the
standard SGD algorithm but with only ONE projection when applied to the problem in (1).
4 Algorithms and Main Results
We now turn to extending the SGD method to the setting where only one projection is allowed
over the entire sequence of updates. The main idea is to incorporate the constraint function
g(x) into the objective function to penalize the intermediate solutions that are outside the domain.
The result of the penalization is that, although the average solution obtained by SGD may not be
feasible, it should be very close to the boundary of the domain. A projection is performed at the end
of the iterations to restore the feasibility of the average solution.
Algorithm 1 (SGDP-PD): SGD with ONE Projection by Primal-Dual Updating
1: Input: a sequence of step sizes {η_t}, and a parameter γ > 0
2: Initialization: x_1 = 0 and λ_1 = 0
3: for t = 1, 2, ..., T do
4:    Compute x'_{t+1} = x_t − η_t (∇̃f(x_t, ξ_t) + λ_t ∇g(x_t))
5:    Update x_{t+1} = x'_{t+1} / max(‖x'_{t+1}‖₂, 1)
6:    Update λ_{t+1} = [(1 − γη_t)λ_t + η_t g(x_t)]₊
7: end for
8: Output: x̃_T = Π_K(x̂_T), where x̂_T = Σ_{t=1}^T x_t / T.
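To make the updates concrete, here is a minimal sketch of Algorithm 1 on a toy problem of our own
choosing: least squares over the ball K = {x : ‖x‖₂ − r ≤ 0}, picked so the single final projection
has a closed form. The problem data and the constant γ are illustrative (γ follows the order of
Theorem 1's choice but is not tuned).

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, r = 10, 5000, 0.5
A = rng.standard_normal((200, d))
b = A @ rng.standard_normal(d)

def g(x):                                    # constraint: g(x) = ||x||_2 - r <= 0
    return np.linalg.norm(x) - r

def grad_g(x):                               # (sub)gradient of g; here G2 = 1
    return x / max(np.linalg.norm(x), 1e-12)

gamma = 1.0 / np.sqrt(T)                     # gamma on the order of Theorem 1's choice
eta = gamma / 2.0                            # eta_t = gamma / (2 G2^2)
x, lam, x_sum = np.zeros(d), 0.0, np.zeros(d)

for t in range(T):
    i = rng.integers(len(b))                 # one call to the stochastic oracle
    grad_f = 2.0 * (A[i] @ x - b[i]) * A[i]  # gradient of (a_i^T x - b_i)^2
    gx = g(x)                                # step 6 uses g at the current iterate
    x = x - eta * (grad_f + lam * grad_g(x)) # step 4: primal update
    x = x / max(np.linalg.norm(x), 1.0)      # step 5: stay inside the unit ball B
    lam = max((1.0 - gamma * eta) * lam + eta * gx, 0.0)   # step 6: dual update
    x_sum += x

x_bar = x_sum / T                            # average solution
x_tilde = x_bar * min(1.0, r / max(np.linalg.norm(x_bar), 1e-12))  # the ONE projection
print(g(x_tilde) <= 1e-9)
```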
The key ingredient of the proposed algorithms is to replace the projection step with the gradient
computation of the constraint function defining the domain K, which is significantly cheaper than
the projection step. As an example, when a solution is restricted to a PSD cone, i.e., X ⪰ 0 where X
is a symmetric matrix, the corresponding inequality constraint is g(X) = λ_max(−X) ≤ 0, where
λ_max(X) computes the largest eigenvalue of X and is a convex function. In this case, ∇g(X) only
requires computing the minimum eigenvector of a matrix, which is cheaper than the full eigenspectrum
computation required at each iteration of the standard SGD algorithm to restore feasibility.
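A small illustration of this cost gap; the random test matrix and the library calls are our own
choices for the sketch, not part of the paper.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 300
X = rng.standard_normal((n, n))
X = (X + X.T) / 2.0                       # a symmetric test matrix (illustrative)

# Cheap: one extreme eigenpair gives g(X) and a (sub)gradient of g.
w, u = eigsh(X, k=1, which="SA")          # smallest algebraic eigenvalue of X
g_val = -w[0]                             # g(X) = lambda_max(-X) = -lambda_min(X)
grad_g = -np.outer(u[:, 0], u[:, 0])      # (sub)gradient; note ||grad_g||_F = 1

# Expensive: full eigendecomposition, as needed to project onto the PSD cone.
lam, V = np.linalg.eigh(X)
X_psd = (V * np.maximum(lam, 0.0)) @ V.T  # Pi_PSD(X): clip negative eigenvalues
print(g_val, np.linalg.norm(grad_g))
```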
Below, we state a few assumptions about f(x) and g(x) often made in stochastic optimization:
A1   ‖∇f(x)‖₂ ≤ G₁,  ‖∇g(x)‖₂ ≤ G₂,  |g(x)| ≤ C₂,  ∀x ∈ B,   (6)
A2   E_{ξ_t}[exp(‖∇̃f(x, ξ_t) − ∇f(x)‖₂² / σ²)] ≤ exp(1),  ∀x ∈ B.   (7)
We also make the following mild assumption about the boundary of the convex domain K:
A3   there exists a constant ρ > 0 such that min_{g(x)=0} ‖∇g(x)‖₂ ≥ ρ.   (8)
Remark 1. The purpose of introducing assumption A3 is to ensure that the optimal dual variable
for the constrained optimization problem in (1) is well bounded from above, a key factor for our
analysis. To see this, we write the problem in (1) as a convex-concave optimization problem:
\[ \min_{x \in B} \max_{\lambda \ge 0} f(x) + \lambda g(x). \]
Let (x*, λ*) be the optimal solution to the above convex-concave optimization problem. Since we
assume g(x) is strictly feasible, x* is also an optimal solution to (1) due to the strong duality
theorem [4]. Using the first-order optimality condition, we have ∇f(x*) = −λ*∇g(x*). Hence,
λ* = 0 when g(x*) < 0, and λ* = ‖∇f(x*)‖₂ / ‖∇g(x*)‖₂ when g(x*) = 0. Under assumption
A3, we have λ* ∈ [0, G₁/ρ].
We note that, from a practical point of view, it is straightforward to verify that for many domains,
including the PSD cone and the polytope, the gradient of the constraint function is lower bounded on
the boundary, and therefore assumption A3 does not limit the applicability of the proposed algorithms
for stochastic optimization. For the example of g(X) = λ_max(−X), assumption A3 holds since
min_{g(X)=0} ‖∇g(X)‖_F = ‖uu^⊤‖_F = 1, where u is a unit-norm eigenvector of X corresponding
to its minimum eigenvalue (which is zero on the boundary).
We propose two different ways of incorporating the constraint function into the objective function,
which result in two algorithms, one for general convex and the other for strongly convex functions.
4.1 SGD with One Projection for General Convex Optimization
To incorporate the constraint function g(x), we introduce a regularized Lagrangian function,
\[ L(x, \lambda) = f(x) + \lambda g(x) - \frac{\gamma}{2}\lambda^2, \quad \lambda \ge 0. \]
The summation of the first two terms in L(x, λ) corresponds to the Lagrangian function in dual
analysis, and λ corresponds to a Lagrangian multiplier. A regularization term −(γ/2)λ² is introduced
in L(x, λ) to prevent λ from becoming too large. Instead of solving the constrained optimization
problem in (1), we try to solve the following convex-concave optimization problem:
\[ \min_{x \in B} \max_{\lambda \ge 0} L(x, \lambda). \quad (9) \]
The proposed algorithm for stochastically optimizing the problem in (9) is summarized in Algorithm 1. It differs from the existing stochastic gradient descent methods in that it updates both the
primal variable x (steps 4 and 5) and the dual variable λ (step 6), which share the same step sizes.
We note that the parameter ρ is not employed in the implementation of Algorithm 1 and is only
required for the theoretical analysis. It is noticeable that a similar primal-dual updating is explored
in [15] to avoid projection in online learning. Our work differs from [15] in that their algorithm
and analysis only lead to a bound for the regret and the violation of the constraints in a long run,
which does not necessarily guarantee the feasibility of the final solution. Also, our proof techniques
differ from [16], where the convergence rate is obtained for the saddle point; our goal, however, is to
attain a bound on the convergence of the primal feasible solution.
Remark 2. The convex-concave optimization problem in (9) is equivalent to the following minimization
problem:
\[ \min_{x \in B}\; f(x) + \frac{[g(x)]_+^2}{2\gamma}, \quad (10) \]
where [z]₊ outputs z if z > 0 and zero otherwise. It thus may seem attractive to directly optimize
the penalized function f(x) + [g(x)]₊²/(2γ) using the standard SGD method, which unfortunately
does not yield a regret of O(√T). This is because, in order to obtain a regret of O(√T), we need
to set γ = Ω(√T), which unfortunately will lead to a blowup of the gradients and consequently a
poor regret bound. Using a primal-dual updating schema allows us to adjust the penalization term
more carefully to obtain an O(1/√T) convergence rate.
Theorem 1. For any general convex function f(x), if we set η_t = γ/(2G₂²), t = 1, ..., T, and
γ = G₂ / √((G₁² + C₂² + (1 + ln(2/δ))σ²) T) in Algorithm 1, then under assumptions A1-A3, we have,
with a probability at least 1 − δ,
\[ f(\tilde{x}_T) \le \min_{x \in K} f(x) + O\!\left(\frac{1}{\sqrt{T}}\right), \]
where O(·) suppresses polynomial factors that depend on ln(2/δ), G₁, G₂, C₂, ρ, and σ.
4.2 SGD with One Projection for Strongly Convex Optimization
We first emphasize that it is difficult to extend Algorithm 1 to achieve an O(ln T/T) convergence
rate for strongly convex optimization. This is because although −L(x, λ) is strongly convex in λ,
its modulus for strong convexity is γ, which is too small to obtain an O(ln T) regret bound.
To achieve a faster convergence rate for strongly convex optimization, we change assumptions A1
and A2 to
A4   ‖∇̃f(x, ξ_t)‖₂ ≤ G₁,  ‖∇g(x)‖₂ ≤ G₂,  ∀x ∈ B,
where we slightly abuse the same notation G₁. Note that A1 only requires that ‖∇f(x)‖₂ is
bounded and A2 assumes a mild condition on the stochastic gradient. In contrast, for strongly
convex optimization we need to assume a bound on the stochastic gradient ‖∇̃f(x, ξ_t)‖₂. Although
assumption A4 is stronger than assumptions A1 and A2, it is always possible to bound the
stochastic gradient for machine learning problems where f(x) usually consists of a summation of
loss functions on training examples, and the stochastic gradient is computed by sampling over the
training examples. Given the bound on ‖∇̃f(x, ξ_t)‖₂, we easily have
‖∇f(x)‖₂ = ‖E∇̃f(x, ξ_t)‖₂ ≤ E‖∇̃f(x, ξ_t)‖₂ ≤ G₁, which is used to set an input parameter
λ₀ > G₁/ρ for the algorithm. According to the discussion in the last subsection, we know that the
optimal dual variable λ* is upper bounded by G₁/ρ, and consequently is upper bounded by λ₀.
Similar to the last approach, we write the optimization problem (1) as an equivalent convex-concave
optimization problem:
\[ \min_{g(x) \le 0} f(x) = \min_{x \in B}\, \max_{0 \le \lambda \le \lambda_0} f(x) + \lambda g(x) = \min_{x \in B} f(x) + \lambda_0 [g(x)]_+. \]
To avoid unnecessary complication due to the subgradient of [·]₊, following [18], we introduce a
smoothing term H(λ/λ₀), where H(p) = −p ln p − (1 − p) ln(1 − p) is the entropy function, into
the Lagrangian function, leading to the optimization problem min_{x∈B} F(x), where F(x) is defined as
\[ F(x) = f(x) + \max_{0 \le \lambda \le \lambda_0} \Bigl(\lambda g(x) + \gamma H(\lambda/\lambda_0)\Bigr) = f(x) + \gamma \ln\!\left(1 + \exp\!\left(\frac{\lambda_0 g(x)}{\gamma}\right)\right), \]
where γ > 0 is a parameter whose value will be determined later. Given the smoothed objective
function F(x), we find the optimal solution by applying SGD to minimize F(x), where the gradient
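The closed form of the inner maximization can be checked directly; writing p = λ/λ₀ and
σ = λ₀g(x)/γ, a short derivation (our verification, not part of the original text) is:

```latex
\max_{0\le\lambda\le\lambda_0} \lambda g(x) + \gamma H(\lambda/\lambda_0)
  = \gamma \max_{p\in[0,1]} \bigl[\sigma p - p\ln p - (1-p)\ln(1-p)\bigr].
% Setting the derivative to zero:
\sigma + \ln\frac{1-p}{p} = 0
  \;\Longrightarrow\; p^{\star} = \frac{e^{\sigma}}{1+e^{\sigma}},
% and substituting p^* back gives the optimal value
\gamma\ln\bigl(1+e^{\sigma}\bigr)
  = \gamma\ln\Bigl(1+\exp\tfrac{\lambda_0 g(x)}{\gamma}\Bigr).
```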
Algorithm 2 (SGDP-ST): SGD with ONE Projection by a Smoothing Technique
1: Input: a sequence of step sizes {η_t}, λ₀, and γ
2: Initialization: x_1 = 0.
3: for t = 1, ..., T do
4:    Compute x'_{t+1} = x_t − η_t (∇̃f(x_t, ξ_t) + (exp(λ₀g(x_t)/γ) / (1 + exp(λ₀g(x_t)/γ))) λ₀ ∇g(x_t))
5:    Update x_{t+1} = x'_{t+1} / max(‖x'_{t+1}‖₂, 1)
6: end for
7: Output: x̃_T = Π_K(x̂_T), where x̂_T = Σ_{t=1}^T x_t / T.
of F(x) is computed by
\[ \nabla F(x) = \nabla f(x) + \frac{\exp(\lambda_0 g(x)/\gamma)}{1 + \exp(\lambda_0 g(x)/\gamma)}\, \lambda_0 \nabla g(x). \quad (11) \]
Algorithm 2 gives the detailed steps. Unlike Algorithm 1, only the primal variable x is updated in
each iteration, using the stochastic gradient computed in (11).
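A minimal sketch of Algorithm 2 on the same toy ball-constrained problem used above. The strong
convexity is supplied by an added (β/2)‖x‖² term, and λ₀, the exponent clipping, and the other
constants are our own illustrative choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, r, beta = 10, 5000, 0.5, 1.0
A = rng.standard_normal((200, d))
b = A @ rng.standard_normal(d)

def g(x):                                   # constraint: g(x) = ||x||_2 - r <= 0
    return np.linalg.norm(x) - r

def grad_g(x):
    return x / max(np.linalg.norm(x), 1e-12)

lam0 = 20.0                                 # assumed to exceed G1/rho (illustrative)
gamma = np.log(T) / T                       # gamma = ln T / T, as in Theorem 2

x, x_sum = np.zeros(d), np.zeros(d)
for t in range(1, T + 1):
    i = rng.integers(len(b))
    grad_f = 2.0 * (A[i] @ x - b[i]) * A[i] + beta * x   # strongly convex objective
    z = np.clip(lam0 * g(x) / gamma, -50.0, 50.0)        # clip to avoid overflow
    w = 1.0 / (1.0 + np.exp(-z))                         # sigmoid weight of eq. (11)
    x = x - (grad_f + w * lam0 * grad_g(x)) / (2.0 * beta * t)  # eta_t = 1/(2 beta t)
    x = x / max(np.linalg.norm(x), 1.0)                  # stay inside the unit ball B
    x_sum += x

x_bar = x_sum / T
x_tilde = x_bar * min(1.0, r / max(np.linalg.norm(x_bar), 1e-12))  # the ONE projection
print(g(x_tilde) <= 1e-9)
```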
The following theorem shows that Algorithm 2 achieves an O(ln T/T) convergence rate if the cost
functions are strongly convex.
Theorem 2. For any β-strongly convex function f(x), if we set η_t = 1/(2βt), t = 1, ..., T,
γ = ln T/T, and λ₀ > G₁/ρ in Algorithm 2, then under assumptions A3 and A4, we have with a
probability at least 1 − δ,
\[ f(\tilde{x}_T) \le \min_{x \in K} f(x) + O\!\left(\frac{\ln T}{T}\right), \]
where O(·) suppresses polynomial factors that depend on ln(1/δ), 1/β, G₁, G₂, ρ, and λ₀.
It is well known that the optimal convergence rate of SGD for strongly convex optimization is
O(1/T) [9], which has been proven to be tight in the stochastic optimization setting [1]. According to
Theorem 2, Algorithm 2 achieves an almost optimal convergence rate except for the factor of ln T.
It is worth mentioning that although it is not explicitly given in Theorem 2, the detailed expression
for the convergence rate of Algorithm 2 exhibits a tradeoff in setting λ₀ (more can be found in the
proof of Theorem 2). Finally, under assumptions A1-A3, Algorithm 2 can achieve an O(1/√T)
convergence rate for general convex functions, similar to Algorithm 1.
5 Convergence Rate Analysis
We here present the proofs of the main theorems. The omitted proofs are provided in the Appendix.
We use the O(·) notation in a few inequalities to absorb constants independent of T for ease of
exposition.
5.1 Proof of Theorem 1
To pave the path for the proof, we present a series of lemmas. The lemma below states two key
inequalities, which follow the standard analysis of gradient descent.
Lemma 1. Under the bounded assumptions in (6) and (7), for any x ∈ B and λ > 0, we have
\[
(x_t - x)^{\top} \nabla_x L(x_t, \lambda_t) \le \frac{1}{2\eta_t}\bigl(\|x - x_t\|_2^2 - \|x - x_{t+1}\|_2^2\bigr) + 2\eta_t G_1^2 + \eta_t G_2^2 \lambda_t^2
+ 2\eta_t \underbrace{\|\tilde{\nabla} f(x_t, \xi_t) - \nabla f(x_t)\|_2^2}_{\equiv \zeta_t} + \underbrace{(x - x_t)^{\top}\bigl(\tilde{\nabla} f(x_t, \xi_t) - \nabla f(x_t)\bigr)}_{\equiv \Delta_t(x)},
\]
\[
(\lambda - \lambda_t)\, \nabla_\lambda L(x_t, \lambda_t) \le \frac{1}{2\eta_t}\bigl(|\lambda - \lambda_t|^2 - |\lambda - \lambda_{t+1}|^2\bigr) + 2\eta_t C_2^2.
\]
An immediate result of Lemma 1 is the following, which states a regret-type bound.
Lemma 2. For any general convex function f(x), if we set η_t = γ/(2G₂²), t = 1, ..., T, we have
\[
\sum_{t=1}^{T} \bigl(f(x_t) - f(x^*)\bigr) + \frac{\bigl[\sum_{t=1}^{T} g(x_t)\bigr]_+^2}{2(\gamma T + 2G_2^2/\gamma)}
\le \frac{G_2^2}{\gamma} + \frac{G_1^2 + C_2^2}{G_2^2}\,\gamma T + \frac{\gamma}{G_2^2}\sum_{t=1}^{T} \zeta_t + \sum_{t=1}^{T} \Delta_t(x^*),
\]
where x* = arg min_{x∈K} f(x).
Proof of Theorem 1. First, by a martingale inequality (e.g., Lemma 4 in [13]), with a probability 1 − δ/2, we have
$$\sum_{t=1}^T \zeta_t(x^*) \leq 2\sigma\sqrt{3\ln(2/\delta)}\,\sqrt{T}.$$
By Markov's inequality, with a probability 1 − δ/2, we have
$$\sum_{t=1}^T \zeta_t \leq (1 + \ln(2/\delta))\,\sigma^2 T.$$
Substituting these inequalities into Lemma 2 and plugging in the stated value of γ, we have with a probability 1 − δ
$$\sum_{t=1}^T\big(f(x_t) - f(x^*)\big) + \frac{1}{C\sqrt{T}}\Big[\sum_{t=1}^T g(x_t)\Big]_+^2 \leq O(\sqrt{T}),$$
where $C = 2G_2\Big(1/\sqrt{G_1^2 + C_2^2 + (1+\ln(2/\delta))\sigma^2} + 2\sqrt{G_1^2 + C_2^2 + (1+\ln(2/\delta))\sigma^2}\Big)$ and O(·) suppresses polynomial factors that depend on ln(2/δ), G_1, G_2, C_2, ρ.
Recalling the definition of $\hat{x}_T = \sum_{t=1}^T x_t/T$ and using the convexity of f(x) and g(x), we have
$$f(\hat{x}_T) - f(x^*) + \frac{\sqrt{T}}{C}\big[g(\hat{x}_T)\big]_+^2 \leq O\!\left(\frac{1}{\sqrt{T}}\right). \qquad (12)$$
Assume $g(\hat{x}_T) > 0$; otherwise $\tilde{x}_T = \hat{x}_T$ and we easily have $f(\tilde{x}_T) \leq \min_{x\in\mathcal{K}} f(x) + O(1/\sqrt{T})$. Since $\tilde{x}_T$ is the projection of $\hat{x}_T$ onto $\mathcal{K}$, i.e., $\tilde{x}_T = \arg\min_{g(x)\leq 0}\|x - \hat{x}_T\|_2^2$, by the first order optimality condition there exists a positive constant s > 0 such that
$$g(\tilde{x}_T) = 0, \quad\text{and}\quad \hat{x}_T - \tilde{x}_T = s\,\nabla g(\tilde{x}_T),$$
which indicates that $\hat{x}_T - \tilde{x}_T$ points in the same direction as $\nabla g(\tilde{x}_T)$. Hence,
$$g(\hat{x}_T) = g(\hat{x}_T) - g(\tilde{x}_T) \geq (\hat{x}_T - \tilde{x}_T)^\top \nabla g(\tilde{x}_T) = \|\hat{x}_T - \tilde{x}_T\|_2\,\|\nabla g(\tilde{x}_T)\|_2 \geq \rho\,\|\hat{x}_T - \tilde{x}_T\|_2, \qquad (13)$$
where the last inequality follows from the assumption $\min_{g(x)=0}\|\nabla g(x)\|_2 \geq \rho$. Additionally, we have
$$f(x^*) - f(\hat{x}_T) \leq f(x^*) - f(\tilde{x}_T) + f(\tilde{x}_T) - f(\hat{x}_T) \leq G_1\|\hat{x}_T - \tilde{x}_T\|_2, \qquad (14)$$
due to $f(x^*) \leq f(\tilde{x}_T)$ and the Lipschitz continuity of f(x). Combining inequalities (12), (13), and (14) yields
$$\frac{\rho^2\sqrt{T}}{C}\,\|\hat{x}_T - \tilde{x}_T\|_2^2 \leq O(1/\sqrt{T}) + G_1\|\hat{x}_T - \tilde{x}_T\|_2.$$
By simple algebra, we have
$$\|\hat{x}_T - \tilde{x}_T\|_2 \leq \frac{G_1 C}{\rho^2\sqrt{T}} + \sqrt{\frac{C}{\rho^2}}\;O\!\left(\frac{1}{\sqrt{T}}\right).$$
Therefore
$$f(\tilde{x}_T) = f(\tilde{x}_T) - f(\hat{x}_T) + f(\hat{x}_T) \leq G_1\|\hat{x}_T - \tilde{x}_T\|_2 + f(x^*) + O\!\left(\frac{1}{\sqrt{T}}\right) \leq f(x^*) + O\!\left(\frac{1}{\sqrt{T}}\right),$$
where we use the inequality in (12) to bound $f(\hat{x}_T)$ by $f(x^*)$ and absorb the dependence on ρ, G_1, C into the O(·) notation.
Remark 3. From the proof of Theorem 1, we can see that the key inequalities are (12), (13), and (14). In particular, the regret-type bound in (12) depends on the algorithm. If we only update the primal variable using the penalized objective in (10), whose gradient depends on 1/γ, it will cause a blowup in the regret bound of order (1/γ + γT + T/γ), which leads to a non-convergent bound.
5.2 Proof of Theorem 2
Our proof of Theorem 2 for the convergence rate of Algorithm 2 when applied to strongly convex functions starts with the following lemma, by analogy with Lemma 2.
Lemma 3. For any β-strongly convex function f(x), if we set η_t = 1/(2βt), we have
$$\sum_{t=1}^T\big(F(x_t) - F(x^*)\big) \leq \frac{(G_1^2 + \lambda_0^2 G_2^2)(1 + \ln T)}{2\beta} + \sum_{t=1}^T \zeta_t(x^*) - \frac{\beta}{4}\sum_{t=1}^T \|x^* - x_t\|_2^2,$$
where x* = arg min_{x∈K} f(x).
In order to prove Theorem 2, we need the following result, an improved martingale inequality.
Lemma 4. For any fixed x ∈ B, define $D_T = \sum_{t=1}^T \|x_t - x\|_2^2$, $\Lambda_T = \sum_{t=1}^T \zeta_t(x)$, and m = ⌈log₂ T⌉. We have
$$\Pr\left(D_T \leq \frac{4}{T}\right) + \Pr\left(\Lambda_T \leq 4G_1\sqrt{D_T\ln\frac{m}{\delta}} + 4G_1\ln\frac{m}{\delta}\right) \geq 1 - \delta.$$
Proof of Theorem 2. We substitute the bound in Lemma 4 into the inequality in Lemma 3 with x = x*. We consider two cases. In the first case, we assume D_T ≤ 4/T. As a result, we have
$$\sum_{t=1}^T \zeta_t(x^*) = \sum_{t=1}^T (x^* - x_t)^\top\big(\widetilde{\nabla} f(x_t,\xi_t) - \nabla f(x_t)\big) \leq 2G_1\sqrt{T D_T} \leq 4G_1,$$
which together with the inequality in Lemma 3 leads to the bound
$$\sum_{t=1}^T\big(F(x_t) - F(x^*)\big) \leq 4G_1 + \frac{(G_1^2 + \lambda_0^2 G_2^2)(1 + \ln T)}{2\beta}.$$
In the second case, we assume
$$\sum_{t=1}^T \zeta_t(x^*) \leq 4G_1\sqrt{D_T\ln\frac{m}{\delta}} + 4G_1\ln\frac{m}{\delta} \leq \frac{\beta}{4} D_T + \frac{16G_1^2}{\beta}\ln\frac{m}{\delta} + 4G_1\ln\frac{m}{\delta},$$
where the last step uses the fact $2\sqrt{ab} \leq a + b$. We thus have
$$\sum_{t=1}^T\big(F(x_t) - F(x^*)\big) \leq \frac{16G_1^2}{\beta}\ln\frac{m}{\delta} + 4G_1\ln\frac{m}{\delta} + \frac{(G_1^2 + \lambda_0^2 G_2^2)(1 + \ln T)}{2\beta}.$$
Combining the results of the two cases, we have, with a probability 1 − δ,
$$\sum_{t=1}^T\big(F(x_t) - F(x^*)\big) \leq \frac{16G_1^2}{\beta}\ln\frac{m}{\delta} + 4G_1\ln\frac{m}{\delta} + 4G_1 + \frac{(G_1^2 + \lambda_0^2 G_2^2)(1 + \ln T)}{2\beta} = O(\ln T).$$
By convexity of F(x), we have $F(\hat{x}_T) \leq F(x^*) + O(\ln T / T)$. Noting that x* ∈ K, g(x*) ≤ 0, we have $F(x^*) \leq f(x^*) + \gamma\ln 2$. On the other hand,
$$F(\hat{x}_T) = f(\hat{x}_T) + \gamma\ln\left(1 + \exp\left(\frac{\lambda_0\, g(\hat{x}_T)}{\gamma}\right)\right) \geq f(\hat{x}_T) + \max\big(0,\ \lambda_0\, g(\hat{x}_T)\big).$$
Therefore, with the value of γ = ln T / T, we have
$$f(\hat{x}_T) \leq f(x^*) + O\!\left(\frac{\ln T}{T}\right), \qquad (15)$$
$$f(\hat{x}_T) + \lambda_0\, g(\hat{x}_T) \leq f(x^*) + O\!\left(\frac{\ln T}{T}\right). \qquad (16)$$
Applying the inequalities (13) and (14) to (16), and noting that γ = ln T / T, we have
$$\lambda_0\rho\,\|\hat{x}_T - \tilde{x}_T\|_2 \leq G_1\|\hat{x}_T - \tilde{x}_T\|_2 + O\!\left(\frac{\ln T}{T}\right).$$
For λ_0 > G_1/ρ, we have $\|\hat{x}_T - \tilde{x}_T\|_2 \leq \frac{1}{\lambda_0\rho - G_1}\,O(\ln T / T)$. Therefore
$$f(\tilde{x}_T) = f(\tilde{x}_T) - f(\hat{x}_T) + f(\hat{x}_T) \leq G_1\|\hat{x}_T - \tilde{x}_T\|_2 + f(x^*) + O\!\left(\frac{\ln T}{T}\right) \leq f(x^*) + O\!\left(\frac{\ln T}{T}\right),$$
where in the second inequality we use inequality (15).
6 Conclusions
In the present paper, we made progress towards making the SGD method efficient by proposing a framework in which it is possible to exclude the projection steps from the SGD algorithm. We have proposed two novel algorithms to overcome the computational bottleneck of the projection step in applying SGD to optimization problems with complex domains. We showed using novel theoretical analysis that the proposed algorithms can achieve an O(1/√T) convergence rate for general convex functions and an O(ln T / T) rate for strongly convex functions with overwhelming probability, rates which are known to be optimal (up to a logarithmic factor) for stochastic optimization.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful suggestions. This work was supported in part by the National Science Foundation (IIS-0643494) and the Office of Naval Research (Award N000141210431 and Award N00014-09-1-0663).
References
[1] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235–3249, 2012.
[2] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS, pages 451–459, 2011.
[3] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms, 6(4), 2010.
[6] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, pages 272–279, 2008.
[7] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics, 3, 1956.
[8] E. Hazan. Sparse approximate solutions to semidefinite programs. In LATIN, pages 306–316, 2008.
[9] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings Track, 19:421–436, 2011.
[10] E. Hazan and S. Kale. Projection-free online learning. In ICML, 2012.
[11] M. Jaggi. Sparse Convex Optimization Methods for Machine Learning. PhD thesis, ETH Zurich, Oct. 2011.
[12] M. Jaggi and M. Sulovský. A simple algorithm for nuclear norm regularized problems. In ICML, pages 471–478, 2010.
[13] G. Lan. An optimal method for stochastic composite optimization. Math. Program., 133(1-2):365–397, 2012.
[14] J. Liu and J. Ye. Efficient euclidean projections in linear time. In ICML, page 83, 2009.
[15] M. Mahdavi, R. Jin, and T. Yang. Trading regret for efficiency: online convex optimization with long term constraints. JMLR, 13:2465–2490, 2012.
[16] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19:1574–1609, 2009.
[17] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127–152, 2005.
[19] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807–814, 2007.
[20] Y. Ying and P. Li. Distance metric learning with eigenvalue optimization. JMLR, 13:1–26, 2012.
[21] T. Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Transactions on Information Theory, 49:682–691, 2003.
[22] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.
4,195 | 4,798 |
Ensemble weighted kernel estimators
for multivariate entropy estimation
Kumar Sricharan, Alfred O. Hero III
Department of EECS
University of Michigan
Ann Arbor, MI 48104
{kksreddy,hero}@umich.edu
Abstract
The problem of estimation of entropy functionals of probability densities has received much attention in the information theory, machine learning and statistics communities. Kernel density plug-in estimators are simple, easy to implement and widely used for estimation of entropy. However, for large feature dimension d, kernel plug-in estimators suffer from the curse of dimensionality: the MSE rate of convergence is glacially slow - of order O(T^{−γ/d}), where T is the number of samples, and γ > 0 is a rate parameter. In this paper, it is shown that for sufficiently smooth densities, an ensemble of kernel plug-in estimators can be combined via a weighted convex combination, such that the resulting weighted estimator has a superior parametric MSE rate of convergence of order O(T^{−1}). Furthermore, it is shown that these optimal weights can be determined by solving a convex optimization problem which does not require training data or knowledge of the underlying density, and therefore can be performed offline. This novel result is remarkable in that, while each of the individual kernel plug-in estimators belonging to the ensemble suffers from the curse of dimensionality, by appropriate ensemble averaging we can achieve parametric convergence rates.
1 Introduction
Non-linear entropy functionals of a multivariate density f of the form ∫ g(f(x), x) f(x) dx arise in applications including machine learning, signal processing, mathematical statistics, and statistical communication theory. Important examples of such functionals include Shannon and Rényi entropy. Entropy based applications include image registration and texture classification, ICA, anomaly detection, data and image compression, testing of statistical models and parameter estimation. For details and other applications, see, for example, Beirlant et al. [2] and Leonenko et al. [18]. In these applications, the functional of interest must be estimated empirically from sample realizations of the underlying densities. Several estimators of entropy measures have been proposed for general multivariate densities f. These include consistent estimators based on histograms [10, 2], kernel density plug-in estimators, entropic graphs [5, 20], gap estimators [24] and nearest neighbor distances [8, 18, 19].
Kernel density plug-in estimators [1, 6, 11, 15, 12] are simple, easy to implement, computationally fast and therefore widely used for estimation of entropy [2, 23, 14, 4, 13]. However, these estimators suffer from mean squared error (MSE) rates which typically grow with feature dimension d as O(T^{−γ/d}), where T is the number of samples and γ is a positive rate parameter.
In this paper, we propose a novel weighted ensemble kernel density plug-in estimator of entropy Ĝ_w that achieves parametric MSE rates of O(T^{−1}) when the feature density is smooth. The estimator is constructed as a weighted convex combination Ĝ_w = Σ_{l∈l̄} w(l) Ĝ_{k(l)} of individual kernel density plug-in estimators Ĝ_{k(l)} with respect to the weights {w(l); l ∈ l̄}. Here, l̄ is a vector of indices {l₁, …, l_L} and k(l) = l√(T/2) is proportional to the volume of the kernel bins used in evaluating Ĝ_{k(l)}. The individual kernel estimators Ĝ_{k(l)} are similar to the data-split kernel estimator of Györfi and van der Meulen [11], and have slow MSE rates of convergence of order O(T^{−1/(1+d)}). Please refer to Section 2 for the exact definition of Ĝ_{k(l)}.
The principal result presented in this paper is as follows. It is shown that the weights {w(l); l ∈ l̄} can be chosen so as to significantly improve the rate of MSE convergence of the weighted estimator Ĝ_w. In fact our ensemble averaging method can improve the MSE convergence of Ĝ_w to the parametric rate O(T^{−1}). These optimal weights can be determined by solving a convex optimization problem. Furthermore, this optimization problem does not involve any density-dependent parameters and can therefore be performed offline.
1.1 Related work
Ensemble based methods have been previously proposed in the context of classification. For example, in both boosting [21] and multiple kernel learning [16] algorithms, lower complexity weak learners are combined to produce classifiers with higher accuracy. Our work differs from these methods in several ways. First and foremost, our proposed method performs estimation rather than classification. An important consequence of this is that the weights we use are data independent, while the weights in boosting and multiple kernel learning must be estimated from training data since they depend on the unknown distribution.
Birge and Massart [3] show that for a density f in a Hölder smoothness class with s derivatives, the minimax MSE rate for estimation of a smooth functional is T^{−2γ}, where γ = min{1/2, 4s/(4s + d)}. This means that for s > d/4, parametric rates are achievable. The kernel estimators proposed in this paper require higher order smoothness conditions on the density, i.e., the density must be s > d times differentiable. While there exist other estimators [17, 7] that achieve parametric MSE rates of O(1/T) when s > d/4, these estimators are more difficult to implement than kernel density estimators, which are a staple of many toolboxes in machine learning, pattern recognition, and statistics. The proposed ensemble weighted estimator is a simple weighted combination of off-the-shelf kernel density estimators.
1.2 Organization
The remainder of the paper is organized as follows. We formally describe the kernel plug-in entropy estimators for entropy estimation in Section 2 and discuss the MSE convergence properties of these estimators. In particular, we establish that these estimators have an MSE rate which decays as O(T^{−1/(1+d)}). Next, we propose the weighted ensemble of kernel entropy estimators in Section 3. Subsequently, we provide an MSE-optimal set of weights as the solution to a convex optimization problem (3.4) and show that the resultant optimally weighted estimator has an MSE of O(T^{−1}). We present simulation results in Section 4 that illustrate the superior performance of this ensemble entropy estimator in the context of (i) estimation of the Panter-Dite distortion-rate factor [9] and (ii) testing the probability distribution of a random sample. We conclude the paper in Section 5.
Notation
We will use bold face type to indicate random variables and random vectors and regular type face for constants. We denote the expectation operator by the symbol E, the variance operator as V[X] = E[(X − E[X])²], and the bias of an estimator by B.
2 Entropy estimation
This paper focuses on the estimation of general non-linear functionals G(f) of d-dimensional multivariate densities f with known support S = [a, b]^d, where G(f) has the form
$$G(f) = \int g(f(x), x)\, f(x)\, d\mu(x), \qquad (2.1)$$
for some smooth function g(f, x). Let B denote the boundary of S. Here, μ denotes the Lebesgue measure and E denotes statistical expectation with respect to the density f. Assume that T = N + M i.i.d. realizations of feature vectors {X₁, …, X_N, X_{N+1}, …, X_{N+M}} are available from the density f. In the sequel f will be called the feature density.
2.1 Plug-in estimators of entropy
A truncated kernel density estimator with uniform kernel is defined below. Our proposed weighted ensemble method applies to other types of kernels as well, but we specialize to uniform kernels as it makes the derivations clearer. For integer 1 ≤ k ≤ M, define the distance d_k to be d_k = (k/M)^{1/d}. Define the truncated kernel bin region for each X ∈ S to be S_k(X) = {Y ∈ S : ‖X − Y‖₁ ≤ d_k/2}, and the volume of the truncated kernel bins to be V_k(X) = ∫_{S_k(X)} dz. Note that when the smallest distance from X to the boundary B is greater than d_k, V_k(X) = d_k^d = k/M. Let l_k(X) denote the number of points falling in S_k(X): l_k(X) = Σ_{i=1}^M 1_{{X_i ∈ S_k(X)}}. The truncated kernel density estimator is defined as
$$\tilde{f}_k(X) = \frac{l_k(X)}{M\, V_k(X)}. \qquad (2.2)$$
The plug-in estimator of the density functional is constructed using a data splitting approach as follows. The data is randomly subdivided into two parts {X₁, …, X_N} and {X_{N+1}, …, X_{N+M}} of N and M points respectively. In the first stage, we compute the kernel density estimate f̃_k at the N points {X₁, …, X_N} using the M realizations {X_{N+1}, …, X_{N+M}}. Subsequently, we use the N samples {X₁, …, X_N} to approximate the functional G(f) and obtain the plug-in estimator:
$$\hat{G}_k = \frac{1}{N}\sum_{i=1}^N g(\tilde{f}_k(X_i), X_i). \qquad (2.3)$$
Also define a standard kernel density estimator with uniform kernel f̂_k(X), which is identical to f̃_k(X) except that the volume V_k(X) is always set to be V_k(X) = k/M. Define
$$\tilde{G}_k = \frac{1}{N}\sum_{i=1}^N g(\hat{f}_k(X_i), X_i). \qquad (2.4)$$
The estimator G̃_k is identical to the estimator of Györfi and van der Meulen [11]. Observe that the implementation of G̃_k, unlike Ĝ_k, does not require knowledge about the support of the density.
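For concreteness, the following Python sketch implements the standard plug-in estimator G̃_k of (2.4) for Shannon entropy, i.e. g(f, x) = −log f. Boundary truncation (the V_k(X) integral in (2.2)) is omitted, a cube-shaped kernel bin is used so that its volume equals d_k^d = k/M, and the 50/50 split and the clipping of empty bins are illustrative assumptions:

```python
import numpy as np

def standard_plug_in_entropy(X, k):
    """Sketch of G_tilde_k in (2.4) with g(f, x) = -log f (Shannon entropy)."""
    T, d = X.shape
    N = T // 2
    M = T - N
    X_eval, X_dens = X[:N], X[N:]            # data split of Section 2.1
    d_k = (k / M) ** (1.0 / d)               # bin side length

    ent = 0.0
    for x in X_eval:
        # count density samples in the cube bin of side d_k around x
        l = np.sum(np.max(np.abs(X_dens - x), axis=1) <= d_k / 2.0)
        f_hat = max(l, 1) / (M * (k / M))    # V_k = k/M; clip l >= 1 to avoid log(0)
        ent -= np.log(f_hat)
    return ent / N

# toy usage: the entropy of the uniform density on [0,1]^2 is 0
rng = np.random.default_rng(0)
X = rng.random((1000, 2))
print(standard_plug_in_entropy(X, k=22))     # near 0, up to boundary bias
```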
2.1.1 Assumptions
We make a number of technical assumptions that will allow us to obtain tight MSE convergence rates for the kernel density estimators defined above. These assumptions are comparable to other rigorous treatments of entropy estimation. Please refer to Section II of [2] for details. (A.0): Assume that the kernel bandwidth satisfies k = k₀M^β for any rate constant 0 < β < 1, and assume that M, N and T are linearly related through the proportionality constant α_frac with 0 < α_frac < 1, M = α_frac T and N = (1 − α_frac)T. (A.1): Let the feature density f be uniformly bounded away from 0 and upper bounded on the set S, i.e., there exist constants ε₀, ε∞ such that 0 < ε₀ ≤ f(x) ≤ ε∞ < ∞ for all x ∈ S. (A.2): Assume that the density f has continuous partial derivatives of order d in the interior of the set S, and that these derivatives are upper bounded. (A.3): Assume that the function g(f, x) has max{λ, d} partial derivatives w.r.t. the argument f, where λ satisfies the condition λβ > 1. Denote the n-th partial derivative of g(f, x) w.r.t. x by g^{(n)}(f, x). Also, let g′(f, x) := g^{(1)}(f, x) and g″(f, x) := g^{(2)}(f, x). (A.4): Assume that the absolute value of the functional g(f, x) and its partial derivatives are strictly bounded away from ∞ in the range ε₀ < f < ε∞ for all x. (A.5): Let ε ∈ (0, 1) and δ ∈ (2/3, 1). Let C(M) be a positive function satisfying the condition C(M) = O(exp(−M^{β(1−δ)})). For some fixed 0 < ε < 1, define p_l = (1 − ε)ε₀ and p_u = (1 + ε)ε∞. Assume that the following four conditions are satisfied by h(f, x) = g(f, x), g^{(3)}(f, x) and g^{(λ)}(f, x): (i) sup_x |h(0, x)| = G₁ < ∞, (ii) sup_{f∈(p_l,p_u),x} |h(f, x)| = G₂/4 < ∞, (iii) sup_{f∈(1/k,p_u),x} |h(f, x)| C(M) = G₃ < ∞, and (iv) E[sup_{f∈(p_l,2^d M/k),x} |h(f, x)|] C(M) = G₄ < ∞.
2.1.2 Analysis of MSE
Under these assumptions, we have shown the following (please see [22] for the proof):
Theorem 1. The bias of the plug-in estimators Ĝ_k, G̃_k is given by
$$\mathbb{B}(\hat{G}_k) = \sum_{i\in I} c_{1,i}\left(\frac{k}{M}\right)^{i/d} + \frac{c_2}{k} + o\!\left(\frac{1}{k} + \frac{k}{M}\right),$$
$$\mathbb{B}(\tilde{G}_k) = c_1\left(\frac{k}{M}\right)^{1/d} + \frac{c_2}{k} + o\!\left(\frac{1}{k} + \frac{k}{M}\right).$$
Theorem 2. The variance of the plug-in estimators Ĝ_k, G̃_k is given by
$$\mathbb{V}(\hat{G}_k) = c_4\frac{1}{N} + c_5\frac{1}{M} + o\!\left(\frac{1}{M} + \frac{1}{N}\right),$$
$$\mathbb{V}(\tilde{G}_k) = c_4\frac{1}{N} + c_5\frac{1}{M} + o\!\left(\frac{1}{M} + \frac{1}{N}\right).$$
In the above expressions, c_{1,i}, c_1, c_2, c_4 and c_5 are constants that depend only on g, f and their partial derivatives, and I = {1, …, d}. In particular, the constants c_{1,i}, c_1, c_2, c_4 and c_5 are independent of k, N and M.
2.1.3 Optimal MSE rate
From Theorem 1, we require k → ∞ and k/M → 0 for the estimators Ĝ_k and G̃_k to be asymptotically unbiased. Likewise, from Theorem 2, we require N → ∞ and M → ∞ for the variance of the estimator to converge to 0. We can optimize the choice of bandwidth k, and the data splitting proportions N/(N + M), M/(N + M), for minimum MSE.
Minimizing the MSE over k is equivalent to minimizing the bias over k. The optimal choice of k is given by k_opt = O(M^{1/(1+d)}), and the bias evaluated at k_opt is O(M^{−1/(1+d)}). Also observe that the MSE of Ĝ_k and G̃_k is dominated by the squared bias (O(M^{−2/(1+d)})) as contrasted to the variance (O(1/N + 1/M)). This implies that the asymptotic MSE rate of convergence is invariant to the selected proportionality constant α_frac. The optimal MSE for the estimators Ĝ_k and G̃_k is therefore achieved for the choice of k = O(M^{1/(1+d)}), and is given by O(T^{−2/(1+d)}). In particular, observe that both Ĝ_k and G̃_k have identical optimal rates of MSE convergence. Our goal is to reduce the estimator MSE to O(T^{−1}). We do so by applying the method of weighted ensembles described next in Section 3.
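The bandwidth tradeoff can also be checked numerically. The sketch below (with c₁ = c₂ = 1 assumed) minimizes the bias proxy (k/M)^{1/d} + 1/k over k and confirms that the minimizer scales as M^{1/(1+d)}:

```python
import numpy as np

d, M = 4, 10**6
k = np.arange(1.0, M)
bias_proxy = (k / M) ** (1.0 / d) + 1.0 / k   # c1 = c2 = 1 assumed
# the empirical minimizer and M^(1/(1+d)) are of the same order
print(int(k[np.argmin(bias_proxy)]), M ** (1.0 / (1 + d)))
```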
3 Ensemble estimators
For a positive integer L > d, choose l̄ = {l₁, …, l_L} to be a vector of distinct positive real numbers. Define the mapping k(l) = l√M and let k̄ = {k(l); l ∈ l̄}. Observe that any k ∈ k̄ corresponds to the rate constant β = 0.5, and that N = Θ(T) and M = Θ(T). Define the weighted ensemble estimator
$$\hat{G}_w = \sum_{l\in\bar{l}} w(l)\,\hat{G}_{k(l)}. \qquad (3.1)$$
Theorem 3. There exists a weight vector w* such that
$$\mathbb{E}\big[(\hat{G}_{w^*} - G(f))^2\big] = O(1/T).$$
This weight vector can be found by solving a convex optimization problem. Furthermore, this optimal weight vector does not depend on the unknown feature density f or the samples {X₁, …, X_{N+M}}, and hence the optimization can be solved offline.
Proof. For each i ∈ I, define γ_w(i) = Σ_{l∈l̄} w(l) l^{i/d}. The bias of the ensemble estimator follows from Theorem 1 and is given by
$$\mathbb{B}[\hat{G}_w] = \sum_{i\in I} c_{1,i}\,\gamma_w(i)\, M^{-i/2d} + O\!\left(\frac{1}{\sqrt{T}}\right). \qquad (3.2)$$
Denote the covariance matrix of {Ĝ_{k(l)}; l ∈ l̄} by Σ_L. Let Σ̄_L = Σ_L T. Observe that by (2.5) and the Cauchy-Schwarz inequality, the entries of Σ̄_L are O(1). The variance of the weighted estimator Ĝ_w can then be bounded as follows:
$$\mathbb{V}[\hat{G}_w] = \mathbb{V}\Big[\sum_{l\in\bar{l}} w(l)\,\hat{G}_{k(l)}\Big] = w^\top\Sigma_L w = \frac{w^\top\bar{\Sigma}_L w}{T} \leq \frac{\lambda_{\max}(\bar{\Sigma}_L)\,\|w\|_2^2}{T}. \qquad (3.3)$$
We seek a weight vector w that (i) ensures that the bias of the weighted estimator is O(T^{−1/2}) and (ii) has low ℓ₂ norm ‖w‖₂ in order to limit the contribution of the variance of the weighted estimator. To this end, let w* be the solution to the convex optimization problem
$$\text{minimize}_w\ \|w\|_2 \quad\text{subject to}\quad \sum_{l\in\bar{l}} w(l) = 1, \qquad |\gamma_w(i)| = 0,\ i\in I. \qquad (3.4)$$
This problem is equivalent to minimizing ‖w‖₂ subject to A₀w = b, where A₀ and b are defined below. Let f_IN : I → {1, …, I} be a bijective mapping. Let a₀ be the vector of ones [1, 1, …, 1]_{1×L}; and let a_{f_IN(i)}, for i ∈ I, be given by a_{f_IN(i)} = [l₁^{i/d}, …, l_L^{i/d}]. Define A₀ = [a₀ᵀ, a₁ᵀ, …, a_Iᵀ]ᵀ, A₁ = [a₁ᵀ, …, a_Iᵀ]ᵀ and b = [1; 0; 0; …; 0]_{(I+1)×1}. Observe that the entries of A₀ and b are O(1), and therefore the entries of the solution w* are O(1). Consequently, by (3.2), the bias B[Ĝ_{w*}] = O(1/√T). Furthermore, the optimal minimum η(d) := ‖w*‖₂² is given by η(d) = det(A₁A₁ᵀ)/det(A₀A₀ᵀ). By (3.3), the estimator variance V[Ĝ_{w*}] is of order O(η(d)/T). This concludes the proof.
While we have illustrated the weighted ensemble method only in the context of kernel estimators, this method can be applied to any general ensemble of estimators that satisfy the bias and variance conditions C.1 and C.2 in [22].
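Since (3.4) is a minimum-norm problem under linear equality constraints, the optimal weights have the closed form w* = A₀ᵀ(A₀A₀ᵀ)⁻¹b. The following Python sketch computes them; function and variable names are illustrative:

```python
import numpy as np

def ensemble_weights(l_bar, d):
    """Sketch of (3.4): min ||w||_2 s.t. sum_l w(l) = 1 and
    gamma_w(i) = sum_l w(l) * l**(i/d) = 0 for i = 1, ..., d."""
    l_bar = np.asarray(l_bar, dtype=float)        # L > d distinct positive reals
    A0 = np.vstack([np.ones_like(l_bar)] +
                   [l_bar ** (i / d) for i in range(1, d + 1)])   # (d+1) x L
    b = np.zeros(d + 1)
    b[0] = 1.0
    return A0.T @ np.linalg.solve(A0 @ A0.T, b)   # minimum-norm solution

w_star = ensemble_weights([1.0, 1.5, 2.0, 2.5, 3.0], d=2)
print(w_star.sum())   # = 1; the constraints gamma_w(i) = 0 hold up to numerics
```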
4 Experiments
We illustrate the superior performance of the proposed weighted ensemble estimator for two applications: (i) estimation of the Panter-Dite rate distortion factor, and (ii) estimation of entropy to test for randomness of a random sample.
4.1 Panter-Dite factor estimation
For a d-dimensional source with underlying density f, the Panter-Dite distortion-rate function [9] for a q-dimensional vector quantizer with n levels of quantization is given by δ(n) = n^{−2/q} ∫ f^{q/(q+2)}(x) dx. The Panter-Dite factor corresponds to the
[Figure 1: Variation of MSE of Panter-Dite factor estimates using the standard kernel plug-in estimator [12], truncated kernel plug-in estimator (2.3), histogram plug-in estimator [11], k-NN estimator [19], entropic graph estimator [6,21] and the weighted ensemble estimator (3.1). (a) MSE as a function of sample size T: the proposed weighted estimator has the fastest MSE rate of convergence with respect to sample size T. (b) MSE as a function of dimension d: the MSE of the proposed weighted estimator has the slowest rate of growth with increasing dimension d.]
functional G(f) with g(f, x) = n^{−2/q} f^{−2/(q+2)} I(f > 0) + I(f = 0), where I(·) is the indicator function. The Panter-Dite factor is directly related to the Rényi α-entropy, for which several other estimators have been proposed.
In our simulations we compare six different choices of functional estimators - the three estimators previously introduced: (i) the standard kernel plug-in estimator G̃_k, (ii) the boundary truncated plug-in estimator Ĝ_k and (iii) the weighted estimator Ĝ_w with optimal weight w = w* given by (3.4), and in addition the following popular entropy estimators: (iv) the histogram plug-in estimator [10], (v) the k-nearest neighbor (k-NN) entropy estimator [18] and (vi) the entropic k-NN graph estimator [5, 20]. For both G̃_k and Ĝ_k, we select the bandwidth parameter k as a function of M according to the optimal proportionality k = M^{1/(1+d)} and N = M = T/2. To illustrate the weighted estimator of the Panter-Dite factor we assume that f is the d = 6 dimensional mixture density f(a, b, p, d) = p f_β(a, b, d) + (1 − p) f_u(d), where f_β(a, b, d) is a d-dimensional Beta density with parameters a = 6, b = 6, f_u(d) is a d-dimensional uniform density and the mixing ratio p is 0.8.
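A sampler for this beta-uniform mixture is sketched below; names are illustrative, and the product-Beta form of f_β is an assumption consistent with the d-dimensional Beta density described above:

```python
import numpy as np

def sample_mixture(a, b, p, d, n, rng):
    """Draw n samples from f(a,b,p,d) = p * Beta(a,b)^d + (1-p) * Uniform[0,1]^d."""
    X = rng.random((n, d))                   # uniform component
    is_beta = rng.random(n) < p              # mixture labels
    X[is_beta] = rng.beta(a, b, size=(int(is_beta.sum()), d))
    return X

rng = np.random.default_rng(0)
X = sample_mixture(a=6, b=6, p=0.8, d=6, n=3000, rng=rng)
```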
4.1.1 Variation of MSE with sample size T
The MSE results of these different estimators are shown in Fig. 1(a) as a function of sample size T. It is clear from the figure that the proposed ensemble estimator Ĝ_w has a significantly faster rate of convergence, while the MSEs of the rest of the estimators, including the truncated kernel plug-in estimator, have similar, slow rates of convergence. It is therefore clear that the proposed optimal ensemble averaging significantly accelerates the MSE convergence rate.
[Figure 2: Entropy estimates using the standard kernel plug-in estimator, truncated kernel plug-in estimator and the weighted estimator, for random samples corresponding to hypotheses H0 and H1. (a) Entropy estimates for random samples corresponding to hypotheses H0 and H1. (b) Histogram envelopes of entropy estimates for random samples corresponding to hypothesis H0 (blue) and H1 (red). The weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance.]
4.1.2 Variation of MSE with dimension d
The MSE results of these different estimators are shown in Fig. 1(b) as a function of dimension d, for fixed sample size T = 3000. For the standard kernel plug-in estimator and the truncated kernel plug-in estimator, the MSE varies exponentially with d as expected. The MSEs of the histogram and k-NN estimators increase at a similar rate, indicating that these estimators suffer from the curse of dimensionality as well. The MSE of the weighted estimator, on the other hand, increases at a slower rate, which is in agreement with our theory that the MSE is O(η(d)/T), observing that η(d) is an increasing function of d. Also observe that the MSE of the weighted estimator is significantly smaller than the MSE of the other estimators for all dimensions d > 3.
4.2 Distribution testing
In this section, Shannon differential entropy is estimated using the function g(f, x) = −log(f) I(f > 0) + I(f = 0) and used as a test statistic to test for the underlying probability distribution of a random sample. In particular, we draw 500 instances each of random samples of size 10³ from the probability distribution f(a, b, p, d), described in Sec. 4.1, with fixed d = 6, p = 0.75 for two sets of values of a, b under the null and alternate hypotheses, H0: a = a₀, b = b₀ versus H1: a = a₁, b = b₁.
First, we fix a₀ = b₀ = 6 and a₁ = b₁ = 5. We note that the underlying density under the null hypothesis, f(6, 6, 0.75, 6), has greater curvature relative to f(5, 5, 0.75, 6) and therefore has smaller entropy (randomness). The true entropy, and entropy estimates using G̃_k, Ĝ_k and Ĝ_w for the cases corresponding to each of the 500 instances of hypotheses H0 and H1, are shown in Fig. 2(a). From this figure, it is apparent that the weighted estimator provides better discrimination ability by suppressing the bias, at the cost of some additional variance.
To demonstrate that the weighted estimator provides better discrimination, we plot the histogram envelope of the entropy estimates using the standard kernel plug-in estimator, truncated kernel plug-in estimator and the weighted estimator for the cases corresponding to the hypothesis H0 (color coded blue) and H1 (color coded red) in Fig. 2(b). Furthermore, we quantitatively measure the discriminative ability of the different estimators using the deflection statistic $d_s = |\mu_1 - \mu_0|/\sqrt{\sigma_0^2 + \sigma_1^2}$, where μ₀ and σ₀ (respectively μ₁ and σ₁) are the sample mean and standard deviation of the entropy estimates.
[Figure 3: Comparison of performance in terms of ROC for the distribution testing problem. (a) ROC curves corresponding to entropy estimates obtained using the standard and truncated kernel plug-in estimators and the weighted estimator; the corresponding AUC are 0.9271, 0.9459 and 0.9619. (b) Variation of AUC vs. Δ (= a₀ − a₁, b₀ − b₁) for the Neyman-Pearson omniscient test and for entropy estimates using the standard and truncated kernel plug-in estimators and the weighted estimator. The weighted estimator uniformly outperforms the individual plug-in estimators.]
The deflection statistic
was found to be 1.49, 1.60 and 1.89 for the standard kernel plug-in estimator, truncated kernel plug-in estimator and the weighted estimator respectively. The receiver operating characteristic (ROC) curves for this test using these three different estimators are shown in Fig. 3(a). The corresponding areas under the ROC curves (AUC) are given by 0.9271, 0.9459 and 0.9619.
In our final experiment, we fix a₀ = b₀ = 10 and set a₁ = b₁ = 10 − Δ, draw 500 instances each of random samples of size 5 × 10³ under the null and alternate hypotheses, and plot the AUC as Δ varies from 0 to 1 in Fig. 3(b). For comparison, we also plot the AUC for the Neyman-Pearson likelihood ratio test. The Neyman-Pearson likelihood ratio test, unlike the Shannon entropy based tests, is an omniscient test that assumes knowledge of both the underlying beta-uniform mixture parametric model of the density and the parameter values a₀, b₀ and a₁, b₁ under the null and alternate hypotheses respectively. Fig. 3(b) shows that the weighted estimator uniformly and significantly outperforms the individual plug-in estimators and is closest to the performance of the omniscient Neyman-Pearson likelihood test. The relatively superior performance of the Neyman-Pearson likelihood test is due to the fact that the weighted estimator is a nonparametric estimator that has marginally higher variance (proportional to ‖w*‖₂²) compared to the underlying parametric model, for which the Neyman-Pearson test statistic provides the most powerful test.
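Both figures of merit used in this section can be computed directly from the two sets of entropy estimates. The sketch below uses the standard pairwise identity AUC = Pr(score under H1 > score under H0); the scoring direction is an assumption for illustration:

```python
import numpy as np

def deflection_and_auc(h0_scores, h1_scores):
    """Deflection statistic d_s = |mu1 - mu0| / sqrt(s0^2 + s1^2) and AUC."""
    mu0, mu1 = np.mean(h0_scores), np.mean(h1_scores)
    s0, s1 = np.std(h0_scores), np.std(h1_scores)
    d_s = abs(mu1 - mu0) / np.sqrt(s0**2 + s1**2)
    # AUC as the fraction of (H1, H0) pairs ranked correctly
    auc = np.mean(h1_scores[:, None] > h0_scores[None, :])
    return d_s, auc
```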
5 Conclusions
A novel method of weighted ensemble estimation was proposed in this paper. This method combines slowly converging individual estimators to produce a new estimator with a faster MSE rate of convergence. In this paper, we applied weighted ensembles to improve the MSE of a set of uniform kernel density estimators with different kernel width parameters. We showed by theory and in simulation that the improved ensemble estimator achieves a parametric MSE convergence rate of O(T^{−1}). The optimal weights are determined by solving a convex optimization problem which does not require training data and can be performed offline. The superior performance of the weighted ensemble entropy estimator was verified in the context of two important problems: (i) estimation of the Panter-Dite factor and (ii) non-parametric hypothesis testing.
Acknowledgments
This work was partially supported by ARO grant W911NF-12-1-0443.
References
[1] I. Ahmad and Pi-Erh Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.). IEEE Transactions on Information Theory, 22(3):372–375, May 1976.
[2] J. Beirlant, E. J. Dudewicz, L. Györfi, and E. C. van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6:17–40, 1997.
[3] L. Birge and P. Massart. Estimation of integral functionals of a density. The Annals of Statistics, 23(1):11–29, 1995.
[4] D. Chauveau and P. Vandekerkhove. Selection of a MCMC simulation strategy via an entropy convergence criterion. ArXiv Mathematics e-prints, May 2006.
[5] J. A. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Transactions on Signal Processing, 52(8):2210–2221, 2004.
[6] P. B. Eggermont and V. N. LaRiccia. Best asymptotic normality of the kernel density entropy estimator for smooth densities. IEEE Transactions on Information Theory, 45(4):1321–1326, May 1999.
[7] E. Giné and D. M. Mason. Uniform in bandwidth estimation of integral functionals of the density function. Scandinavian Journal of Statistics, 35:739–761, 2008.
[8] M. Goria, N. Leonenko, V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Nonparametric Statistics, 2004.
[9] R. Gupta. Quantization Strategies for Low-Power Communications. PhD thesis, University of Michigan, Ann Arbor, 2001.
[10] L. Györfi and E. C. van der Meulen. Density-free convergence properties of various estimators of entropy. Comput. Statist. Data Anal., pages 425–436, 1987.
[11] L. Györfi and E. C. van der Meulen. An entropy estimate based on a kernel density estimation. Limit Theorems in Probability and Statistics, pages 229–240, 1989.
[12] P. Hall and S. C. Morton. On the estimation of the entropy. Ann. Inst. Statist. Meth., 45:69–88, 1993.
[13] K. Hlaváčková-Schindler, M. Paluš, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441(1):1–46, 2007.
[14] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Nonparametric estimators for online signature authentication. In Acoustics, Speech, and Signal Processing, 2001. Proceedings (ICASSP'01), volume 6, pages 3473–3476. IEEE, 2001.
[15] H. Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683–697, 1989.
[16] G. Lanckriet, N. Cristianini, P. Bartlett, and L. El Ghaoui. Learning the kernel matrix with semi-definite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[17] B. Laurent. Efficient estimation of integral functionals of a density. The Annals of Statistics, 24(2):659–681, 1996.
[18] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36:2153–2182, 2008.
[19] E. Liitiäinen, A. Lendasse, and F. Corona. On the statistical estimation of Rényi entropies. In Proceedings of IEEE/MLSP 2009 International Workshop on Machine Learning for Signal Processing, Grenoble (France), September 2-4 2009.
[20] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Proc. Advances in Neural Information Processing Systems (NIPS). MIT Press, 2010.
[21] R. E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, June 1990.
[22] K. Sricharan and A. O. Hero, III. Ensemble estimators for multivariate entropy estimation. ArXiv e-prints, March 2012.
[23] C. Studholme, C. Drapaca, B. Iordanova, and V. Cardenas. Deformation-based mapping of volume change from serial brain MRI in the presence of local tissue contrast change. IEEE Transactions on Medical Imaging, 25(5):626–639, 2006.
[24] B. van Es. Estimating functionals related to a density by a class of statistics based on spacings. Scandinavian Journal of Statistics, 1992.
|
4,196 | 4,799 |
Visual Recognition using Embedded Feature
Selection for Curvature Self-Similarity
Angela Eigenstetter
HCI & IWR, University of Heidelberg
[email protected]
Björn Ommer
HCI & IWR, University of Heidelberg
[email protected]
Abstract
Category-level object detection has a crucial need for informative object representations. This demand has led to feature descriptors of ever increasing dimensionality like co-occurrence statistics and self-similarity. In this paper we propose a
new object representation based on curvature self-similarity that goes beyond the
currently popular approximation of objects using straight lines. However, like all
descriptors using second order statistics, ours also exhibits a high dimensionality.
Although improving discriminability, the high dimensionality becomes a critical
issue due to lack of generalization ability and curse of dimensionality. Given
only a limited amount of training data, even sophisticated learning algorithms
such as the popular kernel methods are not able to suppress noisy or superfluous dimensions of such high-dimensional data. Consequently, there is a natural
need for feature selection when using present-day informative features and, particularly, curvature self-similarity. We therefore suggest an embedded feature selection method for SVMs that reduces complexity and improves generalization
capability of object models. By successfully integrating the proposed curvature
self-similarity representation together with the embedded feature selection in a
widely used state-of-the-art object detection framework we show the general pertinence of the approach.
1 Introduction
One of the key challenges of computer vision is the robust representation of complex objects and
so over the years, increasingly rich features have been proposed. Starting with brightness values
of image pixels and simple edge histograms [10] descriptors evolved and more sophisticated features like shape context [1] and wavelets [23] were suggested. The probably most widely used and
best performing image descriptors today are SIFT [18] and HOG [4] which model objects based
on edge orientation histograms. Recently, there has been a trend to utilize more complicated image
statistics like co-occurrence and self-similarity [25, 5, 15, 29, 31] to build more robust descriptors.
This development shows, that the dimensionality of descriptors is getting larger and larger. Furthermore it is noticeable that all descriptors that model the object boundary rely on image statistics
that are primarily based on edge orientation. Thus, they approximate objects with straight lines.
However, it was shown in different studies within the perception community that besides orientation also curvature is an important cue when performing visual search tasks. In our earlier work
[21] we extended the modeling of object boundary contours beyond the widely used edge orientation histograms by utilizing curvature information to overcome the drawbacks of straight line
approximations. However, curvature can provide even more information about the object boundary. By computing co-occurrences between discriminatively curved boundaries we build a curvature
self-similarity descriptor that provides a more detailed and accurate object description. While it was
shown that self-similarity and co-occurrence lead to very robust and highly discriminative object
representations, these second order image statistics are also pushing feature spaces to extremely
high dimensions. Since the amount of training data stays more or less the same, the dimensionality
of the object representation has to be reduced to prevent systems from suffering from the curse of dimensionality and overfitting. Nevertheless, well designed features still increase performance. Deselaers et
al. [5], for instance, suggested an approach that results in a 160000 dimensional descriptor which
was evaluated on the ETHZ shape dataset which contains on average 30 positive object instances
per category. To exploit the full capabilities of high-dimensional representations applied in object
detection we developed a new embedded feature selection method for SVM which reliable discards
superfluous dimensions and therefore improves object detection performance.
The paper is organized as follows: First we will give a short overview of embedded feature selection
methods for SVMs (Section 2.1) and describe a novel method to capture the important dimensions
from high-dimensional representations (Section 2.2). After that we describe our new self-similarity
descriptor based on curvature to go beyond the straight line approximation of objects to a more
accurate description (Section 3). Moreover, Section 3 discusses previous work on self-similarity. In
the experimental section at the end of the paper we evaluate the suggested curvature self-similarity
descriptor along with our feature selection method.
2 Feature Selection for Support Vector Machines
2.1 Embedded Feature Selection Approaches
Guyon et al. [12] categorize feature selection methods into filters, wrappers and embedded methods.
Contrary to filters and wrappers, embedded feature selection methods incorporate feature selection as a part of the learning process (for a review see [17]). The focus of this paper is on embedded feature selection methods for SVMs, since most state-of-the-art detection systems use SVM as a classifier. To directly integrate feature selection into the learning process of SVMs, sparsity can be enforced on the model parameter w. Several researchers, e.g. [2], have considered replacing the L2 regularization term ‖w‖₂² with an L1 regularization term ‖w‖₁. Since the L1 norm penalty for SVM has some serious limitations, Wang et al. [30] suggested the doubly regularized SVM (DrSVM), which does not replace the L2 regularization but adds an additional L1 regularization to automatically select dimensions during the learning process.
Contrary to linear SVM, enforcing sparsity on the model parameter w reduces dimensionality for non-linear kernel functions in the higher dimensional kernel space rather than in the number of input features. To reduce the dimensionality for non-linear SVMs in the feature space one can introduce an additional selection vector σ ∈ [0, 1]^n, where larger values of σ_i indicate more useful features. The objective is then to find the best kernel of the form K_σ(x, z) = K(σ ∘ x, σ ∘ z), where x, z ∈ R^n are the feature vectors and ∘ is element-wise multiplication. These hyper-parameters σ can be obtained via gradient descent on a generalization bound or a validation error. Another possibility is to consider the scaling factors σ as parameters of the learning algorithm [11], where the problem was solved using a reduced conjugate gradient technique.
In this paper we integrate the scaling factors into the learning algorithm, but instead of using an L2 norm constraint on the scaling parameter σ like in [11] we apply an L1 norm sparsity which explicitly discards dimensions of the input feature vector. For the linear case our optimization problem becomes similar to DrSVM [30], where a gradient descent method is applied to find the optimal solution w*. To find a starting point a computationally costly initialization is applied, while our selection step can start at the canonical σ = 1, because w is modeled in a separate variable.
2.2 Iterative Dimensionality Reduction for SVM
An SVM classifier learns a hyperplane defined by w and b which best separates the training data {(x_i, y_i)}_{1≤i≤N} with labels y_i ∈ {−1, +1}. We follow the concept of embedded feature selection and therefore include the feature selection parameter θ directly in the SVM classifier. The corresponding optimization problem can be expressed in the following way:
min_θ min_{w,b,ξ}  (1/2) ||w||_2^2 + C Σ_{i=1}^{N} ξ_i                                   (1)

subject to:  y_i (w^T φ(θ ∗ x_i) + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   ||θ||_1 ≤ θ_0
Algorithm 1: Iterative Dimensionality Reduction for SVM
1: converged := FALSE, θ := 1
2: while converged == FALSE do
3:    [x'_l, α, b] = trainSVM(X', Y', θ, C)
4:    θ* = applyBundleMethod(X'', Y'', x'_l, α, b, C)
5:    if θ* == θ then
6:       converged := TRUE
7:    end if
8:    θ := θ*
9: end while
Figure 1: Visualization of the curvature computation. D_ik lies on the left-hand side of the vector (p_{i+l} − p_i) and therefore has a positive sign, while D'_ik lies on the right-hand side of the vector (p'_{i+l} − p'_i) and therefore gets a negative sign.
where K(x, z) := φ(x) · φ(z) is the SVM kernel function. The function φ(x) is typically unknown and represents the mapping of the feature vector x into a higher dimensional space. We enforce sparsity of the feature selection parameter θ by the last constraint of Eq. (1), which restricts the L1-norm of θ by a constant θ_0. Since SVM uses L2 normalization it does not explicitly enforce single dimensions to be exactly zero. However, this is necessary to explicitly discard unnecessary dimensions. We rewrite the problem in Eq. (1) without additional constraints in the following way:
min_θ min_{w,b}  λ ||θ||_1 + (1/2) ||w||_2^2 + C Σ_{i=1}^{N} max(0, 1 − y_i f_θ(x_i))          (2)

where the decision function f_θ is given by f_θ(x) = w^T φ(θ ∗ x) + b. Note that the last constraint of Eq. (1), which restricts the L1-norm by a constant θ_0, is rewritten here as an L1-regularization term multiplied with the sparsity parameter λ.
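To make Eq. (2) concrete, the following sketch evaluates the L1-regularized hinge-loss objective for the linear case (φ taken as the identity); the function name and the assumption that X holds one sample per row are ours, not part of the method:

import numpy as np

def objective_eq2(w, b, theta, X, y, lam, C):
    # Eq. (2), linear kernel: lam*||theta||_1 + 0.5*||w||^2 + C * sum of hinge losses.
    # X: (N, n) samples, y in {-1, +1}^N, theta in [0, 1]^n scales the input features.
    f = (X * theta) @ w + b                 # decision values f_theta(x_i)
    hinge = np.maximum(0.0, 1.0 - y * f)    # per-sample hinge loss
    return lam * np.abs(theta).sum() + 0.5 * w @ w + C * hinge.sum()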
Due to the complexity of problem (2) we propose to solve two simpler problems iteratively. We first split the training data into three sets: training {(x'_i, y'_i)}_{1≤i≤N'}, validation {(x''_i, y''_i)}_{1≤i≤N''} and a hold-out test set. Now we optimize the problem with respect to w and b for a fixed selection parameter θ using a standard SVM algorithm on the training set. The parameter θ is optimized in a second step on the validation data using an extended version of the bundle method suggested in [6]. We perform the second step of our algorithm on a separate validation set to prevent overfitting. In the first step of our algorithm, the parameter θ is fixed and the remaining problem is converted into the dual problem
max_α  Σ_{i=1}^{N'} α_i − (1/2) Σ_{i,j=1}^{N'} α_i α_j y'_i y'_j K(θ ∗ x'_i, θ ∗ x'_j)          (3)

subject to:  0 ≤ α_i ≤ C,   Σ_{i=1}^{N'} α_i y'_i = 0,
where the decision function f_θ is given by f_θ(x) = Σ_{l=1}^{m} α_l y'_l K(θ ∗ x, θ ∗ x'_l) + b, and m is the number of support vectors. Eq. (3) is solved using a standard SVM algorithm [3, 19]. The optimization of the selection parameter θ starts at the canonical solution where all dimensions are set to one. This corresponds to the solution that is usually taken as the final model in other approaches. In our approach we apply a second optimization step to explicitly eliminate dimensions which are not necessary to classify data from the validation set. Fixing the values of the Lagrange multipliers α, the support vectors x'_l and the offset b obtained by solving Eq. (3) leads to
min_θ  λ ||θ||_1 + (1/2) ||w||_2^2 + C Σ_{i=1}^{N''} max(0, 1 − y''_i f_θ(x''_i)),          (4)
which is an instance of the regularized risk minimization problem min_θ λΩ(θ) + R(θ), where Ω(θ) is a regularization term and R(θ) is an upper bound on the empirical risk. To solve such non-differentiable risk minimization problems, bundle methods have recently gained increasing interest in the machine learning community. For the case that the risk function R is non-negative and convex, it is always lower bounded by its cutting plane at a certain point θ_i:
R(θ) ≥ ⟨a_i, θ⟩ + b_i   for all i,          (5)

where a_i := ∂_θ R(θ_i) and b_i := R(θ_i) − ⟨a_i, θ_i⟩. Bundle methods build an iteratively increasing piecewise-linear lower bound of the objective function by utilizing its cutting planes. Starting with an initial solution, they solve the problem where R is approximated by one initial cutting plane using a standard solver. A second cutting plane is built at the solution of the approximated problem. The new approximated lower bound of R is then the maximum over all cutting planes. The more cutting planes are added, the more accurate the lower bound of the risk function becomes.
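A minimal sketch of this cutting-plane scheme for a convex risk R is given below; the oracle signatures and the LP subproblem solver (scipy's linprog) are illustrative choices of ours, not the exact machinery of [6]. Since θ ∈ [0, 1]^n, we have ||θ||_1 = Σ_i θ_i, so the subproblem is a linear program:

import numpy as np
from scipy.optimize import linprog

def bundle_minimize(R, subgrad, n, lam, n_iter=10):
    # Cutting-plane minimization of lam*||theta||_1 + R(theta) over theta in [0, 1]^n.
    # R and subgrad are caller-supplied oracles (risk value and a subgradient).
    theta = np.ones(n)                       # start at the canonical theta = 1
    planes = []
    for _ in range(n_iter):
        a = subgrad(theta)                   # a_i = subgradient of R at theta_i
        b = R(theta) - a @ theta             # b_i = R(theta_i) - <a_i, theta_i>
        planes.append((a, b))
        # Subproblem: min lam*sum(t) + xi  s.t.  <a_i, t> + b_i <= xi, 0 <= t <= 1.
        c = np.concatenate([lam * np.ones(n), [1.0]])
        A_ub = np.array([np.concatenate([ai, [-1.0]]) for ai, bi in planes])
        b_ub = np.array([-bi for ai, bi in planes])
        bounds = [(0.0, 1.0)] * n + [(None, None)]
        sol = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        theta = sol.x[:n]
    return theta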
For the general case of non-linear kernel functions, the problem in Eq. (4) is non-convex and therefore especially hard to optimize. In the special case of a linear kernel the problem is convex and the applied bundle method converges towards the global optimum. Some efforts have been made to adjust bundle methods to handle non-convex problems [16, 6]. We adapted the method of [6] to apply L1 regularization instead of L2 regularization and employ it to solve the optimization problem in Eq. (4). Although the convergence rate of O(1/ε) to a solution of accuracy ε [6] no longer applies for our L1 regularized version, we observed that the algorithm converges within the order of 10 iterations, which is in the same range as for the algorithm in [6]. An overview of the suggested iterative dimensionality reduction algorithm is given in Algorithm 1.
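The outer loop of Algorithm 1 can be summarized by the following sketch; train_svm and optimize_theta are hypothetical placeholders for a standard SVM solver and a θ-optimizer such as the bundle method above:

import numpy as np

def iterative_dim_reduction(train_svm, optimize_theta, X_tr, y_tr, X_val, y_val,
                            n_features, C, lam, max_outer=20):
    # Alternating optimization of Algorithm 1 (oracle signatures are hypothetical):
    # train_svm fixes theta and solves the dual (3) on the training set;
    # optimize_theta fixes the SVM and minimizes the validation risk (4).
    theta = np.ones(n_features)              # canonical starting point theta = 1
    model = None
    for _ in range(max_outer):
        model = train_svm(X_tr, y_tr, theta, C)
        theta_new = optimize_theta(X_val, y_val, model, C, lam)
        if np.array_equal(theta_new, theta): # converged: theta unchanged
            break
        theta = theta_new
    return theta, model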
3 Representing Curvature Self-Similarity
Although several methods have been suggested for the robust estimation of curvature, curvature has mainly been represented indirectly in a contour based manner [1, 32] and used to locate interest points at boundary points with high curvature value. To design a more exact object representation that captures object curvedness in a natural way, we revisit the idea of [21] and design a novel curvature self-similarity descriptor. The idea of self-similarity was first suggested by Shechtman et al. [25],
who proposed a descriptor based on local self-similarity (LSS). Instead of measuring image features directly it measures the correlation of an image patch with a larger surrounding image region.
The general idea of self-similarity was used in several methods and applications [5, 15, 29, 31]. In
[15] self-similarity is used to improve the Local Binary Pattern (LBP) descriptor for face identification. Deselaers et al. [5] explored global self-similarity (GSS) and showed its advantages over local
self-similarity (LSS) for object detection. Furthermore, Walk et al. [29] showed that using color
histograms directly is decreasing performance while using color self-similarity (CSS) as a feature
is more appropriate. Besides object classification and detection, self-similarity was also used for
action recognition [15] and turned out to be very robust to viewpoint variations.
We propose a new holistic self-similarity representation based on curvature. To make use of the
aforementioned advantages of global self-similarity we compute all pairwise curvature similarities
across the whole image. This results in a very high dimensional object representation. As mentioned
before such high dimensional representations have a natural need for dimensionality reduction which
we fulfill by applying our embedded feature selection algorithm outlined in the previous section.
To describe complex objects it is not sufficient to build a self-similarity descriptor solely based on
curvature information, since self-similarity of curvature leaves open many ambiguities. To resolve
these ambiguities we add 360 degree orientation information to get a more accurate descriptor. We
are using 360 degree orientation, since curved lines cannot be fully described by their 180 degree
orientation. This is different to straight lines, where 180 degree orientation gives us the full information about the line. Consider a half circle, with an arbitrary tangent line on it. The tangent line has
an orientation between 0 and 180 degrees. However, it does not provide information on which side
of the tangent the half circle is actually located, in contrast to a 360 degree orientation. Therefore,
using a 180 degree orientation yields high similarities between a left curved line segment and a right curved line segment.
As a first step we extract the curvature information and the corresponding 360 degree orientation of all edge pixels in the image. To estimate the curvature we follow our approach presented in [21] and use the distance accumulation method of Han et al. [13], which accurately approximates the curvedness along given 2D line segments. Let B be a set of N consecutive boundary points, B := {p_0, p_1, p_2, ..., p_{N−1}}, representing one line segment. A fixed integer value l defines a line L_i between pairs of points p_i and p_{i+l}, where i + l is taken modulo N. The perpendicular distance D_ik is computed from L_i to the point p_k using the Euclidean distance. The distance accumulation for point p_k and a chord length l is the sum h_l(k) = Σ_{i=k−l}^{k} D_ik. The distance is positive if p_k is on the left-hand side of the vector (p_{i+l} − p_i), and negative otherwise (see Figure 1 and Figure 3). To get the 360 degree orientation information we compute the gradient of the probabilistic boundary edge image [20] and extend the resulting 180 degree gradient orientation to a 360 degree orientation using the sign of the curvature.

Figure 2: Our visualization shows the original images along with their curvature self-similarity matrices, displaying the similarity between all pairs of curvature histogram cells. While the curvature self-similarity descriptor is similar for the same object category, it looks quite different for other object categories.
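A direct (unoptimized) implementation of the distance accumulation measure is sketched below, assuming a closed contour given as an (N, 2) array of points:

import numpy as np

def distance_accumulation(points, l):
    # Signed curvature estimate h_l(k) of Han et al. [13] for a closed contour.
    # For each k, sum the signed perpendicular distances D_ik from the chords
    # L_i = (p_i, p_{i+l}) to p_k, for i = k-l, ..., k (indices taken modulo N).
    N = len(points)
    h = np.zeros(N)
    for k in range(N):
        for i in range(k - l, k + 1):
            p_i = points[i % N]
            p_il = points[(i + l) % N]
            chord = p_il - p_i
            norm = np.linalg.norm(chord)
            if norm == 0:
                continue
            v = points[k] - p_i
            # 2D cross product: positive if p_k lies left of (p_{i+l} - p_i)
            h[k] += (chord[0] * v[1] - chord[1] * v[0]) / norm
    return h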
Contrary to the original curvature feature proposed in [21], where histograms of curvature are computed using differently sized image regions, we build our basic curvature feature using equally sized cells to make it more suitable for computing self-similarities. We divide the image into non-overlapping 8 × 8 pixel cells and build histograms over the curvature values in each cell. Next we do the same for the 360 degree orientation and concatenate the two histograms. This results in histograms of 28 bins: 10 bins representing the curvature and 18 bins representing the 360 degree orientation. There are many ways to define similarities between histograms. We follow the scheme that was applied to compute self-similarities between color histograms [29] and use histogram intersection as a comparison measure to compute the similarities between different curvature histograms in the same bounding box. Furthermore, we apply an L2-normalization to the final self-similarity vector. The computation of self-similarities between all curvature-orientation histograms results in an extremely high-dimensional representation. Let D be the number of cells in an image; then computing all pairwise similarities results in a D × D curvature self-similarity matrix. Some examples are shown in Figure 2. Since the similarity matrix is symmetric, we use only the upper triangle, which results in a (D · (D − 1)/2)-dimensional vector. This representation gives a very detailed description of the object.
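A minimal construction of the descriptor from precomputed per-cell histograms might look as follows (the input layout is our assumption):

import numpy as np

def curvature_self_similarity(cell_histograms):
    # cell_histograms: (D, 28) array, one 28-bin curvature+orientation histogram
    # per 8x8 cell. Pairwise histogram intersection gives a D x D similarity
    # matrix; the upper triangle (without the diagonal) is kept and L2-normalized,
    # yielding the (D*(D-1)/2)-dimensional curvature self-similarity descriptor.
    D = cell_histograms.shape[0]
    sim = np.minimum(cell_histograms[:, None, :],
                     cell_histograms[None, :, :]).sum(axis=2)
    iu = np.triu_indices(D, k=1)
    desc = sim[iu]
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc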
The higher dimensional a descriptor gets, the more likely it is to contain noisy and correlated dimensions. Furthermore, it is also intuitive that not all similarities extracted from a bounding box are helpful for describing the object. To discard such superfluous dimensions we apply our embedded feature selection method to the proposed curvature self-similarity representation.
4 Experiments
We evaluate our curvature self-similarity descriptor in combination with the suggested embedded
dimensionality reduction algorithm for the object detection task on the PASCAL dataset [7]. To
show the individual strengths of these two contributions we need to perform a number of evaluations.
Since this is not supported by the PASCAL VOC 2011 evaluation server we follow the best practice
guidelines and use the VOC 2007 dataset. Our experiments show that curvature self-similarity provides complementary information to straight lines, while our feature selection algorithm further improves performance by fulfilling its natural need for dimensionality reduction.
The common basic concept shared by many current detection systems is a high-dimensional, holistic representation learned with a discriminative classifier, mostly an SVM [28]. In particular the combination of HOG [4] and SVM constitutes the basis of many powerful recognition systems, and it has laid the foundation for numerous extensions like part based models [8, 22, 24, 33], variations of the SVM classifier [8, 27] and approaches utilizing context information [14, 26]. These systems rely on high-dimensional holistic image statistics primarily utilizing straight line approximations. In this paper we explore a direction orthogonal to these extensions and focus on how one can improve on the basic system by extending the straight line representation of HOG to a more discriminative description using curvature self-similarity. At the same time our aim is to reduce the dimensionality
Table 1: Average precision of our iterative feature reduction algorithm for linear and non-linear kernel functions using our final feature vector consisting of HOG+Curv+CurvSS. For the linear kernel function we compare our feature selection (linSVM+FS) to the L2 normalized linear SVM (linSVM) and to the doubly regularized SVM (DrSVM) [30]. For the non-linear kernel function we compare the fast intersection kernel SVM (FIKSVM) [19] with our feature selection (FIKSVM+FS).

Method     | aero bike bird boat bottle bus  car  cat  chair cow
linSVM     | 66.1 80.0 53.0 53.1 70.7  73.8 75.3 61.2 63.8  70.7
DrSVM      | 59.1 77.6 53.5 49.9 64.4  71.6 75.8 50.8 56.1  64.5
linSVM+FS  | 69.7 80.3 55.5 56.2 71.8  74.0 75.9 63.2 64.8  71.0
FIKSVM     | 80.1 74.8 57.1 59.3 63.3  73.9 77.3 77.3 69.1  66.4
FIKSVM+FS  | 80.4 74.9 57.5 62.1 66.7  73.9 78.0 80.1 70.6  69.9

Method     | table dog  horse mbike pers plant sheep sofa train tv   mean
linSVM     | 71.4  57.2 76.5  83.0  72.9 47.7  55.1  61.1 70.4  73.1 66.8
DrSVM      | 59.9  53.9 70.9  76.5  72.3 47.7  66.3  69.0 67.7  79.7 64.3
linSVM+FS  | 72.0  57.8 77.2  83.3  73.0 49.7  56.7  62.4 70.7  73.8 68.0
FIKSVM     | 64.1  61.7 74.6  70.9  79.4 47.5  62.0  59.8 76.9  69.3 68.1
FIKSVM+FS  | 67.6  64.6 79.7  74.2  79.6 53.0  64.2  64.6 77.1  69.8 70.4
of such high-dimensional representations to decrease the complexity of the learning procedure and
to improve generalization performance.
In the first part of our experiments we adjust the selection parameter λ of our iterative dimensionality
reduction technique via cross-validation. Furthermore, we compare the performance of our feature
selection algorithm to L2 regularized SVM [3, 19] and DrSVM [30]. In the second part we evaluate
the suggested curvature self-similarity feature after applying our feature selection method to it.
4.1 Evaluation of Feature Selection
All experiments in this section are performed using our final feature vector consisting of HOG,
curvature (Curv) and curvature self-similarity (CurvSS). We apply our iterative dimensionality reduction algorithm in combination with a linear L2 regularized SVM classifier (linSVM) [3] and the non-linear fast intersection kernel SVM (FIKSVM) by Maji et al. [19]. The FIKSVM is widely used
and evaluation is relatively fast compared to other non-linear kernels. Nevertheless, computational
complexity is still an issue on the PASCAL dataset. This is why on this database linear kernels are
typically used [8, 26].
Because of the high computational complexity of DrSVM and FIKSVM, we compare to these methods on a smaller train and test subset obtained from the PASCAL training and validation data in the
following way. All training and validation data from the PASCAL VOC 2007 dataset are used to
train an SVM using our final object representation on all positive samples and randomly chosen
negative samples. The resulting model is used to collect hard negative samples. The set of collected
samples is split up into three sets: training, validation and test. Out of the collected set of samples
every tenth sample is assigned to the hold out test set which is used to compare the performance of
our feature selection method. The remaining samples are randomly split into training and validation
set of equal size which are used to perform the feature selection. The reduction algorithm is applied
on 5 different training/validation splits which results in five different sets of selected features. For
each set we train an L2 norm SVM on all samples from the training and validation set using only
the remaining dimensions of the feature vector. Then we choose the feature set with the best performance on the hold-out test set. To find the best performing selection parameter λ, we repeat this procedure for different values of λ.
The performance of our dimensionality reduction algorithm is compared to the performance of
linSVM and DrSVM [30] for the case of a linear kernel. Since DrSVM solves an optimization problem similar to that of our suggested feature selection algorithm for a linear kernel, this comparison
is of particular interest. We are not comparing performance to DrSVM in the non-linear case since
Figure 3: Based on meaningful edge images one can extract accurate curvature information, which is used to build our curvature self-similarity object representation.

Figure 4: A significant number of images from PASCAL VOC feature contour artifacts, e.g. due to their size, low resolution, or compression artifacts. The edge maps are obtained from the state-of-the-art probabilistic boundary detector [20]. It is evident that objects like the sheep are not defined by their boundary shape and are thus beyond the scope of approaches based on contour shape.
it performs feature selection in the higher dimensional kernel space rather than in the original feature space. Instead we compare our feature selection method to that of FIKSVM for the non-linear case. Our feature selection method reduces the dimensionality of the feature by up to 55% for the linear case and by up to 40% in the non-linear case, while the performance in average precision is constant or increases beyond the performance of linSVM and FIKSVM. On average our feature selection increases performance by about 1.2% for linSVM and 2.3% for FIKSVM on the hold-out test set. The DrSVM actually decreases the performance of linSVM by 2.5% while discarding a similar amount of features. All in all our approach improves on the DrSVM by 3.7% (see Table 1). Our results confirm that our feature selection method reduces the amount of noisy dimensions of high-dimensional representations and therefore increases the average precision compared to linear and non-linear SVM classifiers without any feature selection. For the linear kernel we showed furthermore that the proposed feature selection algorithm achieves a gain over the DrSVM.
4.2 Object Detection using Curvature Self-Similarity
In this section we provide a structured evaluation of the parts of our final object detection system.
We use the HOG of Felzenszwalb et al. [8, 9] as the baseline system, since it is the basis for many powerful object detection systems. All detection results are measured in terms of average precision for object detection on the PASCAL VOC 2007 dataset.
To the best of our knowledge, neither curvature nor self-similarity has so far been used to perform object detection on a dataset of similar complexity to the PASCAL dataset. Deselaers et al. [5] evaluated their global self-similarity descriptor (GSS) on the simpler classification challenge on the PASCAL VOC 2007 dataset, while the object detection evaluation was performed on the ETHZ shape dataset. However, we showed in [21] that including curvature already solves the detection task almost perfectly on the ETHZ dataset. Furthermore, [21] outperforms the GSS descriptor on three categories and reaches comparable performance on the other two. Thus we evaluate on the more challenging PASCAL dataset. Since the proposed approach models the shape of curved object contours and reduces the dimensionality of the representation, we expect it to be of particular value for objects that are characterized by their shape and where their contours can be extracted using state-of-the-art methods. However, a significant number of images from PASCAL VOC are corrupted due to noise or compression artifacts (see Fig. 4). Therefore state-of-the-art edge extraction fails to provide any basis for contour based approaches on these images, and one can therefore only expect a significant gain on categories where proper edge information can be computed for a majority of the images.
Our training procedure makes use of all objects that are not marked as difficult from the training
and validation set. We evaluate the performance of our system on the full test set consisting of 4952 images containing objects from 20 categories, using a linear SVM classifier [3]. Due to the large amount of data in the PASCAL database, the usage of the intersection kernel for object detection becomes computationally intractable. Results of our final system consisting of HOG, curvature (Curv),
curvature self-similarity (CurvSS) and our embedded feature selection method (FS) are reported in
terms of average precision in Table 2. We compare our results to those of HOG [9] without the part based model. Additionally we show results of our own HOG baseline system, which uses a standard linear SVM [3] instead of the latent SVM used in [9]. Furthermore we show results with
Table 2: Detection performance in terms of average precision of the HOG baseline system, of HOG and curvature (Curv) before and after discarding noisy dimensions using our feature selection method (FS), and of our final detection system consisting of HOG, curvature (Curv) and the suggested curvature self-similarity (CurvSS) with and without feature selection (FS) on the PASCAL VOC 2007 dataset. Note that we use all data points to compute the average precision, as specified by the default experimental protocol since the VOC 2010 development kit. This yields lower but more accurate average precision measurements.

Method             | aero bike bird boat bottle bus  car  cat chair cow
HOG of [9]         | 19.0 44.5 2.9  4.2  13.5  37.7 39.0 8.3 11.4  15.8
HOG                | 20.8 43.0 2.1  5.0  13.7  37.8 38.7 6.7 12.1  16.3
HOG+Curv           | 23.0 42.6 3.7  6.7  12.4  38.6 39.9 7.5 10.0  16.9
HOG+Curv+FS        | 25.4 42.9 3.7  6.8  13.5  38.8 40.0 8.1 12.0  17.1
HOG+Curv+CurvSS    | 28.6 39.1 2.3  6.8  12.9  40.3 38.8 9.3 11.1  13.9
HOG+Curv+CurvSS+FS | 28.9 43.1 3.5  7.0  13.6  40.6 40.4 9.6 12.5  17.3

Method             | table dog horse mbike pers plant sheep sofa train tv   mean
HOG of [9]         | 10.5  2.0 43.5  29.7  24.0 3.0   11.6  17.7 28.3  32.4 20.0
HOG                | 9.8   2.2 42.4  29.5  24.3 3.8   11.5  17.6 29.0  33.4 20.0
HOG+Curv           | 13.0  3.7 46.0  30.5  25.5 4.0   8.7   18.7 32.3  33.6 20.9
HOG+Curv+FS        | 15.6  3.7 46.4  30.8  25.7 4.0   11.3  19.1 32.3  33.6 21.5
HOG+Curv+CurvSS    | 16.3  6.2 48.0  27.5  27.2 4.2   9.3   20.5 35.9  34.8 21.7
HOG+Curv+CurvSS+FS | 16.7  6.4 48.5  30.6  27.3 4.8   11.6  20.7 36.0  34.8 22.7
and without feature selection to show the individual gain of the curvature self-similarity descriptor
and our embedded feature selection algorithm.
The results show that the suggested self-similarity representation in combination with feature selection improves performance on most of the categories. All in all this results in an increase of 2.7% in
average precision compared to the HOG descriptor. One can observe that curvature information in
combination with our feature selection algorithm already improves performance over the HOG baseline, and that adding curvature self-similarity additionally increases performance by 1.2%. The
gain obtained by applying our feature selection (FS) depends obviously on the dimensionality of the
feature vector; the higher the dimensionality the more can be gained by removing noisy dimensions.
For HOG+Curv, applying our feature selection improves performance by 0.6%, while the gain for
the higher dimensional HOG+Curv+CurvSS is 1%. The results underline that curvature information provides complementary information to straight lines and that feature selection is needed when
dealing with high dimensional features like self-similarity.
5 Conclusion
We have observed that high-dimensional representations cannot be sufficiently handled by linear and
non-linear SVM classifiers. An embedded feature selection method for SVMs has therefore been
proposed in this paper, which has been demonstrated to successfully deal with high-dimensional descriptions and to increase the performance of linear and intersection kernel SVMs. Moreover, the proposed curvature self-similarity representation has been shown to add complementary information to widely used orientation histograms.¹
¹ This work was supported by the Excellence Initiative of the German Federal Government and the Frontier fund, DFG project number ZUK 49/1.

References

[1] S. Belongie, J. Malik, and J. Puzicha. Matching shapes. ICCV, 2001.
[2] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. ICML, 1998.
[3] C.-C Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on
Intelligent Systems and Technology, 2:27:1?27:27, 2011.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[5] T. Deselaers and V. Ferrari. Global and efficient self-similarity for object classification and detection.
CVPR, 2010.
[6] T.-M.-T. Do and T. Artières. Large margin training for hidden markov models with partially observed
states. ICML, 2009.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[8] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part based models. PAMI, 2010.
[9] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Discriminatively trained deformable part models, release 4. http://www.cs.brown.edu/~pff/latent-release4/.
[10] W. T. Freeman and M. Roth. Orientation histograms for hand gesture recognition. Intl. Workshop on
Automatic Face and Gesture- Recognition, 1995.
[11] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. NIPS, 2003.
[12] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157-1182, 2003.
[13] J. H. Han and T. Poston. Chord-to-point distance accumulation and planar curvature: a new approach to
discrete curvature. Pattern Recognition Letters, 22(10):1133 ? 1144, 2001.
[14] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. ECCV, 2008.
[15] I. N. Junejo, E. Dexter, I. Laptev, and P. Pérez. Cross-view action recognition from temporal self-similarities. ECCV, 2008.
[16] N. Karmitsa, M. Tanaka Filho, and J. Herskovits. Globally convergent cutting plane method for nonconvex
nonsmooth minimization. Journal of Optimization Theory and Applications, 148(3):528 ? 549, 2011.
[17] T. N. Lal, O. Chapelle, J. Weston, and A. Elisseeff. Embedded methods. In I. Guyon, S. Gunn, N. Nikravesh, and L. A. Zadeh, editors, Studies in Fuzziness and Soft Computing. Springer, 2006.
[18] D.G. Lowe. Object recognition from local scale-invariant features. ICCV, 1999.
[19] S. Maji, A. C. Berg, and J. Malik. Classification using intersection kernel support vector machines is
efficient. CVPR, 2008.
[20] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness,
color, and texture cues. PAMI, 26(5):530 ? 549, 2004.
[21] A. Monroy, A. Eigenstetter, and B. Ommer. Beyond straight lines - object detection using curvature.
ICIP, 2011.
[22] A. Monroy and B. Ommer. Beyond bounding-boxes: Learning object shape by model-driven grouping.
ECCV, 2012.
[23] C. P. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. ICCV, 1998.
[24] P. Schnitzspan, M. Fritz, S. Roth, and B. Schiele. Discriminative structure learning of hierarchical representations for object detection. CVPR, 2009.
[25] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. CVPR, 2007.
[26] Z. Song, Q. Chen, Z. Huang, Y. Hua, and S. Yan. Contextualizing object detection and classification.
CVPR, 2011.
[27] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector learning for interdependent and
structured output spaces. ICML, 2004.
[28] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, 1995.
[29] S. Walk, N. Majer, K. Schindler, and B. Schiele. New features and insights for pedestrian detection.
CVPR, 2010.
[30] L. Wang, J. Zhu, and H. Zou. The doubly regularized support vector machine. Statistica Sinica, 16, 2006.
[31] L. Wolf, T. Hassner, and Y. Taigman. Descriptor based methods in the wild. ECCV, 2008.
[32] P. Yarlagadda and B. Ommer. From meaningful contours to discriminative object shape. ECCV, 2012.
[33] L. Zhu, Y. Chen, A. Yuille, and W. Freeman. Latent hierarchical structural learning for object detection.
CVPR, pages 1062 ?1069, 2010.
4,197 | 48 |
A Neural Network Classifier Based on Coding Theory
Tzi-Dar Chiueh and Rodney Goodman
California Institute of Technology, Pasadena, California 91125
ABSTRACT
The new neural network classifier we propose transforms the
classification problem into the coding theory problem of decoding a noisy
codeword. An input vector in the feature space is transformed into an internal
representation which is a codeword in the code space, and then error correction
decoded in this space to classify the input feature vector to its class. Two classes
of codes which give high performance are the Hadamard matrix code and the
maximal length sequence code. We show that the number of classes stored in an
N-neuron system is linear in N and significantly more than that obtainable by
using the Hopfield type memory as a classifier.
I. INTRODUCTION
Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2] describes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about 0.15N for N between 30 and 100. McEliece et al. in their work [3] showed that for synchronous operation of the Hopfield memory about N/(2 log N) data vectors can be stored reliably when N is large. Abu-Mostafa [4] has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the 0.15N achieved by the Hopfield method.
Figure 1: (a) Classification problems versus (b) error control decoding problems. The feature space is B^N = {−1, 1}^N and the code space is B^L = {−1, 1}^L.
In this paper we are specifically concerned with the problem of classification as in pattern recognition. We propose a new method of building a neural network classifier, based on the well established techniques of error control coding. Consider a typical classification problem (Fig. 1(a)), in which one is given a priori a set of classes, C(α), α = 1, ..., M. Associated with each class is a feature vector which labels the class (the exemplar of the class), i.e. it is the
most representative point in the class region. The input is classified into the class with the nearest exemplar to the input. Hence for each class there is a region in the N-dimensional binary feature space B^N ≡ {1, −1}^N in which every vector will be classified to the corresponding class.
A similar problem is that of decoding a codeword in an error correcting code, as shown in Fig. 1(b). In this case codewords are constructed by design and are usually at least d_min apart. The received corrupted codeword is the input to the decoder, which then finds the nearest codeword to the input. In principle then, if the distance between codewords is greater than 2t + 1, it is possible to decode (or classify) a noisy codeword (feature vector) into the correct codeword (exemplar) provided that the Hamming distance between the noisy codeword and the correct codeword is no more than t. Note that there is no guarantee that the exemplars are uniformly distributed in B^N; consequently the attraction radius (the maximum number of errors that can occur in any given feature vector such that the vector can still be correctly classified) will depend on the minimum distance between exemplars.
Many solutions to the minimum Hamming distance classification have been proposed; the one commonly used is derived from the idea of matched filters in communication theory. Lippmann [5] proposed a two-stage neural network that solves this classification problem by first correlating the input with all exemplars and then picking the maximum by a "winner-take-all" circuit or a network composed of two-input comparators. In Figure 2, f_1, f_2, ..., f_N are the N input bits, and s_1, s_2, ..., s_M are the matching scores (similarity) of f with the M exemplars. The second block picks the maximum of s_1, s_2, ..., s_M and produces the index of the exemplar with the largest score. The main disadvantage of such a classifier is the complexity of the maximum-picking circuit; for example a "winner-take-all" net needs connection weights of large dynamic range and graded-response neurons, whilst the comparator maximum net demands M − 1 comparators organized in log_2 M stages.
Fig. 2: A matched filter type classifier, computing matching scores s_1, ..., s_M from the input bits f_1, ..., f_N followed by a maximum picker. Fig. 3: Structure of the proposed classifier; the input f = d^(α) + e is mapped to g = w^(α) + e′ in the code space and then error correction decoded to class(f).
Our main idea is thus to transform every vector in the feature space to a vector in some code space in such a way that every exemplar corresponds to a codeword in that code. The code should preferably (but not necessarily) have the property that codewords are uniformly distributed in the code space, that is, the Hamming distance between every pair of codewords is the same. With this transformation, we turn the problem of classification into the coding problem of decoding a noisy codeword. We then do error correction decoding on the vector in the code space to obtain the index of the noisy codeword and hence classify the original feature vector, as shown in Figure 3.
This paper develops the construction of such a classification machine as follows. First we consider the problem of transforming the input vectors from the feature space to the code space. We describe two hetero-associative memories for doing this: the first method uses an outer product matrix technique similar to that of Hopfield's, and the second method generates its matrix by the pseudo-inverse technique [6,7]. Given that we have transformed the problem of
associative recall, or classification, into the problem of decoding a noisy codeword, we next consider suitable codes for our machine. We require the codewords in this code to have the property of orthogonality or pseudo-orthogonality, that is, the ratio of the cross-correlation to the auto-correlation of the codewords is small. We show two classes of such good codes for this particular decoding problem, i.e. the Hadamard matrix codes and the maximal length sequence codes [8]. We next formulate the complete decoding algorithm, and describe the overall structure of the classifier in terms of a two layer neural network. The first layer performs the mapping operation on the input, and the second one decodes its output to produce the index of the class to which the input belongs.
The second part of the paper is concerned with the performance of the classifier. We first analyze the performance of this new classifier by finding the relation between the maximum number of classes that can be stored and the classification error rate. We show (when using a transform based on the outer product method) that for negligible misclassification rate and large N, a not very tight lower bound on M, the number of stored classes, is 0.22N. We then present comprehensive simulation results that confirm and exceed our theoretical expectations. The simulation results compare our method with the Hopfield model for both the outer product and pseudo-inverse methods, and for both the analog and hard limited connection matrices. In all cases our classifier exceeds the performance of the Hopfield memory in terms of the number of classes that can be reliably recovered.
II. TRANSFORM TECHNIQUES
Our objective is to build a machine that can discriminate among input vectors and classify each one of them into the appropriate class. Suppose d^(α) ∈ B^N is the exemplar of the corresponding class C(α), α = 1, 2, ..., M. Given the input f, we want the machine to be able to identify the class whose exemplar is closest to f; that is, we want to calculate the following function:

class(f) = α   if   |f − d^(α)| < |f − d^(β)|   for all β ≠ α,

where |·| denotes Hamming distance in B^N.
We approach the problem by seeking a transform Φ that maps each exemplar d^(α) in B^N to the corresponding codeword w^(α) in B^L. An input feature vector f = d^(γ) + e is thus mapped to a noisy codeword g = w^(γ) + e′, where e is the error added to the exemplar, and e′ is the corresponding error pattern in the code space. We then do error correction decoding on g to get the index of the corresponding codeword. Note that e′ may not have the same Hamming weight as e; that is, the transformation Φ may either generate more errors or eliminate errors that are present in the original input feature vector. We require Φ to satisfy the following equation:

Φ(d^(α)) = w^(α),   α = 0, 1, ..., M−1,

and Φ will be implemented using a single-layer feedforward network.
Thus we first construct a matrix T according to the sets of d^(α)'s and w^(α)'s, and define Φ as

Φ(f) = sgn(T f),

where sgn is the threshold operator that maps a vector in R^L to B^L, and R is the field of real numbers.
Let D be an N × M matrix whose αth column is d^(α), and W be an L × M matrix whose βth column is w^(β). The two possible methods of constructing the matrix for Φ are as follows:
Scheme A (outer product method) [3,6]: In this scheme the matrix T is defined as the sum of outer products of all exemplar-codeword pairs, i.e.

T^(A)_lj = Σ_{α=0}^{M−1} w_l^(α) d_j^(α),   or equivalently,   T^(A) = W D^t.
Scheme B (pseudo-inverse method) [6,7]: We want to find a matrix T^(B) satisfying the following equation:

T^(B) D = W.

In general D is not a square matrix; moreover, D may be singular, so D^{−1} may not exist. To circumvent this difficulty, we calculate the pseudo-inverse (denoted D†) of the matrix D instead of its real inverse; let D† := (D^t D)^{−1} D^t. T^(B) can be formulated as

T^(B) = W D† = W (D^t D)^{−1} D^t.
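Both constructions are a few lines of linear algebra. The sketch below is ours; it assumes the columns of D are linearly independent (M ≤ N) so that (D^t D) is invertible (np.linalg.pinv(D) is the robust alternative):

import numpy as np

def connection_matrices(D, W):
    # D: (N, M) exemplars d^(alpha) as columns; W: (L, M) codewords as columns.
    T_A = W @ D.T                               # Scheme A: T = W D^t
    T_B = W @ np.linalg.inv(D.T @ D) @ D.T      # Scheme B: T = W (D^t D)^-1 D^t
    return T_A, T_B

def phi(T, f):
    # Map a feature vector to the code space: sgn(T f), with sgn(0) -> +1.
    return np.where(T @ f >= 0, 1, -1)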
III. CODES
The codes we are looking for should preferably have the property that their codewords be distributed uniformly in B^L; that is, the distance between each two codewords must be the same and as large as possible. We thus seek classes of equidistant codes. Two such classes are the Hadamard matrix codes and the maximal length sequence codes.
First we define the term pseudo-orthogonal.

Definition: Let w^(α) = (w_0^(α), w_1^(α), ..., w_{L−1}^(α)) ∈ B^L be the αth codeword of code C, where α = 1, 2, ..., M. Code C is said to be pseudo-orthogonal iff

⟨w^(α), w^(β)⟩ = Σ_{l=0}^{L−1} w_l^(α) w_l^(β) = L if α = β, and ε if α ≠ β,   where |ε| ≪ L,

and ⟨·,·⟩ denotes the inner product of two vectors.
Hadamard Matrices: An orthogonal code of length L whose L codewords are rows or columns of an L × L Hadamard matrix. In this case ε = 0 and the distance between any two codewords is L/2. It is conjectured that such matrices exist for all L which are multiples of 4, thus providing a large class of codes [8].
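For L a power of two, such a matrix is produced by Sylvester's recursive construction; the sketch below covers only that case, not the general multiple-of-4 conjecture:

import numpy as np

def sylvester_hadamard(L):
    # Hadamard matrix of order L (L a power of two): rows are mutually
    # orthogonal +/-1 codewords, any two differing in exactly L/2 places.
    assert L >= 1 and (L & (L - 1)) == 0, "L must be a power of two"
    H = np.array([[1]])
    while H.shape[0] < L:
        H = np.block([[H, H], [H, -H]])
    return H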
Maximal Length Sequence Codes: There exists a family of maximal length sequence (also called pseudo-random or PN sequence) codes [8], generated by shift registers, that satisfy pseudo-orthogonality with ε = −1. Suppose g(x) is a primitive polynomial over GF(2) of degree D, and let L = 2^D − 1. If

f(x) = 1/g(x) = Σ_{k=0}^{∞} c_k x^k,

then c_0, c_1, ... is a periodic sequence of period L (since g(x) | x^L − 1). If code C is made up of the L cyclic shifts of

c = (1 − 2c_0, 1 − 2c_1, ..., 1 − 2c_{L−1}),

then code C satisfies pseudo-orthogonality with ε = −1. One then easily sees that the minimum distance of this code is (L − 1)/2, which gives a correcting power of approximately L/4 errors for large L.
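The code can be generated directly from the recurrence implied by g(x) f(x) = 1 over GF(2); in the sketch below (our own), g is given lowest degree first and the choice of primitive polynomial is the caller's:

import numpy as np

def pn_code(g):
    # g: coefficients (g_0, ..., g_D) of a primitive polynomial over GF(2),
    # lowest degree first, e.g. g(x) = 1 + x^2 + x^5 -> g = (1, 0, 1, 0, 0, 1).
    # Expands f(x) = 1/g(x) = sum_k c_k x^k; the c_k have period L = 2^D - 1,
    # and the code is the L cyclic shifts of (1 - 2c_0, ..., 1 - 2c_{L-1}).
    D = len(g) - 1
    L = 2 ** D - 1
    c = [1]                                   # c_0 = 1 since g_0 = 1
    for k in range(1, L):
        s = 0                                 # sum_{i>=1} g_i c_{k-i} (mod 2)
        for i in range(1, min(D, k) + 1):
            s ^= g[i] & c[k - i]
        c.append(s)
    w = 1 - 2 * np.array(c)                   # map {0, 1} -> {+1, -1}
    return np.array([np.roll(w, shift) for shift in range(L)])

For example, g = (1, 1, 1), i.e. g(x) = 1 + x + x^2, yields the three cyclic shifts of (−1, −1, 1), whose pairwise inner products are all −1, as required.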
IV. OVERALL CLASSIFIER STRUCTURE
We shall now describe the overall classifier structure. Essentially it consists of the mapping Φ followed by the error correction decoder for the maximal length sequence code or Hadamard matrix code. The decoder operates by correlating the input vector with every codeword and then thresholding the result at (L + ε)/2. The rationale of this algorithm is as follows: since the distance between every two codewords in this code is exactly (L − ε)/2 bits, the decoder should be able to correct any error pattern with less than (L − ε)/4 errors if the threshold is set halfway between L and ε, i.e. at (L + ε)/2.
Suppose the input vector to the decoder is g = w^(α) + e and e has Hamming weight s (i.e. s nonzero components); then we have

⟨g, w^(α)⟩ = L − 2s,
⟨g, w^(β)⟩ ≤ 2s + ε,   where β ≠ α.

From the above equations, if g is less than (L − ε)/4 errors away from w^(α) (i.e. s < (L − ε)/4), then ⟨g, w^(α)⟩ will be more than (L + ε)/2 and ⟨g, w^(β)⟩ will be less than (L + ε)/2 for all β ≠ α. As a result, we arrive at the following decoding algorithm:

decode(g) = sgn( W^t g − ((L + ε)/2) j ),

where j = [1, 1, ..., 1]^t, which is an M × 1 vector.
In the case when ε = −1 and there are less than (L+1)/4 errors in the input, the output will be a vector in B^M ≡ {1, −1}^M with only one component positive (+1), the index of which is the index of the class to which the input vector belongs. However, if there are more than (L+1)/4 errors, the output can be either the all negative (−1) vector (decoder failure) or another vector with one positive component (decoder error).
The function class can now be defined as the composition of Φ and decode; the overall structure of the new classifier is depicted in Figure 4. It can be viewed as a two-layer neural network with L hidden units and M output neurons. The first layer maps the input feature vector to a noisy codeword in the code space (the "internal representation") while the second one decodes the first's output and produces the index of the class to which the input belongs.
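Putting the pieces together, the whole classifier is a handful of matrix operations; the sketch below is ours (returning None on decoder failure or ambiguity is our convention, and indices are 0-based):

import numpy as np

def classify(T, W, f, eps=-1):
    # T: (L, N) mapping matrix (Scheme A or B); W: (L, M) codeword matrix with
    # pseudo-orthogonality parameter eps (eps = -1 for maximal length sequences).
    L = W.shape[0]
    g = np.where(T @ f >= 0, 1, -1)          # layer 1: internal representation
    scores = W.T @ g - (L + eps) / 2.0       # correlate and subtract threshold
    out = np.where(scores > 0, 1, -1)        # layer 2: sgn(W^t g - ((L+eps)/2) j)
    hits = np.flatnonzero(out > 0)
    return int(hits[0]) if len(hits) == 1 else None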
Figure 4: Overall architecture of the new neural network classifier. The first layer applies T^(A) or T^(B) to the input bits f_1, ..., f_N and thresholds to obtain g_1, ..., g_L; the second layer correlates with W and produces the class index.
V. PERFORMANCE ANALYSIS
From the previous section, we know that our classifier will make an error only if the transformed vector in the code space, which is the input to the decoder, has no less than (L − ε)/4 errors. We now proceed to find the error rate for this classifier in the case when the input is one of the exemplars (i.e. no error), say f = d^(β), with an outer product connection matrix for Φ. Following the approach of McEliece et al. [3], we have

(Φ d^(β))_l = sgn( Σ_{j=0}^{N−1} Σ_{α=0}^{M−1} w_l^(α) d_j^(α) d_j^(β) )
            = sgn( N w_l^(β) + Σ_{j=0}^{N−1} Σ_{α=0, α≠β}^{M−1} w_l^(α) d_j^(α) d_j^(β) ).

Assume without loss of generality that w_l^(β) = −1, and if

X ≡ Σ_{j=0}^{N−1} Σ_{α=0, α≠β}^{M−1} w_l^(α) d_j^(α) d_j^(β) ≥ N,

then the lth bit of the transformed exemplar is in error, i.e. (Φ d^(β))_l ≠ w_l^(β).
Notice that we assumed all d^(α)'s are random, namely each component of any d^(α) is the outcome of a Bernoulli trial; accordingly, X is the sum of N(M−1) independent identically distributed random variables with mean 0 and variance 1. In the asymptotic case, when N and M are both very large, X can be approximated by a normal distribution with mean 0 and variance NM. Thus

p ≡ Pr{ (Φ d^(β))_l ≠ w_l^(β) } ≈ Q(√(N/M)),   where Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt.
Next we calculate the misclassification rate of the new classifier as follows (assuming ε ≪ L):

P_e = Σ_{k=⌊L/4⌋}^{L} (L choose k) p^k (1−p)^{L−k},

where ⌊·⌋ denotes the integer floor. Since in general it is not possible to express the summation explicitly, we use the Chernoff method to bound P_e from above. Multiplying each term in the summation by a number larger than unity (e^{t(k − L/4)} with t > 0) and summing from k = 0 instead of k = ⌊L/4⌋,

P_e < Σ_{k=0}^{L} (L choose k) p^k (1−p)^{L−k} e^{t(k − L/4)} = e^{−Lt/4} (1 − p + p e^t)^L.

Differentiating the RHS of the above equation w.r.t. t and setting it to 0, we find the optimal t_0 as e^{t_0} = (1−p)/3p. The condition that t_0 > 0 implies that p < 1/4, and since we are dealing with the case where p is small, it is automatically satisfied. Substituting the optimal t_0, we obtain

P_e < ( c p^{1/4} (1−p)^{3/4} )^L,   where c = 4/3^{3/4} = 1.7547654.
From the expression for P_e, we can estimate M, the number of classes that can be classified with negligible misclassification rate, in the following way. Suppose P_e = δ, where δ ≪ 1 and p ≪ 1; as L grows, the bound above forces p toward a constant, and inverting p = Q(√(N/M)) gives √(N/M) = Q^{−1}(p). For small z we have Q^{−1}(z) ~ √(2 log(1/z)), and since δ is a fixed value, as L approaches infinity we have

M > N / (8 log c) ≈ N / 4.5.

From the above lower bound for M, one easily sees that this new machine is able to classify a constant times N classes, which is better than the number of memory items a Hopfield model can store, i.e. N/(2 log N). Although the analysis is done assuming N approaches infinity, the simulation results in the next section show that when N is moderately large (e.g. 63) the above lower bound applies.
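The constants in this analysis are easy to check numerically; the short script below is illustrative (not from the paper) and evaluates p = Q(√(N/M)), the exact tail sum for P_e and the Chernoff bound for N = L = 63:

import numpy as np
from scipy.stats import norm
from scipy.special import comb

N = L = 63
for M in (7, 14, 28):
    p = norm.sf(np.sqrt(N / M))                              # p = Q(sqrt(N/M))
    k = np.arange(L // 4, L + 1)
    Pe = (comb(L, k) * p**k * (1 - p)**(L - k)).sum()        # exact tail sum
    bound = ((4 / 3**0.75) * p**0.25 * (1 - p)**0.75) ** L   # Chernoff bound
    print(M, p, Pe, bound)
print("lower bound on M:", N / (8 * np.log(4 / 3**0.75)))    # ~ N/4.5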
VI. SIMULATION RESULTS AND A CHARACTER RECOGNITION EXAMPLE
We have simulated both the Hopfield model and our new machine (using maximal length sequence codes) for L = N = 31, 63, and for the following four cases respectively:
(i) connection matrix generated by the outer product method;
(ii) connection matrix generated by the pseudo-inverse method;
(iii) connection matrix generated by the outer product method, with the components of the connection matrix hard limited;
(iv) connection matrix generated by the pseudo-inverse method, with the components of the connection matrix hard limited.
For each case and each choice of N, the program fixes M and the number of errors in the input vector, then randomly generates 50 sets of M exemplars and computes the connection matrix for each machine. For each machine it randomly picks an exemplar and adds noise to it by randomly complementing the specified number of bits to generate 20 trial input vectors; it then simulates the machine, checks whether or not the input is classified to the nearest class, and reports the percentage of success for each machine.
The simulation results are shown in Figure 5. In each graph the horizontal axis is M and the vertical axis is the attraction radius. The data we show are obtained by collecting only those cases where the success rate is more than 98%; that is, for fixed M, the largest attraction radius (number of bits in error of the input vector) that has a success rate of more than 98%. Here we use an attraction radius of −1 to denote that for this particular M, with the input being an exemplar, the success rate is less than 98% in that machine.
Figure 5: Simulation results of the Hopfield memory and the new classifier. Each panel plots the attraction radius against M for the Hopfield model, the new classifier (OP) and the new classifier (PI): (a) N = 31, analog connection matrix; (b) N = 31, binary connection matrix; (c) N = 63, analog connection matrix; (d) N = 63, binary connection matrix.
Figure 6: Performance of the new classifier using codes of different lengths: attraction radius versus M for the Hopfield model and the new classifier (OP) with L = 63 and L = 31.
In all cases our classifier exceeds the performance of the Hopfield model in terms of the number of classes that can be reliably recovered. For example, consider the case of N = 63 and a hard limited connection matrix for both the new classifier and the Hopfield model: we find that for an attraction radius of zero, that is, no error in the input vector, the Hopfield model has a classification capacity of approximately 5, while our new model can store 47. Also, for an attraction radius of 8, that is, an average of N/8 errors in the input vector, the Hopfield model can reliably store 4 classes while our new model stores 27 classes. Another simulation (Fig. 6) using a shorter code (L = 31 instead of L = 63) reveals that by shortening the code, the performance of the classifier degrades only slightly. We therefore conjecture that it is possible to use traditional error correcting codes (e.g. BCH codes) as internal representations; however, by going to a higher rate code, one is trading minimum distance of the code (error tolerance) for complexity (number of hidden units), which implies possibly poorer performance of the classifier.
We also notice that the superiority of the pseudoinverse method over the
outer product method appears only when the connection matrices are hard
limited. The reason for this is that the pseudOinverse method is best for
decorrelating the dependency among exemplars, yet the exemplars in this
simulation are generated randomly and are presumably independent.
consequently one can not see the advantage of pseudoinverse method. For
correlated exemplars, we expect the pseudoinverse method to be clearly better
(see next example).
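The two constructions differ only in how the connection matrix is formed from the exemplar matrix X (rows are exemplars). A hedged sketch, with NumPy's pinv standing in for the paper's pseudo-inverse computation:

    import numpy as np

    def outer_product(X):
        """W = X^T X / N: exact fixed points only for (near-)orthogonal exemplars."""
        return X.T @ X / X.shape[1]

    def pseudo_inverse(X):
        """W = X^T (X X^T)^{-1} X: projector onto span(X); makes every exemplar
        an exact fixed point even when the exemplars are correlated."""
        return np.linalg.pinv(X) @ X

    X = np.sign(np.random.default_rng(1).standard_normal((5, 63)))
    W = pseudo_inverse(X)
    print(np.allclose(W @ X.T, X.T))   # exemplars are exact fixed points: True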
Next we present an example of applying this classifier to recognizing
characters. Each character is represented by a 9 x 7 pixel array; the input is
generated by flipping every pixel with 0.1 and 0.2 probability. The input is then
passed to five machines: the Hopfield memory, and the new classifier with either the
pseudo-inverse method or the outer product method, and L = 7 or L = 31. Figures 7 and 8
show the results of all 5 machines for 0.1 and 0.2 pixel flipping probability
respectively; a blank output means that the classifier refuses to make a decision.
First note that the L = 7 case is not necessarily worse than the L = 31 case; this
confirms the earlier conjecture that fewer hidden units (shorter code) only
degrade performance slightly. Also one easily sees that the pseudo-inverse
method is better than the outer product method because of the correlation
between exemplars. Both methods outperform the Hopfield memory, since the
latter mixes exemplars that are to be remembered and produces a blend of
exemplars rather than the exemplars themselves; accordingly it cannot classify
the input without mistakes.
[Figure 7 residue: 9 x 7 pixel-array character images for panels (a)-(g); the pixel patterns are not recoverable from the extracted text.]
Figure 7 The character recognition example with 10% pixel reversal
probability: (a) input (b) correct output (c) Hopfield model (d)-(g) new
classifier: (d) OP, L = 7 (e) OP, L = 31 (f) PI, L = 7 (g) PI, L = 31
[Figure 8 residue: 9 x 7 pixel-array character images for panels (a)-(g); the pixel patterns are not recoverable from the extracted text.]
Figure 8 The character recognition example with 20% pixel reversal
probability: (a) input (b) correct output (c) Hopfield model (d)-(g) new
classifier: (d) OP, L = 7 (e) OP, L = 31 (f) PI, L = 7 (g) PI, L = 31
VII. CONCLUSION
In this paper we have presented a new neural network classifier design
based on coding theory techniques. The classifier uses codewords from an error
correcting code as its internal representations. Two classes of codes which give
high performance are the Hadamard matrix codes and the maximal length
sequence codes. In performance terms we have shown that the new machine is
significantly better than using the Hopfield model as a classifier. We should also
note that when comparing the new classifier with the Hopfield model, the
increased performance of the new classifier does not entail extra complexity,
since it needs only L + M hard limiter neurons and L(N + M) connection weights
versus N neurons and N^2 weights in a Hopfield memory.
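As a back-of-the-envelope check of this complexity claim (the particular values of N, M, L are illustrative, not from the paper):

    N, M, L = 63, 27, 31    # input bits, stored classes, code length (illustrative)
    print("new classifier:", L + M, "neurons,", L * (N + M), "weights")  # 58, 2790
    print("Hopfield model:", N, "neurons,", N * N, "weights")            # 63, 3969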
In conclusion we believe that our model forms the basis of a fast, practical
method of classification with an efficiency greater than other previous neural
network techniques.
4,198 | 480 |
JANUS: Speech-to-Speech Translation Using
Connectionist and Non-Connectionist Techniques
Alex Waibel* Ajay N. Jain†
Arthur McNair Joe Tebelskis
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Louise Osterholtz
Computational Linguistics Program
Carnegie Mellon University
Hiroaki Saito
Otto Schmidbauer
Tilo Sloboda Monika Woszczyna
Keio University
Tokyo, Japan
Siemens Corporation
Munich, Germany
University of Karlsruhe
Karlsruhe, Germany
ABSTRACT
We present JANUS, a speech-to-speech translation system that utilizes
diverse processing strategies, including connectionist learning, traditional AI knowledge representation approaches, dynamic programming,
and stochastic techniques. JANUS translates continuously spoken
English and German into German, English, and Japanese. JANUS currently achieves 87% translation fidelity from English speech and 97%
from German speech. We present the JANUS system along with comparative evaluations of its interchangeable processing components, with
special emphasis on the connectionist modules.
*Also with University of Karlsruhe, Karlsruhe, Germany.
†Now with Alliant Techsystems Research and Technology Center, Hopkins, Minnesota.
1 INTRODUCTION
In an age of increasing globalization of our economies and ever more efficient communication media, one important challenge is the need for effective ways of overcoming language barriers. Human translation efforts are generally expensive and slow, thus
eliminating this possibility between individuals and around rapidly changing material (e.g.,
newscasts, newspapers). This need has recently led to a resurgence of effort in machine
translation, mostly of written language.
Much of human communication, however, is spoken, and the problem of spoken language
translation must also be addressed. If successful, speech-to-text translation systems could
lead to automatic subtitles in TV broadcasts and cross-linguistic dictation. Speech-to-speech
translation could be deployed as an interpreting telephone service in restricted domains
such as cross-linguistic hotel/conference reservations, catalog purchasing, travel planning,
etc., and eventually in general domains, such as person-to-person telephone calls. Apart
from telephone service, speech translation could facilitate multilingual negotiations and
collaboration in face-to-face or video-conferencing settings.
With the potential applications so promising, what are the scientific challenges? Speech
translation systems will need to address three distinct problems:
? Speech Recognition and Understanding: A naturally spoken utterance must be recognized and understood in the context of ongoing dialog.
? Machine Translation: A recognized message must be translated from one language
into another (or several others).
? Speech Synthesis: A translated message must be synthesized in the target language.
Considerable challenges still face the development of each of the components, let alone the
combination of the three. Among them only speech synthesis is mature enough for commercial systems to exist that can synthesize intelligible speech in several languages from
text But even here, to guarantee acceptance of the translation system, research is needed to
improve naturalness and to allow for adaptation of the output speech (in the target language) to the voice characteristics of the input speaker. Speech recognition systems to date
are generally limited in vocabulary size. and can only accept grammatically well-formed
utterances. They require improvement to handle spontaneous unrestricted dialogs. Machine
Translation systems require considerable development effort to work in a given language
pair and domain reasonably well, and generally require syntactically well-formed input
sentences. Improvements are needed to handle ill-formed sentences well and to allow for
flexibility in the face of changes in domain and language pairs.
Beyond the challenges facing each system component, the combination of the three also
introduces extra difficulties. Both the speech recognition and machine translation components must deal with spoken language: ill-formed, noisy input, both acoustically as well
as syntactically. Therefore, the speech recognition component must be concerned less with
transcription fidelity than semantic fidelity, while the MT component must try to capture
the meaning or intent of the input sentence without being guaranteed a syntactically legal
sequence of words. In addition, non-symbolic prosodic information (intonation, rhythm,
etc.) and dialog state must be taken into consideration to properly translate an input utterance. A closer cooperation between traditional signal processing and language-level processing must be achieved.
[Figure 1 residue: block diagram of the pipeline; an input utterance passes through the speech system, then either the PARSEC network or the LR parser, a parse/generation stage, and DECtalk (DTC01) synthesis of the translated utterance.]
Figure 1: High-level JANUS architecture
JANUS is our first attempt at multilingual speech translation. It is the result of a collaborative effort between ATR Interpreting Telephony Research Laboratories, Carnegie Mellon
University, Siemens Corporation, and the University of Karlsruhe. JANUS currently
accepts continuously spoken sentences from a conference registration scenario, where a fictitious caller attempts to register to an international conference. The dialogs are read aloud
from dialog scripts that make use of a vocabulary of approximately 400 words. Speakerdependent and independent versions of the input recognition systems have been developed.
JANUS currently accepts continuously spoken English and German input and produces
spoken German, English, and Japanese output as a result.
While JANUS has some of the limitations mentioned above, it is the first tri-lingual continuous large-vocabulary speech translation system to date. It is a vehicle toward overcoming
some of the limitations described. A particular focus is the trainability of system components, so that flexible, adaptive, and robust systems may result. JANUS is a hybrid system
that uses a blend of computational strategies: connectionist, statistical and knowledge
based techniques. This paper will describe each of JANUS's processing components separately and particularly highlight the relative contributions of connectionist techniques
within this ensemble. Figure 1 shows a high-level diagram of JANUS's components.
2 SPEECH RECOGNITION
Two alternative speech recognition systems are currently used in JANUS: Linked Predictive Neural Networks (LPNNs) and Learned Vector Quantization networks (LVQ) (Tebelskis et al. 1991; Schmidbauer and Tebelskis 1992). They are both connectionist,
continuous-speech recognition systems, and both have vocabularies of approximately 400
English and 400 German words. Each use statistical bigram or word-pair grammars
derived from the conference registration database. The systems are based on canonical
phoneme models (states) that can be logically concatenated in any order to create models
for different words. The need for training data with labeled phonemes can be reduced by
first bootstrapping the networks on a small amount of speech with forced phoneme boundaries, then training on the whole database using only forced word boundaries.
In the LPNN system, each phoneme model is implemented by a predictive neural network.
Each network is trained to accurately predict the next frame of speech within segments of
speech corresponding to its phoneme model. Continuous scores (prediction errors) are
accumulated for various word candidates. The LPNN module produces either a single
hypothesized sentence or the first N best hypotheses using a modified dynamic-programming beam-search algorithm (Steinbiss 1989). The LPNN system has speaker-dependent
word accuracy rates of 93% with first-best recognition, and sentence accuracy of 69%.
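As a rough sketch of the LPNN scoring idea (the real system uses trained predictive networks and a dynamic-programming beam search; the toy "networks" and the fixed alignment below are our own):

    import numpy as np

    def prediction_error(net, prev, frame):
        """Score of one phoneme model on one frame: squared prediction error."""
        pred = net(prev)
        return float(np.sum((pred - frame) ** 2))

    def word_score(phoneme_nets, frames, alignment):
        """Accumulate prediction errors along a fixed state alignment
        (real systems optimize the alignment with dynamic programming)."""
        total = 0.0
        for t, state in enumerate(alignment):
            if t == 0:
                continue
            total += prediction_error(phoneme_nets[state], frames[t - 1], frames[t])
        return total

    # toy example: 'networks' that predict the previous frame (possibly scaled)
    nets = {"AH": lambda prev: prev, "T": lambda prev: prev * 0.9}
    frames = np.random.default_rng(2).standard_normal((5, 16))
    print(word_score(nets, frames, ["AH", "AH", "T", "T", "T"]))

Lower accumulated error means a better match, so word candidates are ranked by this score.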
LVQ is a vector clustering technique based on neural networks. We have used LVQ to
automatically cluster speech frames into a set of acoustic features; these features are fed
into a set of output units that compute the emission probability for HMM states. This technique gives speaker-dependent word accuracy rates of 98%, 86%, and 82% for English
conference registration tasks of perplexity 7, 61, and 111, respectively. The sentence recognition rate at perplexity 7 is 80%.
We are also evaluating other approaches to speech recognition, such as the Multi-State
TDNN for continuous-speech (Haffner, Franzini, and Waibel 1991) and a neural-network
based word spotting system that may be useful for modeling spontaneous speech effects
(Zeppenfield and Waibel 1992). The recognitions systems' text output serves as input to
the alternative parsing modules of JANUS.
3 LANGUAGE UNDERSTANDING AND TRANSLATION
3.1 LANGUAGE ANALYSIS
The translation module of JANUS is based on the Universal Parser Architecture (UPA)
developed at Carnegie Mellon (Tomita and Carbonell 1987; Tomita and Nyberg 1988). It
is designed for efficient multi-lingual translation. Text in a source language is parsed into a
language independent frame-based inter lingual representation. From the interlingua, text
can be generated in different languages.
The system requires hand-written parsing and generation grammars for each language to
be processed. The parsing grammars are based on a Lexical Functional Grammar formalism, and are implemented using Tomita's Generalized LR parsing Algorithm (Tomita
1991). The generation grammars are compiled into LISP functions. Both parsing and generation with UPA approach real-time. Figure 2 shows an example of the input, interlingual
representation, and the output of the JANUS system
3.2 PARSEC: CONNECTIONIST PARSING
JANUS can use a connectionist parser in place of the LR parser to process the output of
the speech system. PARSEC is a structured connectionist parsing architecture that is
geared toward the problems found in spoken language (for details, see Jain 1992 (in this
volume) and Jain's PhD thesis, in preparation). PARSEC networks exhibit three strengths:
- They automatically learn to parse, and generalize well compared to hand-coded grammars.
- They tolerate several types of noise without any explicit noise modeling.
- They can learn to use multi-modal input such as pitch in conjunction with syntax and semantics.
The PARSEC network architecture relies on a variation of supervised back-propagation
learning. The architecture differs from some other connectionist approaches in that it is
highly structured, both at the macroscopic level of modules. and at the microscopic level
of connections.
Input
Hello is this the office for the conference.
Interlingual Representation
((CFNAME *is-this-phone)
(MOOD *interrogative)
(OBJECT ((NUMBER sg) (DET the)
(CFNAME *conf-office)))
(SADJUNCT1 ((CFNAME *hello))))
Output
Japanese: MOSHI MOSHI KAIGI JIMUKYOKU DESUKA
German: HALLO IST DIES DAS KONFERENZBUERO
Figure 2: Example of input, interlingua, and output of JANUS
3.2.1 Learning and Generalization
Through exposure to example output parses, PARSEC networks learn parsing behavior.
Trained networks generalize well compared to hand-written grammars. In direct tests of
coverage for the conference registration domain, PARSEC achieved 67% correct parsing
of novel sentences, whereas hand-written grammars achieved just 5%, 25%, and 38% correct. Two of the grammars were written as part of a contest with a large cash prize for best
coverage.
The process of training PARSEC networks is highly automated, and is made possible
through the use of constructive learning coupled with a robust control procedure that
dynamically adjusts learning parameters during training. Novice users of the PARSEC
system were able to train networks for parsing a German-language version of the conference registration task and a novel English air-travel reservation task.
3.2.2 Noise Tolerance
We have compared PARSEC's performance on noisy input with that of hand-written
grammars. On synthetic ungrammatical conference registration sentences, PARSEC produced acceptable interpretations 66% of the time, with the three hand-coded grammars
mentioned above performing at 2%, 38%, and 34%, respectively. We have also evaluated
PARSEC in the context of noisy speech recognition in JANUS, and this is discussed later.
3.2.3 Multi-Modal Input
A somewhat elusive goal of spoken language processing has been to utilize information
from the speech signal beyond just word sequences in higher-level processing. It is well
known that humans use such information extensively in conversation. Consider the utterances "Okay." and "Okay?" Although semantically distinct, they cannot be distinguished
based on word sequence, but pitch contours contain the necessary information (Figure 3).
[Figure 3 residue: two smoothed pitch contours. "Okay." (duration 409.1 msec, mean frequency 113.2, falling contour) and "Okay?" (duration 377.0 msec, mean frequency 137.3, rising contour); the contour plots themselves are not recoverable from the extracted text.]
Figure 3: Smoothed pitch contours.
In a grammar-based system, it is difficult to incorporate real-valued vector input in a useful way. In a PARSEC network, the vector is just another set of input units. A module of a
PARSEC network was augmented to contain an additional set of units that contained pitch
information. The pitch contours were smoothed output from the OGI Neural Network
Pitch Tracker (Barnard et al. 1991).
Within the JANUS system, the augmented PARSEC network brings new functionality.
Intonation affects translation in JANUS when using the augmented PARSEC network.
The sentence, "This is the conference office." is translated to "Kaigi jimukyoku desu."
"This is the conference office?" is translated to ''Kaigi jimukyoku desuka?" This required
no changes in the other modules of the JANUS system. It also should be possible to use
other types of information from the speech signal to aid in robust parsing (e.g. energy patterns to disambiguate clausal structure).
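A minimal sketch of the multi-modal idea: real-valued pitch samples are simply appended to the symbolic input units before the network sees them. All names and the one-hot encoding are illustrative, not PARSEC's actual representation:

    import numpy as np

    def encode_word(word, vocab):
        """One-hot word features, a stand-in for PARSEC's symbolic units."""
        x = np.zeros(len(vocab))
        x[vocab.index(word)] = 1.0
        return x

    def augment_with_pitch(word_vec, pitch_window):
        """Multi-modal input: concatenate real-valued pitch samples
        (e.g. a smoothed contour near the word) to the symbolic units."""
        return np.concatenate([word_vec, pitch_window])

    vocab = ["okay", "this", "is", "the", "office"]
    falling = np.linspace(1.0, 0.4, 5)   # statement-like contour
    rising = np.linspace(0.4, 1.0, 5)    # question-like contour
    x_stmt = augment_with_pitch(encode_word("okay", vocab), falling)
    x_ques = augment_with_pitch(encode_word("okay", vocab), rising)
    print(x_stmt.shape, np.allclose(x_stmt[:5], x_ques[:5]))  # same word units, different pitch

The two inputs agree on the symbolic units and differ only in the appended contour, which is exactly what lets the network separate "Okay." from "Okay?".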
4 SPEECH SYNTHESIS
To generate intelligible speech in the respective target languages, we have predominantly
used commercial devices. Most notably, DEC-talk has provided unrestricted English text-to-speech synthesis. DEC-talk has also been used for Japanese and German synthesis. The
internal English text-to-phoneme conversion rules and tables of DEC-talk were bypassed
by external German and Japanese text-to-phoneme look-up tables that convert the German/Japanese target sentences into phonemic strings for DEC-talk synthesis. The resulting synthesis is limited to the task vocabulary, but the external tables result in intelligible
German and Japanese speech, albeit with a pronounced American accent.
To allow for greater flexibility in vocabulary and more language-specific synthesis, several
alternate devices are currently being integrated. For Japanese, in particular, two high-quality speech synthesizers developed separately by NEC and ATR will be used to provide
more satisfactory results. In JANUS, no attempt has so far been made to adapt the output
speech to the input speaker's voice characteristics. However, this has recently been demonstrated by work with codebook mapping (Abe, Shikano, and Kuwabara 1990) and connectionist mapping techniques (Huang, Lee, and Waibel 1991).
5 IMPLEMENTATION ISSUES AND PERFORMANCE
5.1 Parallel Hardware
Neural network forward passes for the speech recognizer were programmed on two general purpose parallel machines. a MasPar computer at the University of Karlsruhe, Germany and an Intel iWarp at Carnegie Mellon. The MasPar is a parallel SIMD machine
with 4096 processing elements. The iWarp is a MIMD machine, and a 16MHz, 64 cell
experimental version was used for testing.
The use of parallel hardware and algorithms has significantly decreased JANUS's processing time. Compared to forward pass calculations performed by a DecStation 5000, the
iWarp is 9 times faster (41.4 million connections per second). The MasPar does the forward pass calculations for a two second utterance in less than 500 milliseconds. Both the
iWarp and MasPar are scalable. Efforts are underway to implement other parts of JANUS
on parallel hardware with the goal of near real-time performance.
5.2 Performance
Currently, English JANUS using the LR parsing module (JANUS-LR) performs at 87%
correct translation using the LPNN speech system with the N-best sentence hypotheses.
German JANUS performs at 97% correct translation (on a subset of the conference registration database) using German versions of the LPNN system and LR parsing grammar.
English JANUS using PARSEC (JANUS-NN) does not perform as well as the LR parser
version in N-best mode, with 80% correct translation. PARSEC is not able to select from a
list of ranked candidate utterance hypotheses as robustly as the LR parser using a very
tight grammar. However, the grammar used for this comparison only achieves 5% coverage of novel test sentences, compared with PARSEC's 67%. This vast difference in coverage explains some of the N-best performance difference.
In first-best mode, however, JANUS-NN does better than JANUS-LR (77% versus 70%).
The PARSEC network is able to produce acceptable parses for a number of noisy speech
recognition hypotheses, but JANUS-LR tends to reject those hypotheses as unparsable.
PARSEC's flexibility, which hurt its N-best performance, enhances its first-best performance. No performance evaluations were carried out using German PARSEC in German
JANUS.
6 CONCLUSION
In this paper we have described JANUS, a multi-lingual speech-to-speech translation system. JANUS uses a mixture of connectionist, statistical, and rule-based strategies to achieve
this goal. Connectionist models have contributed high recognition and parsing performance as well as greater robustness in the light of task variations and
syntactically ill-formed sentences. Connectionist models also provide a mechanism for
gracefully merging traditionally distinct symbolic (syntax) and signal-level (intonation) information,
and achieve successful disambiguation between grammatical statements whose
mood can be affected by intonation. Finally, connectionist sentence analysis appears to
offer high flexibility, as the relevant modules can be retrained automatically for new tasks,
domains, and even languages without laborious recoding. We plan to continue exploring
different mixtures of computing paradigms to achieve higher performance.
Acknowledgements
The authors gratefully acknowledge the support of ATR Interpreting Telephony Laboratories, Siemens Corporation, NEC Corporation, and the National Science Foundation.
References
Abe, M., K. Shikano, H. Kuwabara. 1990. Cross Language Voice Conversion. In IEEE
Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.
Barnard, E., R. A. Cole, M. P. Vea, F. A. Alleva. 1991. Pitch Detection with a Neural-Net
Classifier. IEEE Transactions on Signal Processing 39(2): 298-307.
Haffner, P., M. Franzini, and A. Waibel. 1991. Integrating time alignment and neural networks for high performance speech recognition. In IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.
Huang, X. D., K. F. Lee, A. Waibel. 1991. In Proceedings of the IEEE-SP Workshop on
Neural Networks for Signal Processing.
Jain, A. N. 1992. Generalization performance in PARSEC-A structured connectionist
learning architecture. In Advances in Neural Information Processing Systems 4, ed. J.
E. Moody, S. J. Hanson, and R. P. Lippmann. San Mateo, CA: Morgan Kaufmann Publishers.
Jain, A. N. In preparation. PARSEC: A Connectionist Learning Architecture for Parsing
Spoken Language. PhD Thesis, School of Computer Science, Carnegie Mellon University.
Schmidbauer, O. and J. Tebelskis. 1992. An LVQ based reference model for speaker-adaptive speech recognition. In IEEE Proceedings of the International Conference on
Acoustics, Speech, and Signal Processing.
Steinbiss, V. 1989. Sentence-hypothesis generation in a continuous-speech recognition
system. In Proceedings of the 1989 European Conference on Speech Communication
and Technology, Vol. 2, 51-54.
Tebelskis, J., A. Waibel, B. Petek, O. Schmidbauer. 1991. Continuous speech recognition
by Linked Predictive Neural Networks. In Advances in Neural Information Processing
System 3, ed. R. Lippmann, J. Moody, and D. Touretzky. San Mateo, CA: Morgan
Kaufmann Publishers.
Tomita, M. (ed.). 1991. Generalized LR Parsing. Norwell, MA: Kluwer Academic Publishers.
Tomita, M. and J. G. Carbonell. 1987. The Universal Parser Architecture for Knowledge-Based Machine Translation. Technical Report CMU-CMT-87-01, Center for Machine
Translation, Carnegie Mellon University.
Tomita, M. and E. Nyberg. 1988. Generation Kit and Transformation Kit. Technical
Report CMU-CMT-88-MEMO, Center for Machine Translation, Carnegie Mellon University.
Zeppenfield, T. and A. Waibel. 1992. A hybrid neural network, dynamic programming
word spotter. In IEEE Proceedings of the International Conference on Acoustics,
Speech, and Signal Processing.
4,199 | 4,800 |
On the Use of Non-Stationary Policies for Stationary
Infinite-Horizon Markov Decision Processes
Boris Lesner
Inria, Villers-lès-Nancy, F-54600, France
[email protected]
Bruno Scherrer
Inria, Villers-lès-Nancy, F-54600, France
[email protected]
Abstract
We consider infinite-horizon stationary $\gamma$-discounted Markov Decision Processes,
for which it is known that there exists a stationary optimal policy. Using Value
and Policy Iteration with some error $\epsilon$ at each iteration, it is well-known that one
can compute stationary policies that are $\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this
guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to $\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a
significant improvement in the usual situation when $\gamma$ is close to 1. Surprisingly,
this shows that the problem of "computing near-optimal non-stationary policies"
is much simpler than that of "computing near-optimal stationary policies".
1 Introduction
Given an infinite-horizon stationary $\gamma$-discounted Markov Decision Process [24, 4], we consider
approximate versions of the standard Dynamic Programming algorithms, Policy and Value Iteration,
that build sequences of value functions $v_k$ and policies $\pi_k$ as follows:
$$\text{Approximate Value Iteration (AVI):} \quad v_{k+1} \leftarrow T v_k + \epsilon_{k+1} \qquad (1)$$
$$\text{Approximate Policy Iteration (API):} \quad v_k \leftarrow v_{\pi_k} + \epsilon_k, \quad \pi_{k+1} \leftarrow \text{any element of } \mathcal{G}(v_k) \qquad (2)$$
where $v_0$ and $\pi_0$ are arbitrary, $T$ is the Bellman optimality operator, $v_{\pi_k}$ is the value of policy $\pi_k$,
and $\mathcal{G}(v_k)$ is the set of policies that are greedy with respect to $v_k$. At each iteration $k$, the term $\epsilon_k$
accounts for a possible approximation of the Bellman operator (for AVI) or of the evaluation of $v_{\pi_k}$
(for API). Throughout the paper, we will assume that the error terms $\epsilon_k$ satisfy $\|\epsilon_k\|_\infty \le \epsilon$ for all $k$,
for some $\epsilon \ge 0$. Under this assumption, it is well-known that both algorithms share the following
performance bound (see [25, 11, 4] for AVI and [4] for API):
Theorem 1. For API (resp. AVI), the loss due to running policy $\pi_k$ (resp. any policy $\pi_k$ in $\mathcal{G}(v_{k-1})$)
instead of the optimal policy $\pi_*$ satisfies
$$\limsup_{k \to \infty} \|v_* - v_{\pi_k}\|_\infty \le \frac{2\gamma}{(1-\gamma)^2}\epsilon.$$
The constant $\frac{2\gamma}{(1-\gamma)^2}$ can be very big, in particular when $\gamma$ is close to 1, and consequently the above
bound is commonly believed to be conservative for practical applications. Interestingly, this very
constant $\frac{2\gamma}{(1-\gamma)^2}$ appears in many works analyzing AVI algorithms [25, 11, 27, 12, 13, 23, 7, 6, 20, 21,
22, 9], API algorithms [15, 19, 16, 1, 8, 18, 5, 17, 10, 3, 9, 2] and in one of their generalizations [26],
suggesting that it cannot be improved. Indeed, the bound (and the $\frac{2\gamma}{(1-\gamma)^2}$ constant) are tight for
API [4, Example 6.4], and we will show in Section 3 (to our knowledge, this has never been argued
in the literature) that it is also tight for AVI.
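For a finite MDP, Equations (1) and (2) take only a few lines. The tabular sketch below uses our own conventions (P indexed as P[a, s, s'], r as r[a, s], and uniform noise standing in for the abstract error terms); it is convenient for reproducing experiments such as the one in Section 3:

    import numpy as np

    def greedy(P, r, gamma, v):
        """One policy in G(v): argmax over actions of r(s,a) + gamma * E[v(s')]."""
        q = r + gamma * P @ v          # P: (A,S,S), r: (A,S) -> q: (A,S)
        return q.argmax(axis=0)        # (S,)

    def bellman_optimal(P, r, gamma, v):
        """(T v)(s) = max_a r(s,a) + gamma * E[v(s')]."""
        return (r + gamma * P @ v).max(axis=0)

    def policy_value(P, r, gamma, pi):
        """Exact v_pi by solving (I - gamma P_pi) v = r_pi."""
        idx = np.arange(P.shape[1])
        return np.linalg.solve(np.eye(P.shape[1]) - gamma * P[pi, idx], r[pi, idx])

    def avi(P, r, gamma, eps, K, rng):
        """Equation (1): v_{k+1} = T v_k + eps_{k+1}, with noise of size eps."""
        v = np.zeros(P.shape[1])
        for _ in range(K):
            v = bellman_optimal(P, r, gamma, v) + rng.uniform(-eps, eps, P.shape[1])
        return greedy(P, r, gamma, v)

    def api(P, r, gamma, eps, K, rng):
        """Equation (2): evaluate pi_k approximately, then take a greedy policy."""
        pi = np.zeros(P.shape[1], dtype=int)
        for _ in range(K):
            v = policy_value(P, r, gamma, pi) + rng.uniform(-eps, eps, P.shape[1])
            pi = greedy(P, r, gamma, v)
        return pi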
Even though the theory of optimal control states that there exists a stationary policy that is optimal,
the main contribution of our paper is to show that looking for a non-stationary policy (instead of a
stationary one) may lead to a much better performance bound. In Section 4, we will show how to
deduce such a non-stationary policy from a run of AVI. In Section 5, we will describe two original
policy iteration variations that compute non-stationary policies. For all these algorithms, we will
prove that we have a performance bound that can be reduced down to $\frac{2\gamma}{1-\gamma}\epsilon$. This is a factor $\frac{1}{1-\gamma}$
better than the standard bound of Theorem 1, which is significant when $\gamma$ is close to 1. Surprisingly,
this will show that the problem of "computing near-optimal non-stationary policies" is much simpler
than that of "computing near-optimal stationary policies". Before we present these contributions, the
next section begins by precisely describing our setting.
2 Background
We consider an infinite-horizon discounted Markov Decision Process [24, 4] $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$
is a possibly infinite state space, $\mathcal{A}$ is a finite action space, $P(ds'|s,a)$, for all $(s,a)$, is a probability
kernel on $\mathcal{S}$, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a reward function bounded in max-norm by $R_{\max}$, and $\gamma \in (0,1)$
is a discount factor. A stationary deterministic policy $\pi : \mathcal{S} \to \mathcal{A}$ maps states to actions. We write
$r_\pi(s) = r(s, \pi(s))$ and $P_\pi(ds'|s) = P(ds'|s, \pi(s))$ for the immediate reward and the stochastic
kernel associated to policy $\pi$. The value $v_\pi$ of a policy $\pi$ is a function mapping states to the expected
discounted sum of rewards received when following $\pi$ from any state: for all $s \in \mathcal{S}$,
$$v_\pi(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_\pi(s_t) \,\Big|\, s_0 = s,\ s_{t+1} \sim P_\pi(\cdot|s_t)\right].$$
The value $v_\pi$ is clearly bounded by $V_{\max} = R_{\max}/(1-\gamma)$. It is well-known that $v_\pi$ can be
characterized as the unique fixed point of the linear Bellman operator associated to a policy $\pi$:
$T_\pi : v \mapsto r_\pi + \gamma P_\pi v$. Similarly, the Bellman optimality operator $T : v \mapsto \max_\pi T_\pi v$ has as
unique fixed point the optimal value $v_* = \max_\pi v_\pi$. A policy $\pi$ is greedy w.r.t. a value function $v$
if $T_\pi v = T v$; the set of such greedy policies is written $\mathcal{G}(v)$. Finally, a policy $\pi_*$ is optimal, with
value $v_{\pi_*} = v_*$, iff $\pi_* \in \mathcal{G}(v_*)$, or equivalently $T_{\pi_*} v_* = v_*$.

Though it is known [24, 4] that there always exists a deterministic stationary policy that is optimal,
we will, in this article, consider non-stationary policies and now introduce related notations. Given
a sequence $\pi_1, \pi_2, \ldots, \pi_k$ of $k$ stationary policies (this sequence will be clear in the context we
describe later), and for any $1 \le m \le k$, we will denote $\pi_{k,m}$ the periodic non-stationary policy
that takes the first action according to $\pi_k$, the second according to $\pi_{k-1}$, \ldots, the $m$th according to
$\pi_{k-m+1}$ and then starts again. Formally, this can be written as
$$\pi_{k,m} = \pi_k\ \pi_{k-1}\ \cdots\ \pi_{k-m+1}\ \pi_k\ \pi_{k-1}\ \cdots\ \pi_{k-m+1}\ \cdots$$
It is straightforward to show that the value $v_{\pi_{k,m}}$ of this periodic non-stationary policy $\pi_{k,m}$ is the
unique fixed point of the following operator:
$$T_{k,m} = T_{\pi_k} T_{\pi_{k-1}} \cdots T_{\pi_{k-m+1}}.$$
Finally, it will be convenient to introduce the following discounted kernel:
$$\Gamma_{k,m} = (\gamma P_{\pi_k})(\gamma P_{\pi_{k-1}}) \cdots (\gamma P_{\pi_{k-m+1}}).$$
In particular, for any pair of values $v$ and $v'$, it can easily be seen that $T_{k,m} v - T_{k,m} v' = \Gamma_{k,m}(v - v')$.
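For a finite MDP, the fixed point of $T_{k,m}$ can be computed directly by unrolling the $m$ one-step operators and solving the resulting linear system. A sketch under our own tabular conventions (each P in P_list is an S x S stochastic matrix, each r an S-vector):

    import numpy as np

    def nonstationary_value(P_list, r_list, gamma):
        """Value of the periodic policy pi_k pi_{k-1} ... pi_{k-m+1} (repeated):
        the unique fixed point of T_{k,m}. P_list/r_list are ordered as
        [P_{pi_k}, ..., P_{pi_{k-m+1}}]."""
        S = P_list[0].shape[0]
        R = np.zeros(S)
        Gamma = np.eye(S)                  # running product of the (gamma P)'s
        for P, r in zip(P_list, r_list):
            R += Gamma @ r                 # accumulated discounted rewards
            Gamma = Gamma @ (gamma * P)    # after the loop: Gamma = Gamma_{k,m}
        return np.linalg.solve(np.eye(S) - Gamma, R)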
3 Tightness of the performance bound of Theorem 1
The bound of Theorem 1 is tight for API in the sense that there exists an MDP [4, Example 6.4]
for which the bound is reached. To the best of our knowledge, a similar argument has never been
provided for AVI in the literature. It turns out that the MDP that is used for showing the tightness
for API also applies to AVI. This is what we show in this section.

[Figure 1 residue: a chain of states $1, 2, \ldots, k, \ldots$ with zero-reward "move" transitions $i \to i-1$, a zero-reward self-loop at state 1, and "stay" self-loops with rewards $-2\gamma\epsilon$, $-2(\gamma+\gamma^2)\epsilon$, \ldots, $-2\frac{\gamma-\gamma^k}{1-\gamma}\epsilon$.]

Figure 1: The deterministic MDP for which the bound of Theorem 1 is tight for Value and Policy Iteration.

Example 1. Consider the $\gamma$-discounted deterministic MDP from [4, Example 6.4] depicted on Figure 1. It involves states $1, 2, \ldots$. In state 1 there is only one self-loop action with zero reward; for
each state $i > 1$ there are two possible choices: either move to state $i-1$ with zero reward or stay
with reward $r_i = -2\frac{\gamma-\gamma^i}{1-\gamma}\epsilon$, with $\epsilon \ge 0$. Clearly the optimal policy in all states $i > 1$ is to move to
$i-1$, and the optimal value function $v_*$ is 0 in all states.

Starting with $v_0 = v_*$, we are going to show that for all iterations $k \ge 1$ it is possible to have a
policy $\pi_{k+1} \in \mathcal{G}(v_k)$ which moves in every state but $k+1$ and thus is such that
$$v_{\pi_{k+1}}(k+1) = \frac{r_{k+1}}{1-\gamma} = -\frac{2(\gamma-\gamma^{k+1})}{(1-\gamma)^2}\epsilon,$$
which meets the bound of Theorem 1 when $k$ tends to infinity.
To do so, we assume that the following approximation errors are made at each iteration $k > 0$:
$$\epsilon_k(i) = \begin{cases} -\epsilon & \text{if } i = k \\ \epsilon & \text{if } i = k+1 \\ 0 & \text{otherwise.} \end{cases}$$
With this error, we are now going to prove by induction on $k$ that for all $k \ge 1$,
$$v_k(i) = \begin{cases} -\gamma^{k-1}\epsilon & \text{if } i < k \\ r_k/2 - \epsilon & \text{if } i = k \\ -(r_k/2 - \epsilon) & \text{if } i = k+1 \\ 0 & \text{otherwise.} \end{cases}$$
Since $v_0 = 0$, the best action is clearly to move in every state $i \ge 2$, which gives $v_1 = v_0 + \epsilon_1 = \epsilon_1$;
this establishes the claim for $k = 1$.
Assuming that our induction claim holds for $k$, we now show that it also holds for $k+1$.

For the move action, write $q^m_k$ for its action-value function. For all $i > 1$ we have $q^m_k(i) = 0 + \gamma v_k(i-1)$, hence
$$q^m_k(i) = \begin{cases} \gamma(-\gamma^{k-1}\epsilon) = -\gamma^k\epsilon & \text{if } i = 2, \ldots, k \\ \gamma(r_k/2 - \epsilon) = r_{k+1}/2 & \text{if } i = k+1 \\ -\gamma(r_k/2 - \epsilon) = -r_{k+1}/2 & \text{if } i = k+2 \\ 0 & \text{otherwise.} \end{cases}$$
For the stay action, write $q^s_k$ for its action-value function. For all $i > 0$ we have $q^s_k(i) = r_i + \gamma v_k(i)$, hence
$$q^s_k(i) = \begin{cases} r_i + \gamma(-\gamma^{k-1}\epsilon) = r_i - \gamma^k\epsilon & \text{if } i = 1, \ldots, k-1 \\ r_k + \gamma(r_k/2 - \epsilon) = r_k + r_{k+1}/2 & \text{if } i = k \\ r_{k+1} - r_{k+1}/2 = r_{k+1}/2 & \text{if } i = k+1 \\ r_{k+2} + \gamma \cdot 0 = r_{k+2} & \text{if } i = k+2 \\ r_i & \text{otherwise.} \end{cases}$$
First, only the stay action is available in state 1; hence, since $r_1 = 0$ and $\epsilon_{k+1}(1) = 0$, we have
$v_{k+1}(1) = q^s_k(1) + \epsilon_{k+1}(1) = -\gamma^k\epsilon$, as desired. Second, since $r_i < 0$ for all $i > 1$, we have
$q^m_k(i) > q^s_k(i)$ for all these states but $k+1$, where $q^m_k(k+1) = q^s_k(k+1) = r_{k+1}/2$. Using the fact
that $v_{k+1} = \max(q^m_k, q^s_k) + \epsilon_{k+1}$ gives the result for $v_{k+1}$.

The fact that for $i > 1$ we have $q^m_k(i) \ge q^s_k(i)$, with equality only at $i = k+1$, implies that there exists
a policy $\pi_{k+1}$ greedy for $v_k$ which takes the optimal move action in all states but $k+1$, where the
stay action has the same value, leaving the algorithm the possibility of choosing the suboptimal stay
action in this state, yielding a value $v_{\pi_{k+1}}(k+1)$ matching the upper bound as $k$ goes to infinity.
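The construction is easy to check numerically on a truncated chain. A sketch (the truncation length n and the loop structure are ours; the rewards and adversarial errors are exactly those above):

    import numpy as np

    def example1(gamma=0.9, eps=0.1, n=12, K=8):
        # states 1..n; r_i is the 'stay' reward in state i (note r_1 = 0)
        i = np.arange(1, n + 1)
        r_stay = -2 * (gamma - gamma ** i) / (1 - gamma) * eps
        v = np.zeros(n)                                        # v_0 = v_* = 0
        for k in range(1, K + 1):
            q_move = gamma * np.concatenate([[v[0]], v[:-1]])  # state 1 self-loops
            q_stay = r_stay + gamma * v
            e = np.zeros(n)
            e[k - 1] = -eps                                    # epsilon_k(k)   = -eps
            e[k] = eps                                         # epsilon_k(k+1) = +eps
            v = np.maximum(q_move, q_stay) + e
        # greedy wrt v_K: both actions tie in state K+1, so 'stay' is legal there
        q_move = gamma * np.concatenate([[v[0]], v[:-1]])
        q_stay = r_stay + gamma * v
        assert np.isclose(q_move[K], q_stay[K])
        loss = -r_stay[K] / (1 - gamma)                        # value loss of staying
        print(loss, 2 * (gamma - gamma ** (K + 1)) * eps / (1 - gamma) ** 2)

    example1()   # the two printed numbers coincide

The printed loss of the "stay" policy at state K+1 equals $2(\gamma-\gamma^{K+1})\epsilon/(1-\gamma)^2$, which approaches the constant of Theorem 1 as K grows.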
Since Example 1 shows that the bound of Theorem 1 is tight, improving the performance bound requires
modifying the algorithms. The following sections of the paper show that considering non-stationary
policies instead of stationary policies is an interesting path to follow.
4 Deducing a non-stationary policy from AVI
While AVI (Equation (1)) is usually considered as generating a sequence of values $v_0, v_1, \ldots, v_{k-1}$,
it also implicitly produces a sequence of policies $\pi_1, \pi_2, \ldots, \pi_k$, where for $i = 0, \ldots, k-1$,
$\pi_{i+1} \in \mathcal{G}(v_i)$ (a given sequence of value functions may induce many sequences of policies, since more
than one greedy policy may exist for a particular value function; our results hold for all such choices).
Instead of outputting only the last policy $\pi_k$, we here simply propose to output the
periodic non-stationary policy $\pi_{k,m}$ that loops over the last $m$ generated policies. The following
theorem shows that it is indeed a good idea.
Theorem 2. For all $k$ and $m$ such that $1 \le m \le k$, the loss of running the non-stationary
policy $\pi_{k,m}$ instead of the optimal policy $\pi_*$ satisfies:
$$\|v_* - v_{\pi_{k,m}}\|_\infty \le \frac{2}{1-\gamma^m}\left(\frac{\gamma-\gamma^k}{1-\gamma}\epsilon + \gamma^k\|v_* - v_0\|_\infty\right).$$
When $m = 1$ and $k$ tends to infinity, one exactly recovers the result of Theorem 1. For general
$m$, this new bound is a factor $\frac{1-\gamma^m}{1-\gamma}$ better than the standard bound of Theorem 1. The choice that
optimizes the bound, $m = k$, which consists in looping over all the policies generated from the
very start, leads to the following bound:
$$\|v_* - v_{\pi_{k,k}}\|_\infty \le \frac{2(\gamma-\gamma^k)}{(1-\gamma)(1-\gamma^k)}\epsilon + \frac{2\gamma^k}{1-\gamma^k}\|v_* - v_0\|_\infty,$$
that tends to $\frac{2\gamma}{1-\gamma}\epsilon$ when $k$ tends to $\infty$.
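Concretely, one only needs to remember the greedy policies produced along the way. A sketch, reusing the greedy and bellman_optimal helpers from the tabular sketch in Section 1:

    import numpy as np
    # assumes greedy() and bellman_optimal() from the earlier AVI sketch are in scope

    def avi_nonstationary(P, r, gamma, eps, K, m, rng):
        """K steps of AVI; returns [pi_K, pi_{K-1}, ..., pi_{K-m+1}], i.e. the
        last m greedy policies, to be executed cyclically (the policy pi_{K,m})."""
        v = np.zeros(P.shape[1])
        policies = []                      # policies[i] = pi_{i+1} in G(v_i)
        for _ in range(K):
            policies.append(greedy(P, r, gamma, v))
            v = bellman_optimal(P, r, gamma, v) + rng.uniform(-eps, eps, P.shape[1])
        return policies[::-1][:m]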
The rest of the section is devoted to the proof of Theorem 2. An important step of our proof lies
in the following lemma, which implies that for sufficiently big $m$, $v_k = T v_{k-1} + \epsilon_k$ is a rather good
approximation (of the order $\frac{\epsilon}{1-\gamma}$) of the value $v_{\pi_{k,m}}$ of the non-stationary policy $\pi_{k,m}$ (whereas in
general, it is a much poorer approximation of the value $v_{\pi_k}$ of the last stationary policy $\pi_k$).

Lemma 1. For all $m$ and $k$ such that $1 \le m \le k$,
$$\|T v_{k-1} - v_{\pi_{k,m}}\|_\infty \le \gamma^m \|v_{k-m} - v_{\pi_{k,m}}\|_\infty + \frac{\gamma-\gamma^m}{1-\gamma}\epsilon.$$
Proof of Lemma 1. The value of $\pi_{k,m}$ satisfies:
$$v_{\pi_{k,m}} = T_{\pi_k} T_{\pi_{k-1}} \cdots T_{\pi_{k-m+1}} v_{\pi_{k,m}}. \qquad (3)$$
By induction, it can be shown that the sequence of values generated by AVI satisfies:
$$T_{\pi_k} v_{k-1} = T_{\pi_k} T_{\pi_{k-1}} \cdots T_{\pi_{k-m+1}} v_{k-m} + \sum_{i=1}^{m-1} \Gamma_{k,i}\, \epsilon_{k-i}. \qquad (4)$$
By subtracting Equations (4) and (3), one obtains:
$$T v_{k-1} - v_{\pi_{k,m}} = T_{\pi_k} v_{k-1} - v_{\pi_{k,m}} = \Gamma_{k,m}(v_{k-m} - v_{\pi_{k,m}}) + \sum_{i=1}^{m-1} \Gamma_{k,i}\, \epsilon_{k-i},$$
and the result follows by taking the norm and using the fact that for all $i$, $\|\Gamma_{k,i}\|_\infty = \gamma^i$.
We are now ready to prove the main result of this section.

Proof of Theorem 2. Using the fact that $T$ is a contraction in max-norm, we have:
$$\|v_* - v_k\|_\infty = \|v_* - T v_{k-1} - \epsilon_k\|_\infty \le \|T v_* - T v_{k-1}\|_\infty + \epsilon \le \gamma\|v_* - v_{k-1}\|_\infty + \epsilon.$$
Then, by induction on $k$, we have that for all $k \ge 1$,
$$\|v_* - v_k\|_\infty \le \gamma^k \|v_* - v_0\|_\infty + \frac{1-\gamma^k}{1-\gamma}\epsilon. \qquad (5)$$
Using Lemma 1 and Equation (5) twice, we can conclude by observing that
$$\begin{aligned}
\|v_* - v_{\pi_{k,m}}\|_\infty &\le \|T v_* - T v_{k-1}\|_\infty + \|T v_{k-1} - v_{\pi_{k,m}}\|_\infty \\
&\le \gamma\|v_* - v_{k-1}\|_\infty + \gamma^m \|v_{k-m} - v_{\pi_{k,m}}\|_\infty + \frac{\gamma-\gamma^m}{1-\gamma}\epsilon \\
&\le \gamma\left(\gamma^{k-1}\|v_* - v_0\|_\infty + \frac{1-\gamma^{k-1}}{1-\gamma}\epsilon\right) \\
&\qquad + \gamma^m\left(\|v_{k-m} - v_*\|_\infty + \|v_* - v_{\pi_{k,m}}\|_\infty\right) + \frac{\gamma-\gamma^m}{1-\gamma}\epsilon \\
&\le \gamma^k\|v_* - v_0\|_\infty + \frac{\gamma-\gamma^k}{1-\gamma}\epsilon \\
&\qquad + \gamma^m\left(\gamma^{k-m}\|v_* - v_0\|_\infty + \frac{1-\gamma^{k-m}}{1-\gamma}\epsilon + \|v_* - v_{\pi_{k,m}}\|_\infty\right) + \frac{\gamma-\gamma^m}{1-\gamma}\epsilon \\
&= \gamma^m\|v_* - v_{\pi_{k,m}}\|_\infty + 2\gamma^k\|v_* - v_0\|_\infty + \frac{2(\gamma-\gamma^k)}{1-\gamma}\epsilon,
\end{aligned}$$
hence
$$\|v_* - v_{\pi_{k,m}}\|_\infty \le \frac{2}{1-\gamma^m}\left(\frac{\gamma-\gamma^k}{1-\gamma}\epsilon + \gamma^k\|v_* - v_0\|_\infty\right).$$

5 API algorithms for computing non-stationary policies
We now present similar results with a Policy Iteration flavour. Unlike in the previous section,
where only the output of AVI needed to be changed, improving the bound for an API-like algorithm
is slightly more involved. In this section, we describe and analyze two API algorithms that output
non-stationary policies with improved performance bounds.

API with a non-stationary policy of growing period. Following our findings on non-stationary
policies for AVI, we consider the following variation of API, where at each iteration, instead of computing the value of the last stationary policy $\pi_k$, we compute that of the periodic non-stationary policy
$\pi_{k,k}$ that loops over all the policies $\pi_1, \ldots, \pi_k$ generated from the very start:
$$v_k \leftarrow v_{\pi_{k,k}} + \epsilon_k, \qquad \pi_{k+1} \leftarrow \text{any element of } \mathcal{G}(v_k),$$
where the initial (stationary) policy $\pi_{1,1}$ is chosen arbitrarily. Thus, iteration after iteration, the non-stationary policy $\pi_{k,k}$ is made of more and more stationary policies, and this is why we refer to it as
having a growing period. We can prove the following performance bound for this algorithm:
Theorem 3. After $k$ iterations, the loss of running the non-stationary policy $\pi_{k,k}$ instead of the
optimal policy $\pi_*$ satisfies:
$$\|v_* - v_{\pi_{k,k}}\|_\infty \le \frac{2(\gamma-\gamma^k)}{1-\gamma}\epsilon + \gamma^{k-1}\|v_* - v_{\pi_{1,1}}\|_\infty + 2(k-1)\gamma^k V_{\max}.$$
When $k$ tends to infinity, this bound tends to $\frac{2\gamma}{1-\gamma}\epsilon$, and is thus again a factor $\frac{1}{1-\gamma}$ better than the
original API bound.
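A sketch of this variant under the same tabular conventions. It reuses greedy() and nonstationary_value() from the earlier sketches, and the arbitrary initial policy is taken to be action 0 everywhere (our own choice):

    import numpy as np
    # assumes greedy() and nonstationary_value() from the earlier sketches are in scope

    def api_growing(P, r, gamma, eps, K, rng):
        """API with a growing period: v_k is the (noisy) value of pi_{k,k}."""
        S = P.shape[1]
        idx = np.arange(S)
        policies = [np.zeros(S, dtype=int)]            # arbitrary initial pi_1
        for _ in range(K - 1):
            newest_first = policies[::-1]              # pi_k, pi_{k-1}, ..., pi_1
            P_seq = [P[p, idx] for p in newest_first]
            r_seq = [r[p, idx] for p in newest_first]
            v = nonstationary_value(P_seq, r_seq, gamma) + rng.uniform(-eps, eps, S)
            policies.append(greedy(P, r, gamma, v))
        return policies[::-1]                          # run cyclically, newest first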
Proof of Theorem 3. Using the facts that $T_{k+1,k+1} v_{\pi_{k,k}} = T_{\pi_{k+1}} T_{k,k} v_{\pi_{k,k}} = T_{\pi_{k+1}} v_{\pi_{k,k}}$ and
$T_{\pi_{k+1}} v_k \ge T_{\pi_*} v_k$ (since $\pi_{k+1} \in \mathcal{G}(v_k)$), we have:
$$\begin{aligned}
v_* - v_{\pi_{k+1,k+1}} &= T_{\pi_*} v_* - T_{k+1,k+1} v_{\pi_{k+1,k+1}} \\
&= T_{\pi_*} v_* - T_{\pi_*} v_{\pi_{k,k}} + T_{\pi_*} v_{\pi_{k,k}} - T_{k+1,k+1} v_{\pi_{k,k}} + T_{k+1,k+1} v_{\pi_{k,k}} - T_{k+1,k+1} v_{\pi_{k+1,k+1}} \\
&= \gamma P_{\pi_*}(v_* - v_{\pi_{k,k}}) + T_{\pi_*} v_{\pi_{k,k}} - T_{\pi_{k+1}} v_{\pi_{k,k}} + \Gamma_{k+1,k+1}(v_{\pi_{k,k}} - v_{\pi_{k+1,k+1}}) \\
&= \gamma P_{\pi_*}(v_* - v_{\pi_{k,k}}) + T_{\pi_*} v_k - T_{\pi_{k+1}} v_k + \gamma(P_{\pi_{k+1}} - P_{\pi_*})\epsilon_k + \Gamma_{k+1,k+1}(v_{\pi_{k,k}} - v_{\pi_{k+1,k+1}}) \\
&\le \gamma P_{\pi_*}(v_* - v_{\pi_{k,k}}) + \gamma(P_{\pi_{k+1}} - P_{\pi_*})\epsilon_k + \Gamma_{k+1,k+1}(v_{\pi_{k,k}} - v_{\pi_{k+1,k+1}}).
\end{aligned}$$
By taking the norm, and using the facts that $\|v_{\pi_{k,k}}\|_\infty \le V_{\max}$, $\|v_{\pi_{k+1,k+1}}\|_\infty \le V_{\max}$, and
$\|\Gamma_{k+1,k+1}\|_\infty = \gamma^{k+1}$, we get:
$$\|v_* - v_{\pi_{k+1,k+1}}\|_\infty \le \gamma\|v_* - v_{\pi_{k,k}}\|_\infty + 2\gamma\epsilon + 2\gamma^{k+1} V_{\max}.$$
Finally, by induction on $k$, we obtain:
$$\|v_* - v_{\pi_{k,k}}\|_\infty \le \frac{2(\gamma-\gamma^k)}{1-\gamma}\epsilon + \gamma^{k-1}\|v_* - v_{\pi_{1,1}}\|_\infty + 2(k-1)\gamma^k V_{\max}.$$
Though it has an improved asymptotic performance bound, the API algorithm we have just described
has two (related) drawbacks: 1) its finite-iteration bound has a somewhat unsatisfactory term of the
form $2(k-1)\gamma^k V_{\max}$, and 2) even when there is no error (when $\epsilon = 0$), we cannot guarantee that,
similarly to standard Policy Iteration, it generates a sequence of policies of increasing values (it
is easy to see that in general, we do not have $v_{\pi_{k+1,k+1}} \ge v_{\pi_{k,k}}$). These two points motivate the
introduction of another API algorithm.

API with a non-stationary policy of fixed period. We consider now another variation of API,
parameterized by $m \ge 1$, that iterates as follows for $k \ge m$:
$$v_k \leftarrow v_{\pi_{k,m}} + \epsilon_k, \qquad \pi_{k+1} \leftarrow \text{any element of } \mathcal{G}(v_k),$$
where the initial non-stationary policy $\pi_{m,m}$ is built from a sequence of $m$ arbitrary stationary
policies $\pi_1, \pi_2, \ldots, \pi_m$. Unlike the previous API algorithm, the non-stationary policy $\pi_{k,m}$ here
only involves the last $m$ greedy stationary policies instead of all of them, and is thus of fixed period.
This is a strict generalization of the standard API algorithm, with which it coincides when $m = 1$.
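Under the same tabular conventions, the fixed-period variant only differs in keeping a sliding window of m policies (again reusing greedy() and nonstationary_value(); the initialization is our own choice):

    import numpy as np
    # assumes greedy() and nonstationary_value() from the earlier sketches are in scope

    def api_fixed_period(P, r, gamma, eps, K, m, rng):
        """API with non-stationary policies of fixed period m (m = 1 is standard API)."""
        S = P.shape[1]
        idx = np.arange(S)
        policies = [np.zeros(S, dtype=int)] * m        # arbitrary initial pi_1..pi_m
        for _ in range(m, K + 1):
            newest_first = policies[-m:][::-1]         # pi_k, ..., pi_{k-m+1}
            P_seq = [P[p, idx] for p in newest_first]
            r_seq = [r[p, idx] for p in newest_first]
            v = nonstationary_value(P_seq, r_seq, gamma) + rng.uniform(-eps, eps, S)
            policies.append(greedy(P, r, gamma, v))
        return policies[-m:][::-1]                     # the final pi_{K,m}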
For this algorithm, we can prove the following performance bound:

Theorem 4. For all $m$, for all $k \ge m$, the loss of running the non-stationary policy $\pi_{k,m}$ instead of
the optimal policy $\pi_*$ satisfies:
$$\|v_* - v_{\pi_{k,m}}\|_\infty \le \gamma^{k-m}\|v_* - v_{\pi_{m,m}}\|_\infty + \frac{2(\gamma-\gamma^{k+1-m})}{(1-\gamma)(1-\gamma^m)}\epsilon.$$
When $m = 1$ and $k$ tends to infinity, we recover exactly the bound of Theorem 1. When $m > 1$
and $k$ tends to infinity, this bound coincides with that of Theorem 2 for our non-stationary version
of AVI: it is a factor $\frac{1-\gamma^m}{1-\gamma}$ better than the standard bound of Theorem 1.

The rest of this section develops the proof of this performance bound. A central argument of our
proof is the following lemma, which shows that, similarly to standard API, our new algorithm
has an (approximate) policy improvement property.

Lemma 2. At each iteration of the algorithm, the value $v_{\pi_{k+1,m}}$ of the non-stationary policy
$$\pi_{k+1,m} = \pi_{k+1}\ \pi_k\ \ldots\ \pi_{k+2-m}\ \pi_{k+1}\ \pi_k\ \ldots\ \pi_{k+2-m}\ \ldots$$
cannot be much worse than the value $v_{\pi'_{k,m}}$ of the non-stationary policy
$$\pi'_{k,m} = \pi_{k-m+1}\ \pi_k\ \ldots\ \pi_{k+2-m}\ \pi_{k-m+1}\ \pi_k\ \ldots\ \pi_{k+2-m}\ \ldots$$
in the precise following sense:
$$v_{\pi_{k+1,m}} \ge v_{\pi'_{k,m}} - \frac{2\gamma\epsilon}{1-\gamma^m}.$$
The policy $\pi'_{k,m}$ differs from $\pi_{k+1,m}$ in that every $m$ steps, it chooses the oldest policy $\pi_{k-m+1}$
instead of the newest one $\pi_{k+1}$. Also, $\pi'_{k,m}$ is related to $\pi_{k,m}$ as follows: $\pi'_{k,m}$ takes the first action
according to $\pi_{k-m+1}$ and then runs $\pi_{k,m}$; equivalently, since $\pi_{k,m}$ loops over $\pi_k \pi_{k-1} \cdots \pi_{k-m+1}$,
$\pi'_{k,m} = \pi_{k-m+1}\,\pi_{k,m}$ can be seen as a 1-step right rotation of $\pi_{k,m}$. When there is no error (when
$\epsilon = 0$), this shows that the new policy $\pi_{k+1,m}$ is better than a "rotation" of $\pi_{k,m}$. When $m = 1$,
$\pi_{k+1,m} = \pi_{k+1}$ and $\pi'_{k,m} = \pi_k$, and we thus recover the well-known (approximate) policy
improvement theorem for standard API (see for instance [4, Lemma 6.1]).

Proof of Lemma 2. Since $\pi'_{k,m}$ takes the first action with respect to $\pi_{k-m+1}$ and then runs $\pi_{k,m}$, we
have $v_{\pi'_{k,m}} = T_{\pi_{k-m+1}} v_{\pi_{k,m}}$. Now, since $\pi_{k+1} \in \mathcal{G}(v_k)$, we have $T_{\pi_{k+1}} v_k \ge T_{\pi_{k-m+1}} v_k$, and
$$\begin{aligned}
v_{\pi'_{k,m}} - v_{\pi_{k+1,m}} &= T_{\pi_{k-m+1}} v_{\pi_{k,m}} - v_{\pi_{k+1,m}} \\
&= T_{\pi_{k-m+1}} v_k - \gamma P_{\pi_{k-m+1}} \epsilon_k - v_{\pi_{k+1,m}} \\
&\le T_{\pi_{k+1}} v_k - \gamma P_{\pi_{k-m+1}} \epsilon_k - v_{\pi_{k+1,m}} \\
&= T_{\pi_{k+1}} v_{\pi_{k,m}} + \gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\epsilon_k - v_{\pi_{k+1,m}} \\
&= T_{\pi_{k+1}} T_{k,m} v_{\pi_{k,m}} - T_{k+1,m} v_{\pi_{k+1,m}} + \gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\epsilon_k \\
&= T_{k+1,m} T_{\pi_{k-m+1}} v_{\pi_{k,m}} - T_{k+1,m} v_{\pi_{k+1,m}} + \gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\epsilon_k \\
&= \Gamma_{k+1,m}(T_{\pi_{k-m+1}} v_{\pi_{k,m}} - v_{\pi_{k+1,m}}) + \gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\epsilon_k \\
&= \Gamma_{k+1,m}(v_{\pi'_{k,m}} - v_{\pi_{k+1,m}}) + \gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\epsilon_k,
\end{aligned}$$
from which we deduce that:
$$v_{\pi'_{k,m}} - v_{\pi_{k+1,m}} \le (I - \Gamma_{k+1,m})^{-1}\,\gamma(P_{\pi_{k+1}} - P_{\pi_{k-m+1}})\,\epsilon_k,$$
and the result follows by using the facts that $\|\epsilon_k\|_\infty \le \epsilon$ and $\|(I - \Gamma_{k+1,m})^{-1}\|_\infty = \frac{1}{1-\gamma^m}$.
We are now ready to prove the main result of this section.
Proof of Theorem 4. Using the facts that 1) $T_{k+1,m+1} v_{\pi_{k,m}} = T_{\pi_{k+1}} T_{k,m} v_{\pi_{k,m}} = T_{\pi_{k+1}} v_{\pi_{k,m}}$ and
2) $T_{\pi_{k+1}} v_k \ge T_{\pi_*} v_k$ (since $\pi_{k+1} \in G(v_k)$), we have for $k \ge m$,
\begin{align*}
v_* - v_{\pi_{k+1,m}}
&= T_{\pi_*} v_* - T_{k+1,m} v_{\pi_{k+1,m}} \\
&= T_{\pi_*} v_* - T_{\pi_*} v_{\pi_{k,m}} + T_{\pi_*} v_{\pi_{k,m}} - T_{k+1,m+1} v_{\pi_{k,m}} + T_{k+1,m+1} v_{\pi_{k,m}} - T_{k+1,m} v_{\pi_{k+1,m}} \\
&= \gamma P_{\pi_*} (v_* - v_{\pi_{k,m}}) + T_{\pi_*} v_{\pi_{k,m}} - T_{\pi_{k+1}} v_{\pi_{k,m}} + \Gamma_{k+1,m} \bigl( T_{\pi_{k-m+1}} v_{\pi_{k,m}} - v_{\pi_{k+1,m}} \bigr) \\
&= \gamma P_{\pi_*} (v_* - v_{\pi_{k,m}}) + T_{\pi_*} v_k - T_{\pi_{k+1}} v_k + \gamma (P_{\pi_{k+1}} - P_{\pi_*}) \epsilon_k + \Gamma_{k+1,m} \bigl( T_{\pi_{k-m+1}} v_{\pi_{k,m}} - v_{\pi_{k+1,m}} \bigr) \\
&\le \gamma P_{\pi_*} (v_* - v_{\pi_{k,m}}) + \gamma (P_{\pi_{k+1}} - P_{\pi_*}) \epsilon_k + \Gamma_{k+1,m} \bigl( T_{\pi_{k-m+1}} v_{\pi_{k,m}} - v_{\pi_{k+1,m}} \bigr).
\tag{6}
\end{align*}
Consider the policy $\pi'_{k,m}$ defined in Lemma 2. Observing as in the beginning of the proof of
Lemma 2 that $T_{\pi_{k-m+1}} v_{\pi_{k,m}} = v_{\pi'_{k,m}}$, Equation (6) can be rewritten as follows:
\[
v_* - v_{\pi_{k+1,m}} \le \gamma P_{\pi_*} (v_* - v_{\pi_{k,m}}) + \gamma (P_{\pi_{k+1}} - P_{\pi_*}) \epsilon_k + \Gamma_{k+1,m} \bigl( v_{\pi'_{k,m}} - v_{\pi_{k+1,m}} \bigr).
\]
By using the facts that $v_* \ge v_{\pi_{k,m}}$, $v_* \ge v_{\pi_{k+1,m}}$ and Lemma 2, we get
\[
\|v_* - v_{\pi_{k+1,m}}\|_\infty \le \gamma \|v_* - v_{\pi_{k,m}}\|_\infty + 2\gamma\epsilon + \frac{\gamma^m (2\gamma\epsilon)}{1-\gamma^m}
= \gamma \|v_* - v_{\pi_{k,m}}\|_\infty + \frac{2\gamma\epsilon}{1-\gamma^m}.
\]
Finally, we obtain by induction that for all $k \ge m$,
\[
\|v_* - v_{\pi_{k,m}}\|_\infty \le \gamma^{k-m} \|v_* - v_{\pi_{m,m}}\|_\infty + \frac{2(\gamma - \gamma^{k+1-m})}{(1-\gamma)(1-\gamma^m)}\,\epsilon.
\]
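The induction can be double-checked by unrolling the recursion $a_{k+1} = \gamma a_k + \frac{2\gamma\epsilon}{1-\gamma^m}$ (taken with equality) against the closed form; the values of $\gamma$, $m$, $\epsilon$ and the initial gap $a_m$ below are arbitrary illustrative choices:

gamma, m, eps, a_m = 0.9, 3, 0.1, 2.0
c = 2 * gamma * eps / (1 - gamma ** m)    # per-step additive term
a = a_m
for k in range(m, 60):
    closed = (gamma ** (k - m) * a_m
              + 2 * (gamma - gamma ** (k + 1 - m)) * eps / ((1 - gamma) * (1 - gamma ** m)))
    assert abs(a - closed) < 1e-9         # recursion and closed form agree
    a = gamma * a + c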
6 Discussion, conclusion and future work
We recalled in Theorem 1 the standard performance bound when computing an approximately optimal stationary policy with the standard AVI and API algorithms. After arguing that this bound is
tight, in particular by providing an original argument for AVI, we proposed three new dynamic
programming algorithms (one based on AVI and two on API) that output non-stationary policies for
which the performance bound can be significantly reduced (by a factor $\frac{1}{1-\gamma}$).
From a bibliographical point of view, it is the work of [14] that made us think that non-stationary
policies may lead to better performance bounds. In that work, the author considers problems with
a finite horizon $T$ for which one computes non-stationary policies with performance bounds in
$O(T)$, and infinite-horizon problems for which one computes stationary policies with performance
bounds in $O\!\left(\frac{1}{(1-\gamma)^2}\right)$. Using the informal equivalence of the horizons $T \simeq \frac{1}{1-\gamma}$, one sees that
non-stationary policies look better than stationary policies. In [14], non-stationary policies are only
computed in the context of finite-horizon (and thus non-stationary) problems; the fact that non-stationary policies can also be useful in an infinite-horizon stationary context is, to our knowledge,
completely new.
The best performance improvements are obtained when our algorithms consider periodic non-stationary policies whose period grows to infinity, and thus require an infinite memory, which
may look like a practical limitation. However, in two of the proposed algorithms, a parameter $m$
allows one to make a trade-off between the quality of approximation $\frac{2\gamma\epsilon}{(1-\gamma^m)(1-\gamma)}$ and the amount of
memory $O(m)$ required. In practice, it is easy to see that by choosing $m = \left\lceil \frac{1}{1-\gamma} \right\rceil$, that is, a
memory that scales linearly with the horizon (and thus the difficulty) of the problem, one can get a
performance bound of $\frac{2\gamma\epsilon}{(1-e^{-1})(1-\gamma)} \le \frac{3.164\,\gamma\epsilon}{1-\gamma}$. Indeed, with this choice of $m$, we have
$m \ge \frac{1}{\log 1/\gamma}$ and thus $\frac{2}{1-\gamma^m} \le \frac{2}{1-e^{-1}} \le 3.164$.
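For concreteness (illustrative values of $\gamma$ only), the suggested choice of $m$ and the resulting constant can be tabulated as follows:

import math

for gamma in (0.9, 0.99, 0.999):
    m = math.ceil(1 / (1 - gamma))
    print(gamma, m, 2 / (1 - gamma ** m), 2 / (1 - math.exp(-1)))
# The constant 2/(1 - gamma**m) stays below 2/(1 - e**-1) ~ 3.164 for every gamma.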
2?
We conjecture that our asymptotic bound of 1??
, and the non-asymptotic bounds of Theorems 2
and 4 are tight. The actual proof of this conjecture is left for future work. Important recent works
of the literature involve studying performance bounds when the errors are controlled in Lp norms
instead of max-norm [19, 20, 21, 1, 8, 18, 17] which is natural when supervised learning algorithms
are used to approximate the evaluation steps of AVI and API. Since our proof are based on componentwise bounds like those of the pioneer works in this topic [19, 20], we believe that the extension
of our analysis to Lp norm analysis is straightforward. Last but not least, an important research
direction that we plan to follow consists in revisiting the many implementations of AVI and API for
building stationary policies (see the list in the introduction), turn them into algorithms that look for
non-stationary policies and study them precisely analytically as well as empirically.
References
[1] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
[2] M. Gheshlaghi Azar, V. Gómez, and H.J. Kappen. Dynamic Policy Programming with Function Approximation. In 14th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 15, Fort Lauderdale, FL, USA, 2011.
[3] D.P. Bertsekas. Approximate policy iteration: a survey and some new methods. Journal of Control Theory and Applications, 9:310–335, 2011.
[4] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[5] L. Busoniu, A. Lazaric, M. Ghavamzadeh, R. Munos, R. Babuska, and B. De Schutter. Least-squares methods for Policy Iteration. In M. Wiering and M. van Otterlo, editors, Reinforcement Learning: State of the Art. Springer, 2011.
[6] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research (JMLR), 6, 2005.
[7] E. Even-Dar. Planning in POMDPs using multiplicity automata. In Uncertainty in Artificial Intelligence (UAI), pages 185–192, 2005.
[8] A.M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized policy iteration. Advances in Neural Information Processing Systems, 21:441–448, 2009.
[9] A.M. Farahmand, R. Munos, and Cs. Szepesvári. Error propagation for approximate policy and value iteration (extended version). In NIPS, December 2010.
[10] V. Gabillon, A. Lazaric, M. Ghavamzadeh, and B. Scherrer. Classification-based Policy Iteration with a Critic. In International Conference on Machine Learning (ICML), pages 1049–1056, Seattle, USA, June 2011.
[11] G.J. Gordon. Stable Function Approximation in Dynamic Programming. In ICML, pages 261–268, 1995.
[12] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In International Joint Conference on Artificial Intelligence, volume 17-1, pages 673–682, 2001.
[13] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient Solution Algorithms for Factored MDPs. Journal of Artificial Intelligence Research (JAIR), 19:399–468, 2003.
[14] S.M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[15] S.M. Kakade and J. Langford. Approximately Optimal Approximate Reinforcement Learning. In International Conference on Machine Learning (ICML), pages 267–274, 2002.
[16] M.G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research (JMLR), 4:1107–1149, 2003.
[17] A. Lazaric, M. Ghavamzadeh, and R. Munos. Finite-Sample Analysis of Least-Squares Policy Iteration. To appear in Journal of Machine Learning Research (JMLR), 2011.
[18] O.A. Maillard, R. Munos, A. Lazaric, and M. Ghavamzadeh. Finite Sample Analysis of Bellman Residual Minimization. In Masashi Sugiyama and Qiang Yang, editors, Asian Conference on Machine Learning, JMLR: Workshop and Conference Proceedings, volume 13, pages 309–324, 2010.
[19] R. Munos. Error Bounds for Approximate Policy Iteration. In International Conference on Machine Learning (ICML), pages 560–567, 2003.
[20] R. Munos. Performance Bounds in Lp norm for Approximate Value Iteration. SIAM J. Control and Optimization, 2007.
[21] R. Munos and Cs. Szepesvári. Finite time bounds for sampling based fitted value iteration. Journal of Machine Learning Research (JMLR), 9:815–857, 2008.
[22] M. Petrik and B. Scherrer. Biasing Approximate Dynamic Programming with a Lower Discount Factor. In Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, Canada, 2008.
[23] J. Pineau, G.J. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence, volume 18, pages 1025–1032, 2003.
[24] M. Puterman. Markov Decision Processes. Wiley, New York, 1994.
[25] S. Singh and R. Yee. An Upper Bound on the Loss from Approximate Optimal-Value Functions. Machine Learning, 16-3:227–233, 1994.
[26] C. Thiery and B. Scherrer. Least-Squares λ Policy Iteration: Bias-Variance Trade-off in Control Problems. In International Conference on Machine Learning, Haifa, Israel, 2010.
[27] J.N. Tsitsiklis and B. Van Roy. Feature-Based Methods for Large Scale Dynamic Programming. Machine Learning, 22(1-3):59–94, 1996.