id (string, 9-16 chars) | submitter (string, 3-64 chars, ⌀) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, ⌀) | journal-ref (string, 4-382 chars, ⌀) | doi (string, 9-151 chars, ⌀) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1507.04761 | Bob Sturm | Corey Kereliuk and Bob L. Sturm and Jan Larsen | Deep Learning and Music Adversaries | 13 pages, 6 figures, 3 tables, 6 sections | null | null | null | cs.LG cs.NE cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An adversary is essentially an algorithm intent on making a classification
system perform in some particular way given an input, e.g., increase the
probability of a false negative. Recent work builds adversaries for deep
learning systems applied to image object recognition, which exploits the
parameters of the system to find the minimal perturbation of the input image
such that the network misclassifies it with high confidence. We adapt this
approach to construct and deploy an adversary of deep learning systems applied
to music content analysis. In our case, however, the input to the systems is
magnitude spectral frames, which requires special care in order to produce
valid input audio signals from network-derived perturbations. For two different
train-test partitionings of two benchmark datasets, and two different deep
architectures, we find that this adversary is very effective in defeating the
resulting systems. We find the convolutional networks are more robust, however,
compared with systems based on a majority vote over individually classified
audio frames. Furthermore, we integrate the adversary into the training of new
deep systems, but do not find that this improves their resilience against the
same adversary.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 20:24:18 GMT"
}
] | 2015-07-20T00:00:00 | [
[
"Kereliuk",
"Corey",
""
],
[
"Sturm",
"Bob L.",
""
],
[
"Larsen",
"Jan",
""
]
] | TITLE: Deep Learning and Music Adversaries
ABSTRACT: An adversary is essentially an algorithm intent on making a classification
system perform in some particular way given an input, e.g., increase the
probability of a false negative. Recent work builds adversaries for deep
learning systems applied to image object recognition, which exploits the
parameters of the system to find the minimal perturbation of the input image
such that the network misclassifies it with high confidence. We adapt this
approach to construct and deploy an adversary of deep learning systems applied
to music content analysis. In our case, however, the input to the systems is
magnitude spectral frames, which requires special care in order to produce
valid input audio signals from network-derived perturbations. For two different
train-test partitionings of two benchmark datasets, and two different deep
architectures, we find that this adversary is very effective in defeating the
resulting systems. We find the convolutional networks are more robust, however,
compared with systems based on a majority vote over individually classified
audio frames. Furthermore, we integrate the adversary into the training of new
deep systems, but do not find that this improves their resilience against the
same adversary.
| no_new_dataset | 0.938969 |
1507.04831 | Yongtao Hu | Yongtao Hu, Jimmy Ren, Jingwen Dai, Chang Yuan, Li Xu, and Wenping
Wang | Deep Multimodal Speaker Naming | null | null | 10.1145/2733373.2806293 | null | cs.CV cs.LG cs.MM cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic speaker naming is the problem of localizing as well as identifying
each speaking character in a TV/movie/live show video. This is a challenging
problem, mainly due to its multimodal nature: the face cue alone is
insufficient to achieve good performance. Previous multimodal approaches to
this problem usually process the data of different modalities individually and
merge them using handcrafted heuristics. Such approaches work well for simple
scenes, but fail to achieve high performance for speakers with large appearance
variations. In this paper, we propose a novel convolutional neural network
(CNN)-based learning framework to automatically learn the fusion function of
both face and audio cues. We show that without using face tracking, facial
landmark localization or subtitle/transcript, our system with robust multimodal
feature extraction is able to achieve state-of-the-art speaker naming
performance evaluated on two diverse TV series. The dataset and implementation
of our algorithm are publicly available online.
| [
{
"version": "v1",
"created": "Fri, 17 Jul 2015 04:13:12 GMT"
}
] | 2015-07-20T00:00:00 | [
[
"Hu",
"Yongtao",
""
],
[
"Ren",
"Jimmy",
""
],
[
"Dai",
"Jingwen",
""
],
[
"Yuan",
"Chang",
""
],
[
"Xu",
"Li",
""
],
[
"Wang",
"Wenping",
""
]
] | TITLE: Deep Multimodal Speaker Naming
ABSTRACT: Automatic speaker naming is the problem of localizing as well as identifying
each speaking character in a TV/movie/live show video. This is a challenging
problem, mainly due to its multimodal nature: the face cue alone is
insufficient to achieve good performance. Previous multimodal approaches to
this problem usually process the data of different modalities individually and
merge them using handcrafted heuristics. Such approaches work well for simple
scenes, but fail to achieve high performance for speakers with large appearance
variations. In this paper, we propose a novel convolutional neural network
(CNN)-based learning framework to automatically learn the fusion function of
both face and audio cues. We show that without using face tracking, facial
landmark localization or subtitle/transcript, our system with robust multimodal
feature extraction is able to achieve state-of-the-art speaker naming
performance evaluated on two diverse TV series. The dataset and implementation
of our algorithm are publicly available online.
| no_new_dataset | 0.950088 |
1507.04844 | Xiang Wu | Xiang Wu | Learning Robust Deep Face Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the development of convolutional neural networks, more and more researchers
focus their attention on the advantages of CNNs for the face recognition task. In
this paper, we propose a deep convolutional network for learning a robust face
representation. The deep convolutional net is constructed from 4 convolution
layers, 4 max pooling layers and 2 fully connected layers, and contains about
4M parameters in total. The Max-Feature-Map activation function is used
instead of ReLU because ReLU might lead to a loss of information due to
its sparsity, while Max-Feature-Map yields compact and discriminative
feature vectors. The model is trained on the CASIA-WebFace dataset and evaluated on
the LFW dataset. The result on LFW achieves 97.77% in the unsupervised setting for
a single net.
| [
{
"version": "v1",
"created": "Fri, 17 Jul 2015 06:21:31 GMT"
}
] | 2015-07-20T00:00:00 | [
[
"Wu",
"Xiang",
""
]
] | TITLE: Learning Robust Deep Face Representation
ABSTRACT: With the development of convolutional neural networks, more and more researchers
focus their attention on the advantages of CNNs for the face recognition task. In
this paper, we propose a deep convolutional network for learning a robust face
representation. The deep convolutional net is constructed from 4 convolution
layers, 4 max pooling layers and 2 fully connected layers, and contains about
4M parameters in total. The Max-Feature-Map activation function is used
instead of ReLU because ReLU might lead to a loss of information due to
its sparsity, while Max-Feature-Map yields compact and discriminative
feature vectors. The model is trained on the CASIA-WebFace dataset and evaluated on
the LFW dataset. The result on LFW achieves 97.77% in the unsupervised setting for
a single net.
| no_new_dataset | 0.950227 |
1507.04997 | Ismael Rodr\'iguez-Fdez M.Sc | I. Rodr\'iguez-Fdez, M. Mucientes, A. Bugar\'in | FRULER: Fuzzy Rule Learning through Evolution for Regression | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In regression problems, the use of TSK fuzzy systems is widespread due
to the precision of the obtained models. Moreover, the use of simple linear TSK
models is a good choice in many real problems due to the easy understanding of
the relationship between the output and input variables. In this paper we
present FRULER, a new genetic fuzzy system for automatically learning accurate
and simple linguistic TSK fuzzy rule bases for regression problems. In order to
reduce the complexity of the learned models while keeping a high accuracy, the
algorithm consists of three stages: instance selection, multi-granularity fuzzy
discretization of the input variables, and the evolutionary learning of the
rule base that uses the Elastic Net regularization to obtain the consequents of
the rules. Each stage was validated using 28 real-world datasets and FRULER was
compared with three state-of-the-art genetic fuzzy systems. Experimental results
show that FRULER achieves the most accurate and simple models compared even
with approximative approaches.
| [
{
"version": "v1",
"created": "Fri, 17 Jul 2015 15:26:06 GMT"
}
] | 2015-07-20T00:00:00 | [
[
"Rodríguez-Fdez",
"I.",
""
],
[
"Mucientes",
"M.",
""
],
[
"Bugarín",
"A.",
""
]
] | TITLE: FRULER: Fuzzy Rule Learning through Evolution for Regression
ABSTRACT: In regression problems, the use of TSK fuzzy systems is widespread due
to the precision of the obtained models. Moreover, the use of simple linear TSK
models is a good choice in many real problems due to the easy understanding of
the relationship between the output and input variables. In this paper we
present FRULER, a new genetic fuzzy system for automatically learning accurate
and simple linguistic TSK fuzzy rule bases for regression problems. In order to
reduce the complexity of the learned models while keeping a high accuracy, the
algorithm consists of three stages: instance selection, multi-granularity fuzzy
discretization of the input variables, and the evolutionary learning of the
rule base that uses the Elastic Net regularization to obtain the consequents of
the rules. Each stage was validated using 28 real-world datasets and FRULER was
compared with three state-of-the-art genetic fuzzy systems. Experimental results
show that FRULER achieves the most accurate and simple models compared even
with approximative approaches.
| no_new_dataset | 0.946151 |
1507.03811 | Liliana Lo Presti | Liliana Lo Presti and Marco La Cascia | Ensemble of Hankel Matrices for Face Emotion Recognition | Paper to appear in Proc. of ICIAP 2015. arXiv admin note: text
overlap with arXiv:1506.05001 | null | null | null | cs.CV cs.HC cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a face emotion is considered as the result of the composition
of multiple concurrent signals, each corresponding to the movements of a
specific facial muscle. These concurrent signals are represented by means of a
set of multi-scale appearance features that might be correlated with one or
more concurrent signals. The extraction of these appearance features from a
sequence of face images yields a set of time series. This paper proposes to
use the dynamics regulating each appearance feature time series to distinguish
among different face emotions. For this purpose, an ensemble of Hankel matrices
corresponding to the extracted time series is used for emotion classification
within a framework that combines nearest neighbor and a majority vote scheme.
Experimental results on a publicly available dataset show that the adopted
representation is promising and yields state-of-the-art accuracy in emotion
classification.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 11:26:31 GMT"
}
] | 2015-07-19T00:00:00 | [
[
"Presti",
"Liliana Lo",
""
],
[
"La Cascia",
"Marco",
""
]
] | TITLE: Ensemble of Hankel Matrices for Face Emotion Recognition
ABSTRACT: In this paper, a face emotion is considered as the result of the composition
of multiple concurrent signals, each corresponding to the movements of a
specific facial muscle. These concurrent signals are represented by means of a
set of multi-scale appearance features that might be correlated with one or
more concurrent signals. The extraction of these appearance features from a
sequence of face images yields a set of time series. This paper proposes to
use the dynamics regulating each appearance feature time series to distinguish
among different face emotions. For this purpose, an ensemble of Hankel matrices
corresponding to the extracted time series is used for emotion classification
within a framework that combines nearest neighbor and a majority vote scheme.
Experimental results on a publicly available dataset show that the adopted
representation is promising and yields state-of-the-art accuracy in emotion
classification.
| no_new_dataset | 0.94743 |
1507.04060 | Hayder Albehadili | Hayder Albehadili and Naz Islam | Unsupervised Decision Forest for Data Clustering and Density Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An algorithm to improve the performance of unsupervised decision
forest clustering and density estimation is presented. Specifically, a dual
assignment parameter is introduced as a density estimator by combining Random
Forest and Gaussian Mixture Model. The Random Forest method has been
specifically applied to construct a robust affinity graph that provides
information on the underlying structure of data objects used in clustering. The
proposed algorithm differs from the commonly used spectral clustering methods
where the computed distance metric is used to find similarities between data
points. Experiments were conducted using five datasets. A comparison with six
other state-of-the-art methods shows that our model is superior to existing
approaches. The efficiency of the proposed model lies in capturing the underlying
structure for a given set of data points. The proposed method is also robust,
and can discriminate between the complex features of data points among
different clusters.
| [
{
"version": "v1",
"created": "Wed, 15 Jul 2015 00:50:06 GMT"
}
] | 2015-07-19T00:00:00 | [
[
"Albehadili",
"Hayder",
""
],
[
"Islam",
"Naz",
""
]
] | TITLE: Unsupervised Decision Forest for Data Clustering and Density Estimation
ABSTRACT: An algorithm to improve the performance of unsupervised decision
forest clustering and density estimation is presented. Specifically, a dual
assignment parameter is introduced as a density estimator by combining Random
Forest and Gaussian Mixture Model. The Random Forest method has been
specifically applied to construct a robust affinity graph that provides
information on the underlying structure of data objects used in clustering. The
proposed algorithm differs from the commonly used spectral clustering methods
where the computed distance metric is used to find similarities between data
points. Experiments were conducted using five datasets. A comparison with six
other state-of-the-art methods shows that our model is superior to existing
approaches. The efficiency of the proposed model lies in capturing the underlying
structure for a given set of data points. The proposed method is also robust,
and can discriminate between the complex features of data points among
different clusters.
| no_new_dataset | 0.950641 |
1501.01062 | Ilya Razenshteyn | Alexandr Andoni, Ilya Razenshteyn | Optimal Data-Dependent Hashing for Approximate Near Neighbors | 36 pages, 5 figures, an extended abstract appeared in the proceedings
of the 47th ACM Symposium on Theory of Computing (STOC 2015) | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show an optimal data-dependent hashing scheme for the approximate near
neighbor problem. For an $n$-point data set in a $d$-dimensional space our data
structure achieves query time $O(d n^{\rho+o(1)})$ and space $O(n^{1+\rho+o(1)}
+ dn)$, where $\rho=\tfrac{1}{2c^2-1}$ for the Euclidean space and
approximation $c>1$. For the Hamming space, we obtain an exponent of
$\rho=\tfrac{1}{2c-1}$.
Our result completes the direction set forth in [AINR14] who gave a
proof-of-concept that data-dependent hashing can outperform classical Locality
Sensitive Hashing (LSH). In contrast to [AINR14], the new bound is not only
optimal, but in fact improves over the best (optimal) LSH data structures
[IM98,AI06] for all approximation factors $c>1$.
From the technical perspective, we proceed by decomposing an arbitrary
dataset into several subsets that are, in a certain sense, pseudo-random.
| [
{
"version": "v1",
"created": "Tue, 6 Jan 2015 02:21:59 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Mar 2015 04:12:39 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jul 2015 03:37:53 GMT"
}
] | 2015-07-17T00:00:00 | [
[
"Andoni",
"Alexandr",
""
],
[
"Razenshteyn",
"Ilya",
""
]
] | TITLE: Optimal Data-Dependent Hashing for Approximate Near Neighbors
ABSTRACT: We show an optimal data-dependent hashing scheme for the approximate near
neighbor problem. For an $n$-point data set in a $d$-dimensional space our data
structure achieves query time $O(d n^{\rho+o(1)})$ and space $O(n^{1+\rho+o(1)}
+ dn)$, where $\rho=\tfrac{1}{2c^2-1}$ for the Euclidean space and
approximation $c>1$. For the Hamming space, we obtain an exponent of
$\rho=\tfrac{1}{2c-1}$.
Our result completes the direction set forth in [AINR14] who gave a
proof-of-concept that data-dependent hashing can outperform classical Locality
Sensitive Hashing (LSH). In contrast to [AINR14], the new bound is not only
optimal, but in fact improves over the best (optimal) LSH data structures
[IM98,AI06] for all approximation factors $c>1$.
From the technical perspective, we proceed by decomposing an arbitrary
dataset into several subsets that are, in a certain sense, pseudo-random.
| no_new_dataset | 0.9455 |
1503.03514 | Jose Rivera | Jose Rivera-Rubio, Ioannis Alexiou and Anil A. Bharath | Appearance-based indoor localization: A comparison of patch descriptor
performance | Accepted for publication on Pattern Recognition Letters | null | 10.1016/j.patrec.2015.03.003 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision is one of the most important of the senses, and humans use it
extensively during navigation. We evaluated different types of image and video
frame descriptors that could be used to determine distinctive visual landmarks
for localizing a person based on what is seen by a camera that they carry. To
do this, we created a database containing over 3 km of video-sequences with
ground-truth in the form of distance travelled along different corridors. Using
this database, the accuracy of localization - both in terms of knowing which
route a user is on - and in terms of position along a certain route, can be
evaluated. For each type of descriptor, we also tested different techniques to
encode visual structure and to search between journeys to estimate a user's
position. The techniques include single-frame descriptors, those using
sequences of frames, and both colour and achromatic descriptors. We found that
single-frame indexing worked better within this particular dataset. This might
be because the motion of the person holding the camera makes the video too
dependent on individual steps and motions of one particular journey. Our
results suggest that appearance-based information could be an additional source
of navigational data indoors, augmenting that provided by, say, radio signal
strength indicators (RSSIs). Such visual information could be collected by
crowdsourcing low-resolution video feeds, allowing journeys made by different
users to be associated with each other, and location to be inferred without
requiring explicit mapping. This offers a complementary approach to methods
based on simultaneous localization and mapping (SLAM) algorithms.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 21:43:46 GMT"
}
] | 2015-07-17T00:00:00 | [
[
"Rivera-Rubio",
"Jose",
""
],
[
"Alexiou",
"Ioannis",
""
],
[
"Bharath",
"Anil A.",
""
]
] | TITLE: Appearance-based indoor localization: A comparison of patch descriptor
performance
ABSTRACT: Vision is one of the most important of the senses, and humans use it
extensively during navigation. We evaluated different types of image and video
frame descriptors that could be used to determine distinctive visual landmarks
for localizing a person based on what is seen by a camera that they carry. To
do this, we created a database containing over 3 km of video-sequences with
ground-truth in the form of distance travelled along different corridors. Using
this database, the accuracy of localization - both in terms of knowing which
route a user is on - and in terms of position along a certain route, can be
evaluated. For each type of descriptor, we also tested different techniques to
encode visual structure and to search between journeys to estimate a user's
position. The techniques include single-frame descriptors, those using
sequences of frames, and both colour and achromatic descriptors. We found that
single-frame indexing worked better within this particular dataset. This might
be because the motion of the person holding the camera makes the video too
dependent on individual steps and motions of one particular journey. Our
results suggest that appearance-based information could be an additional source
of navigational data indoors, augmenting that provided by, say, radio signal
strength indicators (RSSIs). Such visual information could be collected by
crowdsourcing low-resolution video feeds, allowing journeys made by different
users to be associated with each other, and location to be inferred without
requiring explicit mapping. This offers a complementary approach to methods
based on simultaneous localization and mapping (SLAM) algorithms.
| new_dataset | 0.857052 |
1507.04457 | Dohyung Park | Dohyung Park, Joe Neeman, Jin Zhang, Sujay Sanghavi, Inderjit S.
Dhillon | Preference Completion: Large-scale Collaborative Ranking from Pairwise
Comparisons | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the collaborative ranking setting: a pool of users
each provides a small number of pairwise preferences between $d$ possible
items; from these we need to predict preferences of the users for items they
have not yet seen. We do so by fitting a rank $r$ score matrix to the pairwise
data, and provide two main contributions: (a) we show that an algorithm based
on convex optimization provides good generalization guarantees once each user
provides as few as $O(r\log^2 d)$ pairwise comparisons -- essentially matching
the sample complexity required in the related matrix completion setting (which
uses actual numerical as opposed to pairwise information), and (b) we develop a
large-scale non-convex implementation, which we call AltSVM, that trains a
factored form of the matrix via alternating minimization (which we show reduces
to alternating SVM problems), and scales and parallelizes very well to large
problem settings. It also outperforms common baselines on many moderately large
popular collaborative filtering datasets in both NDCG and in other measures of
ranking performance.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 06:00:51 GMT"
}
] | 2015-07-17T00:00:00 | [
[
"Park",
"Dohyung",
""
],
[
"Neeman",
"Joe",
""
],
[
"Zhang",
"Jin",
""
],
[
"Sanghavi",
"Sujay",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Preference Completion: Large-scale Collaborative Ranking from Pairwise
Comparisons
ABSTRACT: In this paper we consider the collaborative ranking setting: a pool of users
each provides a small number of pairwise preferences between $d$ possible
items; from these we need to predict preferences of the users for items they
have not yet seen. We do so by fitting a rank $r$ score matrix to the pairwise
data, and provide two main contributions: (a) we show that an algorithm based
on convex optimization provides good generalization guarantees once each user
provides as few as $O(r\log^2 d)$ pairwise comparisons -- essentially matching
the sample complexity required in the related matrix completion setting (which
uses actual numerical as opposed to pairwise information), and (b) we develop a
large-scale non-convex implementation, which we call AltSVM, that trains a
factored form of the matrix via alternating minimization (which we show reduces
to alternating SVM problems), and scales and parallelizes very well to large
problem settings. It also outperforms common baselines on many moderately large
popular collaborative filtering datasets in both NDCG and in other measures of
ranking performance.
| no_new_dataset | 0.948202 |
1507.04502 | Nicholas H. Kirk | Nicholas H. Kirk and Ilya Dianov | Towards Predicting First Daily Departure Times: a Gaussian Modeling
Approach for Load Shift Forecasting | 2015 IEEE International Conference on Systems, Man and Cybernetics
[accepted] | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work provides two statistical Gaussian forecasting methods for
predicting First Daily Departure Times (FDDTs) of everyday use electric
vehicles. This is important in smart grid applications to understand
disconnection times of such mobile storage units, for instance to forecast
storage of non dispatchable loads (e.g. wind and solar power). We provide a
review of the relevant state-of-the-art driving behavior features towards FDDT
prediction, to then propose an approximated Gaussian method which qualitatively
forecasts how many vehicles will depart within a given time frame, by assuming
that departure times follow a normal distribution. This method considers
sampling sessions as Poisson distributions which are superimposed to obtain a
single approximated Gaussian model. Given the Gaussian distribution assumption
of the departure times, we also model the problem with Gaussian Mixture Models
(GMM), in which the pre-set number of clusters represents the desired time
granularity. Evaluation has proven that for the dataset tested, low error and
high confidence ($\approx 95\%$) is possible for 15 and 10 minute intervals,
and that GMM outperforms traditional modeling but is less generalizable across
datasets, as it is a closer fit to the sampling data. In conclusion, we discuss
future possibilities and practical applications of the discussed model.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 09:28:27 GMT"
}
] | 2015-07-17T00:00:00 | [
[
"Kirk",
"Nicholas H.",
""
],
[
"Dianov",
"Ilya",
""
]
] | TITLE: Towards Predicting First Daily Departure Times: a Gaussian Modeling
Approach for Load Shift Forecasting
ABSTRACT: This work provides two statistical Gaussian forecasting methods for
predicting First Daily Departure Times (FDDTs) of everyday use electric
vehicles. This is important in smart grid applications to understand
disconnection times of such mobile storage units, for instance to forecast
storage of non dispatchable loads (e.g. wind and solar power). We provide a
review of the relevant state-of-the-art driving behavior features towards FDDT
prediction, to then propose an approximated Gaussian method which qualitatively
forecasts how many vehicles will depart within a given time frame, by assuming
that departure times follow a normal distribution. This method considers
sampling sessions as Poisson distributions which are superimposed to obtain a
single approximated Gaussian model. Given the Gaussian distribution assumption
of the departure times, we also model the problem with Gaussian Mixture Models
(GMM), in which the pre-set number of clusters represents the desired time
granularity. Evaluation has proven that for the dataset tested, low error and
high confidence ($\approx 95\%$) is possible for 15 and 10 minute intervals,
and that GMM outperforms traditional modeling but is less generalizable across
datasets, as it is a closer fit to the sampling data. In conclusion, we discuss
future possibilities and practical applications of the discussed model.
| no_new_dataset | 0.944791 |
1507.04646 | Yang Liu | Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, Houfeng Wang | A Dependency-Based Neural Network for Relation Classification | This preprint is the full version of a short paper accepted in the
annual meeting of the Association for Computational Linguistics (ACL) 2015
(Beijing, China) | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous research on relation classification has verified the effectiveness
of using dependency shortest paths or subtrees. In this paper, we further
explore how to make full use of the combination of these dependency
information. We first propose a new structure, termed augmented dependency path
(ADP), which is composed of the shortest dependency path between two entities
and the subtrees attached to the shortest path. To exploit the semantic
representation behind the ADP structure, we develop dependency-based neural
networks (DepNN): a recursive neural network designed to model the subtrees,
and a convolutional neural network to capture the most important features on
the shortest path. Experiments on the SemEval-2010 dataset show that our
proposed method achieves state-of-the-art results.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 16:43:55 GMT"
}
] | 2015-07-17T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Wei",
"Furu",
""
],
[
"Li",
"Sujian",
""
],
[
"Ji",
"Heng",
""
],
[
"Zhou",
"Ming",
""
],
[
"Wang",
"Houfeng",
""
]
] | TITLE: A Dependency-Based Neural Network for Relation Classification
ABSTRACT: Previous research on relation classification has verified the effectiveness
of using dependency shortest paths or subtrees. In this paper, we further
explore how to make full use of the combination of this dependency
information. We first propose a new structure, termed augmented dependency path
(ADP), which is composed of the shortest dependency path between two entities
and the subtrees attached to the shortest path. To exploit the semantic
representation behind the ADP structure, we develop dependency-based neural
networks (DepNN): a recursive neural network designed to model the subtrees,
and a convolutional neural network to capture the most important features on
the shortest path. Experiments on the SemEval-2010 dataset show that our
proposed method achieves state-of-the-art results.
| no_new_dataset | 0.953923 |
1406.5774 | Hossein Azizpour | Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto
Maki, Stefan Carlsson | Factors of Transferability for a Generic ConvNet Representation | Extended version of the workshop paper with more experiments and
updated text and title. Original CVPR15 DeepVision workshop paper title:
"From Generic to Specific Deep Representations for Visual Recognition" | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidence is mounting that Convolutional Networks (ConvNets) are the most
effective representation learning method for visual recognition tasks. In the
common scenario, a ConvNet is trained on a large labeled dataset (source) and
the feed-forward units activation of the trained network, at a certain layer of
the network, is used as a generic representation of an input image for a task
with relatively smaller training set (target). Recent studies have shown this
form of representation transfer to be suitable for a wide range of target
visual recognition tasks. This paper introduces and investigates several
factors affecting the transferability of such representations. It includes
parameters for training of the source ConvNet such as its architecture,
distribution of the training data, etc. and also the parameters of feature
extraction such as layer of the trained ConvNet, dimensionality reduction, etc.
Then, by optimizing these factors, we show that significant improvements can be
achieved on various (17) visual recognition tasks. We further show that these
visual recognition tasks can be categorically ordered based on their distance
from the source task such that a correlation between the performance of tasks
and their distance from the source task w.r.t. the proposed factors is
observed.
| [
{
"version": "v1",
"created": "Sun, 22 Jun 2014 21:57:46 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Dec 2014 15:37:50 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jul 2015 10:02:19 GMT"
}
] | 2015-07-16T00:00:00 | [
[
"Azizpour",
"Hossein",
""
],
[
"Razavian",
"Ali Sharif",
""
],
[
"Sullivan",
"Josephine",
""
],
[
"Maki",
"Atsuto",
""
],
[
"Carlsson",
"Stefan",
""
]
] | TITLE: Factors of Transferability for a Generic ConvNet Representation
ABSTRACT: Evidence is mounting that Convolutional Networks (ConvNets) are the most
effective representation learning method for visual recognition tasks. In the
common scenario, a ConvNet is trained on a large labeled dataset (source) and
the feed-forward units activation of the trained network, at a certain layer of
the network, is used as a generic representation of an input image for a task
with relatively smaller training set (target). Recent studies have shown this
form of representation transfer to be suitable for a wide range of target
visual recognition tasks. This paper introduces and investigates several
factors affecting the transferability of such representations. It includes
parameters for training of the source ConvNet such as its architecture,
distribution of the training data, etc. and also the parameters of feature
extraction such as layer of the trained ConvNet, dimensionality reduction, etc.
Then, by optimizing these factors, we show that significant improvements can be
achieved on various (17) visual recognition tasks. We further show that these
visual recognition tasks can be categorically ordered based on their distance
from the source task such that a correlation between the performance of tasks
and their distance from the source task w.r.t. the proposed factors is
observed.
| no_new_dataset | 0.946597 |
1507.04019 | Pavan Kumar D S | D. S. Pavan Kumar | Feature Normalisation for Robust Speech Recognition | null | null | null | null | cs.CL cs.SD | http://creativecommons.org/licenses/by-sa/4.0/ | Speech recognition system performance degrades in noisy environments. If the
acoustic models are built using features of clean utterances, the features of a
noisy test utterance would be acoustically mismatched with the trained model.
This gives poor likelihoods and poor recognition accuracy. Model adaptation and
feature normalisation are two broad areas that address this problem. While the
former often gives better performance, the latter involves estimation of lesser
number of parameters, making the system feasible for practical implementations.
This research focuses on the efficacies of various subspace, statistical and
stereo based feature normalisation techniques. A subspace projection based
method has been investigated as a standalone and adjunct technique involving
reconstruction of noisy speech features from a precomputed set of clean speech
building-blocks. The building blocks are learned using non-negative matrix
factorisation (NMF) on log-Mel filter bank coefficients, which form a basis for
the clean speech subspace. The work provides a detailed study on how the method
can be incorporated into the extraction process of Mel-frequency cepstral
coefficients. Experimental results show that the new features are robust to
noise, and achieve better results when combined with the existing techniques.
The work also proposes a modification to the training process of SPLICE
algorithm for noise robust speech recognition. It is based on feature
correlations, and enables this stereo-based algorithm to improve the
performance in all noise conditions, especially in unseen cases. Further, the
modified framework is extended to work for non-stereo datasets where clean and
noisy training utterances, but not stereo counterparts, are required. An
MLLR-based computationally efficient run-time noise adaptation method in SPLICE
framework has been proposed.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 20:34:16 GMT"
}
] | 2015-07-16T00:00:00 | [
[
"Kumar",
"D. S. Pavan",
""
]
] | TITLE: Feature Normalisation for Robust Speech Recognition
ABSTRACT: Speech recognition system performance degrades in noisy environments. If the
acoustic models are built using features of clean utterances, the features of a
noisy test utterance would be acoustically mismatched with the trained model.
This gives poor likelihoods and poor recognition accuracy. Model adaptation and
feature normalisation are two broad areas that address this problem. While the
former often gives better performance, the latter involves the estimation of a smaller
number of parameters, making the system feasible for practical implementations.
This research focuses on the efficacies of various subspace, statistical and
stereo based feature normalisation techniques. A subspace projection based
method has been investigated as a standalone and adjunct technique involving
reconstruction of noisy speech features from a precomputed set of clean speech
building-blocks. The building blocks are learned using non-negative matrix
factorisation (NMF) on log-Mel filter bank coefficients, which form a basis for
the clean speech subspace. The work provides a detailed study on how the method
can be incorporated into the extraction process of Mel-frequency cepstral
coefficients. Experimental results show that the new features are robust to
noise, and achieve better results when combined with the existing techniques.
The work also proposes a modification to the training process of SPLICE
algorithm for noise robust speech recognition. It is based on feature
correlations, and enables this stereo-based algorithm to improve the
performance in all noise conditions, especially in unseen cases. Further, the
modified framework is extended to work for non-stereo datasets where clean and
noisy training utterances, but not stereo counterparts, are required. An
MLLR-based computationally efficient run-time noise adaptation method in SPLICE
framework has been proposed.
| no_new_dataset | 0.945701 |
1507.04180 | S\"oren Auer | Ali Ismayilov and Dimitris Kontokostas and S\"oren Auer and Jens
Lehmann and Sebastian Hellmann | Wikidata through the Eyes of DBpedia | 8 pages | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | DBpedia is one of the first and most prominent nodes of the Linked Open Data
cloud. It provides structured data for more than 100 Wikipedia language
editions as well as Wikimedia Commons, has a mature ontology and a stable and
thorough Linked Data publishing lifecycle. Wikidata, on the other hand, has
recently emerged as a user curated source for structured information which is
included in Wikipedia. In this paper, we present how Wikidata is incorporated
in the DBpedia ecosystem. Enriching DBpedia with structured information from
Wikidata provides added value for a number of usage scenarios. We outline those
scenarios and describe the structure and conversion process of the
DBpediaWikidata dataset.
| [
{
"version": "v1",
"created": "Wed, 15 Jul 2015 11:59:07 GMT"
}
] | 2015-07-16T00:00:00 | [
[
"Ismayilov",
"Ali",
""
],
[
"Kontokostas",
"Dimitris",
""
],
[
"Auer",
"Sören",
""
],
[
"Lehmann",
"Jens",
""
],
[
"Hellmann",
"Sebastian",
""
]
] | TITLE: Wikidata through the Eyes of DBpedia
ABSTRACT: DBpedia is one of the first and most prominent nodes of the Linked Open Data
cloud. It provides structured data for more than 100 Wikipedia language
editions as well as Wikimedia Commons, has a mature ontology and a stable and
thorough Linked Data publishing lifecycle. Wikidata, on the other hand, has
recently emerged as a user curated source for structured information which is
included in Wikipedia. In this paper, we present how Wikidata is incorporated
in the DBpedia ecosystem. Enriching DBpedia with structured information from
Wikidata provides added value for a number of usage scenarios. We outline those
scenarios and describe the structure and conversion process of the
DBpediaWikidata dataset.
| no_new_dataset | 0.945951 |
1507.04299 | Ilya Razenshteyn | Alexandr Andoni, Ilya Razenshteyn | Tight Lower Bounds for Data-Dependent Locality-Sensitive Hashing | 16 pages, no figures | null | null | null | cs.DS cs.CC cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove a tight lower bound for the exponent $\rho$ for data-dependent
Locality-Sensitive Hashing schemes, recently used to design efficient solutions
for the $c$-approximate nearest neighbor search. In particular, our lower bound
matches the bound of $\rho\le \frac{1}{2c-1}+o(1)$ for the $\ell_1$ space,
obtained via the recent algorithm from [Andoni-Razenshteyn, STOC'15].
In recent years it emerged that data-dependent hashing is strictly superior
to the classical Locality-Sensitive Hashing, when the hash function is
data-independent. In the latter setting, the best exponent has been already
known: for the $\ell_1$ space, the tight bound is $\rho=1/c$, with the upper
bound from [Indyk-Motwani, STOC'98] and the matching lower bound from
[O'Donnell-Wu-Zhou, ITCS'11].
We prove that, even if the hashing is data-dependent, it must hold that
$\rho\ge \frac{1}{2c-1}-o(1)$. To prove the result, we need to formalize the
exact notion of data-dependent hashing that also captures the complexity of the
hash functions (in addition to their collision properties). Without restricting
such complexity, we would allow for obviously infeasible solutions such as the
Voronoi diagram of a dataset. To preclude such solutions, we require our hash
functions to be succinct. This condition is satisfied by all the known
algorithmic results.
| [
{
"version": "v1",
"created": "Wed, 15 Jul 2015 17:02:20 GMT"
}
] | 2015-07-16T00:00:00 | [
[
"Andoni",
"Alexandr",
""
],
[
"Razenshteyn",
"Ilya",
""
]
] | TITLE: Tight Lower Bounds for Data-Dependent Locality-Sensitive Hashing
ABSTRACT: We prove a tight lower bound for the exponent $\rho$ for data-dependent
Locality-Sensitive Hashing schemes, recently used to design efficient solutions
for the $c$-approximate nearest neighbor search. In particular, our lower bound
matches the bound of $\rho\le \frac{1}{2c-1}+o(1)$ for the $\ell_1$ space,
obtained via the recent algorithm from [Andoni-Razenshteyn, STOC'15].
In recent years it emerged that data-dependent hashing is strictly superior
to the classical Locality-Sensitive Hashing, when the hash function is
data-independent. In the latter setting, the best exponent has been already
known: for the $\ell_1$ space, the tight bound is $\rho=1/c$, with the upper
bound from [Indyk-Motwani, STOC'98] and the matching lower bound from
[O'Donnell-Wu-Zhou, ITCS'11].
We prove that, even if the hashing is data-dependent, it must hold that
$\rho\ge \frac{1}{2c-1}-o(1)$. To prove the result, we need to formalize the
exact notion of data-dependent hashing that also captures the complexity of the
hash functions (in addition to their collision properties). Without restricting
such complexity, we would allow for obviously infeasible solutions such as the
Voronoi diagram of a dataset. To preclude such solutions, we require our hash
functions to be succinct. This condition is satisfied by all the known
algorithmic results.
| no_new_dataset | 0.945951 |
1408.4002 | Benjamin Eltzner | Benjamin Eltzner, Carina Wollnik, Carsten Gottschlich, Stephan
Huckemann, Florian Rehfeldt | The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber
Structures | 32 pages, 21 figures | PLoS ONE 10(5): e0126346, May 2015 | 10.1371/journal.pone.0126346 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reliable extraction of filament data from microscopic images is of high
interest in the analysis of acto-myosin structures as early morphological
markers in mechanically guided differentiation of human mesenchymal stem cells
and the understanding of the underlying fiber arrangement processes. In this
paper, we propose the filament sensor (FS), a fast and robust processing
sequence which detects and records location, orientation, length and width for
each single filament of an image, and thus allows for the above described
analysis. The extraction of these features has previously not been possible
with existing methods. We evaluate the performance of the proposed FS in terms
of accuracy and speed in comparison to three existing methods with respect to
their limited output. Further, we provide a benchmark dataset of real cell
images along with filaments manually marked by a human expert as well as
simulated benchmark images. The FS clearly outperforms existing methods in
terms of computational runtime and filament extraction accuracy. The
implementation of the FS and the benchmark database are available as open
source.
| [
{
"version": "v1",
"created": "Mon, 18 Aug 2014 13:06:03 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jul 2015 13:19:42 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jul 2015 08:40:32 GMT"
}
] | 2015-07-15T00:00:00 | [
[
"Eltzner",
"Benjamin",
""
],
[
"Wollnik",
"Carina",
""
],
[
"Gottschlich",
"Carsten",
""
],
[
"Huckemann",
"Stephan",
""
],
[
"Rehfeldt",
"Florian",
""
]
] | TITLE: The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber
Structures
ABSTRACT: A reliable extraction of filament data from microscopic images is of high
interest in the analysis of acto-myosin structures as early morphological
markers in mechanically guided differentiation of human mesenchymal stem cells
and the understanding of the underlying fiber arrangement processes. In this
paper, we propose the filament sensor (FS), a fast and robust processing
sequence which detects and records location, orientation, length and width for
each single filament of an image, and thus allows for the above described
analysis. The extraction of these features has previously not been possible
with existing methods. We evaluate the performance of the proposed FS in terms
of accuracy and speed in comparison to three existing methods with respect to
their limited output. Further, we provide a benchmark dataset of real cell
images along with filaments manually marked by a human expert as well as
simulated benchmark images. The FS clearly outperforms existing methods in
terms of computational runtime and filament extraction accuracy. The
implementation of the FS and the benchmark database are available as open
source.
| new_dataset | 0.963472 |
1507.03867 | Rong Ge | Rong Ge and James Zou | Rich Component Analysis | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many settings, we have multiple data sets (also called views) that capture
different and overlapping aspects of the same phenomenon. We are often
interested in finding patterns that are unique to one or to a subset of the
views. For example, we might have one set of molecular observations and one set
of physiological observations on the same group of individuals, and we want to
quantify molecular patterns that are uncorrelated with physiology. Despite
being a common problem, this is highly challenging when the correlations come
from complex distributions. In this paper, we develop the general framework of
Rich Component Analysis (RCA) to model settings where the observations from
different views are driven by different sets of latent components, and each
component can be a complex, high-dimensional distribution. We introduce
algorithms based on cumulant extraction that provably learn each of the
components without having to model the other components. We show how to
integrate RCA with stochastic gradient descent into a meta-algorithm for
learning general models, and demonstrate substantial improvement in accuracy on
several synthetic and real datasets in both supervised and unsupervised tasks.
Our method makes it possible to learn latent variable models when we don't have
samples from the true model but only samples after complex perturbations.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 14:38:23 GMT"
}
] | 2015-07-15T00:00:00 | [
[
"Ge",
"Rong",
""
],
[
"Zou",
"James",
""
]
] | TITLE: Rich Component Analysis
ABSTRACT: In many settings, we have multiple data sets (also called views) that capture
different and overlapping aspects of the same phenomenon. We are often
interested in finding patterns that are unique to one or to a subset of the
views. For example, we might have one set of molecular observations and one set
of physiological observations on the same group of individuals, and we want to
quantify molecular patterns that are uncorrelated with physiology. Despite
being a common problem, this is highly challenging when the correlations come
from complex distributions. In this paper, we develop the general framework of
Rich Component Analysis (RCA) to model settings where the observations from
different views are driven by different sets of latent components, and each
component can be a complex, high-dimensional distribution. We introduce
algorithms based on cumulant extraction that provably learn each of the
components without having to model the other components. We show how to
integrate RCA with stochastic gradient descent into a meta-algorithm for
learning general models, and demonstrate substantial improvement in accuracy on
several synthetic and real datasets in both supervised and unsupervised tasks.
Our method makes it possible to learn latent variable models when we don't have
samples from the true model but only samples after complex perturbations.
| no_new_dataset | 0.9463 |
1507.03928 | Fernando Diaz | Fernando Diaz | Pseudo-Query Reformulation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic query reformulation refers to rewriting a user's original query in
order to improve the ranking of retrieval results compared to the original
query. We present a general framework for automatic query reformulation based
on discrete optimization. Our approach, referred to as pseudo-query
reformulation, treats automatic query reformulation as a search problem over
the graph of unweighted queries linked by minimal transformations (e.g. term
additions, deletions). This framework allows us to test existing performance
prediction methods as heuristics for the graph search process. We demonstrate
the effectiveness of the approach on several publicly available datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 17:06:51 GMT"
}
] | 2015-07-15T00:00:00 | [
[
"Diaz",
"Fernando",
""
]
] | TITLE: Pseudo-Query Reformulation
ABSTRACT: Automatic query reformulation refers to rewriting a user's original query in
order to improve the ranking of retrieval results compared to the original
query. We present a general framework for automatic query reformulation based
on discrete optimization. Our approach, referred to as pseudo-query
reformulation, treats automatic query reformulation as a search problem over
the graph of unweighted queries linked by minimal transformations (e.g. term
additions, deletions). This framework allows us to test existing performance
prediction methods as heuristics for the graph search process. We demonstrate
the effectiveness of the approach on several publicly available datasets.
| no_new_dataset | 0.946892 |
1411.6660 | Zhenzhong Lan | Zhenzhong Lan, Ming Lin, Xuanchong Li, Alexander G. Hauptmann, Bhiksha
Raj | Beyond Gaussian Pyramid: Multi-skip Feature Stacking for Action
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most state-of-the-art action feature extractors involve differential
operators, which act as highpass filters and tend to attenuate low frequency
action information. This attenuation introduces bias to the resulting features
and generates ill-conditioned feature matrices. The Gaussian Pyramid has been
used as a feature enhancing technique that encodes scale-invariant
characteristics into the feature space in an attempt to deal with this
attenuation. However, at the core of the Gaussian Pyramid is a convolutional
smoothing operation, which makes it incapable of generating new features at
coarse scales. In order to address this problem, we propose a novel feature
enhancing technique called Multi-skIp Feature Stacking (MIFS), which stacks
features extracted using a family of differential filters parameterized with
multiple time skips and encodes shift-invariance into the frequency space. MIFS
compensates for information lost from using differential operators by
recapturing information at coarse scales. This recaptured information allows us
to match actions at different speeds and ranges of motion. We prove that MIFS
enhances the learnability of differential-based features exponentially. The
resulting feature matrices from MIFS have much smaller condition numbers and
variances than those from conventional methods. Experimental results show
significantly improved performance on challenging action recognition and event
detection tasks. Specifically, our method exceeds the state of the art on the
Hollywood2, UCF101 and UCF50 datasets and is comparable to the state of the art on the
HMDB51 and Olympics Sports datasets. MIFS can also be used as a speedup
strategy for feature extraction with minimal or no accuracy cost.
| [
{
"version": "v1",
"created": "Mon, 24 Nov 2014 21:40:09 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Mar 2015 19:22:51 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Apr 2015 19:25:22 GMT"
},
{
"version": "v4",
"created": "Sun, 19 Apr 2015 19:13:42 GMT"
}
] | 2015-07-14T00:00:00 | [
[
"Lan",
"Zhenzhong",
""
],
[
"Lin",
"Ming",
""
],
[
"Li",
"Xuanchong",
""
],
[
"Hauptmann",
"Alexander G.",
""
],
[
"Raj",
"Bhiksha",
""
]
] | TITLE: Beyond Gaussian Pyramid: Multi-skip Feature Stacking for Action
Recognition
ABSTRACT: Most state-of-the-art action feature extractors involve differential
operators, which act as highpass filters and tend to attenuate low frequency
action information. This attenuation introduces bias to the resulting features
and generates ill-conditioned feature matrices. The Gaussian Pyramid has been
used as a feature enhancing technique that encodes scale-invariant
characteristics into the feature space in an attempt to deal with this
attenuation. However, at the core of the Gaussian Pyramid is a convolutional
smoothing operation, which makes it incapable of generating new features at
coarse scales. In order to address this problem, we propose a novel feature
enhancing technique called Multi-skIp Feature Stacking (MIFS), which stacks
features extracted using a family of differential filters parameterized with
multiple time skips and encodes shift-invariance into the frequency space. MIFS
compensates for information lost from using differential operators by
recapturing information at coarse scales. This recaptured information allows us
to match actions at different speeds and ranges of motion. We prove that MIFS
enhances the learnability of differential-based features exponentially. The
resulting feature matrices from MIFS have much smaller condition numbers and
variances than those from conventional methods. Experimental results show
significantly improved performance on challenging action recognition and event
detection tasks. Specifically, our method exceeds the state of the art on the
Hollywood2, UCF101 and UCF50 datasets and is comparable to the state of the art on the
HMDB51 and Olympics Sports datasets. MIFS can also be used as a speedup
strategy for feature extraction with minimal or no accuracy cost.
| no_new_dataset | 0.950549 |
1507.03183 | Ankit Sharma | Ankit Sharma, Xiaodong Feng, Kartik Singhal, Rui Kuang and Jaideep
Srivastava | Predicting Small Group Accretion in Social Networks: A topology based
incremental approach | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small Group evolution has been of central importance in social sciences and
also in the industry for understanding the dynamics of team formation. While most
research work studying groups deals at a macro level with the evolution of
arbitrary-size communities, in this paper we restrict ourselves to studying the
evolution of small groups (size $\leq20$), which is governed by contrasting
sociological phenomena. Given a previous history of group collaboration
between a set of actors, we address the problem of predicting likely future
group collaborations. Unfortunately, predicting groups requires choosing from
$n \choose r$ possibilities (where $r$ is group size and $n$ is total number of
actors), which becomes computationally intractable as group size increases.
However, our statistical analysis of a real world dataset has shown that two
processes: an external actor joining an existing group (incremental accretion
(IA)) or collaborating with a subset of actors of an exiting group (subgroup
accretion (SA)), are largely responsible for future group formation. This helps
to drastically reduce the $n\choose r$ possibilities. We therefore, model the
attachment of a group for different actors outside this group. In this paper,
we have built three topology based prediction models to study these phenomena.
The performance of these models is evaluated using extensive experiments over
DBLP dataset. Our prediction results show that the proposed models are
significantly useful for future group predictions both for IA and SA.
| [
{
"version": "v1",
"created": "Sun, 12 Jul 2015 04:01:17 GMT"
}
] | 2015-07-14T00:00:00 | [
[
"Sharma",
"Ankit",
""
],
[
"Feng",
"Xiaodong",
""
],
[
"Singhal",
"Kartik",
""
],
[
"Kuang",
"Rui",
""
],
[
"Srivastava",
"Jaideep",
""
]
] | TITLE: Predicting Small Group Accretion in Social Networks: A topology based
incremental approach
ABSTRACT: Small Group evolution has been of central importance in social sciences and
also in the industry for understanding dynamics of team formation. While most
of research works studying groups deal at a macro level with evolution of
arbitrary size communities, in this paper we restrict ourselves to studying
evolution of small groups (size $\leq20$), which are governed by contrasting
sociological phenomena. Given a previous history of group collaboration
between a set of actors, we address the problem of predicting likely future
group collaborations. Unfortunately, predicting groups requires choosing from
$n \choose r$ possibilities (where $r$ is group size and $n$ is total number of
actors), which becomes computationally intractable as group size increases.
However, our statistical analysis of a real world dataset has shown that two
processes: an external actor joining an existing group (incremental accretion
(IA)) or collaborating with a subset of actors of an existing group (subgroup
accretion (SA)), are largely responsible for future group formation. This helps
to drastically reduce the $n\choose r$ possibilities. We therefore, model the
attachment of a group for different actors outside this group. In this paper,
we have built three topology based prediction models to study these phenomena.
The performance of these models is evaluated using extensive experiments over
DBLP dataset. Our prediction results show that the proposed models are
significantly useful for future group predictions both for IA and SA.
| no_new_dataset | 0.94868 |
1507.03196 | Zhangyang Wang | Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem
Agarwala, Jonathan Brandt, Thomas S. Huang | DeepFont: Identify Your Font from An Image | To Appear in ACM Multimedia as a full paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As font is one of the core design concepts, automatic font identification and
similar font suggestion from an image or photo has been on the wish list of
many designers. We study the Visual Font Recognition (VFR) problem, and advance
the state-of-the-art remarkably by developing the DeepFont system. First of
all, we build up the first available large-scale VFR dataset, named AdobeVFR,
consisting of both labeled synthetic data and partially labeled real-world
data. Next, to combat the domain mismatch between available training and
testing data, we introduce a Convolutional Neural Network (CNN) decomposition
approach, using a domain adaptation technique based on a Stacked Convolutional
Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text
images combined with synthetic data preprocessed in a specific way. Moreover,
we study a novel learning-based model compression approach, in order to reduce
the DeepFont model size without sacrificing its performance. The DeepFont
system achieves an accuracy of higher than 80% (top-5) on our collected
dataset, and also produces a good font similarity measure for font selection
and suggestion. We also achieve around 6 times compression of the model without
any visible loss of recognition accuracy.
| [
{
"version": "v1",
"created": "Sun, 12 Jul 2015 07:25:14 GMT"
}
] | 2015-07-14T00:00:00 | [
[
"Wang",
"Zhangyang",
""
],
[
"Yang",
"Jianchao",
""
],
[
"Jin",
"Hailin",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Agarwala",
"Aseem",
""
],
[
"Brandt",
"Jonathan",
""
],
[
"Huang",
"Thomas S.",
""
]
] | TITLE: DeepFont: Identify Your Font from An Image
ABSTRACT: As font is one of the core design concepts, automatic font identification and
similar font suggestion from an image or photo has been on the wish list of
many designers. We study the Visual Font Recognition (VFR) problem, and advance
the state-of-the-art remarkably by developing the DeepFont system. First of
all, we build up the first available large-scale VFR dataset, named AdobeVFR,
consisting of both labeled synthetic data and partially labeled real-world
data. Next, to combat the domain mismatch between available training and
testing data, we introduce a Convolutional Neural Network (CNN) decomposition
approach, using a domain adaptation technique based on a Stacked Convolutional
Auto-Encoder (SCAE) that exploits a large corpus of unlabeled real-world text
images combined with synthetic data preprocessed in a specific way. Moreover,
we study a novel learning-based model compression approach, in order to reduce
the DeepFont model size without sacrificing its performance. The DeepFont
system achieves an accuracy of higher than 80% (top-5) on our collected
dataset, and also produces a good font similarity measure for font selection
and suggestion. We also achieve around 6 times compression of the model without
any visible loss of recognition accuracy.
| new_dataset | 0.960657 |
1505.02668 | Serena Falocco | S. Falocco, M. Paolillo, G. Covone, D. De Cicco, G. Longo, A. Grado,
L. Limatola, M. Vaccari, M.T. Botticella, G. Pignata, E. Cappellaro, D.
Trevese, F. Vagnetti, M. Salvato, M. Radovich, L. Hsu, M. Capaccioli, N.
Napolitano, W. N. Brandt, A. Baruffolo, E. Cascone | SUDARE-VOICE variability-selection of Active Galaxies in the Chandra
Deep Field South and the SERVS/SWIRE region | Published in A & A, 15 pages, 6 figures | A&A 579, A115 (2015) | 10.1051/0004-6361/201425111 | null | astro-ph.GA astro-ph.HE physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most peculiar characteristics of Active Galactic Nuclei (AGN) is
their variability over all wavelengths. This property has been used in the past
to select AGN samples and is foreseen to be one of the detection techniques
applied in future multi-epoch surveys, complementing photometric and
spectroscopic methods.
In this paper, we aim to construct and characterise an AGN sample using a
multi-epoch dataset in the r band from the SUDARE-VOICE survey.
Our work makes use of the VST monitoring program of an area surrounding the
Chandra Deep Field South to select variable sources. We use data spanning a six
month period over an area of 2 square degrees, to identify AGN based on their
photometric variability.
The selected sample includes 175 AGN candidates with magnitude r < 23 mag. We
distinguish different classes of variable sources through their lightcurves, as
well as X-ray, spectroscopic, SED, optical and IR information overlapping with
our survey.
We find that 12% of the sample (21/175) is represented by SN. Of the
remaining sources, 4% (6/154) are stars, while 66% (102/154) are likely AGNs
based on the available diagnostics. We estimate an upper limit to the
contamination of the variability selected AGN sample of about 34%, but we point
out that restricting the analysis to the sources with available
multi-wavelength ancillary information, the purity of our sample is close to
80% (102 AGN out of 128 non-SN sources with multi-wavelength diagnostics). Our
work thus confirms the efficiency of the variability selection method in
agreement with our previous work on the COSMOS field; in addition we show that
the variability approach is roughly consistent with the infrared selection.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 15:27:20 GMT"
},
{
"version": "v2",
"created": "Wed, 20 May 2015 09:10:21 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jul 2015 08:56:45 GMT"
}
] | 2015-07-13T00:00:00 | [
[
"Falocco",
"S.",
""
],
[
"Paolillo",
"M.",
""
],
[
"Covone",
"G.",
""
],
[
"De Cicco",
"D.",
""
],
[
"Longo",
"G.",
""
],
[
"Grado",
"A.",
""
],
[
"Limatola",
"L.",
""
],
[
"Vaccari",
"M.",
""
],
[
"Botticella",
"M. T.",
""
],
[
"Pignata",
"G.",
""
],
[
"Cappellaro",
"E.",
""
],
[
"Trevese",
"D.",
""
],
[
"Vagnetti",
"F.",
""
],
[
"Salvato",
"M.",
""
],
[
"Radovich",
"M.",
""
],
[
"Hsu",
"L.",
""
],
[
"Capaccioli",
"M.",
""
],
[
"Napolitano",
"N.",
""
],
[
"Brandt",
"W. N.",
""
],
[
"Baruffolo",
"A.",
""
],
[
"Cascone",
"E.",
""
]
] | TITLE: SUDARE-VOICE variability-selection of Active Galaxies in the Chandra
Deep Field South and the SERVS/SWIRE region
ABSTRACT: One of the most peculiar characteristics of Active Galactic Nuclei (AGN) is
their variability over all wavelengths. This property has been used in the past
to select AGN samples and is foreseen to be one of the detection techniques
applied in future multi-epoch surveys, complementing photometric and
spectroscopic methods.
In this paper, we aim to construct and characterise an AGN sample using a
multi-epoch dataset in the r band from the SUDARE-VOICE survey.
Our work makes use of the VST monitoring program of an area surrounding the
Chandra Deep Field South to select variable sources. We use data spanning a six
month period over an area of 2 square degrees, to identify AGN based on their
photometric variability.
The selected sample includes 175 AGN candidates with magnitude r < 23 mag. We
distinguish different classes of variable sources through their lightcurves, as
well as X-ray, spectroscopic, SED, optical and IR information overlapping with
our survey.
We find that 12% of the sample (21/175) is represented by SN. Of the
remaining sources, 4% (6/154) are stars, while 66% (102/154) are likely AGNs
based on the available diagnostics. We estimate an upper limit to the
contamination of the variability selected AGN sample of about 34%, but we point
out that restricting the analysis to the sources with available
multi-wavelength ancillary information, the purity of our sample is close to
80% (102 AGN out of 128 non-SN sources with multi-wavelength diagnostics). Our
work thus confirms the efficiency of the variability selection method in
agreement with our previous work on the COSMOS field; in addition we show that
the variability approach is roughly consistent with the infrared selection.
| no_new_dataset | 0.934873 |
1507.02779 | Hai Pham | Hai X. Pham, Chongyu Chen, Luc N. Dao, Vladimir Pavlovic, Jianfei Cai
and Tat-jen Cham | Robust Performance-driven 3D Face Tracking in Long Range Depth Scenes | 10 pages, 8 figures, 4 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel robust hybrid 3D face tracking framework from RGBD video
streams, which is capable of tracking head pose and facial actions without
pre-calibration or intervention from a user. In particular, we emphasize on
improving the tracking performance in instances where the tracked subject is at
a large distance from the cameras, and the quality of point cloud deteriorates
severely. This is accomplished by the combination of a flexible 3D shape
regressor and the joint 2D+3D optimization on shape parameters. Our approach
fits facial blendshapes to the point cloud of the human head, while being
driven by an efficient and rapid 3D shape regressor trained on generic RGB
datasets. As an on-line tracking system, the identity of the unknown user is
adapted on-the-fly resulting in improved 3D model reconstruction and
consequently better tracking performance. The result is a robust RGBD face
tracker, capable of handling a wide range of target scene depths, beyond those
that can be afforded by traditional depth or RGB face trackers. Lastly, since
the blendshape is not able to accurately recover the real facial shape, we use
the tracked 3D face model as a prior in a novel filtering process to further
refine the depth map for use in other tasks, such as 3D reconstruction.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2015 04:52:36 GMT"
}
] | 2015-07-13T00:00:00 | [
[
"Pham",
"Hai X.",
""
],
[
"Chen",
"Chongyu",
""
],
[
"Dao",
"Luc N.",
""
],
[
"Pavlovic",
"Vladimir",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Cham",
"Tat-jen",
""
]
] | TITLE: Robust Performance-driven 3D Face Tracking in Long Range Depth Scenes
ABSTRACT: We introduce a novel robust hybrid 3D face tracking framework from RGBD video
streams, which is capable of tracking head pose and facial actions without
pre-calibration or intervention from a user. In particular, we emphasize
improving the tracking performance in instances where the tracked subject is at
a large distance from the cameras, and the quality of point cloud deteriorates
severely. This is accomplished by the combination of a flexible 3D shape
regressor and the joint 2D+3D optimization on shape parameters. Our approach
fits facial blendshapes to the point cloud of the human head, while being
driven by an efficient and rapid 3D shape regressor trained on generic RGB
datasets. As an on-line tracking system, the identity of the unknown user is
adapted on-the-fly resulting in improved 3D model reconstruction and
consequently better tracking performance. The result is a robust RGBD face
tracker, capable of handling a wide range of target scene depths, beyond those
that can be afforded by traditional depth or RGB face trackers. Lastly, since
the blendshape is not able to accurately recover the real facial shape, we use
the tracked 3D face model as a prior in a novel filtering process to further
refine the depth map for use in other tasks, such as 3D reconstruction.
| no_new_dataset | 0.944074 |
1507.02879 | M. Saquib Sarfraz | M. Saquib Sarfraz and Rainer Stiefelhagen | Deep Perceptual Mapping for Thermal to Visible Face Recognition | BMVC 2015 (oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to a very large modality gap, thermal-to-visible face recognition is one of the
most challenging face matching problems. In this paper, we present an approach
to bridge this modality gap by a significant margin. Our approach captures the
highly non-linear relationship between the two modalities by using a deep
neural network. Our model attempts to learn a non-linear mapping from visible
to thermal spectrum while preserving the identity information. We show
substantive performance improvement on a difficult thermal-visible face
dataset. The presented approach improves the state-of-the-art by more than 10%
in terms of Rank-1 identification and bridges the drop in performance due to the
modality gap by more than 40%.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2015 12:55:34 GMT"
}
] | 2015-07-13T00:00:00 | [
[
"Sarfraz",
"M. Saquib",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | TITLE: Deep Perceptual Mapping for Thermal to Visible Face Recognition
ABSTRACT: Cross modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to a very large modality gap, thermal-to-visible face recognition is one of the
most challenging face matching problems. In this paper, we present an approach
to bridge this modality gap by a significant margin. Our approach captures the
highly non-linear relationship between the two modalities by using a deep
neural network. Our model attempts to learn a non-linear mapping from visible
to thermal spectrum while preserving the identity information. We show
substantive performance improvement on a difficult thermal-visible face
dataset. The presented approach improves the state-of-the-art by more than 10%
in terms of Rank-1 identification and bridges the drop in performance due to the
modality gap by more than 40%.
| no_new_dataset | 0.954223 |
1411.6836 | Mircea Cimpoi | Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi | Deep convolutional filter banks for texture recognition and segmentation | Accepted to CVPR15 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in texture recognition often concentrates on the problem of material
recognition in uncluttered conditions, an assumption rarely met by
applications. In this work we conduct a first study of material and describable
texture attributes recognition in clutter, using a new dataset derived from
the OpenSurfaces texture repository. Motivated by the challenge posed by this
problem, we propose a new texture descriptor, D-CNN, obtained by Fisher Vector
pooling of a Convolutional Neural Network (CNN) filter bank. D-CNN
substantially improves the state-of-the-art in texture, material and scene
recognition. Our approach achieves 82.3% accuracy on Flickr material dataset
and 81.1% accuracy on MIT indoor scenes, providing absolute gains of more than
10% over existing approaches. D-CNN easily transfers across domains without
requiring feature adaptation as for methods that build on the fully-connected
layers of CNNs. Furthermore, D-CNN can seamlessly incorporate multi-scale
information and describe regions of arbitrary shapes and sizes. Our approach is
particularly suited at localizing stuff categories and obtains
state-of-the-art results on MSRC segmentation dataset, as well as promising
results on recognizing materials and surface attributes in clutter on the
OpenSurfaces dataset.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2014 12:36:23 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jul 2015 18:25:43 GMT"
}
] | 2015-07-10T00:00:00 | [
[
"Cimpoi",
"Mircea",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Vedaldi",
"Andrea",
""
]
] | TITLE: Deep convolutional filter banks for texture recognition and segmentation
ABSTRACT: Research in texture recognition often concentrates on the problem of material
recognition in uncluttered conditions, an assumption rarely met by
applications. In this work we conduct a first study of material and describable
texture attributes recognition in clutter, using a new dataset derived from
the OpenSurfaces texture repository. Motivated by the challenge posed by this
problem, we propose a new texture descriptor, D-CNN, obtained by Fisher Vector
pooling of a Convolutional Neural Network (CNN) filter bank. D-CNN
substantially improves the state-of-the-art in texture, material and scene
recognition. Our approach achieves 82.3% accuracy on Flickr material dataset
and 81.1% accuracy on MIT indoor scenes, providing absolute gains of more than
10% over existing approaches. D-CNN easily transfers across domains without
requiring feature adaptation as for methods that build on the fully-connected
layers of CNNs. Furthermore, D-CNN can seamlessly incorporate multi-scale
information and describe regions of arbitrary shapes and sizes. Our approach is
particularly suited at localizing stuff categories and obtains
state-of-the-art results on MSRC segmentation dataset, as well as promising
results on recognizing materials and surface attributes in clutter on the
OpenSurfaces dataset.
| no_new_dataset | 0.725503 |
1507.02356 | Chintan Dalal | Chintan A. Dalal, Vladimir Pavlovic, Robert E. Kopp | Intrinsic Non-stationary Covariance Function for Climate Modeling | 9 pages, 3 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing a covariance function that represents the underlying correlation is
a crucial step in modeling complex natural systems, such as climate models.
Geospatial datasets at a global scale usually suffer from non-stationarity and
non-uniformly smooth spatial boundaries. A Gaussian process regression using a
non-stationary covariance function has shown promise for this task, as this
covariance function adapts to the variable correlation structure of the
underlying distribution. In this paper, we generalize the non-stationary
covariance function to address the aforementioned global scale geospatial
issues. We define this generalized covariance function as an intrinsic
non-stationary covariance function, because it uses intrinsic statistics of the
symmetric positive definite matrices to represent the characteristic length
scale and, thereby, models the local stochastic process. Experiments on a
synthetic and real dataset of relative sea level changes across the world
demonstrate improvements in the error metrics for the regression estimates
using our newly proposed approach.
| [
{
"version": "v1",
"created": "Thu, 9 Jul 2015 02:52:19 GMT"
}
] | 2015-07-10T00:00:00 | [
[
"Dalal",
"Chintan A.",
""
],
[
"Pavlovic",
"Vladimir",
""
],
[
"Kopp",
"Robert E.",
""
]
] | TITLE: Intrinsic Non-stationary Covariance Function for Climate Modeling
ABSTRACT: Designing a covariance function that represents the underlying correlation is
a crucial step in modeling complex natural systems, such as climate models.
Geospatial datasets at a global scale usually suffer from non-stationarity and
non-uniformly smooth spatial boundaries. A Gaussian process regression using a
non-stationary covariance function has shown promise for this task, as this
covariance function adapts to the variable correlation structure of the
underlying distribution. In this paper, we generalize the non-stationary
covariance function to address the aforementioned global scale geospatial
issues. We define this generalized covariance function as an intrinsic
non-stationary covariance function, because it uses intrinsic statistics of the
symmetric positive definite matrices to represent the characteristic length
scale and, thereby, models the local stochastic process. Experiments on a
synthetic and real dataset of relative sea level changes across the world
demonstrate improvements in the error metrics for the regression estimates
using our newly proposed approach.
| no_new_dataset | 0.950134 |
1502.04187 | Soheil Keshmiri | Soheil Keshmiri, Xin Zheng, Chee Meng Chew, Chee Khiang Pang | Application of Deep Neural Network in Estimation of the Weld Bead
Parameters | Disapproval of funding organization | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a deep learning approach to estimation of the bead parameters in
welding tasks. Our model is based on a four-hidden-layer neural network
architecture. More specifically, the first three hidden layers of this
architecture utilize Sigmoid function to produce their respective intermediate
outputs. On the other hand, the last hidden layer uses a linear transformation
to generate the final output of this architecture. This transforms our deep
network architecture from a classifier to a non-linear regression model. We
compare the performance of our deep network with a selected number of results
in the literature to show a considerable improvement in reducing the errors in
estimation of these values. Furthermore, we show its scalability on estimating
the weld bead parameters with the same level of accuracy on a combination of datasets
that pertain to different welding techniques. This is a nontrivial result that
is counter-intuitive to the general belief in this field of research.
| [
{
"version": "v1",
"created": "Sat, 14 Feb 2015 10:58:53 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jul 2015 11:05:10 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Keshmiri",
"Soheil",
""
],
[
"Zheng",
"Xin",
""
],
[
"Chew",
"Chee Meng",
""
],
[
"Pang",
"Chee Khiang",
""
]
] | TITLE: Application of Deep Neural Network in Estimation of the Weld Bead
Parameters
ABSTRACT: We present a deep learning approach to estimation of the bead parameters in
welding tasks. Our model is based on a four-hidden-layer neural network
architecture. More specifically, the first three hidden layers of this
architecture utilize Sigmoid function to produce their respective intermediate
outputs. On the other hand, the last hidden layer uses a linear transformation
to generate the final output of this architecture. This transforms our deep
network architecture from a classifier to a non-linear regression model. We
compare the performance of our deep network with a selected number of results
in the literature to show a considerable improvement in reducing the errors in
estimation of these values. Furthermore, we show its scalability on estimating
the weld bead parameters with the same level of accuracy on a combination of datasets
that pertain to different welding techniques. This is a nontrivial result that
is counter-intuitive to the general belief in this field of research.
| no_new_dataset | 0.950319 |
1504.01639 | Marc Bola\~nos | Marc Bola\~nos and Petia Radeva | Ego-Object Discovery | 9 pages, 13 figures, Submitted to: Image and Vision Computing | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifelogging devices are spreading faster everyday. This growth can represent
great benefits to develop methods for extraction of meaningful information
about the user wearing the device and his/her environment. In this paper, we
propose a semi-supervised strategy for easily discovering objects relevant to
the person wearing a first-person camera. Given an egocentric video/images
sequence acquired by the camera, our algorithm uses both the appearance
extracted by means of a convolutional neural network and an object refill
methodology that allows us to discover objects even in cases of a small amount of
object appearance in the collection of images. An SVM filtering strategy is
applied to deal with the great part of the False Positive object candidates
found by most of the state of the art object detectors. We validate our method
on a new egocentric dataset of 4912 daily images acquired by 4 persons as well
as on both PASCAL 2012 and MSRC datasets. We obtain for all of them results
that largely outperform the state of the art approach. We make public both the
EDUB dataset and the algorithm code.
| [
{
"version": "v1",
"created": "Tue, 7 Apr 2015 15:23:22 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jul 2015 09:19:48 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Bolaños",
"Marc",
""
],
[
"Radeva",
"Petia",
""
]
] | TITLE: Ego-Object Discovery
ABSTRACT: Lifelogging devices are spreading faster every day. This growth can represent
great benefits to develop methods for extraction of meaningful information
about the user wearing the device and his/her environment. In this paper, we
propose a semi-supervised strategy for easily discovering objects relevant to
the person wearing a first-person camera. Given an egocentric video/images
sequence acquired by the camera, our algorithm uses both the appearance
extracted by means of a convolutional neural network and an object refill
methodology that allows us to discover objects even in cases of a small amount of
object appearance in the collection of images. An SVM filtering strategy is
applied to deal with the great part of the False Positive object candidates
found by most of the state of the art object detectors. We validate our method
on a new egocentric dataset of 4912 daily images acquired by 4 persons as well
as on both PASCAL 2012 and MSRC datasets. We obtain for all of them results
that largely outperform the state of the art approach. We make public both the
EDUB dataset and the algorithm code.
| new_dataset | 0.959116 |
1507.02011 | Qinxun Bai | Qinxun Bai, Henry Lam, Stan Sclaroff | A Bayesian Approach for Online Classifier Ensemble | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Bayesian approach for recursively estimating the classifier
weights in online learning of a classifier ensemble. In contrast with past
methods, such as stochastic gradient descent or online boosting, our approach
estimates the weights by recursively updating its posterior distribution. For a
specified class of loss functions, we show that it is possible to formulate a
suitably defined likelihood function and hence use the posterior distribution
as an approximation to the global empirical loss minimizer. If the stream of
training data is sampled from a stationary process, we can also show that our
approach admits a superior rate of convergence to the expected loss minimizer
than is possible with standard stochastic gradient descent. In experiments with
real-world datasets, our formulation often performs better than
state-of-the-art stochastic gradient descent and online boosting algorithms.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 03:35:58 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Bai",
"Qinxun",
""
],
[
"Lam",
"Henry",
""
],
[
"Sclaroff",
"Stan",
""
]
] | TITLE: A Bayesian Approach for Online Classifier Ensemble
ABSTRACT: We propose a Bayesian approach for recursively estimating the classifier
weights in online learning of a classifier ensemble. In contrast with past
methods, such as stochastic gradient descent or online boosting, our approach
estimates the weights by recursively updating its posterior distribution. For a
specified class of loss functions, we show that it is possible to formulate a
suitably defined likelihood function and hence use the posterior distribution
as an approximation to the global empirical loss minimizer. If the stream of
training data is sampled from a stationary process, we can also show that our
approach admits a superior rate of convergence to the expected loss minimizer
than is possible with standard stochastic gradient descent. In experiments with
real-world datasets, our formulation often performs better than
state-of-the-art stochastic gradient descent and online boosting algorithms.
| no_new_dataset | 0.94868 |
1507.02062 | Xiaojun Wan | Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li and Ming Zhou | Multi-Document Summarization via Discriminative Summary Reranking | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing multi-document summarization systems usually rely on a specific
summarization model (i.e., a summarization method with a specific parameter
setting) to extract summaries for different document sets with different
topics. However, according to our quantitative analysis, none of the existing
summarization models can always produce high-quality summaries for different
document sets, and even a summarization model with good overall performance may
produce low-quality summaries for some document sets. On the contrary, a
baseline summarization model may produce high-quality summaries for some
document sets. Based on the above observations, we treat the summaries produced
by different summarization models as candidate summaries, and then explore
discriminative reranking techniques to identify high-quality summaries from the
candidates for different document sets. We propose to extract a set of
candidate summaries for each document set based on an ILP framework, and then
leverage Ranking SVM for summary reranking. Various useful features have been
developed for the reranking process, including word-level features,
sentence-level features and summary-level features. Evaluation results on the
benchmark DUC datasets validate the efficacy and robustness of our proposed
approach.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 08:26:23 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Wan",
"Xiaojun",
""
],
[
"Cao",
"Ziqiang",
""
],
[
"Wei",
"Furu",
""
],
[
"Li",
"Sujian",
""
],
[
"Zhou",
"Ming",
""
]
] | TITLE: Multi-Document Summarization via Discriminative Summary Reranking
ABSTRACT: Existing multi-document summarization systems usually rely on a specific
summarization model (i.e., a summarization method with a specific parameter
setting) to extract summaries for different document sets with different
topics. However, according to our quantitative analysis, none of the existing
summarization models can always produce high-quality summaries for different
document sets, and even a summarization model with good overall performance may
produce low-quality summaries for some document sets. On the contrary, a
baseline summarization model may produce high-quality summaries for some
document sets. Based on the above observations, we treat the summaries produced
by different summarization models as candidate summaries, and then explore
discriminative reranking techniques to identify high-quality summaries from the
candidates for different document sets. We propose to extract a set of
candidate summaries for each document set based on an ILP framework, and then
leverage Ranking SVM for summary reranking. Various useful features have been
developed for the reranking process, including word-level features,
sentence-level features and summary-level features. Evaluation results on the
benchmark DUC datasets validate the efficacy and robustness of our proposed
approach.
| no_new_dataset | 0.951051 |
1507.02140 | Xiaojun Wan | Yue Hu and Xiaojun Wan | Mining and Analyzing the Future Works in Scientific Articles | null | null | null | null | cs.DL cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Future works in scientific articles are valuable for researchers and they can
guide researchers to new research directions or ideas. In this paper, we mine
the future works in scientific articles in order to 1) provide an insight for
future work analysis and 2) facilitate researchers to search and browse future
works in a research area. First, we study the problem of future work extraction
and propose a regular expression based method to address the problem. Second,
we define four different categories for the future works by observing the data
and investigate the multi-class future work classification problem. Third, we
apply the extraction method and the classification model to a paper dataset in
the computer science field and conduct a further analysis of the future works.
Finally, we design a prototype system to search and demonstrate the future
works mined from the scientific papers. Our evaluation results show that our
extraction method can get high precision and recall values and our
classification model can also get good results and it outperforms several
baseline models. Further analysis of the future work sentences also indicates
interesting results.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 13:14:38 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Hu",
"Yue",
""
],
[
"Wan",
"Xiaojun",
""
]
] | TITLE: Mining and Analyzing the Future Works in Scientific Articles
ABSTRACT: Future works in scientific articles are valuable for researchers and they can
guide researchers to new research directions or ideas. In this paper, we mine
the future works in scientific articles in order to 1) provide an insight for
future work analysis and 2) facilitate researchers to search and browse future
works in a research area. First, we study the problem of future work extraction
and propose a regular expression based method to address the problem. Second,
we define four different categories for the future works by observing the data
and investigate the multi-class future work classification problem. Third, we
apply the extraction method and the classification model to a paper dataset in
the computer science field and conduct a further analysis of the future works.
Finally, we design a prototype system to search and demonstrate the future
works mined from the scientific papers. Our evaluation results show that our
extraction method can get high precision and recall values and our
classification model can also get good results and it outperforms several
baseline models. Further analysis of the future work sentences also indicates
interesting results.
| no_new_dataset | 0.953966 |
1507.02154 | Iago Landesa-V\'azquez | Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro | Double-Base Asymmetric AdaBoost | null | Neurocomputing 118 (2013) 101-114 | 10.1016/j.neucom.2013.02.019 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Based on the use of different exponential bases to define class-dependent
error bounds, a new and highly efficient asymmetric boosting scheme, coined as
AdaBoostDB (Double-Base), is proposed. Supported by a fully theoretical
derivation procedure, unlike most of the other approaches in the literature,
our algorithm preserves all the formal guarantees and properties of original
(cost-insensitive) AdaBoost, similarly to the state-of-the-art Cost-Sensitive
AdaBoost algorithm. However, the key advantage of AdaBoostDB is that our novel
derivation scheme enables an extremely efficient conditional search procedure,
dramatically improving and simplifying the training phase of the algorithm.
Experiments, both over synthetic and real datasets, reveal that AdaBoostDB is
able to save over 99% training time with regard to Cost-Sensitive AdaBoost,
providing the same cost-sensitive results. This computational advantage of
AdaBoostDB can make a difference in problems managing huge pools of weak
classifiers in which boosting techniques are commonly used.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 13:44:34 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Landesa-Vázquez",
"Iago",
""
],
[
"Alba-Castro",
"José Luis",
""
]
] | TITLE: Double-Base Asymmetric AdaBoost
ABSTRACT: Based on the use of different exponential bases to define class-dependent
error bounds, a new and highly efficient asymmetric boosting scheme, coined as
AdaBoostDB (Double-Base), is proposed. Supported by a fully theoretical
derivation procedure, unlike most of the other approaches in the literature,
our algorithm preserves all the formal guarantees and properties of original
(cost-insensitive) AdaBoost, similarly to the state-of-the-art Cost-Sensitive
AdaBoost algorithm. However, the key advantage of AdaBoostDB is that our novel
derivation scheme enables an extremely efficient conditional search procedure,
dramatically improving and simplifying the training phase of the algorithm.
Experiments, both over synthetic and real datasets, reveal that AdaBoostDB is
able to save over 99% training time with regard to Cost-Sensitive AdaBoost,
providing the same cost-sensitive results. This computational advantage of
AdaBoostDB can make a difference in problems managing huge pools of weak
classifiers in which boosting techniques are commonly used.
| no_new_dataset | 0.942665 |
1507.02159 | Limin Wang | Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao | Towards Good Practices for Very Deep Two-Stream ConvNets | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional networks have achieved great success for object
recognition in still images. However, for action recognition in videos, the
improvement of deep convolutional networks is not so evident. We argue that
there are two reasons that could probably explain this result. First the
current network architectures (e.g. Two-stream ConvNets) are relatively shallow
compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet),
and therefore their modeling capacity is constrained by their depth. Second,
probably more importantly, the training dataset of action recognition is
extremely small compared with the ImageNet dataset, and thus it will be easy to
over-fit on the training dataset.
To address these issues, this report presents very deep two-stream ConvNets
for action recognition, by adapting recent very deep architectures into video
domain. However, this extension is not easy, as the size of action recognition datasets
is quite small. We design several good practices for the training of very deep
two-stream ConvNets, namely (i) pre-training for both spatial and temporal
nets, (ii) smaller learning rates, (iii) more data augmentation techniques,
(iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU
implementation with high computational efficiency and low memory consumption.
We verify the performance of very deep two-stream ConvNets on the dataset of
UCF101 and it achieves the recognition accuracy of $91.4\%$.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 14:00:35 GMT"
}
] | 2015-07-09T00:00:00 | [
[
"Wang",
"Limin",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Wang",
"Zhe",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: Towards Good Practices for Very Deep Two-Stream ConvNets
ABSTRACT: Deep convolutional networks have achieved great success for object
recognition in still images. However, for action recognition in videos, the
improvement of deep convolutional networks is not so evident. We argue that
there are two reasons that could probably explain this result. First the
current network architectures (e.g. Two-stream ConvNets) are relatively shallow
compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet),
and therefore their modeling capacity is constrained by their depth. Second,
probably more importantly, the training dataset of action recognition is
extremely small compared with the ImageNet dataset, and thus it will be easy to
over-fit on the training dataset.
To address these issues, this report presents very deep two-stream ConvNets
for action recognition, by adapting recent very deep architectures into video
domain. However, this extension is not easy, as the size of action recognition datasets
is quite small. We design several good practices for the training of very deep
two-stream ConvNets, namely (i) pre-training for both spatial and temporal
nets, (ii) smaller learning rates, (iii) more data augmentation techniques,
(iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU
implementation with high computational efficiency and low memory consumption.
We verify the performance of very deep two-stream ConvNets on the dataset of
UCF101 and it achieves the recognition accuracy of $91.4\%$.
| no_new_dataset | 0.940681 |
1507.01697 | Tobias Kuhn | Tobias Kuhn and Michel Dumontier | Making Digital Artifacts on the Web Verifiable and Reliable | Extended version of conference paper: arXiv:1401.5775 | null | 10.1109/TKDE.2015.2419657 | null | cs.CR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current Web has no general mechanisms to make digital artifacts --- such
as datasets, code, texts, and images --- verifiable and permanent. For digital
artifacts that are supposed to be immutable, there is moreover no commonly
accepted method to enforce this immutability. These shortcomings have a serious
negative impact on the ability to reproduce the results of processes that rely
on Web resources, which in turn heavily impacts areas such as science where
reproducibility is important. To solve this problem, we propose trusty URIs
containing cryptographic hash values. We show how trusty URIs can be used for
the verification of digital artifacts, in a manner that is independent of the
serialization format in the case of structured data files such as
nanopublications. We demonstrate how the contents of these files become
immutable, including dependencies to external digital artifacts and thereby
extending the range of verifiability to the entire reference tree. Our approach
sticks to the core principles of the Web, namely openness and decentralized
architecture, and is fully compatible with existing standards and protocols.
Evaluation of our reference implementations shows that these design goals are
indeed accomplished by our approach, and that it remains practical even for
very large files.
| [
{
"version": "v1",
"created": "Tue, 7 Jul 2015 08:04:29 GMT"
}
] | 2015-07-08T00:00:00 | [
[
"Kuhn",
"Tobias",
""
],
[
"Dumontier",
"Michel",
""
]
] | TITLE: Making Digital Artifacts on the Web Verifiable and Reliable
ABSTRACT: The current Web has no general mechanisms to make digital artifacts --- such
as datasets, code, texts, and images --- verifiable and permanent. For digital
artifacts that are supposed to be immutable, there is moreover no commonly
accepted method to enforce this immutability. These shortcomings have a serious
negative impact on the ability to reproduce the results of processes that rely
on Web resources, which in turn heavily impacts areas such as science where
reproducibility is important. To solve this problem, we propose trusty URIs
containing cryptographic hash values. We show how trusty URIs can be used for
the verification of digital artifacts, in a manner that is independent of the
serialization format in the case of structured data files such as
nanopublications. We demonstrate how the contents of these files become
immutable, including dependencies to external digital artifacts and thereby
extending the range of verifiability to the entire reference tree. Our approach
sticks to the core principles of the Web, namely openness and decentralized
architecture, and is fully compatible with existing standards and protocols.
Evaluation of our reference implementations shows that these design goals are
indeed accomplished by our approach, and that it remains practical even for
very large files.
| no_new_dataset | 0.941385 |
1412.6547 | Paul Mineiro | Paul Mineiro and Nikos Karampatziakis | Fast Label Embeddings via Randomized Linear Algebra | To appear in the proceedings of the ECML/PKDD 2015 conference.
Reference implementation available at https://github.com/pmineiro/randembed | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many modern multiclass and multilabel problems are characterized by
increasingly large output spaces. For these problems, label embeddings have
been shown to be a useful primitive that can improve computational and
statistical efficiency. In this work we utilize a correspondence between rank
constrained estimation and low dimensional label embeddings that uncovers a
fast label embedding algorithm which works in both the multiclass and
multilabel settings. The result is a randomized algorithm whose running time is
exponentially faster than naive algorithms. We demonstrate our techniques on
two large-scale public datasets, from the Large Scale Hierarchical Text
Challenge and the Open Directory Project, where we obtain state of the art
results.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 22:09:35 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Feb 2015 23:29:44 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Mar 2015 16:11:14 GMT"
},
{
"version": "v4",
"created": "Mon, 30 Mar 2015 23:24:53 GMT"
},
{
"version": "v5",
"created": "Mon, 13 Apr 2015 00:29:44 GMT"
},
{
"version": "v6",
"created": "Mon, 15 Jun 2015 18:07:20 GMT"
},
{
"version": "v7",
"created": "Sun, 5 Jul 2015 15:38:11 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Mineiro",
"Paul",
""
],
[
"Karampatziakis",
"Nikos",
""
]
] | TITLE: Fast Label Embeddings via Randomized Linear Algebra
ABSTRACT: Many modern multiclass and multilabel problems are characterized by
increasingly large output spaces. For these problems, label embeddings have
been shown to be a useful primitive that can improve computational and
statistical efficiency. In this work we utilize a correspondence between rank
constrained estimation and low dimensional label embeddings that uncovers a
fast label embedding algorithm which works in both the multiclass and
multilabel settings. The result is a randomized algorithm whose running time is
exponentially faster than naive algorithms. We demonstrate our techniques on
two large-scale public datasets, from the Large Scale Hierarchical Text
Challenge and the Open Directory Project, where we obtain state of the art
results.
| no_new_dataset | 0.949576 |
1504.02162 | Diego Amancio | Diego R. Amancio, Filipi N. Silva and Luciano da F. Costa | Concentric network symmetry grasps authors' styles in word adjacency
networks | Accepted for publication in Europhys. Lett. (EPL). The supplementary
information is available from
https://dl.dropboxusercontent.com/u/2740286/symmetry.pdf | Europhys. Lett. 110 68001 (2015) | 10.1209/0295-5075/110/68001 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several characteristics of written texts have been inferred from statistical
analysis derived from networked models. Even though many network measurements
have been adapted to study textual properties at several levels of complexity,
some textual aspects have been disregarded. In this paper, we study the
symmetry of word adjacency networks, a well-known representation of text as a
graph. A statistical analysis of the symmetry distribution performed in several
novels showed that most of the words do not display symmetric patterns of
connectivity. More specifically, the merged symmetry displayed a distribution
similar to the ubiquitous power-law distribution. Our experiments also revealed
that the studied metrics do not correlate with other traditional network
measurements, such as the degree or betweenness centrality. The effectiveness
of the symmetry measurements was verified in the authorship attribution task.
Interestingly, we found that specific authors prefer particular types of
symmetric motifs. As a consequence, the authorship of books could be accurately
identified in 82.5% of the cases, in a dataset comprising books written by 8
authors. Because the proposed measurements for text analysis are complementary
to the traditional approach, they can be used to improve the characterization
of text networks, which might be useful for related applications, such as those
relying on the identification of topical words and information retrieval.
| [
{
"version": "v1",
"created": "Thu, 9 Apr 2015 00:49:36 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jun 2015 13:19:39 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Amancio",
"Diego R.",
""
],
[
"Silva",
"Filipi N.",
""
],
[
"Costa",
"Luciano da F.",
""
]
] | TITLE: Concentric network symmetry grasps authors' styles in word adjacency
networks
ABSTRACT: Several characteristics of written texts have been inferred from statistical
analysis derived from networked models. Even though many network measurements
have been adapted to study textual properties at several levels of complexity,
some textual aspects have been disregarded. In this paper, we study the
symmetry of word adjacency networks, a well-known representation of text as a
graph. A statistical analysis of the symmetry distribution performed in several
novels showed that most of the words do not display symmetric patterns of
connectivity. More specifically, the merged symmetry displayed a distribution
similar to the ubiquitous power-law distribution. Our experiments also revealed
that the studied metrics do not correlate with other traditional network
measurements, such as the degree or betweenness centrality. The effectiveness
of the symmetry measurements was verified in the authorship attribution task.
Interestingly, we found that specific authors prefer particular types of
symmetric motifs. As a consequence, the authorship of books could be accurately
identified in 82.5% of the cases, in a dataset comprising books written by 8
authors. Because the proposed measurements for text analysis are complementary
to the traditional approach, they can be used to improve the characterization
of text networks, which might be useful for related applications, such as those
relying on the identification of topical words and information retrieval.
| no_new_dataset | 0.891717 |
1505.04935 | Alina S\^irbu | Alina S\^irbu and Ozalp Babaoglu | Towards Data-Driven Autonomics in Data Centers | 12 pages, 6 figures | null | null | null | cs.DC cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continued reliance on human operators for managing data centers is a major
impediment for them from ever reaching extreme dimensions. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using generated
data, opens one possible path towards limiting the role of operators in data
centers. In this paper, we present a data-science study of a public Google
dataset collected in a 12K-node cluster with the goal of building and
evaluating a predictive model for node failures. We use BigQuery, the big data
SQL platform from the Google Cloud suite, to process massive amounts of data
and generate a rich feature set characterizing machine state over time. We
describe how an ensemble classifier can be built out of many Random Forest
classifiers each trained on these features, to predict if machines will fail in
a future 24-hour window. Our evaluation reveals that if we limit false positive
rates to 5%, we can achieve true positive rates between 27% and 88% with
precision varying between 50% and 72%. We discuss the practicality of including
our predictive model as the central component of a data-driven autonomic
manager and operating it on-line with live data streams (rather than off-line
on data logs). All of the scripts used for BigQuery and classification analyses
are publicly available from the authors' website.
| [
{
"version": "v1",
"created": "Tue, 19 May 2015 09:58:05 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jul 2015 13:45:52 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Sîrbu",
"Alina",
""
],
[
"Babaoglu",
"Ozalp",
""
]
] | TITLE: Towards Data-Driven Autonomics in Data Centers
ABSTRACT: Continued reliance on human operators for managing data centers is a major
impediment for them from ever reaching extreme dimensions. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using generated
data, opens one possible path towards limiting the role of operators in data
centers. In this paper, we present a data-science study of a public Google
dataset collected in a 12K-node cluster with the goal of building and
evaluating a predictive model for node failures. We use BigQuery, the big data
SQL platform from the Google Cloud suite, to process massive amounts of data
and generate a rich feature set characterizing machine state over time. We
describe how an ensemble classifier can be built out of many Random Forest
classifiers each trained on these features, to predict if machines will fail in
a future 24-hour window. Our evaluation reveals that if we limit false positive
rates to 5%, we can achieve true positive rates between 27% and 88% with
precision varying between 50% and 72%. We discuss the practicality of including
our predictive model as the central component of a data-driven autonomic
manager and operating it on-line with live data streams (rather than off-line
on data logs). All of the scripts used for BigQuery and classification analyses
are publicly available from the authors' website.
| no_new_dataset | 0.95222 |
1506.03844 | Jose Rodrigues Jr | Marcos Bedo, Gustavo Blanco, Willian Oliveira, Mirela Cazzolato, Alceu
Costa, Jose Rodrigues, Agma Traina and Caetano Traina Jr | Techniques for effective and efficient fire detection from social media
images | 12 pages, Proceedings of the International Conference on Enterprise
Information Systems. Specifically: Marcos Bedo, Gustavo Blanco, Willian
Oliveira, Mirela Cazzolato, Alceu Costa, Jose Rodrigues, Agma Traina, Caetano
Traina, 2015, Techniques for effective and efficient fire detection from
social media images, ICEIS, 34-45 | Int Conf on Enterp Inf Systems 34-45 SCITEPRESS (2015) | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media could provide valuable information to support decision making in
crisis management, such as in accidents, explosions and fires. However, much of
the data from social media are images, which are uploaded at a rate that makes
it impossible for human beings to analyze them. Despite the many works on image
analysis, there are no fire detection studies on social media. To fill this
gap, we propose the use and evaluation of a broad set of content-based image
retrieval and classification techniques for fire detection. Our main
contributions are: (i) the development of the Fast-Fire Detection method
(FFDnR), which combines feature extractor and evaluation functions to support
instance-based learning, (ii) the construction of an annotated set of images
with ground-truth depicting fire occurrences -- the FlickrFire dataset, and
(iii) the evaluation of 36 efficient image descriptors for fire detection.
Using real data from Flickr, our results showed that FFDnR was able to achieve
a precision for fire detection comparable to that of human annotators.
Therefore, our work shall provide a solid basis for further developments on
monitoring images from social media.
| [
{
"version": "v1",
"created": "Thu, 11 Jun 2015 21:23:38 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jul 2015 20:02:17 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Bedo",
"Marcos",
""
],
[
"Blanco",
"Gustavo",
""
],
[
"Oliveira",
"Willian",
""
],
[
"Cazzolato",
"Mirela",
""
],
[
"Costa",
"Alceu",
""
],
[
"Rodrigues",
"Jose",
""
],
[
"Traina",
"Agma",
""
],
[
"Traina",
"Caetano",
"Jr"
]
] | TITLE: Techniques for effective and efficient fire detection from social media
images
ABSTRACT: Social media could provide valuable information to support decision making in
crisis management, such as in accidents, explosions and fires. However, much of
the data from social media are images, which are uploaded at a rate that makes
it impossible for human beings to analyze them. Despite the many works on image
analysis, there are no fire detection studies on social media. To fill this
gap, we propose the use and evaluation of a broad set of content-based image
retrieval and classification techniques for fire detection. Our main
contributions are: (i) the development of the Fast-Fire Detection method
(FFDnR), which combines feature extractor and evaluation functions to support
instance-based learning, (ii) the construction of an annotated set of images
with ground-truth depicting fire occurrences -- the FlickrFire dataset, and
(iii) the evaluation of 36 efficient image descriptors for fire detection.
Using real data from Flickr, our results showed that FFDnR was able to achieve
a precision for fire detection comparable to that of human annotators.
Therefore, our work shall provide a solid basis for further developments on
monitoring images from social media.
| no_new_dataset | 0.768212 |
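The FlickrFire/FFDnR abstract above combines an image descriptor with instance-based learning. The sketch below is only a loose analogue under assumed choices (a joint RGB colour histogram as the descriptor, a k-nearest-neighbour vote as the evaluation function, and random arrays in place of real Flickr images); it is not the FFDnR method itself:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def colour_histogram(img, bins=4):
    """img: (h, w, 3) integer array; returns a normalised joint RGB histogram."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

rng = np.random.default_rng(0)
fire = [rng.integers(150, 256, (32, 32, 3)) for _ in range(20)]   # bright, warm toy patches
other = [rng.integers(0, 120, (32, 32, 3)) for _ in range(20)]    # darker toy patches
X = np.array([colour_histogram(im) for im in fire + other])
y = np.array([1] * 20 + [0] * 20)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)               # instance-based evaluation
print("training accuracy:", knn.score(X, y))
```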
1506.04006 | Sahar Vahdati | Sahar Vahdati, Farah Karim, Jyun-Yao Huang, and Christoph Lange | Mapping Large Scale Research Metadata to Linked Data: A Performance
Comparison of HBase, CSV and XML | Accepted in 0th Metadata and Semantics Research Conference | null | null | null | cs.DB cs.DL cs.PF | http://creativecommons.org/licenses/by/4.0/ | OpenAIRE, the Open Access Infrastructure for Research in Europe, comprises a
database of all EC FP7 and H2020 funded research projects, including metadata
of their results (publications and datasets). These data are stored in an HBase
NoSQL database, post-processed, and exposed as HTML for human consumption, and
as XML through a web service interface. As an intermediate format to facilitate
statistical computations, CSV is generated internally. To interlink the
OpenAIRE data with related data on the Web, we aim at exporting them as Linked
Open Data (LOD). The LOD export is required to integrate into the overall data
processing workflow, where derived data are regenerated from the base data
every day. We thus faced the challenge of identifying the best-performing
conversion approach.We evaluated the performances of creating LOD by a
MapReduce job on top of HBase, by mapping the intermediate CSV files, and by
mapping the XML output.
| [
{
"version": "v1",
"created": "Fri, 12 Jun 2015 12:40:03 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jul 2015 12:37:36 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Vahdati",
"Sahar",
""
],
[
"Karim",
"Farah",
""
],
[
"Huang",
"Jyun-Yao",
""
],
[
"Lange",
"Christoph",
""
]
] | TITLE: Mapping Large Scale Research Metadata to Linked Data: A Performance
Comparison of HBase, CSV and XML
ABSTRACT: OpenAIRE, the Open Access Infrastructure for Research in Europe, comprises a
database of all EC FP7 and H2020 funded research projects, including metadata
of their results (publications and datasets). These data are stored in an HBase
NoSQL database, post-processed, and exposed as HTML for human consumption, and
as XML through a web service interface. As an intermediate format to facilitate
statistical computations, CSV is generated internally. To interlink the
OpenAIRE data with related data on the Web, we aim at exporting them as Linked
Open Data (LOD). The LOD export is required to integrate into the overall data
processing workflow, where derived data are regenerated from the base data
every day. We thus faced the challenge of identifying the best-performing
conversion approach. We evaluated the performance of creating LOD by a
MapReduce job on top of HBase, by mapping the intermediate CSV files, and by
mapping the XML output.
| no_new_dataset | 0.943243 |
1506.07915 | Jose Rodrigues Jr | Jose Rodrigues, Luciana Romani, Agma Traina, Caetano Traina | Combining Visual Analytics and Content Based Data Retrieval Technology
for Efficient Data Analysis | Published as Jose Rodrigues, Luciana A. S. Romani, Agma Juci Machado
Traina, Caetano Traina Jr (2010), Combining Visual Analytics and Content
Based Data Retrieval Technology for Efficient Data Analysis, 14th Int Conf on
Inf Visualisation, 61-67 | 14th Int Conf on Inf Visualisation, 61-67 IEEE Press (2010) | 10.1109/IV.2010.101 | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most useful techniques to help visual data analysis systems is
interactive filtering (brushing). However, visualization techniques often
suffer from overlap of graphical items and multiple attributes complexity,
making visual selection inefficient. In these situations, the benefits of data
visualization are not fully observable because the graphical items do not pop
up as comprehensive patterns. In this work we propose the use of content-based
data retrieval technology combined with visual analytics. The idea is to use
the similarity query functionalities provided by metric space systems in order
to select regions of the data domain according to user-guidance and interests.
After that, the data found in such regions feed multiple visualization
workspaces so that the user can inspect the corresponding datasets. Our
experiments showed that the methodology can break the visual analysis process
into smaller problems (views) and that the views hold the expectations of the
analyst according to his/her similarity query selection, improving data
perception and analytical possibilities. Our contribution introduces a
principle that can be used in all sorts of visualization techniques and
systems; this principle can be extended with different kinds of
visualization-metric-space integration and with different metrics, expanding the
possibilities of visual data analysis in aspects such as semantics and
scalability.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 22:47:28 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Rodrigues",
"Jose",
""
],
[
"Romani",
"Luciana",
""
],
[
"Traina",
"Agma",
""
],
[
"Traina",
"Caetano",
""
]
] | TITLE: Combining Visual Analytics and Content Based Data Retrieval Technology
for Efficient Data Analysis
ABSTRACT: One of the most useful techniques to help visual data analysis systems is
interactive filtering (brushing). However, visualization techniques often
suffer from overlap of graphical items and multiple attributes complexity,
making visual selection inefficient. In these situations, the benefits of data
visualization are not fully observable because the graphical items do not pop
up as comprehensive patterns. In this work we propose the use of content-based
data retrieval technology combined with visual analytics. The idea is to use
the similarity query functionalities provided by metric space systems in order
to select regions of the data domain according to user-guidance and interests.
After that, the data found in such regions feed multiple visualization
workspaces so that the user can inspect the corresponding datasets. Our
experiments showed that the methodology can break the visual analysis process
into smaller problems (views) and that the views hold the expectations of the
analyst according to his/her similarity query selection, improving data
perception and analytical possibilities. Our contribution introduces a
principle that can be used in all sorts of visualization techniques and
systems; this principle can be extended with different kinds of
visualization-metric-space integration and with different metrics, expanding the
possibilities of visual data analysis in aspects such as semantics and
scalability.
| no_new_dataset | 0.949763 |
1507.01062 | Ashish Sureka | Ashish Sureka | Intention-Oriented Process Model Discovery from Incident Management
Event Logs | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intention-oriented process mining is based on the belief that the fundamental
nature of processes is mostly intentional (unlike activity-oriented process)
and aims at discovering strategy and intentional process models from event-logs
recorded during the process enactment. In this paper, we present an application
of intention-oriented process mining for the domain of incident management of
an Information Technology Infrastructure Library (ITIL) process. We apply the
Map Miner Method (MMM) on a large real-world dataset for discovering hidden and
unobservable user behavior, strategies and intentions. We first discover user
strategies from the given activity sequence data by applying Hidden Markov
Model (HMM) based unsupervised learning technique. We then process the emission
and transition matrices of the discovered HMM to generate a coarse-grained Map
Process Model. We present the first application or study of the new and
emerging field of Intention-oriented process mining on an incident management
event-log dataset and discuss its applicability, effectiveness and challenges.
| [
{
"version": "v1",
"created": "Sat, 4 Jul 2015 04:17:14 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Sureka",
"Ashish",
""
]
] | TITLE: Intention-Oriented Process Model Discovery from Incident Management
Event Logs
ABSTRACT: Intention-oriented process mining is based on the belief that the fundamental
nature of processes is mostly intentional (unlike activity-oriented process)
and aims at discovering strategy and intentional process models from event-logs
recorded during the process enactment. In this paper, we present an application
of intention-oriented process mining for the domain of incident management of
an Information Technology Infrastructure Library (ITIL) process. We apply the
Map Miner Method (MMM) on a large real-world dataset for discovering hidden and
unobservable user behavior, strategies and intentions. We first discover user
strategies from the given activity sequence data by applying a Hidden Markov
Model (HMM) based unsupervised learning technique. We then process the emission
and transition matrices of the discovered HMM to generate a coarse-grained Map
Process Model. We present the first application or study of the new and
emerging field of Intention-oriented process mining on an incident management
event-log dataset and discuss its applicability, effectiveness and challenges.
| no_new_dataset | 0.950273 |
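The abstract above describes post-processing the emission and transition matrices of a fitted HMM into a coarse-grained map of strategies. A minimal sketch of that post-processing step follows; the matrices, activity names and the 0.2 edge threshold are toy assumptions, not values from the ITIL incident-management data:

```python
import numpy as np

A = np.array([[0.7, 0.2, 0.1],        # state-to-state transition probabilities
              [0.1, 0.8, 0.1],
              [0.3, 0.1, 0.6]])
B = np.array([[0.6, 0.3, 0.1, 0.0],   # per-state emission probabilities over activities
              [0.1, 0.1, 0.7, 0.1],
              [0.0, 0.2, 0.2, 0.6]])
activities = ["open", "assign", "work", "close"]

edges = [(i, j, float(A[i, j])) for i in range(len(A)) for j in range(len(A))
         if i != j and A[i, j] >= 0.2]                     # keep only the strong transitions
labels = {i: activities[int(np.argmax(B[i]))] for i in range(len(B))}

print("strategy labels:", labels)
print("map edges (from, to, probability):", edges)
```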
1507.01168 | Ashish Sureka | Ashish Sureka | Kernel Based Sequential Data Anomaly Detection in Business Process Event
Logs | null | null | null | null | cs.SE cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Business Process Management Systems (BPMS) log events and traces of
activities during the execution of a process. Anomalies are defined as
deviation or departure from the normal or common order. Anomaly detection in
business process logs has several applications such as fraud detection and
understanding the causes of process errors. In this paper, we present a novel
approach for anomaly detection in business process logs. We model the event
logs as sequential data and apply kernel-based anomaly detection techniques
to identify outliers and discordant observations. Our technique is unsupervised
(it does not require a pre-annotated training dataset) and employs a kNN
(k-nearest neighbor) kernel-based technique with a normalized longest common subsequence
(LCS) similarity measure. We conduct experiments on a recent, large and
real-world incident management data of an enterprise and demonstrate that our
approach is effective.
| [
{
"version": "v1",
"created": "Sun, 5 Jul 2015 05:33:22 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Sureka",
"Ashish",
""
]
] | TITLE: Kernel Based Sequential Data Anomaly Detection in Business Process Event
Logs
ABSTRACT: Business Process Management Systems (BPMS) log events and traces of
activities during the execution of a process. Anomalies are defined as
deviation or departure from the normal or common order. Anomaly detection in
business process logs has several applications such as fraud detection and
understanding the causes of process errors. In this paper, we present a novel
approach for anomaly detection in business process logs. We model the event
logs as sequential data and apply kernel-based anomaly detection techniques
to identify outliers and discordant observations. Our technique is unsupervised
(it does not require a pre-annotated training dataset) and employs a kNN
(k-nearest neighbor) kernel-based technique with a normalized longest common subsequence
(LCS) similarity measure. We conduct experiments on a recent, large and
real-world incident management data of an enterprise and demonstrate that our
approach is effective.
| no_new_dataset | 0.951051 |
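The two ingredients named in the abstract above, a normalized longest-common-subsequence similarity and a kNN-style score, can be sketched compactly. The normalization by the longer trace and the "mean dissimilarity to the k most similar traces" scoring rule are illustrative choices that may differ from the paper's exact formulation:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two activity sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def nlcs(a, b):
    """Normalized LCS similarity in [0, 1] (normalized here by the longer trace)."""
    return lcs_len(a, b) / max(len(a), len(b))

def knn_anomaly_scores(traces, k=3):
    """Score each trace by its mean dissimilarity to its k most similar other traces."""
    scores = []
    for i, t in enumerate(traces):
        sims = sorted((nlcs(t, u) for j, u in enumerate(traces) if j != i), reverse=True)
        scores.append(1.0 - sum(sims[:k]) / k)
    return scores

logs = [list("ABCD"), list("ABCD"), list("ABD"), list("DCBA"), list("ABCD")]
print(knn_anomaly_scores(logs, k=2))   # the reversed trace "DCBA" stands out
```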
1507.01208 | Puneet Dokania | Puneet K. Dokania and M. Pawan Kumar | Parsimonious Labeling | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new family of discrete energy minimization problems, which we
call parsimonious labeling. Specifically, our energy functional consists of
unary potentials and high-order clique potentials. While the unary potentials
are arbitrary, the clique potentials are proportional to the {\em diversity} of
the set of unique labels assigned to the clique. Intuitively, our energy
functional encourages the labeling to be parsimonious, that is, use as few
labels as possible. This in turn allows us to capture useful cues for important
computer vision applications such as stereo correspondence and image denoising.
Furthermore, we propose an efficient graph-cuts based algorithm for the
parsimonious labeling problem that provides strong theoretical guarantees on
the quality of the solution. Our algorithm consists of three steps. First, we
approximate a given diversity using a mixture of a novel hierarchical $P^n$
Potts model. Second, we use a divide-and-conquer approach for each mixture
component, where each subproblem is solved using an efficient
$\alpha$-expansion algorithm. This provides us with a small number of putative
labelings, one for each mixture component. Third, we choose the best putative
labeling in terms of the energy value. Using both synthetic and standard real
datasets, we show that our algorithm significantly outperforms other graph-cuts
based approaches.
| [
{
"version": "v1",
"created": "Sun, 5 Jul 2015 11:59:43 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Dokania",
"Puneet K.",
""
],
[
"Kumar",
"M. Pawan",
""
]
] | TITLE: Parsimonious Labeling
ABSTRACT: We propose a new family of discrete energy minimization problems, which we
call parsimonious labeling. Specifically, our energy functional consists of
unary potentials and high-order clique potentials. While the unary potentials
are arbitrary, the clique potentials are proportional to the {\em diversity} of
the set of unique labels assigned to the clique. Intuitively, our energy
functional encourages the labeling to be parsimonious, that is, use as few
labels as possible. This in turn allows us to capture useful cues for important
computer vision applications such as stereo correspondence and image denoising.
Furthermore, we propose an efficient graph-cuts based algorithm for the
parsimonious labeling problem that provides strong theoretical guarantees on
the quality of the solution. Our algorithm consists of three steps. First, we
approximate a given diversity using a mixture of a novel hierarchical $P^n$
Potts model. Second, we use a divide-and-conquer approach for each mixture
component, where each subproblem is solved using an efficient
$\alpha$-expansion algorithm. This provides us with a small number of putative
labelings, one for each mixture component. Third, we choose the best putative
labeling in terms of the energy value. Using both synthetic and standard real
datasets, we show that our algorithm significantly outperforms other graph-cuts
based approaches.
| no_new_dataset | 0.946646 |
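To make the energy described above concrete, the toy sketch below evaluates unary terms plus clique terms that grow with the number of distinct labels inside each clique (the simplest diversity; the paper covers more general diversities and minimizes the energy with graph cuts, which is not attempted here):

```python
import numpy as np

def energy(labels, unary, cliques, clique_weight=1.0):
    """Unary terms plus a clique term that counts the extra labels used in each clique."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    for c in cliques:
        e += clique_weight * (len({labels[i] for i in c}) - 1)
    return e

unary = np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.6, 0.4]])
cliques = [(0, 1, 2), (2, 3)]
print(energy([0, 0, 1, 1], unary, cliques))   # 2.0: good unary fit, but a diverse clique
print(energy([0, 0, 0, 0], unary, cliques))   # 1.6: worse unaries, yet parsimonious overall
```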
1507.01209 | Raghvendra Kannao | Raghvendra Kannao and Prithwijit Guha | TV News Commercials Detection using Success based Locally Weighted
Kernel Combination | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Commercial detection in news broadcast videos involves judicious selection of
meaningful audio-visual feature combinations and efficient classifiers. And,
this problem becomes much simpler if these combinations can be learned from the
data. To this end, we propose a Multiple Kernel Learning based method for
boosting successful kernel functions while ignoring the irrelevant ones. We
adopt an intermediate fusion approach where an SVM is trained with a weighted
linear combination of different kernel functions instead of a single kernel
function. Each kernel function is characterized by a feature set and kernel
type. We identify the feature sub-space locations of the prediction success of
a particular classifier trained only with a particular kernel function. We
propose to estimate a weighting function using support vector regression (with
an RBF kernel) for each kernel function, which has high values (near 1.0) where
the classifier learned on that kernel function succeeded and lower values
(nearly 0.0) otherwise. The second contribution of this work is the TV News
Commercials Dataset of 150 hours of news videos. The classifier trained with our
proposed scheme has outperformed the baseline methods on 6 of 8 benchmark
datasets and our own TV commercials dataset.
| [
{
"version": "v1",
"created": "Sun, 5 Jul 2015 12:01:34 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Kannao",
"Raghvendra",
""
],
[
"Guha",
"Prithwijit",
""
]
] | TITLE: TV News Commercials Detection using Success based Locally Weighted
Kernel Combination
ABSTRACT: Commercial detection in news broadcast videos involves judicious selection of
meaningful audio-visual feature combinations and efficient classifiers. And,
this problem becomes much simpler if these combinations can be learned from the
data. To this end, we propose a Multiple Kernel Learning based method for
boosting successful kernel functions while ignoring the irrelevant ones. We
adopt an intermediate fusion approach where an SVM is trained with a weighted
linear combination of different kernel functions instead of a single kernel
function. Each kernel function is characterized by a feature set and kernel
type. We identify the feature sub-space locations of the prediction success of
a particular classifier trained only with a particular kernel function. We
propose to estimate a weighting function using support vector regression (with
an RBF kernel) for each kernel function, which has high values (near 1.0) where
the classifier learned on that kernel function succeeded and lower values
(nearly 0.0) otherwise. The second contribution of this work is the TV News
Commercials Dataset of 150 hours of news videos. The classifier trained with our
proposed scheme has outperformed the baseline methods on 6 of 8 benchmark
datasets and our own TV commercials dataset.
| no_new_dataset | 0.931213 |
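A rough sketch of the success-based weighting idea follows. It trains one precomputed-kernel SVM per base kernel, fits an RBF-kernel SVR to a 0/1 success indicator, and combines the kernels with a pairwise product of the predicted weights; that pairwise weighting (which keeps the combined matrix positive semidefinite) and the toy data are my assumptions, not the paper's exact construction:

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)                     # toy labels, not TV-news features

kernels = [lambda A, B: linear_kernel(A, B),
           lambda A, B: rbf_kernel(A, B, gamma=0.5)]

weights = []
for kern in kernels:
    K = kern(X, X)
    clf = SVC(kernel="precomputed").fit(K, y)
    success = (clf.predict(K) == y).astype(float)           # 1 where this kernel succeeds
    weights.append(SVR(kernel="rbf").fit(X, success).predict(X))

W = np.clip(np.array(weights), 0.0, 1.0)                    # one local weight map per kernel
K_comb = sum(np.outer(w, w) * kern(X, X) for w, kern in zip(W, kernels))
final = SVC(kernel="precomputed").fit(K_comb, y)
print("training accuracy of the combined-kernel SVM:", final.score(K_comb, y))
```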
1507.01251 | Hamid Tizhoosh | Zehra Camlica, H.R. Tizhoosh, Farzad Khalvati | Autoencoding the Retrieval Relevance of Medical Images | To appear in proceedings of The 5th International Conference on Image
Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015,
Orleans, France | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Content-based image retrieval (CBIR) of medical images is a crucial task that
can contribute to a more reliable diagnosis if applied to big data. Recent
advances in feature extraction and classification have enormously improved CBIR
results for digital images. However, considering the increasing accessibility
of big data in medical imaging, we are still in need of reducing both memory
requirements and computational expenses of image retrieval systems. This work
proposes to exclude the features of image blocks that exhibit a low encoding
error when learned by an $n/p/n$ autoencoder ($p\!<\!n$). We examine the
histogram of autoencoding errors of image blocks for each image class to
facilitate the decision of which image regions (or, roughly, what percentage of
an image) shall be declared relevant for the retrieval task. This leads to a
reduction of feature dimensionality and speeds up the retrieval process. To
validate the proposed scheme, we employ local binary patterns (LBP) and support
vector machines (SVM), which are both well-established approaches in the CBIR
research community. We also use the IRMA dataset with 14,410 x-ray images as
test data. The results show that the dimensionality of annotated feature
vectors can be reduced by up to 50% resulting in speedups greater than 27% at
expense of less than 1% decrease in the accuracy of retrieval when validating
the precision and recall of the top 20 hits.
| [
{
"version": "v1",
"created": "Sun, 5 Jul 2015 18:40:14 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Camlica",
"Zehra",
""
],
[
"Tizhoosh",
"H. R.",
""
],
[
"Khalvati",
"Farzad",
""
]
] | TITLE: Autoencoding the Retrieval Relevance of Medical Images
ABSTRACT: Content-based image retrieval (CBIR) of medical images is a crucial task that
can contribute to a more reliable diagnosis if applied to big data. Recent
advances in feature extraction and classification have enormously improved CBIR
results for digital images. However, considering the increasing accessibility
of big data in medical imaging, we are still in need of reducing both memory
requirements and computational expenses of image retrieval systems. This work
proposes to exclude the features of image blocks that exhibit a low encoding
error when learned by an $n/p/n$ autoencoder ($p\!<\!n$). We examine the
histogram of autoencoding errors of image blocks for each image class to
facilitate the decision of which image regions (or, roughly, what percentage of
an image) shall be declared relevant for the retrieval task. This leads to a
reduction of feature dimensionality and speeds up the retrieval process. To
validate the proposed scheme, we employ local binary patterns (LBP) and support
vector machines (SVM), which are both well-established approaches in the CBIR
research community. We also use the IRMA dataset with 14,410 x-ray images as
test data. The results show that the dimensionality of annotated feature
vectors can be reduced by up to 50% resulting in speedups greater than 27% at
expense of less than 1% decrease in the accuracy of retrieval when validating
the precision and recall of the top 20 hits.
| no_new_dataset | 0.941654 |
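A minimal sketch of the block-filtering idea described above, using a single-hidden-layer MLP trained to reconstruct its input as a stand-in for the n/p/n autoencoder; the block size, the 50th-percentile rule and the random blocks are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
blocks = rng.random((2000, 64))                   # flattened 8x8 image blocks (random stand-ins)

ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ae.fit(blocks, blocks)                            # train the network to reconstruct its input

errors = np.mean((ae.predict(blocks) - blocks) ** 2, axis=1)
keep = errors > np.percentile(errors, 50)         # drop the easiest-to-encode half of the blocks
print("blocks kept for feature extraction:", int(keep.sum()))
```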
1507.01422 | Xavier Gir\'o-i-Nieto | Junting Pan and Xavier Gir\'o-i-Nieto | End-to-end Convolutional Network for Saliency Prediction | Winner of the saliency prediction challenge in the Large-scale Scene
Understanding (LSUN) Challenge in the associated workshop of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) 2015 | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction of saliency areas in images has been traditionally addressed
with hand-crafted features based on neuroscience principles. This paper, however,
addresses the problem with a completely data-driven approach by training a
convolutional network. The learning process is formulated as a minimization of
a loss function that measures the Euclidean distance of the predicted saliency
map with the provided ground truth. The recent publication of large datasets of
saliency prediction has provided enough data to train a not very deep
architecture which is both fast and accurate. The convolutional network in this
paper, named JuntingNet, won the LSUN 2015 challenge on saliency prediction
with a superior performance in all considered metrics.
| [
{
"version": "v1",
"created": "Mon, 6 Jul 2015 12:43:26 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Pan",
"Junting",
""
],
[
"Giró-i-Nieto",
"Xavier",
""
]
] | TITLE: End-to-end Convolutional Network for Saliency Prediction
ABSTRACT: The prediction of saliency areas in images has been traditionally addressed
with hand-crafted features based on neuroscience principles. This paper, however,
addresses the problem with a completely data-driven approach by training a
convolutional network. The learning process is formulated as a minimization of
a loss function that measures the Euclidean distance of the predicted saliency
map with the provided ground truth. The recent publication of large datasets of
saliency prediction has provided enough data to train a not very deep
architecture which is both fast and accurate. The convolutional network in this
paper, named JuntingNet, won the LSUN 2015 challenge on saliency prediction
with a superior performance in all considered metrics.
| no_new_dataset | 0.949995 |
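The training objective described above, a convolutional network regressing a saliency map under a Euclidean loss, can be sketched schematically in PyTorch; the tiny architecture, tensor sizes and random data below are placeholders and not JuntingNet itself:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                          # one-channel saliency map
)
opt = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()                            # Euclidean distance to the ground truth

images = torch.rand(8, 3, 96, 96)                 # placeholder batch of images
targets = torch.rand(8, 1, 96, 96)                # placeholder ground-truth saliency maps
for step in range(5):
    opt.zero_grad()
    loss = loss_fn(net(images), targets)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.4f}")
```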
1507.01442 | Shicong Liu | Shicong Liu, Hongtao Lu | Learning Better Encoding for Approximate Nearest Neighbor Search with
Dictionary Annealing | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel dictionary optimization method for high-dimensional
vector quantization employed in approximate nearest neighbor (ANN) search.
Vector quantization methods first seek a series of dictionaries, then
approximate each vector by a sum of elements selected from these dictionaries.
An optimal series of dictionaries should be mutually independent, and each
dictionary should generate a balanced encoding for the target dataset. Existing
methods did not explicitly consider this. To achieve these goals along with
minimizing the quantization error (residue), we propose a novel dictionary
optimization method called \emph{Dictionary Annealing} that alternately
"heats up" a single dictionary by generating an intermediate dataset with
residual vectors, "cools down" the dictionary by fitting the intermediate
dataset, then extracts the new residual vectors for the next iteration. Better
codes can be learned by DA for the ANN search tasks. DA is easily implemented
on GPU to utilize the latest computing technology, and can easily extended to
an online dictionary learning scheme. We show by experiments that our optimized
dictionaries substantially reduce the overall quantization error. Jointly used
with residual vector quantization, our optimized dictionaries lead to a better
approximate nearest neighbor search performance compared to the
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 6 Jul 2015 13:25:35 GMT"
}
] | 2015-07-07T00:00:00 | [
[
"Liu",
"Shicong",
""
],
[
"Lu",
"Hongtao",
""
]
] | TITLE: Learning Better Encoding for Approximate Nearest Neighbor Search with
Dictionary Annealing
ABSTRACT: We introduce a novel dictionary optimization method for high-dimensional
vector quantization employed in approximate nearest neighbor (ANN) search.
Vector quantization methods first seek a series of dictionaries, then
approximate each vector by a sum of elements selected from these dictionaries.
An optimal series of dictionaries should be mutually independent, and each
dictionary should generate a balanced encoding for the target dataset. Existing
methods did not explicitly consider this. To achieve these goals along with
minimizing the quantization error (residue), we propose a novel dictionary
optimization method called \emph{Dictionary Annealing} that alternately
"heats up" a single dictionary by generating an intermediate dataset with
residual vectors, "cools down" the dictionary by fitting the intermediate
dataset, then extracts the new residual vectors for the next iteration. Better
codes can be learned by DA for ANN search tasks. DA is easily implemented
on GPU to utilize the latest computing technology, and can be easily extended to
an online dictionary learning scheme. We show by experiments that our optimized
dictionaries substantially reduce the overall quantization error. Jointly used
with residual vector quantization, our optimized dictionaries lead to a better
approximate nearest neighbor search performance compared to the
state-of-the-art methods.
| no_new_dataset | 0.948537 |
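As a schematic analogue of the heat-up/cool-down loop described above, the sketch below performs plain residual quantization: each stage fits a k-means codebook to the current residuals (the "intermediate dataset") and passes the new residuals to the next stage. It omits the annealing specifics and the GPU implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))

residual = X.copy()
dictionaries = []
for stage in range(4):                                       # four stacked codebooks
    km = KMeans(n_clusters=64, n_init=4, random_state=stage).fit(residual)
    dictionaries.append(km.cluster_centers_)
    residual = residual - km.cluster_centers_[km.labels_]    # residuals feed the next stage
    print(f"stage {stage}: mean squared residual = {np.mean(residual ** 2):.4f}")
```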
1502.03508 | Martin Jaggi | Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter
Richt\'arik and Martin Tak\'a\v{c} | Adding vs. Averaging in Distributed Primal-Dual Optimization | ICML 2015: JMLR W&CP volume37, Proceedings of The 32nd International
Conference on Machine Learning, pp. 1973-1982 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed optimization methods for large-scale machine learning suffer from
a communication bottleneck. It is difficult to reduce this bottleneck while
still efficiently and accurately aggregating partial work from different
machines. In this paper, we present a novel generalization of the recent
communication-efficient primal-dual framework (CoCoA) for distributed
optimization. Our framework, CoCoA+, allows for additive combination of local
updates to the global parameters at each iteration, whereas previous schemes
with convergence guarantees only allow conservative averaging. We give stronger
(primal-dual) convergence rate guarantees for both CoCoA and our new
variants, and generalize the theory for both methods to cover non-smooth convex
loss functions. We provide an extensive experimental comparison that shows the
markedly improved performance of CoCoA+ on several real-world distributed
datasets, especially when scaling up the number of machines.
| [
{
"version": "v1",
"created": "Thu, 12 Feb 2015 01:51:08 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jul 2015 19:35:13 GMT"
}
] | 2015-07-06T00:00:00 | [
[
"Ma",
"Chenxin",
""
],
[
"Smith",
"Virginia",
""
],
[
"Jaggi",
"Martin",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Richtárik",
"Peter",
""
],
[
"Takáč",
"Martin",
""
]
] | TITLE: Adding vs. Averaging in Distributed Primal-Dual Optimization
ABSTRACT: Distributed optimization methods for large-scale machine learning suffer from
a communication bottleneck. It is difficult to reduce this bottleneck while
still efficiently and accurately aggregating partial work from different
machines. In this paper, we present a novel generalization of the recent
communication-efficient primal-dual framework (CoCoA) for distributed
optimization. Our framework, CoCoA+, allows for additive combination of local
updates to the global parameters at each iteration, whereas previous schemes
with convergence guarantees only allow conservative averaging. We give stronger
(primal-dual) convergence rate guarantees for both CoCoA and our new
variants, and generalize the theory for both methods to cover non-smooth convex
loss functions. We provide an extensive experimental comparison that shows the
markedly improved performance of CoCoA+ on several real-world distributed
datasets, especially when scaling up the number of machines.
| no_new_dataset | 0.948442 |
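The contrast between conservative averaging and additive combination of local updates can be shown in a couple of lines; the local updates below are random placeholders rather than the output of real local solvers, and an aggregation parameter of 1 is just one admissible choice:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 6
local_updates = [rng.normal(size=d) for _ in range(K)]   # placeholders for per-machine updates
w = np.zeros(d)

w_avg = w + sum(local_updates) / K        # conservative averaging of the K local updates
w_add = w + 1.0 * sum(local_updates)      # additive combination (aggregation parameter = 1)
print("averaged:", np.round(w_avg, 3))
print("added:   ", np.round(w_add, 3))
```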
1505.06449 | Zachary Lipton | Zachary C. Lipton, Charles Elkan | Efficient Elastic Net Regularization for Sparse Linear Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an algorithm for efficient training of sparse linear
models with elastic net regularization. Extending previous work on delayed
updates, the new algorithm applies stochastic gradient updates to non-zero
features only, bringing weights current as needed with closed-form updates.
Closed-form delayed updates for the $\ell_1$, $\ell_{\infty}$, and rarely used
$\ell_2$ regularizers have been described previously. This paper provides
closed-form updates for the popular squared norm $\ell^2_2$ and elastic net
regularizers.
We provide dynamic programming algorithms that perform each delayed update in
constant time. The new $\ell^2_2$ and elastic net methods handle both fixed and
varying learning rates, and both standard stochastic gradient descent (SGD)
and forward backward splitting (FoBoS). Experimental results show that on a
bag-of-words dataset with $260,941$ features, but only $88$ nonzero features on
average per training example, the dynamic programming method trains a logistic
regression classifier with elastic net regularization over $2000$ times faster
than otherwise.
| [
{
"version": "v1",
"created": "Sun, 24 May 2015 15:42:58 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 07:28:50 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Jul 2015 20:44:57 GMT"
}
] | 2015-07-06T00:00:00 | [
[
"Lipton",
"Zachary C.",
""
],
[
"Elkan",
"Charles",
""
]
] | TITLE: Efficient Elastic Net Regularization for Sparse Linear Models
ABSTRACT: This paper presents an algorithm for efficient training of sparse linear
models with elastic net regularization. Extending previous work on delayed
updates, the new algorithm applies stochastic gradient updates to non-zero
features only, bringing weights current as needed with closed-form updates.
Closed-form delayed updates for the $\ell_1$, $\ell_{\infty}$, and rarely used
$\ell_2$ regularizers have been described previously. This paper provides
closed-form updates for the popular squared norm $\ell^2_2$ and elastic net
regularizers.
We provide dynamic programming algorithms that perform each delayed update in
constant time. The new $\ell^2_2$ and elastic net methods handle both fixed and
varying learning rates, and both standard stochastic gradient descent (SGD)
and forward backward splitting (FoBoS). Experimental results show that on a
bag-of-words dataset with $260,941$ features, but only $88$ nonzero features on
average per training example, the dynamic programming method trains a logistic
regression classifier with elastic net regularization over $2000$ times faster
than otherwise.
| no_new_dataset | 0.945701 |
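To illustrate the delayed-update idea in its simplest form, the sketch below applies the closed-form catch-up for plain squared-L2 regularization with a fixed learning rate: k skipped regularization-only steps shrink a weight by (1 - eta*lam)**k, so they can be applied in one go when the feature is next touched. The elastic-net and FoBoS closed forms in the paper are more involved, and the toy sparse rows are assumptions:

```python
import numpy as np

def lazy_l2_sgd(rows, n_features, eta=0.1, lam=0.01, epochs=3):
    """SGD for squared loss + (lam/2)*||w||^2 on sparse rows, with lazy L2 shrinkage."""
    w = np.zeros(n_features)
    last_touch = np.zeros(n_features, dtype=int)   # step at which each feature was last updated
    t = 0
    for _ in range(epochs):
        for idx, vals, y in rows:                  # a sparse row: indices, values, target
            t += 1
            # Catch up the skipped regularization-only steps in closed form.
            w[idx] *= (1.0 - eta * lam) ** (t - last_touch[idx] - 1)
            last_touch[idx] = t
            err = float(vals @ w[idx]) - y
            w[idx] -= eta * (err * vals + lam * w[idx])
    # (A final catch-up of features not touched recently is omitted for brevity.)
    return w

rows = [(np.array([0, 3]), np.array([1.0, 2.0]), 1.0),
        (np.array([1, 3]), np.array([0.5, 1.0]), -1.0)]
print(lazy_l2_sgd(rows, n_features=5))
```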
1506.00080 | Francesco Gadaleta | Francesco Gadaleta and Kyrylo Bessonov | Integration of Gene Expression Data and Methylation Reveals Genetic
Networks for Glioblastoma | This paper has been withdrawn by the author due to submission to
commercial journal | null | null | null | cs.CE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Motivation: The considerable amount of different types of omics data requires
novel methods of analysis and data integration. In this work we describe
Regression2Net, a computational approach to analyse gene expression and
methylation profiles via regression analysis and network-based techniques.
Results: We identified 284 and 447 unique candidate genes potentially
associated to the Glioblastoma pathology from two networks inferred from mixed
genetic datasets. In-depth biological analysis of these networks reveals genes
that are related to energy metabolism, cell cycle control (AATF), immune system
response and several types of cancer. Importantly, we observed significant
over-representation of cancer-related pathways, including glioma, especially in
the methylation network. This confirms the strong link between methylation and
glioblastomas. Potential glioma suppressor genes ACCN3 and ACCN4 linked to
NBPF1 neuroblastoma breakpoint family have been identified in our expression
network. Numerous ABC transporter genes (ABCA1, ABCB1) present in the
expression network suggest drug resistance of glioblastoma tumors.
| [
{
"version": "v1",
"created": "Sat, 30 May 2015 07:02:48 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jul 2015 12:08:25 GMT"
}
] | 2015-07-06T00:00:00 | [
[
"Gadaleta",
"Francesco",
""
],
[
"Bessonov",
"Kyrylo",
""
]
] | TITLE: Integration of Gene Expression Data and Methylation Reveals Genetic
Networks for Glioblastoma
ABSTRACT: Motivation: The considerable amount of different types of omics data requires
novel methods of analysis and data integration. In this work we describe
Regression2Net, a computational approach to analyse gene expression and
methylation profiles via regression analysis and network-based techniques.
Results: We identified 284 and 447 unique candidate genes potentially
associated to the Glioblastoma pathology from two networks inferred from mixed
genetic datasets. In-depth biological analysis of these networks reveals genes
that are related to energy metabolism, cell cycle control (AATF), immune system
response and several types of cancer. Importantly, we observed significant
over-representation of cancer-related pathways, including glioma, especially in
the methylation network. This confirms the strong link between methylation and
glioblastomas. Potential glioma suppressor genes ACCN3 and ACCN4 linked to
NBPF1 neuroblastoma breakpoint family have been identified in our expression
network. Numerous ABC transporter genes (ABCA1, ABCB1) present in the
expression network suggest drug resistance of glioblastoma tumors.
| no_new_dataset | 0.947624 |
1507.00824 | Behnam Babagholami-Mohamadabadi | Behnam Babagholami-Mohamadabadi, Sejong Yoon, Vladimir Pavlovic | D-MFVI: Distributed Mean Field Variational Inference using Bregman ADMM | 19 pages, 6 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian models provide a framework for probabilistic modelling of complex
datasets. However, many such models are computationally demanding, especially
in the presence of large datasets. On the other hand, in sensor network
applications, statistical (Bayesian) parameter estimation usually needs
distributed algorithms, in which both data and computation are distributed
across the nodes of the network. In this paper we propose a general framework
for distributed Bayesian learning using Bregman Alternating Direction Method of
Multipliers (B-ADMM). We demonstrate the utility of our framework, with Mean
Field Variational Bayes (MFVB) as the primitive for distributed Matrix
Factorization (MF) and distributed affine structure from motion (SfM).
| [
{
"version": "v1",
"created": "Fri, 3 Jul 2015 06:14:26 GMT"
}
] | 2015-07-06T00:00:00 | [
[
"Babagholami-Mohamadabadi",
"Behnam",
""
],
[
"Yoon",
"Sejong",
""
],
[
"Pavlovic",
"Vladimir",
""
]
] | TITLE: D-MFVI: Distributed Mean Field Variational Inference using Bregman ADMM
ABSTRACT: Bayesian models provide a framework for probabilistic modelling of complex
datasets. However, many such models are computationally demanding, especially
in the presence of large datasets. On the other hand, in sensor network
applications, statistical (Bayesian) parameter estimation usually needs
distributed algorithms, in which both data and computation are distributed
across the nodes of the network. In this paper we propose a general framework
for distributed Bayesian learning using Bregman Alternating Direction Method of
Multipliers (B-ADMM). We demonstrate the utility of our framework, with Mean
Field Variational Bayes (MFVB) as the primitive for distributed Matrix
Factorization (MF) and distributed affine structure from motion (SfM).
| no_new_dataset | 0.94868 |
1507.00913 | Erik Rodner | Erik Rodner and Marcel Simon and Gunnar Brehm and Stephanie Pietsch
and J. Wolfgang W\"agele and Joachim Denzler | Fine-grained Recognition Datasets for Biodiversity Analysis | CVPR FGVC Workshop 2015; dataset available | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the following paper, we present and discuss challenging applications for
fine-grained visual classification (FGVC): biodiversity and species analysis.
We not only give details about two challenging new datasets suitable for
computer vision research with up to 675 highly similar classes, but also
present first results with localized features using convolutional neural
networks (CNN). We conclude with a list of challenging new research directions
in the area of visual classification for biodiversity research.
| [
{
"version": "v1",
"created": "Fri, 3 Jul 2015 13:53:26 GMT"
}
] | 2015-07-06T00:00:00 | [
[
"Rodner",
"Erik",
""
],
[
"Simon",
"Marcel",
""
],
[
"Brehm",
"Gunnar",
""
],
[
"Pietsch",
"Stephanie",
""
],
[
"Wägele",
"J. Wolfgang",
""
],
[
"Denzler",
"Joachim",
""
]
] | TITLE: Fine-grained Recognition Datasets for Biodiversity Analysis
ABSTRACT: In the following paper, we present and discuss challenging applications for
fine-grained visual classification (FGVC): biodiversity and species analysis.
We not only give details about two challenging new datasets suitable for
computer vision research with up to 675 highly similar classes, but also
present first results with localized features using convolutional neural
networks (CNN). We conclude with a list of challenging new research directions
in the area of visual classification for biodiversity research.
| new_dataset | 0.72487 |
1507.00421 | Yao Xie | Yang Cao, Yao Xie | Categorical Matrix Completion | Submitted | null | null | null | cs.NA cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of completing a matrix with categorical-valued
entries from partial observations. This is achieved by extending the
formulation and theory of one-bit matrix completion. We recover a low-rank
matrix $X$ by maximizing the likelihood ratio with a constraint on the nuclear
norm of $X$, and the observations are mapped from entries of $X$ through
multiple link functions. We establish theoretical upper and lower bounds on the
recovery error, which meet up to a constant factor $\mathcal{O}(K^{3/2})$ where
$K$ is the fixed number of categories. The upper bound in our case depends on
the number of categories implicitly through a maximization of terms that
involve the smoothness of the link functions. In contrast to one-bit matrix
completion, our bounds for categorical matrix completion are optimal up to a
factor on the order of the square root of the number of categories, which is
consistent with an intuition that the problem becomes harder when the number of
categories increases. By comparing the performance of our method with the
conventional matrix completion method on the MovieLens dataset, we demonstrate
the advantage of our method.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 03:58:47 GMT"
}
] | 2015-07-03T00:00:00 | [
[
"Cao",
"Yang",
""
],
[
"Xie",
"Yao",
""
]
] | TITLE: Categorical Matrix Completion
ABSTRACT: We consider the problem of completing a matrix with categorical-valued
entries from partial observations. This is achieved by extending the
formulation and theory of one-bit matrix completion. We recover a low-rank
matrix $X$ by maximizing the likelihood ratio with a constraint on the nuclear
norm of $X$, and the observations are mapped from entries of $X$ through
multiple link functions. We establish theoretical upper and lower bounds on the
recovery error, which meet up to a constant factor $\mathcal{O}(K^{3/2})$ where
$K$ is the fixed number of categories. The upper bound in our case depends on
the number of categories implicitly through a maximization of terms that
involve the smoothness of the link functions. In contrast to one-bit matrix
completion, our bounds for categorical matrix completion are optimal up to a
factor on the order of the square root of the number of categories, which is
consistent with an intuition that the problem becomes harder when the number of
categories increases. By comparing the performance of our method with the
conventional matrix completion method on the MovieLens dataset, we demonstrate
the advantage of our method.
| no_new_dataset | 0.938913 |
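Nuclear-norm-regularized estimators of the kind discussed above are commonly computed with singular-value shrinkage steps; the helper below shows that single proximal step for the penalized form only (the paper works with a constrained maximum-likelihood estimator over categorical link functions, which this sketch does not implement):

```python
import numpy as np

def svd_soft_threshold(X, tau):
    """Proximal step of a nuclear-norm penalty: shrink the singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.outer(np.arange(1, 6), np.arange(1, 5)).astype(float)   # a rank-1 matrix
print(np.round(svd_soft_threshold(M + 0.1, tau=1.0), 2))
```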
1507.00443 | Vincent Primault | Vincent Primault (DRIM, INSA Lyon), Sonia Ben Mokhtar (DRIM, INSA
Lyon), C\'edric Lauradoux (PRIVATICS), Lionel Brunie (DRIM, INSA Lyon) | Time Distortion Anonymization for the Publication of Mobility Data with
High Utility | in 14th IEEE International Conference on Trust, Security and Privacy
in Computing and Communications, Aug 2015, Helsinki, Finland | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing amount of mobility data is being collected every day by
different means, such as mobile applications or crowd-sensing campaigns. This
data is sometimes published after the application of simple anonymization
techniques (e.g., putting an identifier instead of the users' names), which
might lead to severe threats to the privacy of the participating users.
The literature contains more sophisticated anonymization techniques, often based on
adding noise to the spatial data. However, these techniques either compromise
the privacy if the added noise is too little or the utility of the data if the
added noise is too strong. We investigate in this paper an alternative
solution, which builds on time distortion instead of spatial distortion.
Specifically, our contribution lies in (1) the introduction of the concept of
time distortion to anonymize mobility datasets (2) Promesse, a protection
mechanism implementing this concept (3) a practical study of Promesse compared
to two representative spatial distortion mechanisms, namely Wait For Me, which
enforces k-anonymity, and Geo-Indistinguishability, which enforces differential
privacy. We evaluate our mechanism practically using three real-life datasets.
Our results show that time distortion reduces the number of points of interest
that can be retrieved by an adversary to under 3 %, while the introduced
spatial error is almost null and the distortion introduced on the results of
range queries is kept under 13 % on average.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 06:56:30 GMT"
}
] | 2015-07-03T00:00:00 | [
[
"Primault",
"Vincent",
"",
"DRIM, INSA Lyon"
],
[
"Mokhtar",
"Sonia Ben",
"",
"DRIM, INSA\n Lyon"
],
[
"Lauradoux",
"Cédric",
"",
"PRIVATICS"
],
[
"Brunie",
"Lionel",
"",
"DRIM, INSA Lyon"
]
] | TITLE: Time Distortion Anonymization for the Publication of Mobility Data with
High Utility
ABSTRACT: An increasing amount of mobility data is being collected every day by
different means, such as mobile applications or crowd-sensing campaigns. This
data is sometimes published after the application of simple anonymization
techniques (e.g., putting an identifier instead of the users' names), which
might lead to severe threats to the privacy of the participating users.
The literature contains more sophisticated anonymization techniques, often based on
adding noise to the spatial data. However, these techniques either compromise
the privacy if the added noise is too little or the utility of the data if the
added noise is too strong. We investigate in this paper an alternative
solution, which builds on time distortion instead of spatial distortion.
Specifically, our contribution lies in (1) the introduction of the concept of
time distortion to anonymize mobility datasets, (2) Promesse, a protection
mechanism implementing this concept, and (3) a practical study of Promesse compared
to two representative spatial distortion mechanisms, namely Wait For Me, which
enforces k-anonymity, and Geo-Indistinguishability, which enforces differential
privacy. We evaluate our mechanism practically using three real-life datasets.
Our results show that time distortion reduces the number of points of interest
that can be retrieved by an adversary to under 3 %, while the introduced
spatial error is almost null and the distortion introduced on the results of
range queries is kept under 13 % on average.
| no_new_dataset | 0.954052 |
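A loose sketch of time distortion (not the released Promesse implementation): keep the spatial trace exactly as recorded but reassign timestamps so the path is traversed at constant speed, which removes the dwell times that reveal points of interest. The toy trace and metre units are assumptions:

```python
import numpy as np

def distort_time(points, t_start, t_end):
    """points: (n, 2) positions in metres; returns constant-speed timestamps."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    frac = cum / cum[-1] if cum[-1] > 0 else np.linspace(0.0, 1.0, len(points))
    return t_start + frac * (t_end - t_start)

trace = np.array([[0, 0], [0, 100], [0, 100], [0, 100], [100, 100]], float)
print(distort_time(trace, t_start=0.0, t_end=400.0))   # the stop collapses to one timestamp
```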
1507.00500 | Remi Flamary | L\'ea Laporte (IRIT), R\'emi Flamary (OCA, LAGRANGE), Stephane Canu
(LITIS), S\'ebastien D\'ejean (IMT), Josiane Mothe (IRIT) | Non-convex Regularizations for Feature Selection in Ranking With Sparse
SVM | null | IEEE Transactions on Neural Networks and Learning Systems, IEEE,
2013, pp.1,1 | 10.1109/TNNLS.2013.2286696 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection in learning to rank has recently emerged as a crucial
issue. Whereas several preprocessing approaches have been proposed, only a few
works have focused on integrating the feature selection into the learning
process. In this work, we propose a general framework for feature selection in
learning to rank using SVM with a sparse regularization term. We investigate
both classical convex regularizations such as $\ell\_1$ or weighted $\ell\_1$
and non-convex regularization terms such as log penalty, Minimax Concave
Penalty (MCP) or $\ell\_p$ pseudo norm with $p\textless{}1$. Two algorithms are
proposed, first an accelerated proximal approach for solving the convex
problems, second a reweighted $\ell\_1$ scheme to address the non-convex
regularizations. We conduct intensive experiments on nine datasets from Letor
3.0 and Letor 4.0 corpora. Numerical results show that the use of non-convex
regularizations we propose leads to more sparsity in the resulting models while
prediction performance is preserved. The number of features is decreased by up
to a factor of six compared to the $\ell\_1$ regularization. In addition, the
software is publicly available on the web.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 10:06:02 GMT"
}
] | 2015-07-03T00:00:00 | [
[
"Laporte",
"Léa",
"",
"IRIT"
],
[
"Flamary",
"Rémi",
"",
"OCA, LAGRANGE"
],
[
"Canu",
"Stephane",
"",
"LITIS"
],
[
"Déjean",
"Sébastien",
"",
"IMT"
],
[
"Mothe",
"Josiane",
"",
"IRIT"
]
] | TITLE: Non-convex Regularizations for Feature Selection in Ranking With Sparse
SVM
ABSTRACT: Feature selection in learning to rank has recently emerged as a crucial
issue. Whereas several preprocessing approaches have been proposed, only a few
works have focused on integrating the feature selection into the learning
process. In this work, we propose a general framework for feature selection in
learning to rank using SVM with a sparse regularization term. We investigate
both classical convex regularizations such as $\ell\_1$ or weighted $\ell\_1$
and non-convex regularization terms such as log penalty, Minimax Concave
Penalty (MCP) or $\ell\_p$ pseudo norm with $p\textless{}1$. Two algorithms are
proposed, first an accelerated proximal approach for solving the convex
problems, second a reweighted $\ell\_1$ scheme to address the non-convex
regularizations. We conduct intensive experiments on nine datasets from Letor
3.0 and Letor 4.0 corpora. Numerical results show that the use of non-convex
regularizations we propose leads to more sparsity in the resulting models while
prediction performance is preserved. The number of features is decreased by up
to a factor of six compared to the $\ell\_1$ regularization. In addition, the
software is publicly available on the web.
| no_new_dataset | 0.949482 |
1507.00639 | Daoud Clarke | Daoud Clarke | Simple, Fast Semantic Parsing with a Tensor Kernel | in CICLing 2015 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a simple approach to semantic parsing based on a tensor product
kernel. We extract two feature vectors: one for the query and one for each
candidate logical form. We then train a classifier using the tensor product of
the two vectors. Using very simple features for both, our system achieves an
average F1 score of 40.1% on the WebQuestions dataset. This is comparable to
more complex systems but is simpler to implement and runs faster.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 15:58:25 GMT"
}
] | 2015-07-03T00:00:00 | [
[
"Clarke",
"Daoud",
""
]
] | TITLE: Simple, Fast Semantic Parsing with a Tensor Kernel
ABSTRACT: We describe a simple approach to semantic parsing based on a tensor product
kernel. We extract two feature vectors: one for the query and one for each
candidate logical form. We then train a classifier using the tensor product of
the two vectors. Using very simple features for both, our system achieves an
average F1 score of 40.1% on the WebQuestions dataset. This is comparable to
more complex systems but is simpler to implement and runs faster.
| no_new_dataset | 0.953188 |
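The tensor-product construction above can be sketched by taking the outer product of a query feature vector and a logical-form feature vector as the joint representation and training a linear classifier on it; the random feature vectors and logistic-regression classifier below are stand-ins for the paper's actual features and learner:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(q_vec, lf_vec):
    """Flattened outer product: the tensor-product feature of a (query, logical form) pair."""
    return np.outer(q_vec, lf_vec).ravel()

rng = np.random.default_rng(0)
Q = rng.random((40, 6))         # query feature vectors (random stand-ins)
L = rng.random((40, 5))         # candidate logical-form feature vectors (random stand-ins)
y = rng.integers(0, 2, 40)      # 1 if the candidate is the correct parse

X = np.array([pair_features(q, lf) for q, lf in zip(Q, L)])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```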
1507.00674 | Cibele Freire | Cibele Freire, Wolfgang Gatterbauer, Neil Immerman, Alexandra Meliou | A Characterization of the Complexity of Resilience and Responsibility
for Self-join-free Conjunctive Queries | 36 pages, 13 figures | null | null | null | cs.DB cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several research thrusts in the area of data management have focused on
understanding how changes in the data affect the output of a view or standing
query. Example applications are explaining query results, propagating updates
through views, and anonymizing datasets. These applications usually rely on
understanding how interventions in a database impact the output of a query. An
important aspect of this analysis is the problem of deleting a minimum number
of tuples from the input tables to make a given Boolean query false. We refer
to this problem as "the resilience of a query" and show its connections to the
well-studied problems of deletion propagation and causal responsibility. In
this paper, we study the complexity of resilience for self-join-free
conjunctive queries, and also make several contributions to previous known
results for the problems of deletion propagation with source side-effects and
causal responsibility: (1) We define the notion of resilience and provide a
complete dichotomy for the class of self-join-free conjunctive queries with
arbitrary functional dependencies; this dichotomy also extends and generalizes
previous tractability results on deletion propagation with source side-effects.
(2) We formalize the connection between resilience and causal responsibility,
and show that resilience has a larger class of tractable queries than
responsibility. (3) We identify a mistake in a previous dichotomy for the
problem of causal responsibility and offer a revised characterization based on
new, simpler, and more intuitive notions. (4) Finally, we extend the dichotomy
for causal responsibility in two ways: (a) we treat cases where the input
tables contain functional dependencies, and (b) we compute responsibility for a
set of tuples specified via wildcards.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 17:45:32 GMT"
}
] | 2015-07-03T00:00:00 | [
[
"Freire",
"Cibele",
""
],
[
"Gatterbauer",
"Wolfgang",
""
],
[
"Immerman",
"Neil",
""
],
[
"Meliou",
"Alexandra",
""
]
] | TITLE: A Characterization of the Complexity of Resilience and Responsibility
for Self-join-free Conjunctive Queries
ABSTRACT: Several research thrusts in the area of data management have focused on
understanding how changes in the data affect the output of a view or standing
query. Example applications are explaining query results, propagating updates
through views, and anonymizing datasets. These applications usually rely on
understanding how interventions in a database impact the output of a query. An
important aspect of this analysis is the problem of deleting a minimum number
of tuples from the input tables to make a given Boolean query false. We refer
to this problem as "the resilience of a query" and show its connections to the
well-studied problems of deletion propagation and causal responsibility. In
this paper, we study the complexity of resilience for self-join-free
conjunctive queries, and also make several contributions to previous known
results for the problems of deletion propagation with source side-effects and
causal responsibility: (1) We define the notion of resilience and provide a
complete dichotomy for the class of self-join-free conjunctive queries with
arbitrary functional dependencies; this dichotomy also extends and generalizes
previous tractability results on deletion propagation with source side-effects.
(2) We formalize the connection between resilience and causal responsibility,
and show that resilience has a larger class of tractable queries than
responsibility. (3) We identify a mistake in a previous dichotomy for the
problem of causal responsibility and offer a revised characterization based on
new, simpler, and more intuitive notions. (4) Finally, we extend the dichotomy
for causal responsibility in two ways: (a) we treat cases where the input
tables contain functional dependencies, and (b) we compute responsibility for a
set of tuples specified via wildcards.
| no_new_dataset | 0.951369 |
1504.00581 | Andrea Cimatoribus | Andrea A. Cimatoribus, Hans van Haren | Temperature statistics above a deep-ocean sloping boundary | 22 pages, 10 figures, 3 tables. Accepted version | Journal of Fluid Mechanics (2015), 775, pp 415-435 | 10.1017/jfm.2015.295 | null | physics.flu-dyn physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a detailed analysis of the temperature statistics in an
oceanographic observational dataset. The data are collected using a moored
array of thermistors, 100 m tall and starting 5 m above the bottom, deployed
during four months above the slopes of a Seamount in the north-eastern Atlantic
Ocean. Turbulence at this location is strongly affected by the semidiurnal
tidal wave. Mean stratification is stable in the entire dataset. We compute
structure functions, of order up to 10, of the distributions of temperature
increments. Strong intermittency is observed, in particular, during the
downslope phase of the tide, and farther from the solid bottom. In the lower
half of the mooring during the upslope phase, the temperature statistics are
consistent with those of a passive scalar. In the upper half of the mooring,
the temperature statistics deviate from those of a passive scalar, and evidence
of turbulent convective activity is found. The downslope phase is generally
thought to be more shear-dominated, but our results suggest that convective
activity is nonetheless present. High-order moments also show that the
turbulence scaling behaviour breaks at a well-defined scale (of the order of
the buoyancy length scale), which is however dependent on the flow state (tidal
phase, height above the bottom). At larger scales, wave motions are dominant.
We suggest that our results could provide an important reference for laboratory
and numerical studies of mixing in geophysical flows.
| [
{
"version": "v1",
"created": "Thu, 2 Apr 2015 14:54:20 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jun 2015 12:02:23 GMT"
}
] | 2015-07-02T00:00:00 | [
[
"Cimatoribus",
"Andrea A.",
""
],
[
"van Haren",
"Hans",
""
]
] | TITLE: Temperature statistics above a deep-ocean sloping boundary
ABSTRACT: We present a detailed analysis of the temperature statistics in an
oceanographic observational dataset. The data are collected using a moored
array of thermistors, 100 m tall and starting 5 m above the bottom, deployed
during four months above the slopes of a Seamount in the north-eastern Atlantic
Ocean. Turbulence at this location is strongly affected by the semidiurnal
tidal wave. Mean stratification is stable in the entire dataset. We compute
structure functions, of order up to 10, of the distributions of temperature
increments. Strong intermittency is observed, in particular, during the
downslope phase of the tide, and farther from the solid bottom. In the lower
half of the mooring during the upslope phase, the temperature statistics are
consistent with those of a passive scalar. In the upper half of the mooring,
the temperature statistics deviate from those of a passive scalar, and evidence
of turbulent convective activity is found. The downslope phase is generally
thought to be more shear-dominated, but our results suggest that convective
activity is nonetheless present. High-order moments also show that the
turbulence scaling behaviour breaks at a well-defined scale (of the order of
the buoyancy length scale), which is however dependent on the flow state (tidal
phase, height above the bottom). At larger scales, wave motions are dominant.
We suggest that our results could provide an important reference for laboratory
and numerical studies of mixing in geophysical flows.
| no_new_dataset | 0.947575 |
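The structure-function estimate described in the record above can be sketched in a few lines; the snippet below is a minimal illustration on synthetic data (a random-walk series standing in for a thermistor record), not the authors' processing pipeline:

```python
import numpy as np

def structure_functions(temperature, lags, max_order=10):
    """Estimate S_p(lag) = <|dT(lag)|^p> for a 1-D series, p = 1..max_order."""
    temperature = np.asarray(temperature, dtype=float)
    S = np.empty((len(lags), max_order))
    for i, lag in enumerate(lags):
        increments = temperature[lag:] - temperature[:-lag]
        for p in range(1, max_order + 1):
            S[i, p - 1] = np.mean(np.abs(increments) ** p)
    return S

# Synthetic stand-in for a moored thermistor record.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=10_000))   # random-walk "temperature"
lags = [1, 2, 4, 8, 16, 32]
S = structure_functions(series, lags)
print(S.shape)   # (6, 10): one row per lag, one column per order
```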
1507.00087 | Brandon Oselio | Brandon Oselio, Alex Kulesza, Alfred Hero | Information Extraction from Larger Multi-layer Social Networks | 2015 ICASSP | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks often encode community structure using multiple distinct
types of links between nodes. In this paper we introduce a novel method to
extract information from such multi-layer networks, where each type of link
forms its own layer. Using the concept of Pareto optimality, community
detection in this multi-layer setting is formulated as a multiple criterion
optimization problem. We propose an algorithm for finding an approximate Pareto
frontier containing a family of solutions. The power of this approach is
demonstrated on a Twitter dataset, where the nodes are hashtags and the layers
correspond to (1) behavioral edges connecting pairs of hashtags whose temporal
profiles are similar and (2) relational edges connecting pairs of hashtags that
appear in the same tweets.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 2015 01:50:31 GMT"
}
] | 2015-07-02T00:00:00 | [
[
"Oselio",
"Brandon",
""
],
[
"Kulesza",
"Alex",
""
],
[
"Hero",
"Alfred",
""
]
] | TITLE: Information Extraction from Larger Multi-layer Social Networks
ABSTRACT: Social networks often encode community structure using multiple distinct
types of links between nodes. In this paper we introduce a novel method to
extract information from such multi-layer networks, where each type of link
forms its own layer. Using the concept of Pareto optimality, community
detection in this multi-layer setting is formulated as a multiple criterion
optimization problem. We propose an algorithm for finding an approximate Pareto
frontier containing a family of solutions. The power of this approach is
demonstrated on a Twitter dataset, where the nodes are hashtags and the layers
correspond to (1) behavioral edges connecting pairs of hashtags whose temporal
profiles are similar and (2) relational edges connecting pairs of hashtags that
appear in the same tweets.
| no_new_dataset | 0.94868 |
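A rough sketch of the Pareto idea in the record above: score candidate partitions of a two-layer network by per-layer modularity and keep the non-dominated ones. It assumes networkx >= 3.0, uses Louvain at several resolutions to generate candidates, and is only an illustration, not the paper's algorithm:

```python
import networkx as nx

def pareto_partitions(layer_a, layer_b, resolutions=(0.5, 1.0, 1.5, 2.0), seed=0):
    """Candidate partitions from layer_a, filtered to those that are
    Pareto-optimal with respect to modularity on both layers."""
    candidates = []
    for gamma in resolutions:
        parts = nx.community.louvain_communities(layer_a, resolution=gamma, seed=seed)
        score = (nx.community.modularity(layer_a, parts),
                 nx.community.modularity(layer_b, parts))
        candidates.append((score, parts))
    frontier = []
    for (sa, sb), parts in candidates:
        dominated = any(oa >= sa and ob >= sb and (oa, ob) != (sa, sb)
                        for (oa, ob), _ in candidates)
        if not dominated:
            frontier.append(((sa, sb), parts))
    return frontier

# Toy two-layer network over the same node set.
g1 = nx.karate_club_graph()
g2 = nx.gnp_random_graph(g1.number_of_nodes(), 0.1, seed=1)
for front_score, _ in pareto_partitions(g1, g2):
    print(front_score)
```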
1507.00210 | Guillaume Desjardins | Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray
Kavukcuoglu | Natural Neural Networks | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Natural Neural Networks, a novel family of algorithms that speed
up convergence by adapting their internal representation during training to
improve conditioning of the Fisher matrix. In particular, we show a specific
example that employs a simple and efficient reparametrization of the neural
network weights by implicitly whitening the representation obtained at each
layer, while preserving the feed-forward computation of the network. Such
networks can be trained efficiently via the proposed Projected Natural Gradient
Descent algorithm (PRONG), which amortizes the cost of these reparametrizations
over many parameter updates and is closely related to the Mirror Descent online
learning algorithm. We highlight the benefits of our method on both
unsupervised and supervised learning tasks, and showcase its scalability by
training on the large-scale ImageNet Challenge dataset.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 2015 12:42:01 GMT"
}
] | 2015-07-02T00:00:00 | [
[
"Desjardins",
"Guillaume",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Pascanu",
"Razvan",
""
],
[
"Kavukcuoglu",
"Koray",
""
]
] | TITLE: Natural Neural Networks
ABSTRACT: We introduce Natural Neural Networks, a novel family of algorithms that speed
up convergence by adapting their internal representation during training to
improve conditioning of the Fisher matrix. In particular, we show a specific
example that employs a simple and efficient reparametrization of the neural
network weights by implicitly whitening the representation obtained at each
layer, while preserving the feed-forward computation of the network. Such
networks can be trained efficiently via the proposed Projected Natural Gradient
Descent algorithm (PRONG), which amortizes the cost of these reparametrizations
over many parameter updates and is closely related to the Mirror Descent online
learning algorithm. We highlight the benefits of our method on both
unsupervised and supervised learning tasks, and showcase its scalability by
training on the large-scale ImageNet Challenge dataset.
| no_new_dataset | 0.948346 |
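The whitening reparametrization mentioned in the record above can be illustrated with a small NumPy sketch; this only shows ZCA whitening of a layer's activations, not PRONG's amortization or network surgery:

```python
import numpy as np

def whitening_transform(activations, eps=1e-5):
    """Return (mean, W) such that (a - mean) @ W has ~identity covariance."""
    mu = activations.mean(axis=0)
    centered = activations - mu
    cov = centered.T @ centered / len(activations)
    eigval, eigvec = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T   # ZCA whitening matrix
    return mu, W

rng = np.random.default_rng(0)
h = rng.normal(size=(1024, 64)) @ rng.normal(size=(64, 64))  # correlated activations
mu, W = whitening_transform(h)
white = (h - mu) @ W
print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))      # ~ identity block
```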
1507.00220 | Alexander Cloninger | Alexander Cloninger, Ronald R. Coifman, Nicholas Downing, Harlan M.
Krumholz | Bigeometric Organization of Deep Nets | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we build an organization of high-dimensional datasets that
cannot be cleanly embedded into a low-dimensional representation due to missing
entries and a subset of the features being irrelevant to modeling functions of
interest. Our algorithm begins by defining coarse neighborhoods of the points
and defining an expected empirical function value on these neighborhoods. We
then generate new non-linear features with deep net representations tuned to
model the approximate function, and re-organize the geometry of the points with
respect to the new representation. Finally, the points are locally z-scored to
create an intrinsic geometric organization which is independent of the
parameters of the deep net, a geometry designed to assure smoothness with
respect to the empirical function. We examine this approach on data from the
Center for Medicare and Medicaid Services Hospital Quality Initiative, and
generate an intrinsic low-dimensional organization of the hospitals that is
smooth with respect to an expert driven function of quality.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 2015 13:18:53 GMT"
}
] | 2015-07-02T00:00:00 | [
[
"Cloninger",
"Alexander",
""
],
[
"Coifman",
"Ronald R.",
""
],
[
"Downing",
"Nicholas",
""
],
[
"Krumholz",
"Harlan M.",
""
]
] | TITLE: Bigeometric Organization of Deep Nets
ABSTRACT: In this paper, we build an organization of high-dimensional datasets that
cannot be cleanly embedded into a low-dimensional representation due to missing
entries and a subset of the features being irrelevant to modeling functions of
interest. Our algorithm begins by defining coarse neighborhoods of the points
and defining an expected empirical function value on these neighborhoods. We
then generate new non-linear features with deep net representations tuned to
model the approximate function, and re-organize the geometry of the points with
respect to the new representation. Finally, the points are locally z-scored to
create an intrinsic geometric organization which is independent of the
parameters of the deep net, a geometry designed to assure smoothness with
respect to the empirical function. We examine this approach on data from the
Center for Medicare and Medicaid Services Hospital Quality Initiative, and
generate an intrinsic low-dimensional organization of the hospitals that is
smooth with respect to an expert driven function of quality.
| no_new_dataset | 0.953057 |
1406.1626 | Khalid Raza | Khalid Raza and Mahish Kohli | Ant Colony Optimization for Inferring Key Gene Interactions | 8 pages, 2 figures and 4 tables | Proc. of 9th INDIACom-2015, 2nd International Conference on
Computing for Sustainable Global Development, March 11-13, 2015 pp. 1242-1246 | null | null | cs.NE cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring gene interaction network from gene expression data is an important
task in systems biology research. The gene interaction network, especially key
interactions, plays an important role in identifying biomarkers for disease
that further helps in drug design. Ant colony optimization is an optimization
algorithm based on natural evolution and has been used in many optimization
problems. In this paper, we applied the ant colony optimization algorithm for
inferring key gene interactions from gene expression data. The algorithm
has been tested on two different kinds of benchmark datasets, and we observed
that it successfully identifies some key gene interactions.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 10:06:35 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Raza",
"Khalid",
""
],
[
"Kohli",
"Mahish",
""
]
] | TITLE: Ant Colony Optimization for Inferring Key Gene Interactions
ABSTRACT: Inferring gene interaction network from gene expression data is an important
task in systems biology research. The gene interaction network, especially key
interactions, plays an important role in identifying biomarkers for disease
that further helps in drug design. Ant colony optimization is an optimization
algorithm based on natural evolution and has been used in many optimization
problems. In this paper, we applied the ant colony optimization algorithm for
inferring key gene interactions from gene expression data. The algorithm
has been tested on two different kinds of benchmark datasets, and we observed
that it successfully identifies some key gene interactions.
| no_new_dataset | 0.953405 |
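A generic ant-colony-style sketch for picking a small set of candidate gene-gene interactions from a heuristic score matrix (here, absolute correlations, with a toy additive objective); the paper's exact construction and pheromone rules are assumptions here:

```python
import numpy as np

def aco_edge_selection(score, n_edges=5, n_ants=20, n_iter=50,
                       evaporation=0.1, alpha=1.0, beta=2.0, seed=0):
    """Select n_edges gene-gene interactions from a heuristic score matrix."""
    rng = np.random.default_rng(seed)
    n = score.shape[0]
    iu = np.triu_indices(n, k=1)                 # candidate undirected edges
    heuristic = score[iu]
    pheromone = np.ones_like(heuristic)
    best_edges, best_quality = None, -np.inf
    for _ in range(n_iter):
        weights = pheromone ** alpha * heuristic ** beta
        probs = weights / weights.sum()
        for _ in range(n_ants):
            picked = rng.choice(len(heuristic), size=n_edges, replace=False, p=probs)
            quality = heuristic[picked].sum()    # toy objective: total score
            if quality > best_quality:
                best_quality, best_edges = quality, picked
        pheromone *= (1.0 - evaporation)         # evaporation
        pheromone[best_edges] += best_quality    # reinforce best-so-far edges
    return [(iu[0][e], iu[1][e]) for e in best_edges]

expr = np.random.default_rng(1).normal(size=(50, 8))   # 50 samples x 8 genes
corr = np.abs(np.corrcoef(expr, rowvar=False))
print(aco_edge_selection(corr, n_edges=4))
```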
1505.07599 | Xipeng Qiu | Xipeng Qiu, Peng Qian, Liusong Yin, Shiyu Wu, Xuanjing Huang | Overview of the NLPCC 2015 Shared Task: Chinese Word Segmentation and
POS Tagging for Micro-blog Texts | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we give an overview for the shared task at the 4th CCF
Conference on Natural Language Processing \& Chinese Computing (NLPCC 2015):
Chinese word segmentation and part-of-speech (POS) tagging for micro-blog
texts. Unlike the commonly used newswire datasets, the dataset of this
shared task consists of relatively informal micro-texts. The shared task
has two sub-tasks: (1) individual Chinese word segmentation and (2) joint
Chinese word segmentation and POS Tagging. Each subtask has three tracks to
distinguish the systems with different resources. We first introduce the
dataset and task, then we characterize the different approaches of the
participating systems, report the test results, and provide an overview analysis
of these results. An online system is available for open registration and
evaluation at http://nlp.fudan.edu.cn/nlpcc2015.
| [
{
"version": "v1",
"created": "Thu, 28 May 2015 08:54:13 GMT"
},
{
"version": "v2",
"created": "Fri, 29 May 2015 02:45:24 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Jun 2015 18:44:59 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Qiu",
"Xipeng",
""
],
[
"Qian",
"Peng",
""
],
[
"Yin",
"Liusong",
""
],
[
"Wu",
"Shiyu",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: Overview of the NLPCC 2015 Shared Task: Chinese Word Segmentation and
POS Tagging for Micro-blog Texts
ABSTRACT: In this paper, we give an overview for the shared task at the 4th CCF
Conference on Natural Language Processing \& Chinese Computing (NLPCC 2015):
Chinese word segmentation and part-of-speech (POS) tagging for micro-blog
texts. Unlike the commonly used newswire datasets, the dataset of this
shared task consists of relatively informal micro-texts. The shared task
has two sub-tasks: (1) individual Chinese word segmentation and (2) joint
Chinese word segmentation and POS Tagging. Each subtask has three tracks to
distinguish the systems with different resources. We first introduce the
dataset and task, then we characterize the different approaches of the
participating systems, report the test results, and provide an overview analysis
of these results. An online system is available for open registration and
evaluation at http://nlp.fudan.edu.cn/nlpcc2015.
| new_dataset | 0.961965 |
1506.08839 | Julian McAuley | Julian McAuley and Rahul Pandey and Jure Leskovec | Inferring Networks of Substitutable and Complementary Products | 12 pages, 6 figures | null | null | null | cs.SI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a modern recommender system, it is important to understand how products
relate to each other. For example, while a user is looking for mobile phones,
it might make sense to recommend other phones, but once they buy a phone, we
might instead want to recommend batteries, cases, or chargers. These two types
of recommendations are referred to as substitutes and complements: substitutes
are products that can be purchased instead of each other, while complements are
products that can be purchased in addition to each other.
Here we develop a method to infer networks of substitutable and complementary
products. We formulate this as a supervised link prediction task, where we
learn the semantics of substitutes and complements from data associated with
products. The primary source of data we use is the text of product reviews,
though our method also makes use of features such as ratings, specifications,
prices, and brands. Methodologically, we build topic models that are trained to
automatically discover topics from text that are successful at predicting and
explaining such relationships. Experimentally, we evaluate our system on the
Amazon product catalog, a large dataset consisting of 9 million products, 237
million links, and 144 million reviews.
| [
{
"version": "v1",
"created": "Mon, 29 Jun 2015 20:06:28 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"McAuley",
"Julian",
""
],
[
"Pandey",
"Rahul",
""
],
[
"Leskovec",
"Jure",
""
]
] | TITLE: Inferring Networks of Substitutable and Complementary Products
ABSTRACT: In a modern recommender system, it is important to understand how products
relate to each other. For example, while a user is looking for mobile phones,
it might make sense to recommend other phones, but once they buy a phone, we
might instead want to recommend batteries, cases, or chargers. These two types
of recommendations are referred to as substitutes and complements: substitutes
are products that can be purchased instead of each other, while complements are
products that can be purchased in addition to each other.
Here we develop a method to infer networks of substitutable and complementary
products. We formulate this as a supervised link prediction task, where we
learn the semantics of substitutes and complements from data associated with
products. The primary source of data we use is the text of product reviews,
though our method also makes use of features such as ratings, specifications,
prices, and brands. Methodologically, we build topic models that are trained to
automatically discover topics from text that are successful at predicting and
explaining such relationships. Experimentally, we evaluate our system on the
Amazon product catalog, a large dataset consisting of 9 million products, 237
million links, and 144 million reviews.
| new_dataset | 0.974337 |
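The supervised link-prediction setup in the record above can be illustrated with a toy sketch: pair features built from review text and a linear classifier. This is not the paper's topic-model approach, and the tiny product/label data below is hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: review text per product; 1 = linked pair, 0 = no link.
reviews = {
    "phone":   "great screen battery lasts long signal is strong",
    "case":    "fits the phone perfectly protects the screen",
    "charger": "charges the phone fast cable is sturdy",
    "blender": "smoothies are easy crushes ice well",
}
pairs  = [("phone", "case"), ("phone", "charger"), ("phone", "blender"),
          ("case", "blender")]
labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
doc_vecs = dict(zip(reviews, vec.fit_transform(reviews.values()).toarray()))

# Pair feature: elementwise product of the two documents' TF-IDF vectors.
X = np.array([doc_vecs[a] * doc_vecs[b] for a, b in pairs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```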
1506.08916 | Brandon Oselio | Brandon Oselio, Alex Kulesza, Alfred Hero | Socio-Spatial Pareto Frontiers of Twitter Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media provides a rich source of networked data. This data is
represented by a set of nodes and a set of relations (edges). It is often
possible to obtain or infer multiple types of relations from the same set of
nodes, such as observed friend connections, inferred links via semantic
comparison, or relations based on geographic proximity. These edge sets can
be represented by one multi-layer network. In this paper we review a method to
perform community detection of multilayer networks, and illustrate its use as a
visualization tool for analyzing different community partitions. The algorithm
is illustrated on a dataset from Twitter, specifically regarding the National
Football League (NFL).
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 01:56:19 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Oselio",
"Brandon",
""
],
[
"Kulesza",
"Alex",
""
],
[
"Hero",
"Alfred",
""
]
] | TITLE: Socio-Spatial Pareto Frontiers of Twitter Networks
ABSTRACT: Social media provides a rich source of networked data. This data is
represented by a set of nodes and a set of relations (edges). It is often
possible to obtain or infer multiple types of relations from the same set of
nodes, such as observed friend connections, inferred links via semantic
comparison, or relations based on geographic proximity. These edge sets can
be represented by one multi-layer network. In this paper we review a method to
perform community detection of multilayer networks, and illustrate its use as a
visualization tool for analyzing different community partitions. The algorithm
is illustrated on a dataset from Twitter, specifically regarding the National
Football League (NFL).
| no_new_dataset | 0.942454 |
1506.08938 | Nguyen Duy Khuong | Duy-Khuong Nguyen and Tu-Bao Ho | Accelerated Parallel and Distributed Algorithm using Limited Internal
Memory for Nonnegative Matrix Factorization | null | null | null | null | math.OC cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative matrix factorization (NMF) is a powerful technique for dimension
reduction, extracting latent factors and learning part-based representation.
For large datasets, NMF performance depends on some major issues: fast
algorithms, fully parallel distributed feasibility and limited internal memory.
This research aims to design a fast fully parallel and distributed algorithm
using limited internal memory to reach high NMF performance for large datasets.
In particular, we propose a flexible accelerated algorithm for NMF with all its
$L_1$ $L_2$ regularized variants based on full decomposition, which is a
combination of an anti-lopsided algorithm and a fast block coordinate descent
algorithm. The proposed algorithm takes advantages of both these algorithms to
achieve a linear convergence rate of $\mathcal{O}(1-\frac{1}{||Q||_2})^k$ in
optimizing each factor matrix when fixing the other factor one in the sub-space
of passive variables, where $r$ is the number of latent components; where
$\sqrt{r} \leq ||Q||_2 \leq r$. In addition, the algorithm can exploit the data
sparseness to run on large datasets with limited internal memory of machines.
Furthermore, our experimental results are highly competitive with 7
state-of-the-art methods in three significant aspects: convergence,
optimality, and average iteration number. Therefore, the proposed
algorithm is superior to fast block coordinate descent methods and accelerated
methods.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 04:58:10 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Nguyen",
"Duy-Khuong",
""
],
[
"Ho",
"Tu-Bao",
""
]
] | TITLE: Accelerated Parallel and Distributed Algorithm using Limited Internal
Memory for Nonnegative Matrix Factorization
ABSTRACT: Nonnegative matrix factorization (NMF) is a powerful technique for dimension
reduction, extracting latent factors and learning part-based representation.
For large datasets, NMF performance depends on some major issues: fast
algorithms, fully parallel distributed feasibility and limited internal memory.
This research aims to design a fast fully parallel and distributed algorithm
using limited internal memory to reach high NMF performance for large datasets.
In particular, we propose a flexible accelerated algorithm for NMF with all its
$L_1$ $L_2$ regularized variants based on full decomposition, which is a
combination of an anti-lopsided algorithm and a fast block coordinate descent
algorithm. The proposed algorithm takes advantage of both these algorithms to
achieve a linear convergence rate of $\mathcal{O}(1-\frac{1}{||Q||_2})^k$ in
optimizing each factor matrix when fixing the other factor one in the sub-space
of passive variables, where $r$ is the number of latent components; where
$\sqrt{r} \leq ||Q||_2 \leq r$. In addition, the algorithm can exploit the data
sparseness to run on large datasets with limited internal memory of machines.
Furthermore, our experimental results are highly competitive with 7
state-of-the-art methods in three significant aspects: convergence,
optimality, and average iteration number. Therefore, the proposed
algorithm is superior to fast block coordinate descent methods and accelerated
methods.
| no_new_dataset | 0.941708 |
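For reference, a plain multiplicative-update NMF in NumPy (the classical baseline, not the accelerated anti-lopsided/block-coordinate algorithm proposed in the record above):

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF minimising ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(100, 40)))
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative error
```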
1506.09067 | Sabri Pllana | Andre Viebke and Sabri Pllana | The Potential of the Intel Xeon Phi for Supervised Deep Learning | The 17th IEEE International Conference on High Performance Computing
and Communications (HPCC 2015), Aug. 24 - 26, 2015, New York, USA | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning of Convolutional Neural Networks (CNNs), also known as
supervised Deep Learning, is a computationally demanding process. To find the
most suitable parameters of a network for a given application, numerous
training sessions are required. Therefore, reducing the training time per
session is essential to fully utilize CNNs in practice. While numerous research
groups have addressed the training of CNNs using GPUs, so far not much
attention has been paid to the Intel Xeon Phi coprocessor. In this paper we
investigate empirically and theoretically the potential of the Intel Xeon Phi
for supervised learning of CNNs. We design and implement a parallelization
scheme named CHAOS that exploits both the thread- and SIMD-parallelism of the
coprocessor. Our approach is evaluated on the Intel Xeon Phi 7120P using the
MNIST dataset of handwritten digits for various thread counts and CNN
architectures. Results show a 103.5x speed up when training our large network
for 15 epochs using 244 threads, compared to one thread on the coprocessor.
Moreover, we develop a performance model and use it to assess our
implementation and answer what-if questions.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 12:54:09 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Viebke",
"Andre",
""
],
[
"Pllana",
"Sabri",
""
]
] | TITLE: The Potential of the Intel Xeon Phi for Supervised Deep Learning
ABSTRACT: Supervised learning of Convolutional Neural Networks (CNNs), also known as
supervised Deep Learning, is a computationally demanding process. To find the
most suitable parameters of a network for a given application, numerous
training sessions are required. Therefore, reducing the training time per
session is essential to fully utilize CNNs in practice. While numerous research
groups have addressed the training of CNNs using GPUs, so far not much
attention has been paid to the Intel Xeon Phi coprocessor. In this paper we
investigate empirically and theoretically the potential of the Intel Xeon Phi
for supervised learning of CNNs. We design and implement a parallelization
scheme named CHAOS that exploits both the thread- and SIMD-parallelism of the
coprocessor. Our approach is evaluated on the Intel Xeon Phi 7120P using the
MNIST dataset of handwritten digits for various thread counts and CNN
architectures. Results show a 103.5x speed up when training our large network
for 15 epochs using 244 threads, compared to one thread on the coprocessor.
Moreover, we develop a performance model and use it to assess our
implementation and answer what-if questions.
| no_new_dataset | 0.948106 |
1506.09124 | Saehoon Yi | Saehoon Yi and Vladimir Pavlovic | Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video segmentation is a stepping stone to understanding video context. Video
segmentation enables one to represent a video by decomposing it into coherent
regions which comprise whole or parts of objects. However, the challenge
originates from the fact that most of the video segmentation algorithms are
based on unsupervised learning due to the expensive cost of pixelwise video
annotation and intra-class variability within similar unconstrained video
classes. We propose a Markov Random Field model for unconstrained video
segmentation that relies on tight integration of multiple cues: vertices are
defined from contour based superpixels, unary potentials from temporal smooth
label likelihood and pairwise potentials from global structure of a video.
Multi-cue structure is a breakthrough to extracting coherent object regions for
unconstrained videos in the absence of supervision. Our experiments on VSB100
dataset show that the proposed model significantly outperforms competing
state-of-the-art algorithms. Qualitative analysis illustrates that video
segmentation result of the proposed model is consistent with human perception
of objects.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 15:39:37 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Yi",
"Saehoon",
""
],
[
"Pavlovic",
"Vladimir",
""
]
] | TITLE: Multi-Cue Structure Preserving MRF for Unconstrained Video Segmentation
ABSTRACT: Video segmentation is a stepping stone to understanding video context. Video
segmentation enables one to represent a video by decomposing it into coherent
regions which comprise whole or parts of objects. However, the challenge
originates from the fact that most of the video segmentation algorithms are
based on unsupervised learning due to the expensive cost of pixelwise video
annotation and intra-class variability within similar unconstrained video
classes. We propose a Markov Random Field model for unconstrained video
segmentation that relies on tight integration of multiple cues: vertices are
defined from contour based superpixels, unary potentials from temporal smooth
label likelihood and pairwise potentials from global structure of a video.
Multi-cue structure is a breakthrough to extracting coherent object regions for
unconstrained videos in the absence of supervision. Our experiments on VSB100
dataset show that the proposed model significantly outperforms competing
state-of-the-art algorithms. Qualitative analysis illustrates that video
segmentation result of the proposed model is consistent with human perception
of objects.
| no_new_dataset | 0.947478 |
1506.09153 | Gunnar R\"atsch | Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar R\"atsch | Framework for Multi-task Multiple Kernel Learning and Applications in
Genome Analysis | null | null | null | null | stat.ML cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general regularization-based framework for Multi-task learning
(MTL), in which the similarity between tasks can be learned or refined using
$\ell_p$-norm Multiple Kernel learning (MKL). Based on this very general
formulation (including a general loss function), we derive the corresponding
dual formulation using Fenchel duality applied to Hermitian matrices. We show
that numerous established MTL methods can be derived as special cases from
both the primal and the dual of our formulation. Furthermore, we derive a modern
dual-coordinate descent optimization strategy for the hinge-loss variant of our
formulation and provide convergence bounds for our algorithm. As a special
case, we implement in C++ a fast LibLinear-style solver for $\ell_p$-norm MKL.
In the experimental section, we analyze various aspects of our algorithm such
as predictive performance and ability to reconstruct task relationships on
biologically inspired synthetic data, where we have full control over the
underlying ground truth. We also experiment on a new dataset from the domain of
computational biology that we collected for the purpose of this paper. It
concerns the prediction of transcription start sites (TSS) over nine organisms,
which is a crucial task in gene finding. Our solvers including all discussed
special cases are made available as open-source software as part of the SHOGUN
machine learning toolbox (available at \url{http://shogun.ml}).
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 16:52:27 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Widmer",
"Christian",
""
],
[
"Kloft",
"Marius",
""
],
[
"Sreedharan",
"Vipin T",
""
],
[
"Rätsch",
"Gunnar",
""
]
] | TITLE: Framework for Multi-task Multiple Kernel Learning and Applications in
Genome Analysis
ABSTRACT: We present a general regularization-based framework for Multi-task learning
(MTL), in which the similarity between tasks can be learned or refined using
$\ell_p$-norm Multiple Kernel learning (MKL). Based on this very general
formulation (including a general loss function), we derive the corresponding
dual formulation using Fenchel duality applied to Hermitian matrices. We show
that numerous established MTL methods can be derived as special cases from
both the primal and the dual of our formulation. Furthermore, we derive a modern
dual-coordinate descent optimization strategy for the hinge-loss variant of our
formulation and provide convergence bounds for our algorithm. As a special
case, we implement in C++ a fast LibLinear-style solver for $\ell_p$-norm MKL.
In the experimental section, we analyze various aspects of our algorithm such
as predictive performance and ability to reconstruct task relationships on
biologically inspired synthetic data, where we have full control over the
underlying ground truth. We also experiment on a new dataset from the domain of
computational biology that we collected for the purpose of this paper. It
concerns the prediction of transcription start sites (TSS) over nine organisms,
which is a crucial task in gene finding. Our solvers including all discussed
special cases are made available as open-source software as part of the SHOGUN
machine learning toolbox (available at \url{http://shogun.ml}).
| new_dataset | 0.961965 |
1506.09179 | Ali Madooei | Ali Madooei, Mark S. Drew, Hossein Hajimirsadeghi | Learning to Detect Blue-white Structures in Dermoscopy Images with Weak
Supervision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach to identify one of the most significant
dermoscopic criteria in the diagnosis of Cutaneous Melanoma: the Blue-whitish
structure. In this paper, we achieve this goal in a Multiple Instance Learning
framework using only image-level labels of whether the feature is present or
not. As the output, we predict the image classification label and as well
localize the feature in the image. Experiments are conducted on a challenging
dataset with results outperforming state-of-the-art. This study provides an
improvement on the scope of modelling for computerized image analysis of skin
lesions, in particular in that it puts forward a framework for identification
of dermoscopic local features from weakly-labelled data.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 17:49:40 GMT"
}
] | 2015-07-01T00:00:00 | [
[
"Madooei",
"Ali",
""
],
[
"Drew",
"Mark S.",
""
],
[
"Hajimirsadeghi",
"Hossein",
""
]
] | TITLE: Learning to Detect Blue-white Structures in Dermoscopy Images with Weak
Supervision
ABSTRACT: We propose a novel approach to identify one of the most significant
dermoscopic criteria in the diagnosis of Cutaneous Melanoma: the Blue-whitish
structure. In this paper, we achieve this goal in a Multiple Instance Learning
framework using only image-level labels of whether the feature is present or
not. As the output, we predict the image classification label and also
localize the feature in the image. Experiments are conducted on a challenging
dataset with results outperforming state-of-the-art. This study provides an
improvement on the scope of modelling for computerized image analysis of skin
lesions, in particular in that it puts forward a framework for identification
of dermoscopic local features from weakly-labelled data.
| no_new_dataset | 0.95018 |
1409.5209 | Chunhua Shen | Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel | Pedestrian Detection with Spatially Pooled Features and Structured
Ensemble Learning | 19 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is labelled as the partial area under the ROC curve (pAUC). We
propose a novel ensemble learning method which achieves a maximal detection
rate at a user-defined range of false positive rates by directly optimizing the
partial AUC using structured learning.
In order to achieve a high object detection performance, we propose a new
approach to extract low-level visual features based on spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. Experimental results on both synthetic
and real-world data sets demonstrate the effectiveness of our approach, and we
show that it is possible to train state-of-the-art pedestrian detectors using
the proposed structured ensemble learning method with spatially pooled
features. The result is the current best reported performance on the
Caltech-USA pedestrian detection dataset.
| [
{
"version": "v1",
"created": "Thu, 18 Sep 2014 07:14:33 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Oct 2014 02:35:33 GMT"
},
{
"version": "v3",
"created": "Sun, 28 Jun 2015 10:15:37 GMT"
}
] | 2015-06-30T00:00:00 | [
[
"Paisitkriangkrai",
"Sakrapee",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Pedestrian Detection with Spatially Pooled Features and Structured
Ensemble Learning
ABSTRACT: Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is labelled as the partial area under the ROC curve (pAUC). We
propose a novel ensemble learning method which achieves a maximal detection
rate at a user-defined range of false positive rates by directly optimizing the
partial AUC using structured learning.
In order to achieve a high object detection performance, we propose a new
approach to extract low-level visual features based on spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. Experimental results on both synthetic
and real-world data sets demonstrate the effectiveness of our approach, and we
show that it is possible to train state-of-the-art pedestrian detectors using
the proposed structured ensemble learning method with spatially pooled
features. The result is the current best reported performance on the
Caltech-USA pedestrian detection dataset.
| no_new_dataset | 0.949995 |
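The partial AUC (pAUC) criterion used in the record above is easy to compute for a given score vector; a minimal sketch, assuming binary labels and interpolating the ROC curve on a grid over the chosen false-positive range:

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, y_score, fpr_range=(0.0, 0.1)):
    """Area under the ROC curve restricted to a false-positive-rate range,
    normalised so a perfect detector scores 1.0."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    lo, hi = fpr_range
    grid = np.linspace(lo, hi, 201)
    tpr_grid = np.interp(grid, fpr, tpr)                       # sample the ROC curve
    area = np.sum(np.diff(grid) * (tpr_grid[1:] + tpr_grid[:-1]) / 2.0)
    return area / (hi - lo)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])
print(round(partial_auc(labels, scores, (0.0, 0.1)), 3))
```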
1412.4181 | Sam Hallman | Sam Hallman, Charless C. Fowlkes | Oriented Edge Forests for Boundary Detection | updated to include contents of CVPR version + new figure showing
example segmentation results | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a simple, efficient model for learning boundary detection based on
a random forest classifier. Our approach combines (1) efficient clustering of
training examples based on simple partitioning of the space of local edge
orientations and (2) scale-dependent calibration of individual tree output
probabilities prior to multiscale combination. The resulting model outperforms
published results on the challenging BSDS500 boundary detection benchmark.
Further, on large datasets our model requires substantially less memory for
training and speeds up training time by a factor of 10 over the structured
forest model.
| [
{
"version": "v1",
"created": "Sat, 13 Dec 2014 02:30:59 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Jun 2015 19:37:56 GMT"
}
] | 2015-06-30T00:00:00 | [
[
"Hallman",
"Sam",
""
],
[
"Fowlkes",
"Charless C.",
""
]
] | TITLE: Oriented Edge Forests for Boundary Detection
ABSTRACT: We present a simple, efficient model for learning boundary detection based on
a random forest classifier. Our approach combines (1) efficient clustering of
training examples based on simple partitioning of the space of local edge
orientations and (2) scale-dependent calibration of individual tree output
probabilities prior to multiscale combination. The resulting model outperforms
published results on the challenging BSDS500 boundary detection benchmark.
Further, on large datasets our model requires substantially less memory for
training and speeds up training time by a factor of 10 over the structured
forest model.
| no_new_dataset | 0.952838 |
1506.04352 | Zhe Wang | Zhe Wang, Kai Hu, Baolin Yin | Internet Traffic Matrix Structural Analysis Based on Multi-Resolution
RPCA | 18 pages, in Chinese. This unpublished manuscript is an improvement
on our previous papers in references [12] and [13] | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet traffic matrix plays a significant role in network operation and
management; therefore, the structural analysis of the traffic matrix, which
decomposes different traffic components of this high-dimensional traffic
dataset, is quite valuable to some network applications. In this study, based
on the Robust Principal Component Analysis (RPCA) theory, a novel traffic
matrix structural analysis approach named Multi-Resolution RPCA is created,
which utilizes the wavelet multi-resolution analysis. Firstly, we build the
Multi-Resolution Traffic Matrix Decomposition Model (MR-TMDM), which
characterizes the smoothness of the deterministic traffic by its wavelet
coefficients. Secondly, based on this model, we improve the Stable Principal
Component Pursuit (SPCP), propose a new traffic matrix decomposition method
named SPCP-MRC with Multi-Resolution Constraints, and design its numerical
algorithm. Specifically, we give and prove the closed-form solution to a
sub-problem in the algorithm. Lastly, we evaluate different traffic
decomposition methods by multiple groups of simulated traffic matrices
containing different kinds of anomalies and distinct noise levels. It is
demonstrated that SPCP-MRC, compared with other methods, achieves more accurate
and more reasonable traffic decompositions.
| [
{
"version": "v1",
"created": "Sun, 14 Jun 2015 05:12:56 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jun 2015 06:43:46 GMT"
}
] | 2015-06-29T00:00:00 | [
[
"Wang",
"Zhe",
""
],
[
"Hu",
"Kai",
""
],
[
"Yin",
"Baolin",
""
]
] | TITLE: Internet Traffic Matrix Structural Analysis Based on Multi-Resolution
RPCA
ABSTRACT: The Internet traffic matrix plays a significant role in network operation and
management; therefore, the structural analysis of the traffic matrix, which
decomposes different traffic components of this high-dimensional traffic
dataset, is quite valuable to some network applications. In this study, based
on the Robust Principal Component Analysis (RPCA) theory, a novel traffic
matrix structural analysis approach named Multi-Resolution RPCA is created,
which utilizes the wavelet multi-resolution analysis. Firstly, we build the
Multi-Resolution Traffic Matrix Decomposition Model (MR-TMDM), which
characterizes the smoothness of the deterministic traffic by its wavelet
coefficients. Secondly, based on this model, we improve the Stable Principal
Component Pursuit (SPCP), propose a new traffic matrix decomposition method
named SPCP-MRC with Multi-Resolution Constraints, and design its numerical
algorithm. Specifically, we give and prove the closed-form solution to a
sub-problem in the algorithm. Lastly, we evaluate different traffic
decomposition methods by multiple groups of simulated traffic matrices
containing different kinds of anomalies and distinct noise levels. It is
demonstrated that SPCP-MRC, compared with other methods, achieves more accurate
and more reasonable traffic decompositions.
| no_new_dataset | 0.9462 |
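For context, a plain principal component pursuit (low rank + sparse) sketch via an inexact augmented Lagrangian; it is the classical RPCA baseline, not the SPCP-MRC method with wavelet multi-resolution constraints proposed in the record above:

```python
import numpy as np

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca_pcp(M, n_iter=200, tol=1e-7):
    """Split M into a low-rank part L and a sparse part S by principal
    component pursuit with an inexact augmented Lagrangian."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt   # singular value thresholding
        S = soft_threshold(M - L + Y / mu, lam / mu)          # elementwise shrinkage
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S

rng = np.random.default_rng(0)
base = rng.random((40, 10)) @ rng.random((10, 200))    # low-rank "normal" traffic
spikes = (rng.random(base.shape) < 0.02) * 5.0         # sparse anomalies
L, S = rpca_pcp(base + spikes)
print(np.count_nonzero(np.abs(S) > 1.0), int(spikes.astype(bool).sum()))
```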
1506.08110 | Richard Charles | Richard M. Charles, Kye M. Taylor and James H. Curry | Nonnegative Matrix Factorization applied to reordered pixels of single
images based on patches to achieve structured nonnegative dictionaries | 34 pages, 15 figures, 2 tables | null | null | null | cs.CV math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent improvements in computing allow for the processing and analysis of
very large datasets in a variety of fields. Often the analysis requires the
creation of low-rank approximations to the datasets leading to efficient
storage. This article presents and analyzes a novel approach for creating
nonnegative, structured dictionaries using NMF applied to reordered pixels of
single, natural images. We reorder the pixels based on patches and present our
approach in general. We investigate our approach when using the Singular Value
Decomposition (SVD) and Nonnegative Matrix Factorizations (NMF) as low-rank
approximations. Peak Signal-to-Noise Ratio (PSNR) and Mean Structural
Similarity Index (MSSIM) are used to evaluate the algorithm. We report that
while the SVD provides the best reconstructions, its dictionary of vectors loses
both the sign structure of the original image and details of localized image
content. In contrast, the dictionaries produced using NMF preserve the sign
structure of the original image matrix and offer a nonnegative, parts-based
dictionary.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 17:27:11 GMT"
}
] | 2015-06-29T00:00:00 | [
[
"Charles",
"Richard M.",
""
],
[
"Taylor",
"Kye M.",
""
],
[
"Curry",
"James H.",
""
]
] | TITLE: Nonnegative Matrix Factorization applied to reordered pixels of single
images based on patches to achieve structured nonnegative dictionaries
ABSTRACT: Recent improvements in computing allow for the processing and analysis of
very large datasets in a variety of fields. Often the analysis requires the
creation of low-rank approximations to the datasets leading to efficient
storage. This article presents and analyzes a novel approach for creating
nonnegative, structured dictionaries using NMF applied to reordered pixels of
single, natural images. We reorder the pixels based on patches and present our
approach in general. We investigate our approach when using the Singular Value
Decomposition (SVD) and Nonnegative Matrix Factorizations (NMF) as low-rank
approximations. Peak Signal-to-Noise Ratio (PSNR) and Mean Structural
Similarity Index (MSSIM) are used to evaluate the algorithm. We report that
while the SVD provides the best reconstructions, its dictionary of vectors loses
both the sign structure of the original image and details of localized image
content. In contrast, the dictionaries produced using NMF preserve the sign
structure of the original image matrix and offer a nonnegative, parts-based
dictionary.
| no_new_dataset | 0.949059 |
1506.08180 | Amar Shah | Amar Shah and David A. Knowles and Zoubin Ghahramani | An Empirical Study of Stochastic Variational Algorithms for the Beta
Bernoulli Process | ICML, 12 pages. Volume 37: Proceedings of The 32nd International
Conference on Machine Learning, 2015 | null | null | null | stat.ML cs.LG stat.AP stat.CO stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic variational inference (SVI) is emerging as the most promising
candidate for scaling inference in Bayesian probabilistic models to large
datasets. However, the performance of these methods has been assessed primarily
in the context of Bayesian topic models, particularly latent Dirichlet
allocation (LDA). Deriving several new algorithms, and using synthetic, image
and genomic datasets, we investigate whether the understanding gleaned from LDA
applies in the setting of sparse latent factor models, specifically beta
process factor analysis (BPFA). We demonstrate that the big picture is
consistent: using Gibbs sampling within SVI to maintain certain posterior
dependencies is extremely effective. However, we find that different posterior
dependencies are important in BPFA relative to LDA. Particularly,
approximations able to model intra-local variable dependence perform best.
| [
{
"version": "v1",
"created": "Fri, 26 Jun 2015 18:55:11 GMT"
}
] | 2015-06-29T00:00:00 | [
[
"Shah",
"Amar",
""
],
[
"Knowles",
"David A.",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: An Empirical Study of Stochastic Variational Algorithms for the Beta
Bernoulli Process
ABSTRACT: Stochastic variational inference (SVI) is emerging as the most promising
candidate for scaling inference in Bayesian probabilistic models to large
datasets. However, the performance of these methods has been assessed primarily
in the context of Bayesian topic models, particularly latent Dirichlet
allocation (LDA). Deriving several new algorithms, and using synthetic, image
and genomic datasets, we investigate whether the understanding gleaned from LDA
applies in the setting of sparse latent factor models, specifically beta
process factor analysis (BPFA). We demonstrate that the big picture is
consistent: using Gibbs sampling within SVI to maintain certain posterior
dependencies is extremely effective. However, we find that different posterior
dependencies are important in BPFA relative to LDA. Particularly,
approximations able to model intra-local variable dependence perform best.
| no_new_dataset | 0.945197 |
1408.4966 | Jimmy Dubuisson | Jimmy Dubuisson, Jean-Pierre Eckmann and Andrea Agazzi | Diffusion Fingerprints | null | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce, test and discuss a method for classifying and clustering data
modeled as directed graphs. The idea is to start diffusion processes from any
subset of a data collection, generating corresponding distributions for
reaching points in the network. These distributions take the form of
high-dimensional numerical vectors and capture essential topological properties
of the original dataset. We show how these diffusion vectors can be
successfully applied for getting state-of-the-art accuracies in the problem of
extracting pathways from metabolic networks. We also provide a guideline to
illustrate how to use our method for classification problems, and discuss
important details of its implementation. In particular, we present a simple
dimensionality reduction technique that lowers the computational cost of
classifying diffusion vectors, while leaving the predictive power of the
classification process substantially unaltered. Although the method has very
few parameters, the results we obtain show its flexibility and power. This
should make it helpful in many other contexts.
| [
{
"version": "v1",
"created": "Thu, 21 Aug 2014 11:34:37 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jun 2015 13:48:40 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Dubuisson",
"Jimmy",
""
],
[
"Eckmann",
"Jean-Pierre",
""
],
[
"Agazzi",
"Andrea",
""
]
] | TITLE: Diffusion Fingerprints
ABSTRACT: We introduce, test and discuss a method for classifying and clustering data
modeled as directed graphs. The idea is to start diffusion processes from any
subset of a data collection, generating corresponding distributions for
reaching points in the network. These distributions take the form of
high-dimensional numerical vectors and capture essential topological properties
of the original dataset. We show how these diffusion vectors can be
successfully applied for getting state-of-the-art accuracies in the problem of
extracting pathways from metabolic networks. We also provide a guideline to
illustrate how to use our method for classification problems, and discuss
important details of its implementation. In particular, we present a simple
dimensionality reduction technique that lowers the computational cost of
classifying diffusion vectors, while leaving the predictive power of the
classification process substantially unaltered. Although the method has very
few parameters, the results we obtain show its flexibility and power. This
should make it helpful in many other contexts.
| no_new_dataset | 0.948442 |
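A minimal sketch of a diffusion vector started from a seed subset of a directed graph (random walk with restart on a row-stochastic transition matrix); the paper's exact diffusion process and parameters are assumptions here:

```python
import numpy as np

def diffusion_vector(adjacency, seeds, steps=5, restart=0.15):
    """Distribution over nodes reached by a random walk with restart
    started from a seed subset of a directed graph."""
    A = np.asarray(adjacency, dtype=float)
    out_deg = A.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; rows of dangling nodes stay zero,
    # so their mass leaks between restarts (acceptable for a sketch).
    P = np.divide(A, out_deg, out=np.zeros_like(A), where=out_deg > 0)
    p0 = np.zeros(A.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(steps):
        p = (1 - restart) * (p @ P) + restart * p0
    return p

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]])
print(np.round(diffusion_vector(A, seeds=[0]), 3))
```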
1502.02445 | Giovanni Montana | Alexandre de Brebisson, Giovanni Montana | Deep Neural Networks for Anatomical Brain Segmentation | null | null | null | null | cs.CV cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach to automatically segment magnetic resonance (MR)
images of the human brain into anatomical regions. Our methodology is based on
a deep artificial neural network that assigns each voxel in an MR image of the
brain to its corresponding anatomical region. The inputs of the network capture
information at different scales around the voxel of interest: 3D and orthogonal
2D intensity patches capture the local spatial context while large, compressed
2D orthogonal patches and distances to the regional centroids enforce global
spatial consistency. Contrary to commonly used segmentation methods, our
technique does not require any non-linear registration of the MR images. To
benchmark our model, we used the dataset provided for the MICCAI 2012 challenge
on multi-atlas labelling, which consists of 35 manually segmented MR images of
the brain. We obtained competitive results (mean dice coefficient 0.725, error
rate 0.163) showing the potential of our approach. To our knowledge, our
technique is the first to tackle the anatomical segmentation of the whole brain
using deep neural networks.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 11:48:42 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jun 2015 16:19:44 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"de Brebisson",
"Alexandre",
""
],
[
"Montana",
"Giovanni",
""
]
] | TITLE: Deep Neural Networks for Anatomical Brain Segmentation
ABSTRACT: We present a novel approach to automatically segment magnetic resonance (MR)
images of the human brain into anatomical regions. Our methodology is based on
a deep artificial neural network that assigns each voxel in an MR image of the
brain to its corresponding anatomical region. The inputs of the network capture
information at different scales around the voxel of interest: 3D and orthogonal
2D intensity patches capture the local spatial context while large, compressed
2D orthogonal patches and distances to the regional centroids enforce global
spatial consistency. Contrary to commonly used segmentation methods, our
technique does not require any non-linear registration of the MR images. To
benchmark our model, we used the dataset provided for the MICCAI 2012 challenge
on multi-atlas labelling, which consists of 35 manually segmented MR images of
the brain. We obtained competitive results (mean dice coefficient 0.725, error
rate 0.163) showing the potential of our approach. To our knowledge, our
technique is the first to tackle the anatomical segmentation of the whole brain
using deep neural networks.
| no_new_dataset | 0.945951 |
1506.06155 | Mohammad Norouzi | Mohammad Norouzi, Maxwell D. Collins, David J. Fleet, Pushmeet Kohli | CO2 Forest: Improved Random Forest by Continuous Optimization of Oblique
Splits | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel algorithm for optimizing multivariate linear threshold
functions as split functions of decision trees to create improved Random Forest
classifiers. Standard tree induction methods resort to sampling and exhaustive
search to find good univariate split functions. In contrast, our method
computes a linear combination of the features at each node, and optimizes the
parameters of the linear combination (oblique) split functions by adopting a
variant of latent variable SVM formulation. We develop a convex-concave upper
bound on the classification loss for a one-level decision tree, and optimize
the bound by stochastic gradient descent at each internal node of the tree.
Forests of up to 1000 Continuously Optimized Oblique (CO2) decision trees are
created, which significantly outperform Random Forest with univariate splits
and previous techniques for constructing oblique trees. Experimental results
are reported on multi-class classification benchmarks and on Labeled Faces in
the Wild (LFW) dataset.
| [
{
"version": "v1",
"created": "Fri, 19 Jun 2015 20:42:47 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jun 2015 21:23:43 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Norouzi",
"Mohammad",
""
],
[
"Collins",
"Maxwell D.",
""
],
[
"Fleet",
"David J.",
""
],
[
"Kohli",
"Pushmeet",
""
]
] | TITLE: CO2 Forest: Improved Random Forest by Continuous Optimization of Oblique
Splits
ABSTRACT: We propose a novel algorithm for optimizing multivariate linear threshold
functions as split functions of decision trees to create improved Random Forest
classifiers. Standard tree induction methods resort to sampling and exhaustive
search to find good univariate split functions. In contrast, our method
computes a linear combination of the features at each node, and optimizes the
parameters of the linear combination (oblique) split functions by adopting a
variant of latent variable SVM formulation. We develop a convex-concave upper
bound on the classification loss for a one-level decision tree, and optimize
the bound by stochastic gradient descent at each internal node of the tree.
Forests of up to 1000 Continuously Optimized Oblique (CO2) decision trees are
created, which significantly outperform Random Forest with univariate splits
and previous techniques for constructing oblique trees. Experimental results
are reported on multi-class classification benchmarks and on Labeled Faces in
the Wild (LFW) dataset.
| no_new_dataset | 0.952442 |
1506.07563 | Luiz Capretz Dr. | Saiqa Aleem, Luiz Fernando Capretz, Faheem Ahmed | Benchmarking Machine Learning Technologies for Software Defect Detection | null | International Journal of Software Engineering & Applications
(IJSEA), Volume 6, No.3, pp. 11-23, May 2015 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning approaches are good at solving problems that have limited
information. In most cases, software domain problems can be characterized as a
process of learning that depends on various circumstances and changes
accordingly. A predictive model is constructed using machine learning
approaches to classify modules into defective and non-defective ones.
Machine learning techniques help developers to retrieve useful information
after the classification and enable them to analyse data from different
perspectives. Machine learning techniques are proven to be useful in terms of
software bug prediction. This study used publicly available data sets of software
modules and provides a comparative performance analysis of different machine
learning techniques for software bug prediction. Results showed that most of the
machine learning methods performed well on software bug datasets.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 21:07:47 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Aleem",
"Saiqa",
""
],
[
"Capretz",
"Luiz Fernando",
""
],
[
"Ahmed",
"Faheem",
""
]
] | TITLE: Benchmarking Machine Learning Technologies for Software Defect Detection
ABSTRACT: Machine Learning approaches are good at solving problems that have limited
information. In most cases, software domain problems can be characterized as a
process of learning that depends on various circumstances and changes
accordingly. A predictive model is constructed using machine learning
approaches to classify modules into defective and non-defective ones.
Machine learning techniques help developers to retrieve useful information
after the classification and enable them to analyse data from different
perspectives. Machine learning techniques are proven to be useful in terms of
software bug prediction. This study used publicly available data sets of software
modules and provides a comparative performance analysis of different machine
learning techniques for software bug prediction. Results showed that most of the
machine learning methods performed well on software bug datasets.
| no_new_dataset | 0.940626 |
1506.07609 | Vikas Garg | Vikas K. Garg, Cynthia Rudin, and Tommi Jaakkola | CRAFT: ClusteR-specific Assorted Feature selecTion | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for clustering with cluster-specific feature
selection. The framework, CRAFT, is derived from asymptotic log posterior
formulations of nonparametric MAP-based clustering models. CRAFT handles
assorted data, i.e., both numeric and categorical data, and the underlying
objective functions are intuitively appealing. The resulting algorithm is
simple to implement and scales nicely, requires minimal parameter tuning,
obviates the need to specify the number of clusters a priori, and compares
favorably with other methods on real datasets.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 04:14:49 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Garg",
"Vikas K.",
""
],
[
"Rudin",
"Cynthia",
""
],
[
"Jaakkola",
"Tommi",
""
]
] | TITLE: CRAFT: ClusteR-specific Assorted Feature selecTion
ABSTRACT: We present a framework for clustering with cluster-specific feature
selection. The framework, CRAFT, is derived from asymptotic log posterior
formulations of nonparametric MAP-based clustering models. CRAFT handles
assorted data, i.e., both numeric and categorical data, and the underlying
objective functions are intuitively appealing. The resulting algorithm is
simple to implement and scales nicely, requires minimal parameter tuning,
obviates the need to specify the number of clusters a priori, and compares
favorably with other methods on real datasets.
| no_new_dataset | 0.947235 |
1506.07650 | Kun Xu | Kun Xu, Yansong Feng, Songfang Huang, Dongyan Zhao | Semantic Relation Classification via Convolutional Neural Networks with
Simple Negative Sampling | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Syntactic features play an essential role in identifying relationships in a
sentence. Previous neural network models often suffer from irrelevant
information introduced when subjects and objects are far apart. In
this paper, we propose to learn more robust relation representations from the
shortest dependency path through a convolution neural network. We further
propose a straightforward negative sampling strategy to improve the assignment
of subjects and objects. Experimental results show that our method outperforms
the state-of-the-art methods on the SemEval-2010 Task 8 dataset.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 07:51:55 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Xu",
"Kun",
""
],
[
"Feng",
"Yansong",
""
],
[
"Huang",
"Songfang",
""
],
[
"Zhao",
"Dongyan",
""
]
] | TITLE: Semantic Relation Classification via Convolutional Neural Networks with
Simple Negative Sampling
ABSTRACT: Syntactic features play an essential role in identifying relationships in a
sentence. Previous neural network models often suffer from irrelevant
information introduced when subjects and objects are far apart. In
this paper, we propose to learn more robust relation representations from the
shortest dependency path through a convolution neural network. We further
propose a straightforward negative sampling strategy to improve the assignment
of subjects and objects. Experimental results show that our method outperforms
the state-of-the-art methods on the SemEval-2010 Task 8 dataset.
| no_new_dataset | 0.952794 |
1506.07651 | Girija Chetty | Mohammad Alwadi and Girija Chetty | Sensor Selection Scheme in Temperature Wireless Sensor Network | Keywords: Wireless sensor Networks, Physical environment Monitoring,
machine learning, data mining, feature selection, adaptive routing | null | null | null | cs.NI | http://creativecommons.org/licenses/by/3.0/ | In this paper, we propose a novel energy efficient environment monitoring
scheme for wireless sensor networks, based on a data mining formulation. The
proposed adaptive routing scheme achieves energy efficiency for sensors on a
temperature wireless sensor network data set. The experimental validation
of the proposed approach using the publicly available Intel Berkeley lab Wireless
Sensor Network dataset shows that it is possible to achieve energy efficient
environment monitoring for wireless sensor networks, with a trade-off between
accuracy and life time extension factor of sensors, using the proposed
approach.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 07:53:19 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Alwadi",
"Mohammad",
""
],
[
"Chetty",
"Girija",
""
]
] | TITLE: Sensor Selection Scheme in Temperature Wireless Sensor Network
ABSTRACT: In this paper, we propose a novel energy efficient environment monitoring
scheme for wireless sensor networks, based on data mining formulation. The
proposed adapting routing scheme for sensors for achieving energy efficiency
from temperature wireless sensor network data set. The experimental validation
of the proposed approach using publicly available Intel Berkeley lab Wireless
Sensor Network dataset shows that it is possible to achieve energy efficient
environment monitoring for wireless sensor networks, with a trade-off between
accuracy and life time extension factor of sensors, using the proposed
approach.
| no_new_dataset | 0.953708 |
1506.07763 | Georg Groh | Halgurt Bapierre, Chakajkla Jesdabodi, Georg Groh | Mobile Homophily and Social Location Prediction | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mobility behavior of human beings is predictable to a varying degree e.g.
depending on the traits of their personality such as the trait extraversion -
introversion: the mobility of introvert users may be more dominated by routines
and habitual movement patterns, resulting in a more predictable mobility
behavior on the basis of their own location history while, in contrast,
extrovert users get about a lot and are explorative by nature, which may hamper
the prediction of their mobility. However, socially more active and extrovert
users meet more people and share information, experiences, beliefs, thoughts,
etc. with others, which in turn leads to a high interdependency between their
mobility and social lives. Using a large LBSN dataset, this paper investigates
the interdependency between human mobility and social proximity, the influence
of social networks on enhancing location prediction of an individual and the
transmission of social trends/influences within social networks.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 14:13:14 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Bapierre",
"Halgurt",
""
],
[
"Jesdabodi",
"Chakajkla",
""
],
[
"Groh",
"Georg",
""
]
] | TITLE: Mobile Homophily and Social Location Prediction
ABSTRACT: The mobility behavior of human beings is predictable to a varying degree e.g.
depending on the traits of their personality such as the trait extraversion -
introversion: the mobility of introvert users may be more dominated by routines
and habitual movement patterns, resulting in a more predictable mobility
behavior on the basis of their own location history while, in contrast,
extrovert users get about a lot and are explorative by nature, which may hamper
the prediction of their mobility. However, socially more active and extrovert
users meet more people and share information, experiences, beliefs, thoughts,
etc. with others, which in turn leads to a high interdependency between their
mobility and social lives. Using a large LBSN dataset, this paper investigates
the interdependency between human mobility and social proximity, the influence
of social networks on enhancing location prediction of an individual and the
transmission of social trends/influences within social networks.
| no_new_dataset | 0.949856 |
1506.07840 | Gal Mishne | Gal Mishne, Uri Shaham, Alexander Cloninger and Israel Cohen | Diffusion Nets | 24 pages, 12 figures | null | null | null | stat.ML cs.LG math.CA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-linear manifold learning enables high-dimensional data analysis, but
requires out-of-sample-extension methods to process new data points. In this
paper, we propose a manifold learning algorithm based on deep learning to
create an encoder, which maps a high-dimensional dataset and its
low-dimensional embedding, and a decoder, which takes the embedded data back to
the high-dimensional space. Stacking the encoder and decoder together
constructs an autoencoder, which we term a diffusion net, that performs
out-of-sample-extension as well as outlier detection. We introduce new neural
net constraints for the encoder, which preserves the local geometry of the
points, and we prove rates of convergence for the encoder. Also, our approach
is efficient in both computational complexity and memory requirements, as
opposed to previous methods that require storage of all training points in both
the high-dimensional and the low-dimensional spaces to calculate the
out-of-sample-extension and the pre-image.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 18:13:49 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Mishne",
"Gal",
""
],
[
"Shaham",
"Uri",
""
],
[
"Cloninger",
"Alexander",
""
],
[
"Cohen",
"Israel",
""
]
] | TITLE: Diffusion Nets
ABSTRACT: Non-linear manifold learning enables high-dimensional data analysis, but
requires out-of-sample-extension methods to process new data points. In this
paper, we propose a manifold learning algorithm based on deep learning to
create an encoder, which maps a high-dimensional dataset to its
low-dimensional embedding, and a decoder, which takes the embedded data back to
the high-dimensional space. Stacking the encoder and decoder together
constructs an autoencoder, which we term a diffusion net, that performs
out-of-sample-extension as well as outlier detection. We introduce new neural
net constraints for the encoder, which preserves the local geometry of the
points, and we prove rates of convergence for the encoder. Also, our approach
is efficient in both computational complexity and memory requirements, as
opposed to previous methods that require storage of all training points in both
the high-dimensional and the low-dimensional spaces to calculate the
out-of-sample-extension and the pre-image.
| no_new_dataset | 0.950319 |
physics/0609229 | Javier Buldu | Juyong Park, Oscar Celma, Markus Koppenberger, Pedro Cano and Javier
M. Buld\'u | The Social Network of Contemporary Popular Musicians | 7 pages, 2 figures | Int. J. of Bifurcation and Chaos, 17, 2281-2288 (2007) | 10.1142/S0218127407018385 | null | physics.soc-ph | null | In this paper we analyze two social network datasets of contemporary
musicians constructed from allmusic.com (AMG), a music and artists' information
database: one is the collaboration network in which two musicians are connected
if they have performed in or produced an album together, and the other is the
similarity network in which they are connected if they were musically similar
according to music experts. We find that, while both networks exhibit typical
features of social networks such as high transitivity, several key network
features, such as the degree and betweenness distributions, suggest
fundamental differences in how music collaboration and music similarity networks
are created.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2006 09:39:33 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Park",
"Juyong",
""
],
[
"Celma",
"Oscar",
""
],
[
"Koppenberger",
"Markus",
""
],
[
"Cano",
"Pedro",
""
],
[
"Buldú",
"Javier M.",
""
]
] | TITLE: The Social Network of Contemporary Popular Musicians
ABSTRACT: In this paper we analyze two social network datasets of contemporary
musicians constructed from allmusic.com (AMG), a music and artists' information
database: one is the collaboration network in which two musicians are connected
if they have performed in or produced an album together, and the other is the
similarity network in which they are connected if they were musically similar
according to music experts. We find that, while both networks exhibit typical
features of social networks such as high transitivity, several key network
features, such as the degree and betweenness distributions, suggest
fundamental differences in how music collaboration and music similarity networks
are created.
| no_new_dataset | 0.949389 |
physics/0703084 | Guy Ouillon | Guy Ouillon, Caroline Ducorbier, Didier Sornette | Automatic Reconstruction of Fault Networks from Seismicity Catalogs: 3D
Optimal Anisotropic Dynamic Clustering | null | null | 10.1029/2007JB005032 | null | physics.geo-ph physics.data-an | null | We propose a new pattern recognition method that is able to reconstruct the
3D structure of the active part of a fault network using the spatial location
of earthquakes. The method is a generalization of the so-called dynamic
clustering method, that originally partitions a set of datapoints into
clusters, using a global minimization criterion over the spatial inertia of
those clusters. The new method improves on it by taking into account the full
spatial inertia tensor of each cluster, in order to partition the dataset into
fault-like, anisotropic clusters. Given a catalog of seismic events, the output
is the optimal set of plane segments that fits the spatial structure of the
data. Each plane segment is fully characterized by its location, size and
orientation. The main tunable parameter is the accuracy of the earthquake
localizations, which fixes the resolution, i.e. the residual variance of the
fit. The resolution determines the number of fault segments needed to describe
the earthquake catalog, the better the resolution, the finer the structure of
the reconstructed fault segments. The algorithm reconstructs successfully the
fault segments of synthetic earthquake catalogs. Applied to the real catalog
constituted of a subset of the aftershocks sequence of the 28th June 1992
Landers earthquake in Southern California, the reconstructed plane segments
fully agree with faults already known on geological maps, or with blind faults
that appear quite obvious on longer-term catalogs. Future improvements of the
method are discussed, as well as its potential use in the multi-scale study of
the inner structure of fault zones.
| [
{
"version": "v1",
"created": "Wed, 7 Mar 2007 15:39:16 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Ouillon",
"Guy",
""
],
[
"Ducorbier",
"Caroline",
""
],
[
"Sornette",
"Didier",
""
]
] | TITLE: Automatic Reconstruction of Fault Networks from Seismicity Catalogs: 3D
Optimal Anisotropic Dynamic Clustering
ABSTRACT: We propose a new pattern recognition method that is able to reconstruct the
3D structure of the active part of a fault network using the spatial location
of earthquakes. The method is a generalization of the so-called dynamic
clustering method, that originally partitions a set of datapoints into
clusters, using a global minimization criterion over the spatial inertia of
those clusters. The new method improves on it by taking into account the full
spatial inertia tensor of each cluster, in order to partition the dataset into
fault-like, anisotropic clusters. Given a catalog of seismic events, the output
is the optimal set of plane segments that fits the spatial structure of the
data. Each plane segment is fully characterized by its location, size and
orientation. The main tunable parameter is the accuracy of the earthquake
localizations, which fixes the resolution, i.e. the residual variance of the
fit. The resolution determines the number of fault segments needed to describe
the earthquake catalog, the better the resolution, the finer the structure of
the reconstructed fault segments. The algorithm reconstructs successfully the
fault segments of synthetic earthquake catalogs. Applied to the real catalog
constituted of a subset of the aftershocks sequence of the 28th June 1992
Landers earthquake in Southern California, the reconstructed plane segments
fully agree with faults already known on geological maps, or with blind faults
that appear quite obvious on longer-term catalogs. Future improvements of the
method are discussed, as well as its potential use in the multi-scale study of
the inner structure of fault zones.
| no_new_dataset | 0.951729 |
1405.5769 | Philipp Fischer | Philipp Fischer, Alexey Dosovitskiy, Thomas Brox | Descriptor Matching with Convolutional Neural Networks: a Comparison to
SIFT | This paper has been merged with arXiv:1406.6909 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latest results indicate that features learned via convolutional neural
networks outperform previous descriptors on classification tasks by a large
margin. It has been shown that these networks still work well when they are
applied to datasets or recognition tasks different from those they were trained
on. However, descriptors like SIFT are not only used in recognition but also
for many correspondence problems that rely on descriptor matching. In this
paper we compare features from various layers of convolutional neural nets to
standard SIFT descriptors. We consider a network that was trained on ImageNet
and another one that was trained without supervision. Surprisingly,
convolutional neural networks clearly outperform SIFT on descriptor matching.
This paper has been merged with arXiv:1406.6909
| [
{
"version": "v1",
"created": "Thu, 22 May 2014 14:35:52 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jun 2015 09:16:28 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Fischer",
"Philipp",
""
],
[
"Dosovitskiy",
"Alexey",
""
],
[
"Brox",
"Thomas",
""
]
] | TITLE: Descriptor Matching with Convolutional Neural Networks: a Comparison to
SIFT
ABSTRACT: Latest results indicate that features learned via convolutional neural
networks outperform previous descriptors on classification tasks by a large
margin. It has been shown that these networks still work well when they are
applied to datasets or recognition tasks different from those they were trained
on. However, descriptors like SIFT are not only used in recognition but also
for many correspondence problems that rely on descriptor matching. In this
paper we compare features from various layers of convolutional neural nets to
standard SIFT descriptors. We consider a network that was trained on ImageNet
and another one that was trained without supervision. Surprisingly,
convolutional neural networks clearly outperform SIFT on descriptor matching.
This paper has been merged with arXiv:1406.6909
| no_new_dataset | 0.952838 |
1408.3772 | Shervin Minaee | Shervin Minaee and AmirAli Abdolrashidi | Highly Accurate Multispectral Palmprint Recognition Using Statistical
and Wavelet Features | 6 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Palmprint is one of the most useful physiological biometrics that can be used
as a powerful means in personal recognition systems. The major features of the
palmprints are palm lines, wrinkles and ridges, and many approaches use them in
different ways towards solving the palmprint recognition problem. Here we have
proposed to use a set of statistical and wavelet-based features; statistical to
capture the general characteristics of palmprints; and wavelet-based to find
those information not evident in the spatial domain. Also we use two different
classification approaches, minimum distance classifier scheme and weighted
majority voting algorithm, to perform palmprint matching. The proposed method
is tested on a well-known palmprint dataset of 6000 samples and has shown an
impressive accuracy rate of 99.65\%-100\% for most scenarios.
| [
{
"version": "v1",
"created": "Sat, 16 Aug 2014 21:02:44 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jun 2015 16:31:26 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Minaee",
"Shervin",
""
],
[
"Abdolrashidi",
"AmirAli",
""
]
] | TITLE: Highly Accurate Multispectral Palmprint Recognition Using Statistical
and Wavelet Features
ABSTRACT: Palmprint is one of the most useful physiological biometrics that can be used
as a powerful means in personal recognition systems. The major features of the
palmprints are palm lines, wrinkles and ridges, and many approaches use them in
different ways towards solving the palmprint recognition problem. Here we have
proposed to use a set of statistical and wavelet-based features; statistical to
capture the general characteristics of palmprints; and wavelet-based to find
the information not evident in the spatial domain. Also, we use two different
classification approaches, minimum distance classifier scheme and weighted
majority voting algorithm, to perform palmprint matching. The proposed method
is tested on a well-known palmprint dataset of 6000 samples and has shown an
impressive accuracy rate of 99.65\%-100\% for most scenarios.
| no_new_dataset | 0.910346 |
1506.07224 | Jian Guo | Jian Guo, Stephen Gould | Deep CNN Ensemble with Data Augmentation for Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We report on the methods used in our recent DeepEnsembleCoco submission to
the PASCAL VOC 2012 challenge, which achieves state-of-the-art performance on
the object detection task. Our method is a variant of the R-CNN model proposed
in Girshick:CVPR14 with two key improvements to training and evaluation. First,
our method constructs an ensemble of deep CNN models with different
architectures that are complementary to each other. Second, we augment the
PASCAL VOC training set with images from the Microsoft COCO dataset to
significantly enlarge the amount of training data. Importantly, we select a subset
of the Microsoft COCO images to be consistent with the PASCAL VOC task. Results
on the PASCAL VOC evaluation server show that our proposed method outperforms
all previous methods on the PASCAL VOC 2012 detection task at the time of
submission.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 02:15:17 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Guo",
"Jian",
""
],
[
"Gould",
"Stephen",
""
]
] | TITLE: Deep CNN Ensemble with Data Augmentation for Object Detection
ABSTRACT: We report on the methods used in our recent DeepEnsembleCoco submission to
the PASCAL VOC 2012 challenge, which achieves state-of-the-art performance on
the object detection task. Our method is a variant of the R-CNN model proposed
in Girshick:CVPR14 with two key improvements to training and evaluation. First,
our method constructs an ensemble of deep CNN models with different
architectures that are complementary to each other. Second, we augment the
PASCAL VOC training set with images from the Microsoft COCO dataset to
significantly enlarge the amount of training data. Importantly, we select a subset
of the Microsoft COCO images to be consistent with the PASCAL VOC task. Results
on the PASCAL VOC evaluation server show that our proposed method outperforms
all previous methods on the PASCAL VOC 2012 detection task at the time of
submission.
| no_new_dataset | 0.951863 |
1506.07251 | Kevin Vervier | K\'evin Vervier (CBIO), Pierre Mah\'e, Jean-Baptiste Veyrieras,
Jean-Philippe Vert (CBIO) | Benchmark of structured machine learning methods for microbial
identification from mass-spectrometry data | null | null | null | null | stat.ML cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microbial identification is a central issue in microbiology, in particular in
the fields of infectious diseases diagnosis and industrial quality control. The
concept of species is tightly linked to the concept of biological and clinical
classification where the proximity between species is generally measured in
terms of evolutionary distances and/or clinical phenotypes. Surprisingly, the
information provided by this well-known hierarchical structure is rarely used
by machine learning-based automatic microbial identification systems.
Structured machine learning methods were recently proposed for taking into
account the structure embedded in a hierarchy and using it as additional a
priori information, and could therefore allow to improve microbial
identification systems. We test and compare several state-of-the-art machine
learning methods for microbial identification on a new Matrix-Assisted Laser
Desorption/Ionization Time-of-Flight mass spectrometry (MALDI-TOF MS) dataset.
We include in the benchmark standard and structured methods that leverage the
knowledge of the underlying hierarchical structure in the learning process. Our
results show that although some methods perform better than others, structured
methods do not consistently perform better than their "flat" counterparts. We
postulate that this is partly due to the fact that standard methods already
reach a high level of accuracy in this context, and that they mainly confuse
species close to each other in the tree, a case where using the known hierarchy
is not helpful.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 06:13:15 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Vervier",
"Kévin",
"",
"CBIO"
],
[
"Mahé",
"Pierre",
"",
"CBIO"
],
[
"Veyrieras",
"Jean-Baptiste",
"",
"CBIO"
],
[
"Vert",
"Jean-Philippe",
"",
"CBIO"
]
] | TITLE: Benchmark of structured machine learning methods for microbial
identification from mass-spectrometry data
ABSTRACT: Microbial identification is a central issue in microbiology, in particular in
the fields of infectious diseases diagnosis and industrial quality control. The
concept of species is tightly linked to the concept of biological and clinical
classification where the proximity between species is generally measured in
terms of evolutionary distances and/or clinical phenotypes. Surprisingly, the
information provided by this well-known hierarchical structure is rarely used
by machine learning-based automatic microbial identification systems.
Structured machine learning methods were recently proposed for taking into
account the structure embedded in a hierarchy and using it as additional a
priori information, and could therefore allow to improve microbial
identification systems. We test and compare several state-of-the-art machine
learning methods for microbial identification on a new Matrix-Assisted Laser
Desorption/Ionization Time-of-Flight mass spectrometry (MALDI-TOF MS) dataset.
We include in the benchmark standard and structured methods that leverage the
knowledge of the underlying hierarchical structure in the learning process. Our
results show that although some methods perform better than others, structured
methods do not consistently perform better than their "flat" counterparts. We
postulate that this is partly due to the fact that standard methods already
reach a high level of accuracy in this context, and that they mainly confuse
species close to each other in the tree, a case where using the known hierarchy
is not helpful.
| no_new_dataset | 0.947284 |
1506.07254 | Ugo Louche | Ugo Louche, Liva Ralaivola | Unconfused ultraconservative multiclass algorithms | null | Machine Learning, Springer Verlag (Germany), 2015, Machine
learning, 99 (2), pp.351 | 10.1007/s10994-015-5490-3 | MLJ-2015 | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago where the proposed approaches to combat the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 06:31:21 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Louche",
"Ugo",
""
],
[
"Ralaivola",
"Liva",
""
]
] | TITLE: Unconfused ultraconservative multiclass algorithms
ABSTRACT: We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago where the proposed approaches to combat the noise revolve around a
Per-ceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
| no_new_dataset | 0.944382 |
1506.07257 | Jingyu Gao | Jingyu Gao, Jinfu Yang, Guanghui Wang and Mingai Li | A Novel Feature Extraction Method for Scene Recognition Based on
Centered Convolutional Restricted Boltzmann Machines | 22 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene recognition is an important research topic in computer vision, while
feature extraction is a key step of object recognition. Although classical
Restricted Boltzmann machines (RBM) can efficiently represent complicated data,
it is hard to handle large images due to its complexity in computation. In this
paper, a novel feature extraction method, named Centered Convolutional
Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The
proposed model is an improved Convolutional Restricted Boltzmann Machines
(CRBM) by introducing centered factors in its learning strategy to reduce the
source of instabilities. First, the visible units of the network are redefined
using centered factors. Then, the hidden units are learned with a modified
energy function by utilizing a distribution function, and the visible units are
reconstructed using the learned hidden units. In order to achieve better
generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is
trained in a greedy layer-wise way. Finally, a softmax regression is
incorporated for scene recognition. Extensive experimental evaluations using
natural scenes, MIT-indoor scenes, and Caltech 101 datasets show that the
proposed approach performs better than other counterparts in terms of
stability, generalization, and discrimination. The CCDBN model is more suitable
for natural scene image recognition by virtue of its convolutional property.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 06:42:42 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Gao",
"Jingyu",
""
],
[
"Yang",
"Jinfu",
""
],
[
"Wang",
"Guanghui",
""
],
[
"Li",
"Mingai",
""
]
] | TITLE: A Novel Feature Extraction Method for Scene Recognition Based on
Centered Convolutional Restricted Boltzmann Machines
ABSTRACT: Scene recognition is an important research topic in computer vision, while
feature extraction is a key step of object recognition. Although classical
Restricted Boltzmann machines (RBM) can efficiently represent complicated data,
it is hard for them to handle large images due to their computational complexity. In this
paper, a novel feature extraction method, named Centered Convolutional
Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The
proposed model is an improved Convolutional Restricted Boltzmann Machines
(CRBM) by introducing centered factors in its learning strategy to reduce the
source of instabilities. First, the visible units of the network are redefined
using centered factors. Then, the hidden units are learned with a modified
energy function by utilizing a distribution function, and the visible units are
reconstructed using the learned hidden units. In order to achieve better
generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is
trained in a greedy layer-wise way. Finally, a softmax regression is
incorporated for scene recognition. Extensive experimental evaluations using
natural scenes, MIT-indoor scenes, and Caltech 101 datasets show that the
proposed approach performs better than other counterparts in terms of
stability, generalization, and discrimination. The CCDBN model is more suitable
for natural scene image recognition by virtue of its convolutional property.
| no_new_dataset | 0.950319 |
1506.07271 | Jingyu Gao | Jinfu Yang, Jingyu Gao, Guanghui Wang, Shanshan Zhang | Natural Scene Recognition Based on Superpixels and Deep Boltzmann
Machines | 29 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Deep Boltzmann Machines (DBM) is a state-of-the-art unsupervised learning
model, which has been successfully applied to handwritten digit recognition
as well as object recognition. However, the DBM is limited in scene
recognition due to the fact that natural scene images are usually very large.
In this paper, an efficient scene recognition approach is proposed based on
superpixels and the DBMs. First, a simple linear iterative clustering (SLIC)
algorithm is employed to generate superpixels of input images, where each
superpixel is regarded as an input of a learning model. Then, a two-layer DBM
model is constructed by stacking two restricted Boltzmann machines (RBMs), and
a greedy layer-wise algorithm is applied to train the DBM model. Finally, a
softmax regression is utilized to categorize scene images. The proposed
technique can effectively reduce the computational complexity and enhance the
performance for large natural image recognition. The approach is verified and
evaluated by extensive experiments on the fifteen-scene categories dataset,
the UIUC eight-sports dataset, and the SIFT flow dataset. The
experimental results show that the proposed
approach outperforms other state-of-the-art methods in terms of recognition
rate.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 07:53:54 GMT"
}
] | 2015-06-25T00:00:00 | [
[
"Yang",
"Jinfu",
""
],
[
"Gao",
"Jingyu",
""
],
[
"Wang",
"Guanghui",
""
],
[
"Zhang",
"Shanshan",
""
]
] | TITLE: Natural Scene Recognition Based on Superpixels and Deep Boltzmann
Machines
ABSTRACT: The Deep Boltzmann Machines (DBM) is a state-of-the-art unsupervised learning
model, which has been successfully applied to handwritten digit recognition
as well as object recognition. However, the DBM is limited in scene
recognition due to the fact that natural scene images are usually very large.
In this paper, an efficient scene recognition approach is proposed based on
superpixels and the DBMs. First, a simple linear iterative clustering (SLIC)
algorithm is employed to generate superpixels of input images, where each
superpixel is regarded as an input of a learning model. Then, a two-layer DBM
model is constructed by stacking two restricted Boltzmann machines (RBMs), and
a greedy layer-wise algorithm is applied to train the DBM model. Finally, a
softmax regression is utilized to categorize scene images. The proposed
technique can effectively reduce the computational complexity and enhance the
performance for large natural image recognition. The approach is verified and
evaluated by extensive experiments on the fifteen-scene categories dataset,
the UIUC eight-sports dataset, and the SIFT flow dataset. The
experimental results show that the proposed
approach outperforms other state-of-the-art methods in terms of recognition
rate.
| no_new_dataset | 0.938688 |
1503.04344 | Safia Abbas | Safia Abbas | Deposit subscribe Prediction using Data Mining Techniques based Real
Marketing Dataset | null | null | 10.5120/19293-0725 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, economic depression, which scoured all over the world, affects
business organizations and banking sectors. Such an economic situation causes severe
attrition for banks, and customer retention becomes impossible. Accordingly,
marketing managers need to increase marketing campaigns, whereas
organizations avoid both expenses and business expansion. In order to solve
such a riddle, data mining techniques are used as a crucial factor in data
analysis, data summarization, hidden pattern discovery, and data
interpretation. In this paper, rough set theory and decision tree mining
techniques have been implemented, using real marketing data obtained from a
Portuguese marketing campaign related to bank deposit subscription [Moro et
al., 2011]. The paper aims to improve the efficiency of the marketing campaigns
and to help decision makers by reducing the number of features that
describe the dataset, spotting the most significant ones, and predicting
the deposit customer retention criteria based on potential predictive rules.
| [
{
"version": "v1",
"created": "Sat, 14 Mar 2015 20:23:14 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Abbas",
"Safia",
""
]
] | TITLE: Deposit subscribe Prediction using Data Mining Techniques based Real
Marketing Dataset
ABSTRACT: Recently, economic depression, which scoured all over the world, affects
business organizations and banking sectors. Such an economic situation causes severe
attrition for banks, and customer retention becomes impossible. Accordingly,
marketing managers need to increase marketing campaigns, whereas
organizations avoid both expenses and business expansion. In order to solve
such a riddle, data mining techniques are used as a crucial factor in data
analysis, data summarization, hidden pattern discovery, and data
interpretation. In this paper, rough set theory and decision tree mining
techniques have been implemented, using real marketing data obtained from a
Portuguese marketing campaign related to bank deposit subscription [Moro et
al., 2011]. The paper aims to improve the efficiency of the marketing campaigns
and to help decision makers by reducing the number of features that
describe the dataset, spotting the most significant ones, and predicting
the deposit customer retention criteria based on potential predictive rules.
| no_new_dataset | 0.954137 |
1504.07877 | Amina Kemmar | Amina Kemmar, Samir Loudni, Yahia Lebbah, Patrice Boizumault, Thierry
Charnois | Prefix-Projection Global Constraint for Sequential Pattern Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential pattern mining under constraints is a challenging data mining
task. Many efficient ad hoc methods have been developed for mining sequential
patterns, but they are all suffering from a lack of genericity. Recent works
have investigated Constraint Programming (CP) methods, but they are still not
effective because of their encoding. In this paper, we propose a global
constraint based on the projected databases principle which remedies this
drawback. Experiments show that our approach clearly outperforms CP approaches
and competes well with ad hoc methods on large datasets.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 14:48:07 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jun 2015 09:31:49 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Kemmar",
"Amina",
""
],
[
"Loudni",
"Samir",
""
],
[
"Lebbah",
"Yahia",
""
],
[
"Boizumault",
"Patrice",
""
],
[
"Charnois",
"Thierry",
""
]
] | TITLE: Prefix-Projection Global Constraint for Sequential Pattern Mining
ABSTRACT: Sequential pattern mining under constraints is a challenging data mining
task. Many efficient ad hoc methods have been developed for mining sequential
patterns, but they are all suffering from a lack of genericity. Recent works
have investigated Constraint Programming (CP) methods, but they are still not
effective because of their encoding. In this paper, we propose a global
constraint based on the projected databases principle which remedies this
drawback. Experiments show that our approach clearly outperforms CP approaches
and competes well with ad hoc methods on large datasets.
| no_new_dataset | 0.953492 |
1506.05752 | Zhihai Yang | Zhihai Yang | Detecting Abnormal Profiles in Collaborative Filtering Recommender
Systems | 13 pages, 7 figures. arXiv admin note: text overlap with
arXiv:1506.04584, arXiv:1506.05247 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalization collaborative filtering recommender systems (CFRSs) are the
crucial components of popular e-commerce services. In practice, CFRSs are also
particularly vulnerable to "shilling" attacks or "profile injection" attacks
due to their openness. The attackers can carefully inject chosen attack
profiles into CFRSs in order to bias the recommendation results to their
benefits. To reduce this risk, various detection techniques have been proposed
to detect such attacks, which use diverse features extracted from user
profiles. However, relying on limited features to improve the detection
performance is seemingly difficult, since the existing features cannot fully
characterize the attack profiles and genuine profiles. In this paper, we
propose a novel detection method to make recommender systems resistant to the
"shilling" attacks or "profile injection" attacks. The existing features can be
briefly summarized as two aspects including rating behavior based and item
distribution based. We firstly formulate the problem as finding a mapping model
between rating behavior and item distribution by exploiting the least-squares
approximate solution. Based on the trained model, we design a detector by
employing a regressor to detect such attacks. Extensive experiments on both the
MovieLens-100K and MovieLens-ml-latest-small datasets examine the effectiveness
of our proposed detection method. Experimental results were included to
validate the outperformance of our approach in comparison with benchmarked
methods, including KNN.
| [
{
"version": "v1",
"created": "Thu, 18 Jun 2015 17:26:14 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2015 08:05:17 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Jun 2015 07:25:05 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Yang",
"Zhihai",
""
]
] | TITLE: Detecting Abnormal Profiles in Collaborative Filtering Recommender
Systems
ABSTRACT: Personalization collaborative filtering recommender systems (CFRSs) are the
crucial components of popular e-commerce services. In practice, CFRSs are also
particularly vulnerable to "shilling" attacks or "profile injection" attacks
due to their openness. The attackers can carefully inject chosen attack
profiles into CFRSs in order to bias the recommendation results to their
benefits. To reduce this risk, various detection techniques have been proposed
to detect such attacks, which use diverse features extracted from user
profiles. However, relying on limited features to improve the detection
performance is seemingly difficult, since the existing features cannot fully
characterize the attack profiles and genuine profiles. In this paper, we
propose a novel detection method to make recommender systems resistant to the
"shilling" attacks or "profile injection" attacks. The existing features can be
briefly summarized as two aspects including rating behavior based and item
distribution based. We firstly formulate the problem as finding a mapping model
between rating behavior and item distribution by exploiting the least-squares
approximate solution. Based on the trained model, we design a detector by
employing a regressor to detect such attacks. Extensive experiments on both the
MovieLens-100K and MovieLens-ml-latest-small datasets examine the effectiveness
of our proposed detection method. Experimental results were included to
validate the outperformance of our approach in comparison with benchmarked
methods, including KNN.
| no_new_dataset | 0.94366 |
1506.06628 | Yunchao Wei | Yunchao Wei, Yao Zhao, Zhenfeng Zhu, Shikui Wei, Yanhui Xiao, Jiashi
Feng and Shuicheng Yan | Modality-dependent Cross-media Retrieval | in ACM Transactions on Intelligent Systems and Technology | null | null | null | cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the cross-media retrieval between images and
text, i.e., using image to search text (I2T) and using text to search images
(T2I). Existing cross-media retrieval methods usually learn one couple of
projections, by which the original features of images and text can be projected
into a common latent space to measure the content similarity. However, using
the same projections for the two different retrieval tasks (I2T and T2I) may
lead to a tradeoff between their respective performances, rather than their
best performances. Different from previous works, we propose a
modality-dependent cross-media retrieval (MDCR) model, where two couples of
projections are learned for different cross-media retrieval tasks instead of
one couple of projections. Specifically, by jointly optimizing the correlation
between images and text and the linear regression from one modal space (image
or text) to the semantic space, two couples of mappings are learned to project
images and text from their original feature spaces into two common latent
subspaces (one for I2T and the other for T2I). Extensive experiments show the
superiority of the proposed MDCR compared with other methods. In particular,
based on the 4,096-dimensional convolutional neural network (CNN) visual feature
and the 100-dimensional LDA textual feature, the mAP of the proposed method
achieves 41.5\%, which is a new state-of-the-art performance on the Wikipedia
dataset.
| [
{
"version": "v1",
"created": "Mon, 22 Jun 2015 14:33:39 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jun 2015 01:34:01 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Wei",
"Yunchao",
""
],
[
"Zhao",
"Yao",
""
],
[
"Zhu",
"Zhenfeng",
""
],
[
"Wei",
"Shikui",
""
],
[
"Xiao",
"Yanhui",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Modality-dependent Cross-media Retrieval
ABSTRACT: In this paper, we investigate the cross-media retrieval between images and
text, i.e., using image to search text (I2T) and using text to search images
(T2I). Existing cross-media retrieval methods usually learn one couple of
projections, by which the original features of images and text can be projected
into a common latent space to measure the content similarity. However, using
the same projections for the two different retrieval tasks (I2T and T2I) may
lead to a tradeoff between their respective performances, rather than their
best performances. Different from previous works, we propose a
modality-dependent cross-media retrieval (MDCR) model, where two couples of
projections are learned for different cross-media retrieval tasks instead of
one couple of projections. Specifically, by jointly optimizing the correlation
between images and text and the linear regression from one modal space (image
or text) to the semantic space, two couples of mappings are learned to project
images and text from their original feature spaces into two common latent
subspaces (one for I2T and the other for T2I). Extensive experiments show the
superiority of the proposed MDCR compared with other methods. In particular,
based on the 4,096-dimensional convolutional neural network (CNN) visual feature
and the 100-dimensional LDA textual feature, the mAP of the proposed method
achieves 41.5\%, which is a new state-of-the-art performance on the Wikipedia
dataset.
| no_new_dataset | 0.949809 |
1506.06832 | Alex James Dr | Assel Davletcharova, Sherin Sugathan, Bibia Abraham, Alex Pappachen
James | Detection and Analysis of Emotion From Speech Signals | 2nd International Symposium on Computer Vision and the Internet,
2015; to appear in Procedia Computer Science Journal, Elsevier, 2015 | null | null | null | cs.SD cs.CL cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Recognizing emotion from speech has become one of the active research themes in
speech processing and in applications based on human-computer interaction. This
paper conducts an experimental study on recognizing emotions from human speech.
The emotions considered for the experiments include neutral, anger, joy and
sadness. The distinguishability of emotional features in speech was studied
first, followed by emotion classification performed on a custom dataset. The
classification was performed for different classifiers. One of the main feature
attributes considered in the prepared dataset was the peak-to-peak distance
obtained from the graphical representation of the speech signals. After
performing the classification tests on a dataset formed from 30 different
subjects, it was found that for getting better accuracy, one should consider
the data collected from one person rather than considering the data from a
group of people.
| [
{
"version": "v1",
"created": "Tue, 23 Jun 2015 00:28:08 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Davletcharova",
"Assel",
""
],
[
"Sugathan",
"Sherin",
""
],
[
"Abraham",
"Bibia",
""
],
[
"James",
"Alex Pappachen",
""
]
] | TITLE: Detection and Analysis of Emotion From Speech Signals
ABSTRACT: Recognizing emotion from speech has become one of the active research themes in
speech processing and in applications based on human-computer interaction. This
paper conducts an experimental study on recognizing emotions from human speech.
The emotions considered for the experiments include neutral, anger, joy and
sadness. The distinguishability of emotional features in speech was studied
first, followed by emotion classification performed on a custom dataset. The
classification was performed for different classifiers. One of the main feature
attributes considered in the prepared dataset was the peak-to-peak distance
obtained from the graphical representation of the speech signals. After
performing the classification tests on a dataset formed from 30 different
subjects, it was found that for getting better accuracy, one should consider
the data collected from one person rather than considering the data from a
group of people.
| new_dataset | 0.962285 |
1506.06882 | Xavier Alameda-Pineda | Xavier Alameda-Pineda, Jacopo Staiano, Ramanathan Subramanian, Ligia
Batrinca, Elisa Ricci, Bruno Lepri, Oswald Lanz, Nicu Sebe | SALSA: A Novel Dataset for Multimodal Group Behavior Analysis | 14 pages, 11 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/publicdomain/ | Studying free-standing conversational groups (FCGs) in unstructured social
settings (e.g., cocktail party) is gratifying due to the wealth of information
available at the group (mining social networks) and individual (recognizing
native behavioral and personality traits) levels. However, analyzing social
scenes involving FCGs is also highly challenging due to the difficulty in
extracting behavioral cues such as target locations, their speaking activity
and head/body pose due to crowdedness and presence of extreme occlusions. To
this end, we propose SALSA, a novel dataset facilitating multimodal and
Synergetic sociAL Scene Analysis, and make two main contributions to research
on automated social interaction analysis: (1) SALSA records social interactions
among 18 participants in a natural, indoor environment for over 60 minutes,
under the poster presentation and cocktail party contexts presenting
difficulties in the form of low-resolution images, lighting variations,
numerous occlusions, reverberations and interfering sound sources; (2) To
alleviate these problems we facilitate multimodal analysis by recording the
social interplay using four static surveillance cameras and sociometric badges
worn by each participant, comprising the microphone, accelerometer, bluetooth
and infrared sensors. In addition to raw data, we also provide annotations
concerning individuals' personality as well as their position, head, body
orientation and F-formation information over the entire event duration. Through
extensive experiments with state-of-the-art approaches, we show (a) the
limitations of current methods and (b) how the recorded multiple cues
synergetically aid automatic analysis of social interactions. SALSA is
available at http://tev.fbk.eu/salsa.
| [
{
"version": "v1",
"created": "Tue, 23 Jun 2015 07:19:24 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Alameda-Pineda",
"Xavier",
""
],
[
"Staiano",
"Jacopo",
""
],
[
"Subramanian",
"Ramanathan",
""
],
[
"Batrinca",
"Ligia",
""
],
[
"Ricci",
"Elisa",
""
],
[
"Lepri",
"Bruno",
""
],
[
"Lanz",
"Oswald",
""
],
[
"Sebe",
"Nicu",
""
]
] | TITLE: SALSA: A Novel Dataset for Multimodal Group Behavior Analysis
ABSTRACT: Studying free-standing conversational groups (FCGs) in unstructured social
settings (e.g., cocktail party) is gratifying due to the wealth of information
available at the group (mining social networks) and individual (recognizing
native behavioral and personality traits) levels. However, analyzing social
scenes involving FCGs is also highly challenging due to the difficulty in
extracting behavioral cues such as target locations, their speaking activity
and head/body pose due to crowdedness and presence of extreme occlusions. To
this end, we propose SALSA, a novel dataset facilitating multimodal and
Synergetic sociAL Scene Analysis, and make two main contributions to research
on automated social interaction analysis: (1) SALSA records social interactions
among 18 participants in a natural, indoor environment for over 60 minutes,
under the poster presentation and cocktail party contexts presenting
difficulties in the form of low-resolution images, lighting variations,
numerous occlusions, reverberations and interfering sound sources; (2) To
alleviate these problems we facilitate multimodal analysis by recording the
social interplay using four static surveillance cameras and sociometric badges
worn by each participant, comprising the microphone, accelerometer, bluetooth
and infrared sensors. In addition to raw data, we also provide annotations
concerning individuals' personality as well as their position, head, body
orientation and F-formation information over the entire event duration. Through
extensive experiments with state-of-the-art approaches, we show (a) the
limitations of current methods and (b) how the recorded multiple cues
synergetically aid automatic analysis of social interactions. SALSA is
available at http://tev.fbk.eu/salsa.
| new_dataset | 0.966945 |
1506.06905 | Jiuqing Wan | Jiuqing Wan, Menglin Xing | Person re-identification via efficient inference in fully connected CRF | 7 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the person re-identification problem,
i.e., retrieving instances from the gallery that are generated by the same person
as the given probe image. This is very challenging because the person's
appearance usually undergoes significant variations due to changes in
illumination, camera angle and view, background clutter, and occlusion over the
camera network. In this paper, we assume that the matched gallery images should
not only be similar to the probe, but also be similar to each other, under
suitable metric. We express this assumption with a fully connected CRF model in
which each node corresponds to a gallery image and every pair of nodes is connected
by an edge. A label variable is associated with each node to indicate whether
the corresponding image is from the target person. We define a unary potential for
each node using existing feature calculation and matching techniques, which
reflects the similarity between the probe and the gallery image, and define a pairwise
potential for each edge in terms of a weighted combination of Gaussian kernels,
which encode the appearance similarity between pairs of gallery images. The specific
form of pairwise potential allows us to exploit an efficient inference
algorithm to calculate the marginal distribution of each label variable for
this densely connected CRF. We show the superiority of our method by applying it
to public datasets and comparing with the state of the art.
| [
{
"version": "v1",
"created": "Tue, 23 Jun 2015 08:27:19 GMT"
}
] | 2015-06-24T00:00:00 | [
[
"Wan",
"Jiuqing",
""
],
[
"Xing",
"Menglin",
""
]
] | TITLE: Person re-identification via efficient inference in fully connected CRF
ABSTRACT: In this paper, we address the person re-identification problem,
i.e., retrieving instances from the gallery that are generated by the same person
as the given probe image. This is very challenging because the person's
appearance usually undergoes significant variations due to changes in
illumination, camera angle and view, background clutter, and occlusion over the
camera network. In this paper, we assume that the matched gallery images should
not only be similar to the probe, but also be similar to each other, under
suitable metric. We express this assumption with a fully connected CRF model in
which each node corresponds to a gallery image and every pair of nodes is connected
by an edge. A label variable is associated with each node to indicate whether
the corresponding image is from the target person. We define a unary potential for
each node using existing feature calculation and matching techniques, which
reflects the similarity between the probe and the gallery image, and define a pairwise
potential for each edge in terms of a weighted combination of Gaussian kernels,
which encode the appearance similarity between pairs of gallery images. The specific
form of pairwise potential allows us to exploit an efficient inference
algorithm to calculate the marginal distribution of each label variable for
this densely connected CRF. We show the superiority of our method by applying it
to public datasets and comparing with the state of the art.
| no_new_dataset | 0.951414 |
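The fully connected CRF described in the record above (unary potentials from an existing probe-gallery matcher, pairwise potentials as a weighted combination of Gaussian kernels over gallery features, and efficient computation of label marginals) lends itself to mean-field approximation. The sketch below is a minimal NumPy illustration under those assumptions; the function names, kernel parameters, and the Potts compatibility used here are choices made for illustration, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel_matrix(feats: np.ndarray, weights, bandwidths) -> np.ndarray:
    """Pairwise affinities as a weighted combination of Gaussian kernels.

    feats: (N, D) gallery features; weights/bandwidths: per-kernel parameters.
    """
    sq_dists = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)  # (N, N)
    K = sum(w * np.exp(-sq_dists / (2.0 * s ** 2)) for w, s in zip(weights, bandwidths))
    np.fill_diagonal(K, 0.0)  # no self-interaction
    return K


def mean_field_dense_crf(unary: np.ndarray, K: np.ndarray, n_iters: int = 10) -> np.ndarray:
    """Approximate marginals Q (N, 2) for a binary-label fully connected CRF.

    unary: (N, 2) costs for labels (0 = 'not target', 1 = 'target'), assumed to
    come from an existing probe-gallery matcher. A Potts compatibility penalises
    disagreeing labels on gallery images that the kernel deems similar.
    """
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)
    potts = 1.0 - np.eye(2)  # cost 1 if labels differ, 0 if they agree
    for _ in range(n_iters):
        # message for label l at node i: sum_j K[i, j] * sum_l' potts[l, l'] * Q[j, l']
        msg = K @ Q @ potts.T            # (N, 2)
        Q = np.exp(-unary - msg)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(6, 16))            # toy gallery features
    unary = rng.uniform(0.0, 2.0, size=(6, 2))  # toy probe-gallery matching costs
    K = gaussian_kernel_matrix(feats, weights=[1.0, 0.5], bandwidths=[1.0, 4.0])
    marginals = mean_field_dense_crf(unary, K)
    print(marginals.round(3))  # per-gallery probability of (not target, target)
```

The Gaussian form of the pairwise term is what keeps this kind of inference efficient in practice, since the message computation can be accelerated with fast filtering; the dense loop above is kept only for clarity.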