Column schema (name: type, value range):
id: string, 9 to 16 characters
submitter: string, 3 to 64 characters, nullable
authors: string, 5 to 6.63k characters
title: string, 7 to 245 characters
comments: string, 1 to 482 characters, nullable
journal-ref: string, 4 to 382 characters, nullable
doi: string, 9 to 151 characters, nullable
report-no: string, 984 distinct values
categories: string, 5 to 108 characters
license: string, 9 distinct values
abstract: string, 83 to 3.41k characters
versions: list, 1 to 20 items
update_date: timestamp[s], 2007-05-23 to 2025-04-11
authors_parsed: sequence, 1 to 427 items
prompt: string, 166 to 3.49k characters
label: string, 2 distinct values
prob: float64, 0.5 to 0.98
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1504.04943 | Yu Zhang | Yu Zhang and Xiu-shen Wei and Jianxin Wu and Jianfei Cai and Jiangbo
Lu and Viet-Anh Nguyen and Minh N. Do | Weakly Supervised Fine-Grained Image Categorization | null | An extended version in IEEE Trans Image Processing, 25(4), 2016:
pp. 1713-1725 | 10.1109/TIP.2016.2531289 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we categorize fine-grained images without using any object /
part annotation, either in the training or in the testing stage, a step
towards making it suitable for deployment. Fine-grained image categorization
aims to classify objects with subtle distinctions. Most existing works heavily
rely on object / part detectors to build the correspondence between object
parts by using object or object part annotations inside training images. The
need for expensive object annotations prevents the wide usage of these methods.
Instead, we propose to select useful parts from multi-scale part proposals in
objects, and use them to compute a global image representation for
categorization. This is specially designed for the annotation-free fine-grained
categorization task, because useful parts have been shown to play an important role
in existing annotation-dependent works but accurate part detectors can hardly
be acquired. With the proposed image representation, we can further detect
and visualize the key (most discriminative) parts in objects of different
classes. In the experiment, the proposed annotation-free method achieves better
accuracy than that of state-of-the-art annotation-free and most existing
annotation-dependent methods on two challenging datasets, which shows that it
is not always necessary to use accurate object / part annotations in
fine-grained image categorization.
| [
{
"version": "v1",
"created": "Mon, 20 Apr 2015 05:58:21 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Zhang",
"Yu",
""
],
[
"Wei",
"Xiu-shen",
""
],
[
"Wu",
"Jianxin",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Lu",
"Jiangbo",
""
],
[
"Nguyen",
"Viet-Anh",
""
],
[
"Do",
"Minh N.",
""
]
] | TITLE: Weakly Supervised Fine-Grained Image Categorization
ABSTRACT: In this paper, we categorize fine-grained images without using any object /
part annotation, either in the training or in the testing stage, a step
towards making it suitable for deployment. Fine-grained image categorization
aims to classify objects with subtle distinctions. Most existing works heavily
rely on object / part detectors to build the correspondence between object
parts by using object or object part annotations inside training images. The
need for expensive object annotations prevents the wide usage of these methods.
Instead, we propose to select useful parts from multi-scale part proposals in
objects, and use them to compute a global image representation for
categorization. This is specially designed for the annotation-free fine-grained
categorization task, because useful parts have been shown to play an important role
in existing annotation-dependent works but accurate part detectors can hardly
be acquired. With the proposed image representation, we can further detect
and visualize the key (most discriminative) parts in objects of different
classes. In the experiment, the proposed annotation-free method achieves better
accuracy than that of state-of-the-art annotation-free and most existing
annotation-dependent methods on two challenging datasets, which shows that it
is not always necessary to use accurate object / part annotations in
fine-grained image categorization.
| no_new_dataset | 0.949529 |
1505.04650 | Mariano Tepper | Mariano Tepper and Guillermo Sapiro | Compressed Nonnegative Matrix Factorization is Fast and Accurate | null | null | 10.1109/TSP.2016.2516971 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative matrix factorization (NMF) has an established reputation as a
useful data analysis technique in numerous applications. However, its usage in
practical situations has faced growing challenges in recent years. The fundamental
factor behind this is the ever-increasing size of the datasets available and
needed in the information sciences. To address this, in this work we propose to
use structured random compression, that is, random projections that exploit the
data structure, for two NMF variants: classical and separable. In separable NMF
(SNMF) the left factors are a subset of the columns of the input matrix. We
present suitable formulations for each problem, dealing with different
representative algorithms within each one. We show that the resulting
compressed techniques are faster than their uncompressed variants, vastly
reduce memory demands, and do not entail any significant deterioration in
performance. The proposed structured random projections for SNMF allow us to deal
with arbitrarily shaped large matrices, beyond the standard limit of
tall-and-skinny matrices, granting access to very efficient computations in
this general setting. We accompany the algorithmic presentation with
theoretical foundations and numerous and diverse examples, showing the
suitability of the proposed approaches.
| [
{
"version": "v1",
"created": "Mon, 18 May 2015 14:12:22 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Sep 2015 20:22:36 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Tepper",
"Mariano",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Compressed Nonnegative Matrix Factorization is Fast and Accurate
ABSTRACT: Nonnegative matrix factorization (NMF) has an established reputation as a
useful data analysis technique in numerous applications. However, its usage in
practical situations has faced growing challenges in recent years. The fundamental
factor behind this is the ever-increasing size of the datasets available and
needed in the information sciences. To address this, in this work we propose to
use structured random compression, that is, random projections that exploit the
data structure, for two NMF variants: classical and separable. In separable NMF
(SNMF) the left factors are a subset of the columns of the input matrix. We
present suitable formulations for each problem, dealing with different
representative algorithms within each one. We show that the resulting
compressed techniques are faster than their uncompressed variants, vastly
reduce memory demands, and do not entail any significant deterioration in
performance. The proposed structured random projections for SNMF allow us to deal
with arbitrarily shaped large matrices, beyond the standard limit of
tall-and-skinny matrices, granting access to very efficient computations in
this general setting. We accompany the algorithmic presentation with
theoretical foundations and numerous and diverse examples, showing the
suitability of the proposed approaches.
| no_new_dataset | 0.939913 |
1505.06821 | Shi-Zhe Chen | Shi-Zhe Chen, Chun-Chao Guo, Jian-Huang Lai | Deep Ranking for Person Re-identification via Joint Representation
Learning | 15 pages, 15 figures, IEEE Transactions on Image Processing (TIP),
2016 | null | 10.1109/TIP.2016.2545929 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel approach to person re-identification, a
fundamental task in distributed multi-camera surveillance systems. Although a
variety of powerful algorithms have been presented in the past few years, most
of them usually focus on designing hand-crafted features and learning metrics
either individually or sequentially. Different from previous works, we
formulate a unified deep ranking framework that jointly tackles both of these
key components to maximize their strengths. We start from the principle that
the correct match of the probe image should be positioned in the top rank
within the whole gallery set. An effective learning-to-rank algorithm is
proposed to minimize the cost corresponding to the ranking disorders of the
gallery. The ranking model is solved with a deep convolutional neural network
(CNN) that builds the relation between input image pairs and their similarity
scores through joint representation learning directly from raw image pixels.
The proposed framework allows us to get rid of feature engineering and does not
rely on any assumption. An extensive comparative evaluation is given,
demonstrating that our approach significantly outperforms all state-of-the-art
approaches, including both traditional and CNN-based methods on the challenging
VIPeR, CUHK-01 and CAVIAR4REID datasets. Additionally, our approach has better
ability to generalize across datasets without fine-tuning.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 06:35:46 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2016 03:37:36 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Chen",
"Shi-Zhe",
""
],
[
"Guo",
"Chun-Chao",
""
],
[
"Lai",
"Jian-Huang",
""
]
] | TITLE: Deep Ranking for Person Re-identification via Joint Representation
Learning
ABSTRACT: This paper proposes a novel approach to person re-identification, a
fundamental task in distributed multi-camera surveillance systems. Although a
variety of powerful algorithms have been presented in the past few years, most
of them usually focus on designing hand-crafted features and learning metrics
either individually or sequentially. Different from previous works, we
formulate a unified deep ranking framework that jointly tackles both of these
key components to maximize their strengths. We start from the principle that
the correct match of the probe image should be positioned in the top rank
within the whole gallery set. An effective learning-to-rank algorithm is
proposed to minimize the cost corresponding to the ranking disorders of the
gallery. The ranking model is solved with a deep convolutional neural network
(CNN) that builds the relation between input image pairs and their similarity
scores through joint representation learning directly from raw image pixels.
The proposed framework allows us to get rid of feature engineering and does not
rely on any assumption. An extensive comparative evaluation is given,
demonstrating that our approach significantly outperforms all state-of-the-art
approaches, including both traditional and CNN-based methods on the challenging
VIPeR, CUHK-01 and CAVIAR4REID datasets. Additionally, our approach has better
ability to generalize across datasets without fine-tuning.
| no_new_dataset | 0.943138 |
1509.06808 | Karthik Gangavarapu | Karthik Gangavarapu, Vyshakh Babji, Tobias Mei{\ss}ner, Andrew I. Su,
and Benjamin M. Good | Branch: An interactive, web-based tool for testing hypotheses and
developing predictive models | null | null | 10.1093/bioinformatics/btw117 | null | stat.AP cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | Branch is a web application that provides users with no programming experience the
ability to interact directly with large biomedical datasets. The interaction is
mediated through a collaborative graphical user interface for building and
evaluating decision trees. These trees can be used to compose and test
sophisticated hypotheses and to develop predictive models. Decision trees are
evaluated based on a library of imported datasets and can be stored in a
collective area for sharing and re-use. Branch is hosted at
http://biobranch.org/ and the open source code is available at
http://bitbucket.org/sulab/biobranch/.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 23:15:57 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Sep 2015 20:55:14 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Gangavarapu",
"Karthik",
""
],
[
"Babji",
"Vyshakh",
""
],
[
"Meißner",
"Tobias",
""
],
[
"Su",
"Andrew I.",
""
],
[
"Good",
"Benjamin M.",
""
]
] | TITLE: Branch: An interactive, web-based tool for testing hypotheses and
developing predictive models
ABSTRACT: Branch is a web application that provides users with no programming experience the
ability to interact directly with large biomedical datasets. The interaction is
mediated through a collaborative graphical user interface for building and
evaluating decision trees. These trees can be used to compose and test
sophisticated hypotheses and to develop predictive models. Decision trees are
evaluated based on a library of imported datasets and can be stored in a
collective area for sharing and re-use. Branch is hosted at
http://biobranch.org/ and the open source code is available at
http://bitbucket.org/sulab/biobranch/.
| no_new_dataset | 0.951051 |
1510.03283 | Weilin Huang | Tong He, Weilin Huang, Yu Qiao, and Jian Yao | Text-Attentional Convolutional Neural Networks for Scene Text Detection | To appear in IEEE Trans. on Image Processing, 2016 | null | 10.1109/TIP.2016.2547588 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent deep learning models have demonstrated strong capabilities for
classifying text and non-text components in natural images. They extract a
high-level feature computed globally from a whole image component (patch),
where the cluttered background information may dominate true text features in
the deep representation. This leads to less discriminative power and poorer
robustness. In this work, we present a new system for scene text detection by
proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that
particularly focuses on extracting text-related regions and features from the
image components. We develop a new learning mechanism to train the Text-CNN
with multi-level and rich supervised information, including text region mask,
character label, and binary text/nontext information. The rich supervision
information enables the Text-CNN with a strong capability for discriminating
ambiguous texts, and also increases its robustness against complicated
background components. The training process is formulated as a multi-task
learning problem, where low-level supervised information greatly facilitates
main task of text/non-text classification. In addition, a powerful low-level
detector called Contrast-Enhancement Maximally Stable Extremal Regions
(CE-MSERs) is developed, which extends the widely-used MSERs by enhancing
intensity contrast between text patterns and background. This allows it to
detect highly challenging text patterns, resulting in a higher recall. Our
approach achieved promising results on the ICDAR 2013 dataset, with a F-measure
of 0.82, improving the state-of-the-art results substantially.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 13:53:13 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2016 23:25:52 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"He",
"Tong",
""
],
[
"Huang",
"Weilin",
""
],
[
"Qiao",
"Yu",
""
],
[
"Yao",
"Jian",
""
]
] | TITLE: Text-Attentional Convolutional Neural Networks for Scene Text Detection
ABSTRACT: Recent deep learning models have demonstrated strong capabilities for
classifying text and non-text components in natural images. They extract a
high-level feature computed globally from a whole image component (patch),
where the cluttered background information may dominate true text features in
the deep representation. This leads to less discriminative power and poorer
robustness. In this work, we present a new system for scene text detection by
proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that
particularly focuses on extracting text-related regions and features from the
image components. We develop a new learning mechanism to train the Text-CNN
with multi-level and rich supervised information, including text region mask,
character label, and binary text/nontext information. The rich supervision
information enables the Text-CNN with a strong capability for discriminating
ambiguous texts, and also increases its robustness against complicated
background components. The training process is formulated as a multi-task
learning problem, where low-level supervised information greatly facilitates
main task of text/non-text classification. In addition, a powerful low-level
detector called Contrast-Enhancement Maximally Stable Extremal Regions
(CE-MSERs) is developed, which extends the widely-used MSERs by enhancing
intensity contrast between text patterns and background. This allows it to
detect highly challenging text patterns, resulting in a higher recall. Our
approach achieved promising results on the ICDAR 2013 dataset, with a F-measure
of 0.82, improving the state-of-the-art results substantially.
| no_new_dataset | 0.948917 |
1511.06036 | Masayuki Ohzeki | Masayuki Ohzeki | Stochastic gradient method with accelerated stochastic dynamics | 12 pages, proceedings for International Meeting on High-Dimensional
Data Driven Science (HD3-2015)
(http://www.sparse-modeling.jp/HD3-2015/index_e.html) | null | 10.1088/1742-6596/699/1/012019 | null | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel technique to implement stochastic gradient
methods, which are beneficial for learning from large datasets, through
accelerated stochastic dynamics. A stochastic gradient method is based on
mini-batch learning for reducing the computational cost when the amount of data
is large. The stochasticity of the gradient can be mitigated by the injection
of Gaussian noise, which yields the stochastic Langevin gradient method; this
method can be used for Bayesian posterior sampling. However, the performance of
the stochastic Langevin gradient method depends on the mixing rate of the
stochastic dynamics. In this study, we propose violating the detailed balance
condition to enhance the mixing rate. Recent studies have revealed that
violating the detailed balance condition accelerates the convergence to a
stationary state and reduces the correlation time between the samplings. We
implement this violation of the detailed balance condition in the stochastic
gradient Langevin method and test our method for a simple model to demonstrate
its performance.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 01:01:59 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Ohzeki",
"Masayuki",
""
]
] | TITLE: Stochastic gradient method with accelerated stochastic dynamics
ABSTRACT: In this paper, we propose a novel technique to implement stochastic gradient
methods, which are beneficial for learning from large datasets, through
accelerated stochastic dynamics. A stochastic gradient method is based on
mini-batch learning for reducing the computational cost when the amount of data
is large. The stochasticity of the gradient can be mitigated by the injection
of Gaussian noise, which yields the stochastic Langevin gradient method; this
method can be used for Bayesian posterior sampling. However, the performance of
the stochastic Langevin gradient method depends on the mixing rate of the
stochastic dynamics. In this study, we propose violating the detailed balance
condition to enhance the mixing rate. Recent studies have revealed that
violating the detailed balance condition accelerates the convergence to a
stationary state and reduces the correlation time between the samplings. We
implement this violation of the detailed balance condition in the stochastic
gradient Langevin method and test our method for a simple model to demonstrate
its performance.
| no_new_dataset | 0.950365 |
1601.01074 | Tomoyuki Obuchi | Tomoyuki Obuchi and Yoshiyuki Kabashima | Sparse approximation problem: how rapid simulated annealing succeeds and
fails | 12 pages, 7 figures, a proceedings of HD^3-2015 | null | 10.1088/1742-6596/699/1/012017 | null | cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information processing techniques based on sparseness have been actively
studied in several disciplines. Among them, a mathematical framework to
approximately express a given dataset by a combination of a small number of
basis vectors of an overcomplete basis is termed the {\em sparse
approximation}. In this paper, we apply simulated annealing, a metaheuristic
algorithm for general optimization problems, to sparse approximation in the
situation where the given data have a planted sparse representation and noise
is present. The result in the noiseless case shows that our simulated annealing
works well in a reasonable parameter region: the planted solution is found
fairly rapidly. This is true even in the case where a common relaxation of the
sparse approximation problem, the $\ell_1$-relaxation, is ineffective. On the
other hand, when the dimensionality of the data is close to the number of
non-zero components, another metastable state emerges, and our algorithm fails
to find the planted solution. This phenomenon is associated with a first-order
phase transition. In the case of very strong noise, it is no longer meaningful
to search for the planted solution. In this situation, our algorithm determines
a solution with close-to-minimum distortion fairly quickly.
| [
{
"version": "v1",
"created": "Wed, 6 Jan 2016 04:15:04 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2016 09:39:00 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Obuchi",
"Tomoyuki",
""
],
[
"Kabashima",
"Yoshiyuki",
""
]
] | TITLE: Sparse approximation problem: how rapid simulated annealing succeeds and
fails
ABSTRACT: Information processing techniques based on sparseness have been actively
studied in several disciplines. Among them, a mathematical framework to
approximately express a given dataset by a combination of a small number of
basis vectors of an overcomplete basis is termed the {\em sparse
approximation}. In this paper, we apply simulated annealing, a metaheuristic
algorithm for general optimization problems, to sparse approximation in the
situation where the given data have a planted sparse representation and noise
is present. The result in the noiseless case shows that our simulated annealing
works well in a reasonable parameter region: the planted solution is found
fairly rapidly. This is true even in the case where a common relaxation of the
sparse approximation problem, the $\ell_1$-relaxation, is ineffective. On the
other hand, when the dimensionality of the data is close to the number of
non-zero components, another metastable state emerges, and our algorithm fails
to find the planted solution. This phenomenon is associated with a first-order
phase transition. In the case of very strong noise, it is no longer meaningful
to search for the planted solution. In this situation, our algorithm determines
a solution with close-to-minimum distortion fairly quickly.
| no_new_dataset | 0.949529 |
1603.01942 | Xiaqing Pan | Xiaqing Pan, Sachin Chachada, C.-C. Jay Kuo | A Two-Stage Shape Retrieval (TSR) Method with Global and Local Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A robust two-stage shape retrieval (TSR) method is proposed to address the 2D
shape retrieval problem. Most state-of-the-art shape retrieval methods are
based on local features matching and ranking. Their retrieval performance is
not robust since they may retrieve globally dissimilar shapes in high ranks. To
overcome this challenge, we decompose the decision process into two stages. In
the first irrelevant cluster filtering (ICF) stage, we consider both global and
local features and use them to predict the relevance of gallery shapes with
respect to the query. Irrelevant shapes are removed from the candidate shape
set. After that, a local-features-based matching and ranking (LMR) method
follows in the second stage. We apply the proposed TSR system to three datasets,
MPEG-7, Kimia99 and Tari1000, and show that it outperforms all other
existing methods. The robust retrieval performance of the TSR system is
demonstrated.
| [
{
"version": "v1",
"created": "Mon, 7 Mar 2016 05:33:00 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2016 05:50:14 GMT"
},
{
"version": "v3",
"created": "Tue, 3 May 2016 04:22:41 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Pan",
"Xiaqing",
""
],
[
"Chachada",
"Sachin",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] | TITLE: A Two-Stage Shape Retrieval (TSR) Method with Global and Local Features
ABSTRACT: A robust two-stage shape retrieval (TSR) method is proposed to address the 2D
shape retrieval problem. Most state-of-the-art shape retrieval methods are
based on local features matching and ranking. Their retrieval performance is
not robust since they may retrieve globally dissimilar shapes in high ranks. To
overcome this challenge, we decompose the decision process into two stages. In
the first irrelevant cluster filtering (ICF) stage, we consider both global and
local features and use them to predict the relevance of gallery shapes with
respect to the query. Irrelevant shapes are removed from the candidate shape
set. After that, a local-features-based matching and ranking (LMR) method
follows in the second stage. We apply the proposed TSR system to three datasets,
MPEG-7, Kimia99 and Tari1000, and show that it outperforms all other
existing methods. The robust retrieval performance of the TSR system is
demonstrated.
| no_new_dataset | 0.94801 |
1603.03234 | Hanjiang Lai | Hanjiang Lai, Pan Yan, Xiangbo Shu, Yunchao Wei, Shuicheng Yan | Instance-Aware Hashing for Multi-Label Image Retrieval | has been accepted as a regular paper in the IEEE Transactions on
Image Processing, 2016 | null | 10.1109/TIP.2016.2545300 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity-preserving hashing is a commonly used method for nearest neighbour
search in large-scale image retrieval. For image retrieval, deep-networks-based
hashing methods are appealing since they can simultaneously learn effective
image representations and compact hash codes. This paper focuses on
deep-networks-based hashing for multi-label images, each of which may contain
objects of multiple categories. In most existing hashing methods, each image is
represented by one piece of hash code, which is referred to as semantic
hashing. This setting may be suboptimal for multi-label image retrieval. To
solve this problem, we propose a deep architecture that learns
\textbf{instance-aware} image representations for multi-label image data, which
are organized in multiple groups, with each group containing the features for
one category. The instance-aware representations not only bring advantages to
semantic hashing, but also can be used in category-aware hashing, in which an
image is represented by multiple pieces of hash codes and each piece of code
corresponds to a category. Extensive evaluations conducted on several benchmark
datasets demonstrate that, for both semantic hashing and category-aware
hashing, the proposed method shows substantial improvement over the
state-of-the-art supervised and unsupervised hashing methods.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2016 12:21:50 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Lai",
"Hanjiang",
""
],
[
"Yan",
"Pan",
""
],
[
"Shu",
"Xiangbo",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Instance-Aware Hashing for Multi-Label Image Retrieval
ABSTRACT: Similarity-preserving hashing is a commonly used method for nearest neighbour
search in large-scale image retrieval. For image retrieval, deep-networks-based
hashing methods are appealing since they can simultaneously learn effective
image representations and compact hash codes. This paper focuses on
deep-networks-based hashing for multi-label images, each of which may contain
objects of multiple categories. In most existing hashing methods, each image is
represented by one piece of hash code, which is referred to as semantic
hashing. This setting may be suboptimal for multi-label image retrieval. To
solve this problem, we propose a deep architecture that learns
\textbf{instance-aware} image representations for multi-label image data, which
are organized in multiple groups, with each group containing the features for
one category. The instance-aware representations not only bring advantages to
semantic hashing, but also can be used in category-aware hashing, in which an
image is represented by multiple pieces of hash codes and each piece of code
corresponds to a category. Extensive evaluations conducted on several benchmark
datasets demonstrate that, for both semantic hashing and category-aware
hashing, the proposed method shows substantial improvement over the
state-of-the-art supervised and unsupervised hashing methods.
| no_new_dataset | 0.950365 |
1603.05335 | Delu Zeng | Tong Zhao, Lin Li, Xinghao Ding, Yue Huang and Delu Zeng | Saliency Detection with Spaces of Background-based Distribution | 5 pages, 6 figures, Accepted by IEEE Signal Processing Letters in
March 2016 | null | 10.1109/LSP.2016.2544781 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this letter, an effective image saliency detection method is proposed by
constructing some novel spaces to model the background and redefine the
distance of the salient patches away from the background. Concretely, given the
backgroundness prior, eigendecomposition is utilized to create four spaces of
background-based distribution (SBD) to model the background, in which a more
appropriate metric (Mahalanobis distance) is quoted to delicately measure the
saliency of every image patch away from the background. After that, a coarse
saliency map is obtained by integrating the four adjusted Mahalanobis distance
maps, each of which is formed by the distances between all the patches and
background in the corresponding SBD. To be more discriminative, the coarse
saliency map is further enhanced into the posterior probability map within
Bayesian perspective. Finally, the final saliency map is generated by properly
refining the posterior probability map with geodesic distance. Experimental
results on two usual datasets show that the proposed method is effective
compared with the state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 02:18:30 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Zhao",
"Tong",
""
],
[
"Li",
"Lin",
""
],
[
"Ding",
"Xinghao",
""
],
[
"Huang",
"Yue",
""
],
[
"Zeng",
"Delu",
""
]
] | TITLE: Saliency Detection with Spaces of Background-based Distribution
ABSTRACT: In this letter, an effective image saliency detection method is proposed by
constructing some novel spaces to model the background and redefine the
distance of the salient patches away from the background. Concretely, given the
backgroundness prior, eigendecomposition is utilized to create four spaces of
background-based distribution (SBD) to model the background, in which a more
appropriate metric (Mahalanobis distance) is quoted to delicately measure the
saliency of every image patch away from the background. After that, a coarse
saliency map is obtained by integrating the four adjusted Mahalanobis distance
maps, each of which is formed by the distances between all the patches and
background in the corresponding SBD. To be more discriminative, the coarse
saliency map is further enhanced into the posterior probability map within
Bayesian perspective. Finally, the final saliency map is generated by properly
refining the posterior probability map with geodesic distance. Experimental
results on two usual datasets show that the proposed method is effective
compared with the state-of-the-art algorithms.
| no_new_dataset | 0.948058 |
1605.00017 | Seunghyun Park | Seunghyun Park, Seonwoo Min, Hyunsoo Choi, and Sungroh Yoon | deepMiRGene: Deep Neural Network based Precursor microRNA Prediction | null | null | null | null | cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since microRNAs (miRNAs) play a crucial role in post-transcriptional gene
regulation, miRNA identification is one of the most essential problems in
computational biology. miRNAs are usually short in length ranging between 20
and 23 base pairs. It is thus often difficult to distinguish miRNA-encoding
sequences from other non-coding RNAs and pseudo miRNAs that have a similar
length, and most previous studies have recommended using precursor miRNAs
instead of mature miRNAs for robust detection. A great number of conventional
machine-learning-based classification methods have been proposed, but they
often have the serious disadvantage of requiring manual feature engineering,
and their performance is limited as well. In this paper, we propose a novel
miRNA precursor prediction algorithm, deepMiRGene, based on recurrent neural
networks, specifically long short-term memory networks. deepMiRGene
automatically learns suitable features from the data themselves without manual
feature engineering and constructs a model that can successfully reflect
structural characteristics of precursor miRNAs. For the performance evaluation
of our approach, we have employed several widely used evaluation metrics on
three recent benchmark datasets and verified that deepMiRGene delivered
comparable performance among the current state-of-the-art tools.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 20:12:04 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Park",
"Seunghyun",
""
],
[
"Min",
"Seonwoo",
""
],
[
"Choi",
"Hyunsoo",
""
],
[
"Yoon",
"Sungroh",
""
]
] | TITLE: deepMiRGene: Deep Neural Network based Precursor microRNA Prediction
ABSTRACT: Since microRNAs (miRNAs) play a crucial role in post-transcriptional gene
regulation, miRNA identification is one of the most essential problems in
computational biology. miRNAs are usually short in length ranging between 20
and 23 base pairs. It is thus often difficult to distinguish miRNA-encoding
sequences from other non-coding RNAs and pseudo miRNAs that have a similar
length, and most previous studies have recommended using precursor miRNAs
instead of mature miRNAs for robust detection. A great number of conventional
machine-learning-based classification methods have been proposed, but they
often have the serious disadvantage of requiring manual feature engineering,
and their performance is limited as well. In this paper, we propose a novel
miRNA precursor prediction algorithm, deepMiRGene, based on recurrent neural
networks, specifically long short-term memory networks. deepMiRGene
automatically learns suitable features from the data themselves without manual
feature engineering and constructs a model that can successfully reflect
structural characteristics of precursor miRNAs. For the performance evaluation
of our approach, we have employed several widely used evaluation metrics on
three recent benchmark datasets and verified that deepMiRGene delivered
comparable performance among the current state-of-the-art tools.
| no_new_dataset | 0.94801 |
1605.00448 | Sihyun Jeong | Sihyun Jeong, Giseop Noh, Hayoung Oh, Chong-kwon Kim | Follow Spam Detection based on Cascaded Social Information | 34 pages,10 figures, Preprint submitted to Elsevier Information
Sciences | null | null | null | cs.SI cs.IR | http://creativecommons.org/licenses/by/4.0/ | In the last decade we have witnessed the explosive growth of online social
networking services (SNSs) such as Facebook, Twitter, RenRen and LinkedIn.
While SNSs provide diverse benefits, for example fostering interpersonal
relationships, community formations and news propagation, they have also attracted
uninvited nuisances. Spammers abuse SNSs as vehicles to spread spams rapidly and
widely. Spams, unsolicited or inappropriate messages, significantly impair the
credibility and reliability of services. Therefore, detecting spammers has
become an urgent and critical issue in SNSs. This paper deals with Follow spam
in Twitter. Instead of spreading annoying messages to the public, a spammer
follows (subscribes to) legitimate users, and is followed back by legitimate users. Based
on the assumption that the online relationships of spammers are different from
those of legitimate users, we proposed classification schemes that detect
follow spammers. Particularly, we focused on cascaded social relations and
devised two schemes, TSP-Filtering and SS-Filtering, each of which utilizes
Triad Significance Profile (TSP) and Social status (SS) in a two-hop subnetwork
centered at each user. We also propose an ensemble technique,
Cascaded-Filtering, that combines both TSP and SS properties. Our experiments on
real Twitter datasets demonstrated that the proposed three approaches are very
practical. The proposed schemes are scalable because instead of analyzing the
whole network, they inspect user-centered two hop social networks. Our
performance study showed that proposed methods yield significantly better
performance than prior scheme in terms of true positives and false positives.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 11:58:51 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Jeong",
"Sihyun",
""
],
[
"Noh",
"Giseop",
""
],
[
"Oh",
"Hayoung",
""
],
[
"Kim",
"Chong-kwon",
""
]
] | TITLE: Follow Spam Detection based on Cascaded Social Information
ABSTRACT: In the last decade we have witnessed the explosive growth of online social
networking services (SNSs) such as Facebook, Twitter, RenRen and LinkedIn.
While SNSs provide diverse benefits, for example fostering interpersonal
relationships, community formations and news propagation, they have also attracted
uninvited nuisances. Spammers abuse SNSs as vehicles to spread spams rapidly and
widely. Spams, unsolicited or inappropriate messages, significantly impair the
credibility and reliability of services. Therefore, detecting spammers has
become an urgent and critical issue in SNSs. This paper deals with Follow spam
in Twitter. Instead of spreading annoying messages to the public, a spammer
follows (subscribes to) legitimate users, and is followed back by legitimate users. Based
on the assumption that the online relationships of spammers are different from
those of legitimate users, we proposed classification schemes that detect
follow spammers. Particularly, we focused on cascaded social relations and
devised two schemes, TSP-Filtering and SS-Filtering, each of which utilizes
Triad Significance Profile (TSP) and Social status (SS) in a two-hop subnetwork
centered at each user. We also propose an ensemble technique,
Cascaded-Filtering, that combines both TSP and SS properties. Our experiments on
real Twitter datasets demonstrated that the proposed three approaches are very
practical. The proposed schemes are scalable because instead of analyzing the
whole network, they inspect user-centered two hop social networks. Our
performance study showed that proposed methods yield significantly better
performance than prior scheme in terms of true positives and false positives.
| no_new_dataset | 0.950824 |
1605.00707 | Mikhail Breslav | Mikhail Breslav, Tyson L. Hedrick, Stan Sclaroff, Margrit Betke | Discovering Useful Parts for Pose Estimation in Sparsely Annotated
Datasets | Accepted at WACV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our work introduces a novel way to increase pose estimation accuracy by
discovering parts from unannotated regions of training images. Discovered parts
are used to generate more accurate appearance likelihoods for traditional
part-based models like Pictorial Structures [13] and its derivatives. Our
experiments on images of a hawkmoth in flight show that our proposed approach
significantly improves over existing work [27] for this application, while also
being more generally applicable. Our proposed approach localizes landmarks at
least twice as accurately as a baseline based on a Mixture of Pictorial
Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset
is made publicly available with annotations.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 23:37:11 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Breslav",
"Mikhail",
""
],
[
"Hedrick",
"Tyson L.",
""
],
[
"Sclaroff",
"Stan",
""
],
[
"Betke",
"Margrit",
""
]
] | TITLE: Discovering Useful Parts for Pose Estimation in Sparsely Annotated
Datasets
ABSTRACT: Our work introduces a novel way to increase pose estimation accuracy by
discovering parts from unannotated regions of training images. Discovered parts
are used to generate more accurate appearance likelihoods for traditional
part-based models like Pictorial Structures [13] and its derivatives. Our
experiments on images of a hawkmoth in flight show that our proposed approach
significantly improves over existing work [27] for this application, while also
being more generally applicable. Our proposed approach localizes landmarks at
least twice as accurately as a baseline based on a Mixture of Pictorial
Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset
is made publicly available with annotations.
| new_dataset | 0.951323 |
1605.00743 | Chuang Gan | Chuang Gan, Tianbao Yang, Boqing Gong | Learning Attributes Equals Multi-Source Domain Generalization | Accepted by CVPR 2016 as a spotlight presentation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attributes possess appealing properties and benefit many computer vision
problems, such as object recognition, learning with humans in the loop, and
image retrieval. Whereas the existing work mainly pursues utilizing attributes
for various computer vision problems, we contend that the most basic
problem---how to accurately and robustly detect attributes from images---has
been left under explored. Especially, the existing work rarely explicitly
tackles the need that attribute detectors should generalize well across
different categories, including those previously unseen. Noting that this is
analogous to the objective of multi-source domain generalization, if we treat
each category as a domain, we provide a novel perspective to attribute
detection and propose to gear the techniques in multi-source domain
generalization for the purpose of learning cross-category generalizable
attribute detectors. We validate our understanding and approach with extensive
experiments on four challenging datasets and three different problems.
| [
{
"version": "v1",
"created": "Tue, 3 May 2016 03:09:22 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Gan",
"Chuang",
""
],
[
"Yang",
"Tianbao",
""
],
[
"Gong",
"Boqing",
""
]
] | TITLE: Learning Attributes Equals Multi-Source Domain Generalization
ABSTRACT: Attributes possess appealing properties and benefit many computer vision
problems, such as object recognition, learning with humans in the loop, and
image retrieval. Whereas the existing work mainly pursues utilizing attributes
for various computer vision problems, we contend that the most basic
problem---how to accurately and robustly detect attributes from images---has
been left under explored. Especially, the existing work rarely explicitly
tackles the need that attribute detectors should generalize well across
different categories, including those previously unseen. Noting that this is
analogous to the objective of multi-source domain generalization, if we treat
each category as a domain, we provide a novel perspective to attribute
detection and propose to gear the techniques in multi-source domain
generalization for the purpose of learning cross-category generalizable
attribute detectors. We validate our understanding and approach with extensive
experiments on four challenging datasets and three different problems.
| no_new_dataset | 0.945701 |
1605.00957 | Marco Bertini | Andrea Salvi, Simone Ercoli, Marco Bertini and Alberto Del Bimbo | Bloom Filters and Compact Hash Codes for Efficient and Distributed Image
Retrieval | null | null | null | null | cs.MM cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel method for efficient image retrieval, based on a
simple and effective hashing of CNN features and the use of an indexing
structure based on Bloom filters. These filters are used as gatekeepers for the
database of image features, allowing the system to avoid performing a query if the query
features are not stored in the database, and speeding up the query process,
without affecting retrieval performance. Thanks to the limited memory
requirements the system is suitable for mobile applications and distributed
databases, associating each filter to a distributed portion of the database.
Experimental validation has been performed on three standard image retrieval
datasets, outperforming state-of-the-art hashing methods in terms of precision,
while the proposed indexing method obtains a $2\times$ speedup.
| [
{
"version": "v1",
"created": "Tue, 3 May 2016 15:50:54 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"Salvi",
"Andrea",
""
],
[
"Ercoli",
"Simone",
""
],
[
"Bertini",
"Marco",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] | TITLE: Bloom Filters and Compact Hash Codes for Efficient and Distributed Image
Retrieval
ABSTRACT: This paper presents a novel method for efficient image retrieval, based on a
simple and effective hashing of CNN features and the use of an indexing
structure based on Bloom filters. These filters are used as gatekeepers for the
database of image features, allowing the system to avoid performing a query if the query
features are not stored in the database, and speeding up the query process,
without affecting retrieval performance. Thanks to the limited memory
requirements the system is suitable for mobile applications and distributed
databases, associating each filter to a distributed portion of the database.
Experimental validation has been performed on three standard image retrieval
datasets, outperforming state-of-the-art hashing methods in terms of precision,
while the proposed indexing method obtains a $2\times$ speedup.
| no_new_dataset | 0.946892 |
1605.01010 | UshaRani Yelipe | Yelipe UshaRani, P. Sammulal | A Novel Approach for Imputation of Missing Attribute Values for
Efficient Mining of Medical Datasets - Class Based Cluster Approach | Journal Published by University of Zulia, Venezuela and Indexed by
Web of Science and Scopus , H.index-5, SJR 0.11 (2014 Elsevier SJR Report),
12 Pages | Revista Tecnica de la Facultad de Ingeniera, Vol. 39, No 2, 184 -
195, 2016 | null | null | cs.IR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Missing attribute values are quite common in the datasets available in the
literature. Missing values are also possible because all attribute values may
not be recorded and hence are unavailable due to several practical reasons. For all
these reasons, one must fix missing attribute values if the analysis is to be done.
Imputation is the first step in analyzing medical datasets. Hence this has
achieved significant contribution from several medical domain researchers.
Several data mining researchers have proposed various methods and approaches to
impute missing values. However very few of them concentrate on dimensionality
reduction. In this paper, we discuss a novel imputation framework for missing
values. Our approach to filling missing values is rooted in a class-based
clustering approach and essentially aims at dimensionality reduction of medical
records. We use these dimensionality-reduced records for carrying out prediction
and classification analysis. A case study is discussed which shows how imputation
is performed using the proposed method.
| [
{
"version": "v1",
"created": "Tue, 3 May 2016 18:18:57 GMT"
}
] | 2016-05-04T00:00:00 | [
[
"UshaRani",
"Yelipe",
""
],
[
"Sammulal",
"P.",
""
]
] | TITLE: A Novel Approach for Imputation of Missing Attribute Values for
Efficient Mining of Medical Datasets - Class Based Cluster Approach
ABSTRACT: Missing attribute values are quite common in the datasets available in the
literature. Missing values are also possible because all attribute values may
not be recorded and hence are unavailable due to several practical reasons. For all
these reasons, one must fix missing attribute values if the analysis is to be done.
Imputation is the first step in analyzing medical datasets. Hence this has
achieved significant contribution from several medical domain researchers.
Several data mining researchers have proposed various methods and approaches to
impute missing values. However very few of them concentrate on dimensionality
reduction. In this paper, we discuss a novel imputation framework for missing
values. Our approach to filling missing values is rooted in a class-based
clustering approach and essentially aims at dimensionality reduction of medical
records. We use these dimensionality-reduced records for carrying out prediction
and classification analysis. A case study is discussed which shows how imputation
is performed using the proposed method.
| no_new_dataset | 0.947817 |
1412.3773 | Joris Mooij | Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler,
Bernhard Sch\"olkopf | Distinguishing cause from effect using observational data: methods and
benchmarks | 101 pages, second revision submitted to Journal of Machine Learning
Research | Journal of Machine Learning Research 17(32):1-102, 2016 | null | null | cs.LG cs.AI stat.ML stat.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of causal relationships from purely observational data is a
fundamental problem in science. The most elementary form of such a causal
discovery problem is to decide whether X causes Y or, alternatively, Y causes
X, given joint observations of two variables X, Y. An example is to decide
whether altitude causes temperature, or vice versa, given only joint
measurements of both variables. Even under the simplifying assumptions of no
confounding, no feedback loops, and no selection bias, such bivariate causal
discovery problems are challenging. Nevertheless, several approaches for
addressing those problems have been proposed in recent years. We review two
families of such methods: Additive Noise Methods (ANM) and Information
Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs
that consists of data for 100 different cause-effect pairs selected from 37
datasets from various domains (e.g., meteorology, biology, medicine,
engineering, economy, etc.) and motivate our decisions regarding the "ground
truth" causal directions of all pairs. We evaluate the performance of several
bivariate causal discovery methods on these real-world benchmark data and in
addition on artificially simulated data. Our empirical results on real-world
data indicate that certain methods are indeed able to distinguish cause from
effect using only purely observational data, although more benchmark data would
be needed to obtain statistically significant conclusions. One of the best
performing methods overall is the additive-noise method originally proposed by
Hoyer et al. (2009), which obtains an accuracy of 63+-10 % and an AUC of
0.74+-0.05 on the real-world benchmark. As the main theoretical contribution of
this work we prove the consistency of that method.
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2014 19:34:39 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jul 2015 14:51:36 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Dec 2015 11:37:57 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Mooij",
"Joris M.",
""
],
[
"Peters",
"Jonas",
""
],
[
"Janzing",
"Dominik",
""
],
[
"Zscheischler",
"Jakob",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] | TITLE: Distinguishing cause from effect using observational data: methods and
benchmarks
ABSTRACT: The discovery of causal relationships from purely observational data is a
fundamental problem in science. The most elementary form of such a causal
discovery problem is to decide whether X causes Y or, alternatively, Y causes
X, given joint observations of two variables X, Y. An example is to decide
whether altitude causes temperature, or vice versa, given only joint
measurements of both variables. Even under the simplifying assumptions of no
confounding, no feedback loops, and no selection bias, such bivariate causal
discovery problems are challenging. Nevertheless, several approaches for
addressing those problems have been proposed in recent years. We review two
families of such methods: Additive Noise Methods (ANM) and Information
Geometric Causal Inference (IGCI). We present the benchmark CauseEffectPairs
that consists of data for 100 different cause-effect pairs selected from 37
datasets from various domains (e.g., meteorology, biology, medicine,
engineering, economy, etc.) and motivate our decisions regarding the "ground
truth" causal directions of all pairs. We evaluate the performance of several
bivariate causal discovery methods on these real-world benchmark data and in
addition on artificially simulated data. Our empirical results on real-world
data indicate that certain methods are indeed able to distinguish cause from
effect using only purely observational data, although more benchmark data would
be needed to obtain statistically significant conclusions. One of the best
performing methods overall is the additive-noise method originally proposed by
Hoyer et al. (2009), which obtains an accuracy of 63+-10 % and an AUC of
0.74+-0.05 on the real-world benchmark. As the main theoretical contribution of
this work we prove the consistency of that method.
| no_new_dataset | 0.940298 |
1503.02619 | Dmytro Mishkin | Dmytro Mishkin, Jiri Matas, Michal Perdoch | MODS: Fast and Robust Method for Two-View Matching | Version accepted to CVIU. arXiv admin note: text overlap with
arXiv:1306.3855 | null | 10.1016/j.cviu.2015.08.005 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel algorithm for wide-baseline matching called MODS - Matching On Demand
with view Synthesis - is presented. The MODS algorithm is experimentally shown
to solve a broader range of wide-baseline problems than the state of the art
while being nearly as fast as standard matchers on simple problems. The
apparent robustness vs. speed trade-off is finessed by the use of progressively
more time-consuming feature detectors and by on-demand generation of
synthesized images that is performed until a reliable estimate of geometry is
obtained.
We introduce an improved method for tentative correspondence selection,
applicable both with and without view synthesis. A modification of the standard
first to second nearest distance rule increases the number of correct matches
by 5-20% at no additional computational cost.
Performance of the MODS algorithm is evaluated on several standard publicly
available datasets, and on a new set of geometrically challenging wide baseline
problems that is made public together with the ground truth. Experiments show
that the MODS outperforms the state-of-the-art in robustness and speed.
Moreover, MODS performs well on other classes of difficult two-view problems
like matching of images from different modalities, with wide temporal baseline
or with significant lighting changes.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2015 18:59:18 GMT"
},
{
"version": "v2",
"created": "Sun, 1 May 2016 14:44:35 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Mishkin",
"Dmytro",
""
],
[
"Matas",
"Jiri",
""
],
[
"Perdoch",
"Michal",
""
]
] | TITLE: MODS: Fast and Robust Method for Two-View Matching
ABSTRACT: A novel algorithm for wide-baseline matching called MODS - Matching On Demand
with view Synthesis - is presented. The MODS algorithm is experimentally shown
to solve a broader range of wide-baseline problems than the state of the art
while being nearly as fast as standard matchers on simple problems. The
apparent robustness vs. speed trade-off is finessed by the use of progressively
more time-consuming feature detectors and by on-demand generation of
synthesized images that is performed until a reliable estimate of geometry is
obtained.
We introduce an improved method for tentative correspondence selection,
applicable both with and without view synthesis. A modification of the standard
first to second nearest distance rule increases the number of correct matches
by 5-20% at no additional computational cost.
Performance of the MODS algorithm is evaluated on several standard publicly
available datasets, and on a new set of geometrically challenging wide baseline
problems that is made public together with the ground truth. Experiments show
that the MODS outperforms the state-of-the-art in robustness and speed.
Moreover, MODS performs well on other classes of difficult two-view problems
like matching of images from different modalities, with wide temporal baseline
or with significant lighting changes.
| no_new_dataset | 0.939359 |
1504.06779 | Emerson Machado | Emerson Lopes Machado, Cristiano Jacques Miosso, Ricardo von Borries,
Murilo Coutinho, Pedro de Azevedo Berger, Thiago Marques, Ricardo Pezzuol
Jacobi | Computational Cost Reduction in Learned Transform Classifications | null | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a theoretical analysis and empirical evaluations of a novel set of
techniques for computational cost reduction of classifiers that are based on
learned transform and soft-threshold. By modifying optimization procedures for
dictionary and classifier training, as well as the resulting dictionary
entries, our techniques allow us to reduce the bit precision and to replace each
floating-point multiplication by a single integer bit shift. We also show how
the optimization algorithms in some dictionary training methods can be modified
to penalize higher-energy dictionaries. We applied our techniques with the
classifier Learning Algorithm for Soft-Thresholding, testing on the datasets
used in its original paper. Our results indicate it is feasible to use solely
sums and bit shifts of integers to classify at test time with a limited
reduction of the classification accuracy. These low-power operations are a
valuable trade-off in FPGA implementations, as they increase the classification
throughput while decreasing both energy consumption and manufacturing cost.
| [
{
"version": "v1",
"created": "Sun, 26 Apr 2015 01:16:44 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Apr 2016 15:03:29 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Machado",
"Emerson Lopes",
""
],
[
"Miosso",
"Cristiano Jacques",
""
],
[
"von Borries",
"Ricardo",
""
],
[
"Coutinho",
"Murilo",
""
],
[
"Berger",
"Pedro de Azevedo",
""
],
[
"Marques",
"Thiago",
""
],
[
"Jacobi",
"Ricardo Pezzuol",
""
]
] | TITLE: Computational Cost Reduction in Learned Transform Classifications
ABSTRACT: We present a theoretical analysis and empirical evaluations of a novel set of
techniques for computational cost reduction of classifiers that are based on
learned transform and soft-threshold. By modifying optimization procedures for
dictionary and classifier training, as well as the resulting dictionary
entries, our techniques allow us to reduce the bit precision and to replace each
floating-point multiplication by a single integer bit shift. We also show how
the optimization algorithms in some dictionary training methods can be modified
to penalize higher-energy dictionaries. We applied our techniques with the
classifier Learning Algorithm for Soft-Thresholding, testing on the datasets
used in its original paper. Our results indicate it is feasible to use solely
sums and bit shifts of integers to classify at test time with a limited
reduction of the classification accuracy. These low-power operations are a
valuable trade-off in FPGA implementations, as they increase the classification
throughput while decreasing both energy consumption and manufacturing cost.
| no_new_dataset | 0.946597 |
1511.05622 | Yann Dauphin | Yann N. Dauphin, David Grangier | Predicting distributions with Linearizing Belief Networks | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional belief networks introduce stochastic binary variables in neural
networks. Contrary to a classical neural network, a belief network can predict
more than the expected value of the output $Y$ given the input $X$. It can
predict a distribution of outputs $Y$ which is useful when an input can admit
multiple outputs whose average is not necessarily a valid answer. Such networks
are particularly relevant to inverse problems such as image prediction for
denoising, or text to speech. However, traditional sigmoid belief networks are
hard to train and are not suited to continuous problems. This work introduces a
new family of networks called linearizing belief nets or LBNs. A LBN decomposes
into a deep linear network where each linear unit can be turned on or off by
non-deterministic binary latent units. It is a universal approximator of
real-valued conditional distributions and can be trained using gradient
descent. Moreover, the linear pathways efficiently propagate continuous
information and they act as multiplicative skip-connections that help
optimization by removing gradient diffusion. This yields a model which trains
efficiently and improves the state-of-the-art on image denoising and facial
expression generation with the Toronto faces dataset.
| [
{
"version": "v1",
"created": "Tue, 17 Nov 2015 23:50:35 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 00:40:38 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Nov 2015 01:45:01 GMT"
},
{
"version": "v4",
"created": "Mon, 2 May 2016 03:22:01 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Dauphin",
"Yann N.",
""
],
[
"Grangier",
"David",
""
]
] | TITLE: Predicting distributions with Linearizing Belief Networks
ABSTRACT: Conditional belief networks introduce stochastic binary variables in neural
networks. Contrary to a classical neural network, a belief network can predict
more than the expected value of the output $Y$ given the input $X$. It can
predict a distribution of outputs $Y$ which is useful when an input can admit
multiple outputs whose average is not necessarily a valid answer. Such networks
are particularly relevant to inverse problems such as image prediction for
denoising, or text to speech. However, traditional sigmoid belief networks are
hard to train and are not suited to continuous problems. This work introduces a
new family of networks called linearizing belief nets or LBNs. A LBN decomposes
into a deep linear network where each linear unit can be turned on or off by
non-deterministic binary latent units. It is a universal approximator of
real-valued conditional distributions and can be trained using gradient
descent. Moreover, the linear pathways efficiently propagate continuous
information and they act as multiplicative skip-connections that help
optimization by removing gradient diffusion. This yields a model which trains
efficiently and improves the state-of-the-art on image denoising and facial
expression generation with the Toronto faces dataset.
| no_new_dataset | 0.943919 |
1604.02917 | Stefanos Eleftheriadis | Stefanos Eleftheriadis and Ognjen Rudovic and Marc P. Deisenroth and
Maja Pantic | Gaussian Process Domain Experts for Model Adaptation in Facial Behavior
Analysis | null | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach for supervised domain adaptation that is based
upon the probabilistic framework of Gaussian processes (GPs). Specifically, we
introduce domain-specific GPs as local experts for facial expression
classification from face images. The adaptation of the classifier is
facilitated in probabilistic fashion by conditioning the target expert on
multiple source experts. Furthermore, in contrast to existing adaptation
approaches, we also learn a target expert from available target data solely.
Then, a single and confident classifier is obtained by combining the
predictions from multiple experts based on their confidence. Learning of the
model is efficient and requires no retraining/reweighting of the source
classifiers. We evaluate the proposed approach on two publicly available
datasets for multi-class (MultiPIE) and multi-label (DISFA) facial expression
classification. To this end, we perform adaptation of two contextual factors:
'where' (view) and 'who' (subject). We show in our experiments that the
proposed approach consistently outperforms both source and target classifiers,
while using as few as 30 target examples. It also outperforms the
state-of-the-art approaches for supervised domain adaptation.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 12:37:36 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2016 18:54:08 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Eleftheriadis",
"Stefanos",
""
],
[
"Rudovic",
"Ognjen",
""
],
[
"Deisenroth",
"Marc P.",
""
],
[
"Pantic",
"Maja",
""
]
] | TITLE: Gaussian Process Domain Experts for Model Adaptation in Facial Behavior
Analysis
ABSTRACT: We present a novel approach for supervised domain adaptation that is based
upon the probabilistic framework of Gaussian processes (GPs). Specifically, we
introduce domain-specific GPs as local experts for facial expression
classification from face images. The adaptation of the classifier is
facilitated in probabilistic fashion by conditioning the target expert on
multiple source experts. Furthermore, in contrast to existing adaptation
approaches, we also learn a target expert from available target data solely.
Then, a single and confident classifier is obtained by combining the
predictions from multiple experts based on their confidence. Learning of the
model is efficient and requires no retraining/reweighting of the source
classifiers. We evaluate the proposed approach on two publicly available
datasets for multi-class (MultiPIE) and multi-label (DISFA) facial expression
classification. To this end, we perform adaptation of two contextual factors:
'where' (view) and 'who' (subject). We show in our experiments that the
proposed approach consistently outperforms both source and target classifiers,
while using as few as 30 target examples. It also outperforms the
state-of-the-art approaches for supervised domain adaptation.
| no_new_dataset | 0.946051 |
1605.00029 | Lisa Koch | Lisa M. Koch, Martin Rajchl, Wenjia Bai, Christian F. Baumgartner, Tong
Tong, Jonathan Passerat-Palmbach, Paul Aljabar, Daniel Rueckert | Multi-Atlas Segmentation using Partially Annotated Data: Methods and
Annotation Strategies | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-atlas segmentation is a widely used tool in medical image analysis,
providing robust and accurate results by learning from annotated atlas
datasets. However, the availability of fully annotated atlas images for
training is limited due to the time required for the labelling task.
Segmentation methods requiring only a proportion of each atlas image to be
labelled could therefore reduce the workload on expert raters tasked with
annotating atlas images. To address this issue, we first re-examine the
labelling problem common in many existing approaches and formulate its solution
in terms of a Markov Random Field energy minimisation problem on a graph
connecting atlases and the target image. This provides a unifying framework for
multi-atlas segmentation. We then show how modifications in the graph
configuration of the proposed framework enable the use of partially annotated
atlas images and investigate different partial annotation strategies. The
proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets
for hippocampal and cardiac segmentation. Experiments were performed aimed at
(1) recreating existing segmentation techniques with the proposed framework and
(2) demonstrating the potential of employing sparsely annotated atlas data for
multi-atlas segmentation.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 21:34:29 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Koch",
"Lisa M.",
""
],
[
"Rajchl",
"Martin",
""
],
[
"Bai",
"Wenjia",
""
],
[
"Baumgartner",
"Christian F.",
""
],
[
"Tong",
"Tong",
""
],
[
"Passerat-Palmbach",
"Jonathan",
""
],
[
"Aljabar",
"Paul",
""
],
[
"Rueckert",
"Daniel",
""
]
] | TITLE: Multi-Atlas Segmentation using Partially Annotated Data: Methods and
Annotation Strategies
ABSTRACT: Multi-atlas segmentation is a widely used tool in medical image analysis,
providing robust and accurate results by learning from annotated atlas
datasets. However, the availability of fully annotated atlas images for
training is limited due to the time required for the labelling task.
Segmentation methods requiring only a proportion of each atlas image to be
labelled could therefore reduce the workload on expert raters tasked with
annotating atlas images. To address this issue, we first re-examine the
labelling problem common in many existing approaches and formulate its solution
in terms of a Markov Random Field energy minimisation problem on a graph
connecting atlases and the target image. This provides a unifying framework for
multi-atlas segmentation. We then show how modifications in the graph
configuration of the proposed framework enable the use of partially annotated
atlas images and investigate different partial annotation strategies. The
proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets
for hippocampal and cardiac segmentation. Experiments were performed aimed at
(1) recreating existing segmentation techniques with the proposed framework and
(2) demonstrating the potential of employing sparsely annotated atlas data for
multi-atlas segmentation.
| no_new_dataset | 0.951594 |
1605.00052 | Lingxi Xie | Lingxi Xie, Liang Zheng, Jingdong Wang, Alan Yuille, Qi Tian | InterActive: Inter-Layer Activeness Propagation | To appear, in CVPR 2016 (10 pages, 3 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number of computer vision tasks can be tackled with deep
features, which are the intermediate outputs of a pre-trained Convolutional
Neural Network. Despite the astonishing performance, deep features extracted
from low-level neurons are still unsatisfactory, arguably because they
cannot access the spatial context contained in the higher layers. In this
paper, we present InterActive, a novel algorithm which computes the activeness
of neurons and network connections. Activeness is propagated through a neural
network in a top-down manner, carrying high-level context and improving the
descriptive power of low-level and mid-level neurons. Visualization indicates
that neuron activeness can be interpreted as spatial-weighted neuron responses.
We achieve state-of-the-art classification performance on a wide range of image
datasets.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2016 02:28:11 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Xie",
"Lingxi",
""
],
[
"Zheng",
"Liang",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Yuille",
"Alan",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: InterActive: Inter-Layer Activeness Propagation
ABSTRACT: An increasing number of computer vision tasks can be tackled with deep
features, which are the intermediate outputs of a pre-trained Convolutional
Neural Network. Despite the astonishing performance, deep features extracted
from low-level neurons are still unsatisfactory, arguably because they
cannot access the spatial context contained in the higher layers. In this
paper, we present InterActive, a novel algorithm which computes the activeness
of neurons and network connections. Activeness is propagated through a neural
network in a top-down manner, carrying high-level context and improving the
descriptive power of low-level and mid-level neurons. Visualization indicates
that neuron activeness can be interpreted as spatial-weighted neuron responses.
We achieve state-of-the-art classification performance on a wide range of image
datasets.
| no_new_dataset | 0.945901 |
1605.00055 | Lingxi Xie | Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, Qi Tian | DisturbLabel: Regularizing CNN on the Loss Layer | To appear in CVPR 2016 (10 pages, 10 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a long period of time, we have been combating over-fitting in the CNN
training process with model regularization, including weight decay, model
averaging, data augmentation, etc. In this paper, we present DisturbLabel, an
extremely simple algorithm which randomly replaces a part of labels as
incorrect values in each iteration. Although it seems weird to intentionally
generate incorrect training labels, we show that DisturbLabel prevents the
network training from over-fitting by implicitly averaging over exponentially
many networks which are trained with different label sets. To the best of our
knowledge, DisturbLabel serves as the first work which adds noises on the loss
layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide
complementary regularization functions. Experiments demonstrate competitive
recognition results on several popular image recognition datasets.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2016 02:44:48 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Xie",
"Lingxi",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Wei",
"Zhen",
""
],
[
"Wang",
"Meng",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: DisturbLabel: Regularizing CNN on the Loss Layer
ABSTRACT: For a long period of time, we have been combating over-fitting in the CNN
training process with model regularization, including weight decay, model
averaging, data augmentation, etc. In this paper, we present DisturbLabel, an
extremely simple algorithm which randomly replaces a part of labels as
incorrect values in each iteration. Although it seems weird to intentionally
generate incorrect training labels, we show that DisturbLabel prevents the
network training from over-fitting by implicitly averaging over exponentially
many networks which are trained with different label sets. To the best of our
knowledge, DisturbLabel serves as the first work which adds noises on the loss
layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide
complementary regularization functions. Experiments demonstrate competitive
recognition results on several popular image recognition datasets.
| no_new_dataset | 0.94743 |
1605.00241 | Basem Elbarashy | Basem G. El-Barashy | Common-Description Learning: A Framework for Learning Algorithms and
Generating Subproblems from Few Examples | 32 pages, 13 figures | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current learning algorithms face many difficulties in learning simple
patterns and using them to learn more complex ones. They also require more
examples than humans do to learn the same pattern, assuming no prior knowledge.
In this paper, a new learning framework is introduced that is called
common-description learning (CDL). This framework has been tested on 32 small
multi-task datasets, and the results show that it was able to learn complex
algorithms from a small number of examples. The final model is perfectly
interpretable and its depth depends on the question. What is meant by depth
here is that whenever needed, the model learns to break down the problem into
simpler subproblems and solves them using previously learned models. Finally,
we explain the capabilities of our framework in discovering complex relations
in data and how it can help in improving language understanding in machines.
| [
{
"version": "v1",
"created": "Sun, 1 May 2016 11:56:01 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"El-Barashy",
"Basem G.",
""
]
] | TITLE: Common-Description Learning: A Framework for Learning Algorithms and
Generating Subproblems from Few Examples
ABSTRACT: Current learning algorithms face many difficulties in learning simple
patterns and using them to learn more complex ones. They also require more
examples than humans do to learn the same pattern, assuming no prior knowledge.
In this paper, a new learning framework is introduced that is called
common-description learning (CDL). This framework has been tested on 32 small
multi-task datasets, and the results show that it was able to learn complex
algorithms from a small number of examples. The final model is perfectly
interpretable and its depth depends on the question. What is meant by depth
here is that whenever needed, the model learns to break down the problem into
simpler subproblems and solves them using previously learned models. Finally,
we explain the capabilities of our framework in discovering complex relations
in data and how it can help in improving language understanding in machines.
| no_new_dataset | 0.946349 |
1605.00324 | Hirokatsu Kataoka | Hirokatsu Kataoka, Masaki Hayashi, Kenji Iwata, Yutaka Satoh,
Yoshimitsu Aoki, Slobodan Ilic | Dominant Codewords Selection with Topic Model for Action Recognition | in CVPRW16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a framework for recognizing human activities that
uses only in-topic dominant codewords and a mixture of intertopic vectors.
Latent Dirichlet allocation (LDA) is used to develop approximations of human
motion primitives; these are mid-level representations, and they adaptively
integrate dominant vectors when classifying human activities. In LDA topic
modeling, action videos (documents) are represented by a bag-of-words (input
from a dictionary), and these are based on improved dense trajectories. The
output topics correspond to human motion primitives, such as finger moving or
subtle leg motion. We eliminate the impurities, such as missed tracking or
changing light conditions, in each motion primitive. The assembled vector of
motion primitives is an improved representation of the action. We demonstrate
our method on four different datasets.
| [
{
"version": "v1",
"created": "Sun, 1 May 2016 23:58:06 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Kataoka",
"Hirokatsu",
""
],
[
"Hayashi",
"Masaki",
""
],
[
"Iwata",
"Kenji",
""
],
[
"Satoh",
"Yutaka",
""
],
[
"Aoki",
"Yoshimitsu",
""
],
[
"Ilic",
"Slobodan",
""
]
] | TITLE: Dominant Codewords Selection with Topic Model for Action Recognition
ABSTRACT: In this paper, we propose a framework for recognizing human activities that
uses only in-topic dominant codewords and a mixture of intertopic vectors.
Latent Dirichlet allocation (LDA) is used to develop approximations of human
motion primitives; these are mid-level representations, and they adaptively
integrate dominant vectors when classifying human activities. In LDA topic
modeling, action videos (documents) are represented by a bag-of-words (input
from a dictionary), and these are based on improved dense trajectories. The
output topics correspond to human motion primitives, such as finger moving or
subtle leg motion. We eliminate the impurities, such as missed tracking or
changing light conditions, in each motion primitive. The assembled vector of
motion primitives is an improved representation of the action. We demonstrate
our method on four different datasets.
| no_new_dataset | 0.949949 |
1605.00366 | Pavel Svoboda | Pavel Svoboda and Michal Hradis and David Barina and Pavel Zemcik | Compression Artifacts Removal Using Convolutional Neural Networks | To be published in WSCG 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that it is possible to train large and deep convolutional
neural networks (CNN) for JPEG compression artifacts reduction, and that such
networks can provide significantly better reconstruction quality compared to
previously used smaller networks as well as to any other state-of-the-art
methods. We were able to train networks with 8 layers in a single step and in
relatively short time by combining residual learning, skip architecture, and
symmetric weight initialization. We provide further insights into convolution
networks for JPEG artifact reduction by evaluating three different objectives,
generalization with respect to training dataset size, and generalization with
respect to JPEG quality level.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 06:40:08 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Svoboda",
"Pavel",
""
],
[
"Hradis",
"Michal",
""
],
[
"Barina",
"David",
""
],
[
"Zemcik",
"Pavel",
""
]
] | TITLE: Compression Artifacts Removal Using Convolutional Neural Networks
ABSTRACT: This paper shows that it is possible to train large and deep convolutional
neural networks (CNN) for JPEG compression artifacts reduction, and that such
networks can provide significantly better reconstruction quality compared to
previously used smaller networks as well as to any other state-of-the-art
methods. We were able to train networks with 8 layers in a single step and in
relatively short time by combining residual learning, skip architecture, and
symmetric weight initialization. We provide further insights into convolution
networks for JPEG artifact reduction by evaluating three different objectives,
generalization with respect to training dataset size, and generalization with
respect to JPEG quality level.
| no_new_dataset | 0.951051 |
1605.00392 | Andrea Zunino | Andrea Zunino, Jacopo Cavazza, Vittorio Murino | Revisiting Human Action Recognition: Personalization vs. Generalization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By thoroughly revisiting the classic human action recognition paradigm, this
paper aims at proposing a new approach for the design of effective action
classification systems. Taking as testbed publicly available three-dimensional
(MoCap) action/activity datasets, we analyzed and validated different
training/testing strategies. In particular, considering that each human action
in the datasets is performed several times by different subjects, we were able
to precisely quantify the effect of inter- and intra-subject variability, so as
to figure out the impact of several learning approaches in terms of
classification performance. The net result is that standard testing strategies
consisting in cross-validating the algorithm using typical splits of the data
(holdout, k-fold, or one-subject-out) are always outperformed by a
"personalization" strategy which learns how a subject is performing an action.
In other words, it is advantageous to customize (i.e., personalize) the method
to learn the actions carried out by each subject, rather than trying to
generalize the execution of actions across subjects. Consequently, we finally
propose an action recognition framework consisting of a two-stage
classification approach where, given a test action, the subject is first
identified before the actual recognition of the action takes place. Despite the
basic, off-the-shelf descriptors and standard classifiers adopted, we noted a
relevant increase in performance with respect to standard state-of-the-art
algorithms, so motivating the usage of personalized approaches for designing
effective action recognition systems.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 08:46:23 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Zunino",
"Andrea",
""
],
[
"Cavazza",
"Jacopo",
""
],
[
"Murino",
"Vittorio",
""
]
] | TITLE: Revisiting Human Action Recognition: Personalization vs. Generalization
ABSTRACT: By thoroughly revisiting the classic human action recognition paradigm, this
paper aims at proposing a new approach for the design of effective action
classification systems. Taking as testbed publicly available three-dimensional
(MoCap) action/activity datasets, we analyzed and validated different
training/testing strategies. In particular, considering that each human action
in the datasets is performed several times by different subjects, we were able
to precisely quantify the effect of inter- and intra-subject variability, so as
to figure out the impact of several learning approaches in terms of
classification performance. The net result is that standard testing strategies
consisting in cross-validating the algorithm using typical splits of the data
(holdout, k-fold, or one-subject-out) are always outperformed by a
"personalization" strategy which learns how a subject is performing an action.
In other words, it is advantageous to customize (i.e., personalize) the method
to learn the actions carried out by each subject, rather than trying to
generalize the execution of actions across subjects. Consequently, we finally
propose an action recognition framework consisting of a two-stage
classification approach where, given a test action, the subject is first
identified before the actual recognition of the action takes place. Despite the
basic, off-the-shelf descriptors and standard classifiers adopted, we noted a
relevant increase in performance with respect to standard state-of-the-art
algorithms, so motivating the usage of personalized approaches for designing
effective action recognition systems.
| no_new_dataset | 0.945751 |
1605.00420 | Ritesh Sarkhel | Ritesh Sarkhel, Amit K Saha, Nibaran Das | An Enhanced Harmony Search Method for Bangla Handwritten Character
Recognition Using Region Sampling | 2nd IEEE International Conference on Recent Trends in Information
Systems, 2015 | null | 10.1109/ReTIS.2015.7232899 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Identification of the minimum number of local regions of a handwritten character
image, containing well-defined discriminating features which are sufficient for
a minimal but complete description of the character is a challenging task. A
new region selection technique based on the idea of an enhanced Harmony Search
methodology has been proposed here. The powerful framework of Harmony Search
has been utilized to search the region space and detect only the most
informative regions for correctly recognizing the handwritten character. The
proposed method has been tested on handwritten samples of Bangla Basic,
Compound and mixed (Basic and Compound characters) characters separately with
SVM based classifier using a longest run based feature-set obtained from the
image subregions formed by a CG based quad-tree partitioning approach. Applying
this methodology on the above mentioned three types of datasets, respectively
43.75%, 12.5% and 37.5% gains have been achieved in terms of region reduction
and 2.3%, 0.6% and 1.2% gains have been achieved in terms of recognition
accuracy. The results show a sizeable reduction in the minimal number of
descriptive regions as well as a significant increase in recognition accuracy for
all the datasets using the proposed technique. Thus the time and cost related
to feature extraction is decreased without dampening the corresponding
recognition accuracy.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 10:28:07 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Sarkhel",
"Ritesh",
""
],
[
"Saha",
"Amit K",
""
],
[
"Das",
"Nibaran",
""
]
] | TITLE: An Enhanced Harmony Search Method for Bangla Handwritten Character
Recognition Using Region Sampling
ABSTRACT: Identification of the minimum number of local regions of a handwritten character
image, containing well-defined discriminating features which are sufficient for
a minimal but complete description of the character is a challenging task. A
new region selection technique based on the idea of an enhanced Harmony Search
methodology has been proposed here. The powerful framework of Harmony Search
has been utilized to search the region space and detect only the most
informative regions for correctly recognizing the handwritten character. The
proposed method has been tested on handwritten samples of Bangla Basic,
Compound and mixed (Basic and Compound characters) characters separately with
SVM based classifier using a longest run based feature-set obtained from the
image subregions formed by a CG based quad-tree partitioning approach. Applying
this methodology on the above mentioned three types of datasets, respectively
43.75%, 12.5% and 37.5% gains have been achieved in terms of region reduction
and 2.3%, 0.6% and 1.2% gains have been achieved in terms of recognition
accuracy. The results show a sizeable reduction in the minimal number of
descriptive regions as well as a significant increase in recognition accuracy for
all the datasets using the proposed technique. Thus the time and cost related
to feature extraction is decreased without dampening the corresponding
recognition accuracy.
| no_new_dataset | 0.953275 |
1605.00459 | Desmond Elliott | Desmond Elliott, Stella Frank, Khalil Sima'an, Lucia Specia | Multi30K: Multilingual English-German Image Descriptions | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Multi30K dataset to stimulate multilingual multimodal
research. Recent advances in image description have been demonstrated on
English-language datasets almost exclusively, but image description should not
be limited to English. This dataset extends the Flickr30K dataset with i)
German translations created by professional translators over a subset of the
English descriptions, and ii) descriptions crowdsourced independently of the
original English descriptions. We outline how the data can be used for
multilingual image description and multimodal machine translation, but we
anticipate the data will be useful for a broader range of tasks.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 12:38:03 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Elliott",
"Desmond",
""
],
[
"Frank",
"Stella",
""
],
[
"Sima'an",
"Khalil",
""
],
[
"Specia",
"Lucia",
""
]
] | TITLE: Multi30K: Multilingual English-German Image Descriptions
ABSTRACT: We introduce the Multi30K dataset to stimulate multilingual multimodal
research. Recent advances in image description have been demonstrated on
English-language datasets almost exclusively, but image description should not
be limited to English. This dataset extends the Flickr30K dataset with i)
German translations created by professional translators over a subset of the
English descriptions, and ii) descriptions crowdsourced independently of the
original English descriptions. We outline how the data can be used for
multilingual image description and multimodal machine translation, but we
anticipate the data will be useful for a broader range of tasks.
| new_dataset | 0.959459 |
1605.00596 | Shuai Li | Shuai Li and Claudio Gentile and Alexandros Karatzoglou | Graph Clustering Bandits for Recommendation | null | null | null | null | stat.ML cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate an efficient context-dependent clustering technique for
recommender systems based on exploration-exploitation strategies through
multi-armed bandits over multiple users. Our algorithm dynamically groups users
based on their observed behavioral similarity during a sequence of logged
activities. In doing so, the algorithm reacts to the currently served user by
shaping clusters around him/her but, at the same time, it explores the
generation of clusters over users which are not currently engaged. We motivate
the effectiveness of this clustering policy, and provide an extensive empirical
analysis on real-world datasets, showing scalability and improved prediction
performance over state-of-the-art methods for sequential clustering of users in
multi-armed bandit scenarios.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 18:13:04 GMT"
}
] | 2016-05-03T00:00:00 | [
[
"Li",
"Shuai",
""
],
[
"Gentile",
"Claudio",
""
],
[
"Karatzoglou",
"Alexandros",
""
]
] | TITLE: Graph Clustering Bandits for Recommendation
ABSTRACT: We investigate an efficient context-dependent clustering technique for
recommender systems based on exploration-exploitation strategies through
multi-armed bandits over multiple users. Our algorithm dynamically groups users
based on their observed behavioral similarity during a sequence of logged
activities. In doing so, the algorithm reacts to the currently served user by
shaping clusters around him/her but, at the same time, it explores the
generation of clusters over users which are not currently engaged. We motivate
the effectiveness of this clustering policy, and provide an extensive empirical
analysis on real-world datasets, showing scalability and improved prediction
performance over state-of-the-art methods for sequential clustering of users in
multi-armed bandit scenarios.
| no_new_dataset | 0.949295 |
1603.09302 | Valsamis Ntouskos | Valsamis Ntouskos, Fiora Pirri | Confidence driven TGV fusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel model for spatially varying variational data fusion,
driven by point-wise confidence values. The proposed model allows for the joint
estimation of the data and the confidence values based on the spatial coherence
of the data. We discuss the main properties of the introduced model as well as
suitable algorithms for estimating the solution of the corresponding biconvex
minimization problem and their convergence. The performance of the proposed
model is evaluated considering the problem of depth image fusion by using both
synthetic and real data from publicly available datasets.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 18:27:22 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Apr 2016 17:25:58 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Ntouskos",
"Valsamis",
""
],
[
"Pirri",
"Fiora",
""
]
] | TITLE: Confidence driven TGV fusion
ABSTRACT: We introduce a novel model for spatially varying variational data fusion,
driven by point-wise confidence values. The proposed model allows for the joint
estimation of the data and the confidence values based on the spatial coherence
of the data. We discuss the main properties of the introduced model as well as
suitable algorithms for estimating the solution of the corresponding biconvex
minimization problem and their convergence. The performance of the proposed
model is evaluated considering the problem of depth image fusion by using both
synthetic and real data from publicly available datasets.
| no_new_dataset | 0.95096 |
1604.08642 | Yongyi Mao Dr | Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, Richong Zhang | On the representation and embedding of knowledge bases beyond binary
relations | 8 pages, to appear in IJCAI 2016 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The models developed to date for knowledge base embedding are all based on
the assumption that the relations contained in knowledge bases are binary. For
the training and testing of these embedding models, multi-fold (or n-ary)
relational data are converted to triples (e.g., in FB15K dataset) and
interpreted as instances of binary relations. This paper presents a canonical
representation of knowledge bases containing multi-fold relations. We show that
the existing embedding models on the popular FB15K datasets correspond to a
sub-optimal modelling framework, resulting in a loss of structural information.
We advocate a novel modelling framework, which models multi-fold relations
directly using this canonical representation. Using this framework, the
existing TransH model is generalized to a new model, m-TransH. We demonstrate
experimentally that m-TransH outperforms TransH by a large margin, thereby
establishing a new state of the art.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 22:42:38 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Wen",
"Jianfeng",
""
],
[
"Li",
"Jianxin",
""
],
[
"Mao",
"Yongyi",
""
],
[
"Chen",
"Shini",
""
],
[
"Zhang",
"Richong",
""
]
] | TITLE: On the representation and embedding of knowledge bases beyond binary
relations
ABSTRACT: The models developed to date for knowledge base embedding are all based on
the assumption that the relations contained in knowledge bases are binary. For
the training and testing of these embedding models, multi-fold (or n-ary)
relational data are converted to triples (e.g., in FB15K dataset) and
interpreted as instances of binary relations. This paper presents a canonical
representation of knowledge bases containing multi-fold relations. We show that
the existing embedding models on the popular FB15K datasets correspond to a
sub-optimal modelling framework, resulting in a loss of structural information.
We advocate a novel modelling framework, which models multi-fold relations
directly using this canonical representation. Using this framework, the
existing TransH model is generalized to a new model, m-TransH. We demonstrate
experimentally that m-TransH outperforms TransH by a large margin, thereby
establishing a new state of the art.
| no_new_dataset | 0.949809 |
1604.08691 | Junzhou Zhao | Pinghui Wang, Xiangliang Zhang, Zhenguo Li, Jiefeng Cheng, John C.S.
Lui, Don Towsley, Junzhou Zhao, Jing Tao, Xiaohong Guan | A Fast Sampling Method of Exploring Graphlet Degrees of Large Directed
and Undirected Graphs | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exploring small connected and induced subgraph patterns (CIS patterns, or
graphlets) has recently attracted considerable attention. Despite recent
efforts on computing the number of instances a specific graphlet appears in a
large graph (i.e., the total number of CISes isomorphic to the graphlet),
little attention has been paid to characterizing a node's graphlet degree,
i.e., the number of CISes isomorphic to the graphlet that include the node,
which is an important metric for analyzing complex networks such as social and
biological networks. Similar to global graphlet counting, it is challenging to
compute node graphlet degrees for a large graph due to the combinatorial nature
of the problem. Unfortunately, previous methods of computing global graphlet
counts are not suited to solve this problem. In this paper we propose sampling
methods to estimate node graphlet degrees for undirected and directed graphs,
and analyze the error of our estimates. To the best of our knowledge, we are
the first to study this problem and give a fast scalable solution. We conduct
experiments on a variety of real-world datasets that demonstrate that our
methods accurately and efficiently estimate node graphlet degrees for graphs
with millions of edges.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 05:24:24 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Wang",
"Pinghui",
""
],
[
"Zhang",
"Xiangliang",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Cheng",
"Jiefeng",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Towsley",
"Don",
""
],
[
"Zhao",
"Junzhou",
""
],
[
"Tao",
"Jing",
""
],
[
"Guan",
"Xiaohong",
""
]
] | TITLE: A Fast Sampling Method of Exploring Graphlet Degrees of Large Directed
and Undirected Graphs
ABSTRACT: Exploring small connected and induced subgraph patterns (CIS patterns, or
graphlets) has recently attracted considerable attention. Despite recent
efforts on computing the number of instances a specific graphlet appears in a
large graph (i.e., the total number of CISes isomorphic to the graphlet),
little attention has been paid to characterizing a node's graphlet degree,
i.e., the number of CISes isomorphic to the graphlet that include the node,
which is an important metric for analyzing complex networks such as social and
biological networks. Similar to global graphlet counting, it is challenging to
compute node graphlet degrees for a large graph due to the combinatorial nature
of the problem. Unfortunately, previous methods of computing global graphlet
counts are not suited to solve this problem. In this paper we propose sampling
methods to estimate node graphlet degrees for undirected and directed graphs,
and analyze the error of our estimates. To the best of our knowledge, we are
the first to study this problem and give a fast scalable solution. We conduct
experiments on a variety of real-world datasets that demonstrate that our
methods accurately and efficiently estimate node graphlet degrees for graphs
with millions of edges.
| no_new_dataset | 0.949201 |
1604.08723 | Bob Sturm | Bob L. Sturm, João Felipe Santos, Oded Ben-Tal and Iryna Korshunova | Music transcription modelling and composition using deep learning | 16 pages, 4 figures, contribution to 1st Conference on Computer
Simulation of Musical Creativity | null | null | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply deep learning methods, specifically long short-term memory (LSTM)
networks, to music transcription modelling and composition. We build and train
LSTM networks using approximately 23,000 music transcriptions expressed with a
high-level vocabulary (ABC notation), and use them to generate new
transcriptions. Our practical aim is to create music transcription models
useful in particular contexts of music composition. We present results from
three perspectives: 1) at the population level, comparing descriptive
statistics of the set of training transcriptions and generated transcriptions;
2) at the individual level, examining how a generated transcription reflects
the conventions of a music practice in the training transcriptions (Celtic
folk); 3) at the application level, using the system for idea generation in
music composition. We make our datasets, software and sound examples open and
available: \url{https://github.com/IraKorshunova/folk-rnn}.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 08:03:00 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Sturm",
"Bob L.",
""
],
[
"Santos",
"João Felipe",
""
],
[
"Ben-Tal",
"Oded",
""
],
[
"Korshunova",
"Iryna",
""
]
] | TITLE: Music transcription modelling and composition using deep learning
ABSTRACT: We apply deep learning methods, specifically long short-term memory (LSTM)
networks, to music transcription modelling and composition. We build and train
LSTM networks using approximately 23,000 music transcriptions expressed with a
high-level vocabulary (ABC notation), and use them to generate new
transcriptions. Our practical aim is to create music transcription models
useful in particular contexts of music composition. We present results from
three perspectives: 1) at the population level, comparing descriptive
statistics of the set of training transcriptions and generated transcriptions;
2) at the individual level, examining how a generated transcription reflects
the conventions of a music practice in the training transcriptions (Celtic
folk); 3) at the application level, using the system for idea generation in
music composition. We make our datasets, software and sound examples open and
available: \url{https://github.com/IraKorshunova/folk-rnn}.
| no_new_dataset | 0.905071 |
1604.08772 | Frederic Besse | Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka
and Daan Wierstra | Towards Conceptual Compression | 14 pages, 13 figures | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a simple recurrent variational auto-encoder architecture that
significantly improves image modeling. The system represents the
state-of-the-art in latent variable models for both the ImageNet and Omniglot
datasets. We show that it naturally separates global conceptual information
from lower level details, thus addressing one of the fundamentally desired
properties of unsupervised learning. Furthermore, the possibility of
restricting ourselves to storing only global information about an image allows
us to achieve high quality 'conceptual compression'.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 11:02:52 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Gregor",
"Karol",
""
],
[
"Besse",
"Frederic",
""
],
[
"Rezende",
"Danilo Jimenez",
""
],
[
"Danihelka",
"Ivo",
""
],
[
"Wierstra",
"Daan",
""
]
] | TITLE: Towards Conceptual Compression
ABSTRACT: We introduce a simple recurrent variational auto-encoder architecture that
significantly improves image modeling. The system represents the
state-of-the-art in latent variable models for both the ImageNet and Omniglot
datasets. We show that it naturally separates global conceptual information
from lower level details, thus addressing one of the fundamentally desired
properties of unsupervised learning. Furthermore, the possibility of
restricting ourselves to storing only global information about an image allows
us to achieve high quality 'conceptual compression'.
| no_new_dataset | 0.945601 |
1604.08807 | Steven Wren | Steven Wren | Neutrino Mass Ordering Studies with PINGU and IceCube/DeepCore | null | null | null | null | physics.ins-det hep-ex hep-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Precision IceCube Next Generation Upgrade (PINGU) is a proposed extension
to the IceCube detector. The design of PINGU would augment the existing 86
strings with an additional 40 with the main goal of determining the neutrino
mass ordering (NMO). Preliminary studies of the NMO can start with
IceCube/DeepCore, a sub-array of more densely-packed strings in operation
since 2011. This detector has a neutrino energy threshold of roughly 10 GeV and
allows for high-statistics datasets of atmospheric neutrinos to be collected.
This data provides a unique opportunity to better understand the systematic
effects involved in making the NMO measurement by comparing the simulation
studies to real data. These proceedings will present the current status of
these studies in Monte Carlo simulations with projected DeepCore sensitivity
for the NMO.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 12:54:21 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Wren",
"Steven",
""
]
] | TITLE: Neutrino Mass Ordering Studies with PINGU and IceCube/DeepCore
ABSTRACT: The Precision IceCube Next Generation Upgrade (PINGU) is a proposed extension
to the IceCube detector. The design of PINGU would augment the existing 86
strings with an additional 40 with the main goal of determining the neutrino
mass ordering (NMO). Preliminary studies of the NMO can start with
IceCube/DeepCore, a sub-array of more densely- packed strings in operation
since 2011. This detector has a neutrino energy threshold of roughly 10 GeV and
allows for high-statistics datasets of atmospheric neutrinos to be collected.
This data provides a unique opportunity to better understand the systematic
effects involved in making the NMO measurement by comparing the simulation
studies to real data. These proceedings will present the current status of
these studies in Monte Carlo simulations with projected DeepCore sensitivity
for the NMO.
| no_new_dataset | 0.93835 |
1604.08826 | Katsunori Ohnishi | Katsunori Ohnishi, Masatoshi Hidaka, Tatsuya Harada | Improved Dense Trajectory with Cross Streams | 6 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Improved dense trajectories (iDT) have shown great performance in action
recognition, and their combination with the two-stream approach has achieved
state-of-the-art performance. It is, however, difficult for iDT to completely
remove background trajectories from video with camera shaking. Trajectories in
less discriminative regions should be given modest weights in order to create
more discriminative local descriptors for action recognition. In addition, the
two-stream approach, which learns appearance and motion information separately,
cannot focus on motion in important regions when extracting features from
spatial convolutional layers of the appearance network, and vice versa. In
order to address the above mentioned problems, we propose a new local
descriptor that pools a new convolutional layer obtained from crossing two
networks along iDT. This new descriptor is calculated by applying
discriminative weights learned from one network to a convolutional layer of the
other network. Our method has achieved state-of-the-art performance on ordinal
action recognition datasets, 92.3% on UCF101, and 66.2% on HMDB51.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 13:39:40 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Ohnishi",
"Katsunori",
""
],
[
"Hidaka",
"Masatoshi",
""
],
[
"Harada",
"Tatsuya",
""
]
] | TITLE: Improved Dense Trajectory with Cross Streams
ABSTRACT: Improved dense trajectories (iDT) have shown great performance in action
recognition, and their combination with the two-stream approach has achieved
state-of-the-art performance. It is, however, difficult for iDT to completely
remove background trajectories from video with camera shaking. Trajectories in
less discriminative regions should be given modest weights in order to create
more discriminative local descriptors for action recognition. In addition, the
two-stream approach, which learns appearance and motion information separately,
cannot focus on motion in important regions when extracting features from
spatial convolutional layers of the appearance network, and vice versa. In
order to address the above mentioned problems, we propose a new local
descriptor that pools a new convolutional layer obtained from crossing two
networks along iDT. This new descriptor is calculated by applying
discriminative weights learned from one network to a convolutional layer of the
other network. Our method has achieved state-of-the-art performance on ordinal
action recognition datasets, 92.3% on UCF101, and 66.2% on HMDB51.
| no_new_dataset | 0.951097 |
1604.08880 | Shane Halloran | Nils Y. Hammerla, Shane Halloran and Thomas Ploetz | Deep, Convolutional, and Recurrent Models for Human Activity Recognition
using Wearables | Extended version has been accepted for publication at International
Joint Conference on Artificial Intelligence (IJCAI) | null | null | null | cs.LG cs.AI cs.HC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human activity recognition (HAR) in ubiquitous computing is beginning to
adopt deep learning to substitute for well-established analysis techniques that
rely on hand-crafted feature extraction and classification techniques. From
these isolated applications of custom deep architectures it is, however,
difficult to gain an overview of their suitability for problems ranging from
the recognition of manipulative gestures to the segmentation and identification
of physical activities like running or ascending stairs. In this paper we
rigorously explore deep, convolutional, and recurrent approaches across three
representative datasets that contain movement data captured with wearable
sensors. We describe how to train recurrent approaches in this setting,
introduce a novel regularisation approach, and illustrate how they outperform
the state-of-the-art on a large benchmark dataset. Across thousands of
recognition experiments with randomly sampled model configurations we
investigate the suitability of each model for different tasks in HAR, explore
the impact of hyperparameters using the fANOVA framework, and provide
guidelines for the practitioner who wants to apply deep learning in their
problem setting.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 15:38:44 GMT"
}
] | 2016-05-02T00:00:00 | [
[
"Hammerla",
"Nils Y.",
""
],
[
"Halloran",
"Shane",
""
],
[
"Ploetz",
"Thomas",
""
]
] | TITLE: Deep, Convolutional, and Recurrent Models for Human Activity Recognition
using Wearables
ABSTRACT: Human activity recognition (HAR) in ubiquitous computing is beginning to
adopt deep learning to substitute for well-established analysis techniques that
rely on hand-crafted feature extraction and classification techniques. From
these isolated applications of custom deep architectures it is, however,
difficult to gain an overview of their suitability for problems ranging from
the recognition of manipulative gestures to the segmentation and identification
of physical activities like running or ascending stairs. In this paper we
rigorously explore deep, convolutional, and recurrent approaches across three
representative datasets that contain movement data captured with wearable
sensors. We describe how to train recurrent approaches in this setting,
introduce a novel regularisation approach, and illustrate how they outperform
the state-of-the-art on a large benchmark dataset. Across thousands of
recognition experiments with randomly sampled model configurations we
investigate the suitability of each model for different tasks in HAR, explore
the impact of hyperparameters using the fANOVA framework, and provide
guidelines for the practitioner who wants to apply deep learning in their
problem setting.
| no_new_dataset | 0.941654 |
1604.08426 | Cheng Chen | Cheng Chen, Xilin Zhang, Yizhou Wang, Fang Fang | A Novel Method to Study Bottom-up Visual Saliency and its Neural
Mechanism | null | null | null | null | cs.CV q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we propose a novel method to measure bottom-up saliency maps
of natural images. In order to eliminate the influence of top-down signals,
backward masking is used to make stimuli (natural images) subjectively
invisible to subjects; however, the bottom-up saliency can still orient the
subjects' attention. To measure this orientation/attention effect, we adopt the
cueing effect paradigm by deploying discrimination tasks at each location of an
image, and measure the discrimination performance variation across the image as
the attentional effect of the bottom-up saliency. Such attentional effects are
combined to construct a final bottom-up saliency map. Based on the proposed
method, we introduce a new bottom-up saliency map dataset of natural images to
benchmark computational models. We compare several state-of-the-art saliency
models on the dataset. Moreover, the proposed paradigm is applied to
investigate the neural basis of the bottom-up visual saliency map by analyzing
psychophysical and fMRI experimental results. Our findings suggest that the
bottom-up saliency maps of natural images are constructed in V1. It provides
strong scientific evidence to resolve the long-standing dispute in neuroscience
about where the bottom-up saliency map is constructed in the human brain.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2016 12:14:31 GMT"
}
] | 2016-04-30T00:00:00 | [
[
"Chen",
"Cheng",
""
],
[
"Zhang",
"Xilin",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Fang",
"Fang",
""
]
] | TITLE: A Novel Method to Study Bottom-up Visual Saliency and its Neural
Mechanism
ABSTRACT: In this study, we propose a novel method to measure bottom-up saliency maps
of natural images. In order to eliminate the influence of top-down signals,
backward masking is used to make stimuli (natural images) subjectively
invisible to subjects; however, the bottom-up saliency can still orient the
subjects' attention. To measure this orientation/attention effect, we adopt the
cueing effect paradigm by deploying discrimination tasks at each location of an
image, and measure the discrimination performance variation across the image as
the attentional effect of the bottom-up saliency. Such attentional effects are
combined to construct a final bottom-up saliency map. Based on the proposed
method, we introduce a new bottom-up saliency map dataset of natural images to
benchmark computational models. We compare several state-of-the-art saliency
models on the dataset. Moreover, the proposed paradigm is applied to
investigate the neural basis of the bottom-up visual saliency map by analyzing
psychophysical and fMRI experimental results. Our findings suggest that the
bottom-up saliency maps of natural images are constructed in V1. It provides
strong scientific evidence to resolve the long-standing dispute in neuroscience
about where the bottom-up saliency map is constructed in the human brain.
| new_dataset | 0.963916 |
1505.06795 | Nikolaos Karianakis | Nikolaos Karianakis, Jingming Dong and Stefano Soatto | An Empirical Evaluation of Current Convolutional Architectures' Ability
to Manage Nuisance Location and Scale Variability | 10 pages, 5 figures, 3 tables -- CVPR 2016, camera-ready version | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We conduct an empirical study to test the ability of Convolutional Neural
Networks (CNNs) to reduce the effects of nuisance transformations of the input
data, such as location, scale and aspect ratio. We isolate factors by adopting
a common convolutional architecture either deployed globally on the image to
compute class posterior distributions, or restricted locally to compute class
conditional distributions given location, scale and aspect ratios of bounding
boxes determined by proposal heuristics. In theory, averaging the latter should
yield inferior performance compared to proper marginalization. Yet empirical
evidence suggests the converse, leading us to conclude that - at the current
level of complexity of convolutional architectures and scale of the data sets
used to train them - CNNs are not very effective at marginalizing nuisance
variability. We also quantify the effects of context on the overall
classification task and its impact on the performance of CNNs, and propose
improved sampling techniques for heuristic proposal schemes that improve
end-to-end performance to state-of-the-art levels. We test our hypothesis on a
classification task using the ImageNet Challenge benchmark and on a
wide-baseline matching task using the Oxford and Fischer's datasets.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 03:11:11 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2016 05:20:40 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Karianakis",
"Nikolaos",
""
],
[
"Dong",
"Jingming",
""
],
[
"Soatto",
"Stefano",
""
]
] | TITLE: An Empirical Evaluation of Current Convolutional Architectures' Ability
to Manage Nuisance Location and Scale Variability
ABSTRACT: We conduct an empirical study to test the ability of Convolutional Neural
Networks (CNNs) to reduce the effects of nuisance transformations of the input
data, such as location, scale and aspect ratio. We isolate factors by adopting
a common convolutional architecture either deployed globally on the image to
compute class posterior distributions, or restricted locally to compute class
conditional distributions given location, scale and aspect ratios of bounding
boxes determined by proposal heuristics. In theory, averaging the latter should
yield inferior performance compared to proper marginalization. Yet empirical
evidence suggests the converse, leading us to conclude that - at the current
level of complexity of convolutional architectures and scale of the data sets
used to train them - CNNs are not very effective at marginalizing nuisance
variability. We also quantify the effects of context on the overall
classification task and its impact on the performance of CNNs, and propose
improved sampling techniques for heuristic proposal schemes that improve
end-to-end performance to state-of-the-art levels. We test our hypothesis on a
classification task using the ImageNet Challenge benchmark and on a
wide-baseline matching task using the Oxford and Fischer's datasets.
| no_new_dataset | 0.950273 |
1511.05284 | Lisa Anne Hendricks | Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond
Mooney, Kate Saenko, Trevor Darrell | Deep Compositional Captioning: Describing Novel Object Categories
without Paired Training Data | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While recent deep neural network models have achieved promising results on
the image captioning task, they rely largely on the availability of corpora
with paired image and sentence captions to describe objects in context. In this
work, we propose the Deep Compositional Captioner (DCC) to address the task of
generating descriptions of novel objects which are not present in paired
image-sentence datasets. Our method achieves this by leveraging large object
recognition datasets and external text corpora and by transferring knowledge
between semantically similar concepts. Current deep caption models can only
describe objects contained in paired image-sentence corpora, despite the fact
that they are pre-trained with large object recognition datasets, namely
ImageNet. In contrast, our model can compose sentences that describe novel
objects and their interactions with other objects. We demonstrate our model's
ability to describe novel concepts by empirically evaluating its performance on
MSCOCO and show qualitative results on ImageNet images of objects for which no
paired image-caption data exist. Further, we extend our approach to generate
descriptions of objects in video clips. Our results show that DCC has distinct
advantages over existing image and video captioning approaches for generating
descriptions of new objects in context.
| [
{
"version": "v1",
"created": "Tue, 17 Nov 2015 06:44:48 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 23:40:55 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Hendricks",
"Lisa Anne",
""
],
[
"Venugopalan",
"Subhashini",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Mooney",
"Raymond",
""
],
[
"Saenko",
"Kate",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Deep Compositional Captioning: Describing Novel Object Categories
without Paired Training Data
ABSTRACT: While recent deep neural network models have achieved promising results on
the image captioning task, they rely largely on the availability of corpora
with paired image and sentence captions to describe objects in context. In this
work, we propose the Deep Compositional Captioner (DCC) to address the task of
generating descriptions of novel objects which are not present in paired
image-sentence datasets. Our method achieves this by leveraging large object
recognition datasets and external text corpora and by transferring knowledge
between semantically similar concepts. Current deep caption models can only
describe objects contained in paired image-sentence corpora, despite the fact
that they are pre-trained with large object recognition datasets, namely
ImageNet. In contrast, our model can compose sentences that describe novel
objects and their interactions with other objects. We demonstrate our model's
ability to describe novel concepts by empirically evaluating its performance on
MSCOCO and show qualitative results on ImageNet images of objects for which no
paired image-caption data exist. Further, we extend our approach to generate
descriptions of objects in video clips. Our results show that DCC has distinct
advantages over existing image and video captioning approaches for generating
descriptions of new objects in context.
| no_new_dataset | 0.941223 |
1511.06783 | Katsunori Ohnishi | Katsunori Ohnishi, Atsushi Kanehira, Asako Kanezaki, Tatsuya Harada | Recognizing Activities of Daily Living with a Wrist-mounted Camera | CVPR2016 spotlight presentation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel dataset and a novel algorithm for recognizing activities
of daily living (ADL) from a first-person wearable camera. Handled objects are
crucially important for egocentric ADL recognition. For specific examination of
objects related to users' actions separately from other objects in an
environment, many previous works have addressed the detection of handled
objects in images captured from head-mounted and chest-mounted cameras.
Nevertheless, detecting handled objects is not always easy because they tend to
appear small in images. They can be occluded by a user's body. As described
herein, we mount a camera on a user's wrist. A wrist-mounted camera can capture
handled objects at a large scale, and thus it enables us to skip object
detection process. To compare a wrist-mounted camera and a head-mounted camera,
we also develop a novel and publicly available dataset that includes videos and
annotations of daily activities captured simultaneously by both cameras.
Additionally, we propose a discriminative video representation that retains
spatial and temporal information after encoding frame descriptors extracted by
Convolutional Neural Networks (CNN).
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 22:02:09 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2016 04:39:03 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Ohnishi",
"Katsunori",
""
],
[
"Kanehira",
"Atsushi",
""
],
[
"Kanezaki",
"Asako",
""
],
[
"Harada",
"Tatsuya",
""
]
] | TITLE: Recognizing Activities of Daily Living with a Wrist-mounted Camera
ABSTRACT: We present a novel dataset and a novel algorithm for recognizing activities
of daily living (ADL) from a first-person wearable camera. Handled objects are
crucially important for egocentric ADL recognition. For specific examination of
objects related to users' actions separately from other objects in an
environment, many previous works have addressed the detection of handled
objects in images captured from head-mounted and chest-mounted cameras.
Nevertheless, detecting handled objects is not always easy because they tend to
appear small in images. They can be occluded by a user's body. As described
herein, we mount a camera on a user's wrist. A wrist-mounted camera can capture
handled objects at a large scale, and thus it enables us to skip the object
detection process. To compare a wrist-mounted camera and a head-mounted camera,
we also develop a novel and publicly available dataset that includes videos and
annotations of daily activities captured simultaneously by both cameras.
Additionally, we propose a discriminative video representation that retains
spatial and temporal information after encoding frame descriptors extracted by
Convolutional Neural Networks (CNN).
| new_dataset | 0.961822 |
1511.09439 | Xiaowei Zhou | Xiaowei Zhou, Menglong Zhu, Spyridon Leonardos, Kosta Derpanis, Kostas
Daniilidis | Sparseness Meets Deepness: 3D Human Pose Estimation from Monocular Video | Published in CVPR2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the challenge of 3D full-body human pose estimation from
a monocular image sequence. Here, two cases are considered: (i) the image
locations of the human joints are provided and (ii) the image locations of
joints are unknown. In the former case, a novel approach is introduced that
integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the
latter case, the former case is extended by treating the image locations of the
joints as latent variables. A deep fully convolutional network is trained to
predict the uncertainty maps of the 2D joint locations. The 3D pose estimates
are realized via an Expectation-Maximization algorithm over the entire
sequence, where it is shown that the 2D joint location uncertainties can be
conveniently marginalized out during inference. Empirical evaluation on the
Human3.6M dataset shows that the proposed approaches achieve greater 3D pose
estimation accuracy over state-of-the-art baselines. Further, the proposed
approach outperforms a publicly available 2D pose estimation baseline on the
challenging PennAction dataset.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 19:41:06 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2016 14:53:43 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Zhou",
"Xiaowei",
""
],
[
"Zhu",
"Menglong",
""
],
[
"Leonardos",
"Spyridon",
""
],
[
"Derpanis",
"Kosta",
""
],
[
"Daniilidis",
"Kostas",
""
]
] | TITLE: Sparseness Meets Deepness: 3D Human Pose Estimation from Monocular Video
ABSTRACT: This paper addresses the challenge of 3D full-body human pose estimation from
a monocular image sequence. Here, two cases are considered: (i) the image
locations of the human joints are provided and (ii) the image locations of
joints are unknown. In the former case, a novel approach is introduced that
integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the
latter case, the former case is extended by treating the image locations of the
joints as latent variables. A deep fully convolutional network is trained to
predict the uncertainty maps of the 2D joint locations. The 3D pose estimates
are realized via an Expectation-Maximization algorithm over the entire
sequence, where it is shown that the 2D joint location uncertainties can be
conveniently marginalized out during inference. Empirical evaluation on the
Human3.6M dataset shows that the proposed approaches achieve greater 3D pose
estimation accuracy over state-of-the-art baselines. Further, the proposed
approach outperforms a publicly available 2D pose estimation baseline on the
challenging PennAction dataset.
| no_new_dataset | 0.945801 |
1602.06632 | Tejal Bhamre | Tejal Bhamre, Teng Zhang, Amit Singer | Denoising and Covariance Estimation of Single Particle Cryo-EM Images | Revision for JSB | null | 10.1016/j.jsb.2016.04.013 | null | cs.CV q-bio.BM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of image restoration in cryo-EM entails correcting for the
effects of the Contrast Transfer Function (CTF) and noise. Popular methods for
image restoration include `phase flipping', which corrects only for the Fourier
phases but not amplitudes, and Wiener filtering, which requires the spectral
signal to noise ratio. We propose a new image restoration method which we call
`Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the
projection images is used within the classical Wiener filtering framework for
solving the image restoration deconvolution problem. Our estimation procedure
for the covariance matrix is new and successfully corrects for the CTF. We
demonstrate the efficacy of CWF by applying it to restore both simulated and
experimental cryo-EM images. Results with experimental datasets demonstrate
that CWF provides a good way to evaluate the particle images and to see what
the dataset contains even without 2D classification and averaging.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 03:04:44 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Feb 2016 04:03:55 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Apr 2016 19:41:52 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Bhamre",
"Tejal",
""
],
[
"Zhang",
"Teng",
""
],
[
"Singer",
"Amit",
""
]
] | TITLE: Denoising and Covariance Estimation of Single Particle Cryo-EM Images
ABSTRACT: The problem of image restoration in cryo-EM entails correcting for the
effects of the Contrast Transfer Function (CTF) and noise. Popular methods for
image restoration include `phase flipping', which corrects only for the Fourier
phases but not amplitudes, and Wiener filtering, which requires the spectral
signal to noise ratio. We propose a new image restoration method which we call
`Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the
projection images is used within the classical Wiener filtering framework for
solving the image restoration deconvolution problem. Our estimation procedure
for the covariance matrix is new and successfully corrects for the CTF. We
demonstrate the efficacy of CWF by applying it to restore both simulated and
experimental cryo-EM images. Results with experimental datasets demonstrate
that CWF provides a good way to evaluate the particle images and to see what
the dataset contains even without 2D classification and averaging.
| no_new_dataset | 0.950778 |
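The covariance-based Wiener estimate described in the abstract above is, at its
core, a linear MMSE estimator. The Python sketch below illustrates that
estimator under an assumed generic model y = H x + noise with an estimated
image covariance Sigma; the function name and arguments are hypothetical, and
this is not the authors' implementation, which additionally estimates the
covariance from CTF-affected data.

import numpy as np

def covariance_wiener_estimate(y, H, mean_x, Sigma, noise_var):
    """Posterior mean E[x|y] for x ~ N(mean_x, Sigma) and y = H x + n."""
    # Innovation covariance of the observation.
    S = H @ Sigma @ H.T + noise_var * np.eye(H.shape[0])
    # Wiener gain applied to the residual between the observation and its prediction.
    gain = Sigma @ H.T @ np.linalg.solve(S, y - H @ mean_x)
    return mean_x + gain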
1604.08220 | Ragav Venkatesan | Ragav Venkatesan, Baoxin Li | Diving deeper into mentee networks | null | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern computer vision is all about the possession of powerful image
representations. Deeper and deeper convolutional neural networks have been
built using larger and larger datasets and are made publicly available. A large
swath of computer vision scientists use these pre-trained networks with varying
degrees of successes in various tasks. Even though there is tremendous success
in copying these networks, the representational space is not learnt from the
target dataset in a traditional manner. One of the reasons for opting to use a
pre-trained network over a network learnt from scratch is that small datasets
provide less supervision and require meticulous regularization, smaller and
careful tweaking of learning rates to even achieve stable learning without
weight explosion. It is often the case that large deep networks are not
portable, which necessitates the ability to learn mid-sized networks from
scratch.
In this article, we dive deeper into training these mid-sized networks on
small datasets from scratch by drawing additional supervision from a large
pre-trained network. Such learning also provides better generalization
accuracies than networks trained with common regularization techniques such as
l2, l1 and dropouts. We show that features learnt thus are more general than
those learnt independently. We studied various characteristics of such networks
and found some interesting behaviors.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 20:05:45 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Venkatesan",
"Ragav",
""
],
[
"Li",
"Baoxin",
""
]
] | TITLE: Diving deeper into mentee networks
ABSTRACT: Modern computer vision is all about the possession of powerful image
representations. Deeper and deeper convolutional neural networks have been
built using larger and larger datasets and are made publicly available. A large
swath of computer vision scientists use these pre-trained networks with varying
degrees of successes in various tasks. Even though there is tremendous success
in copying these networks, the representational space is not learnt from the
target dataset in a traditional manner. One of the reasons for opting to use a
pre-trained network over a network learnt from scratch is that small datasets
provide less supervision and require meticulous regularization, smaller and
careful tweaking of learning rates to even achieve stable learning without
weight explosion. It is often the case that large deep networks are not
portable, which necessitates the ability to learn mid-sized networks from
scratch.
In this article, we dive deeper into training these mid-sized networks on
small datasets from scratch by drawing additional supervision from a large
pre-trained network. Such learning also provides better generalization
accuracies than networks trained with common regularization techniques such as
l2, l1 and dropouts. We show that features learnt thus are more general than
those learnt independently. We studied various characteristics of such networks
and found some interesting behaviors.
| no_new_dataset | 0.949949 |
1604.08291 | Dacheng Tao | Chang Xu, Dacheng Tao, Chao Xu | Streaming View Learning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An underlying assumption in conventional multi-view learning algorithms is
that all views can be simultaneously accessed. However, due to various factors
when collecting and pre-processing data from different views, the streaming
view setting, in which views arrive in a streaming manner, is becoming more
common. By assuming that the subspaces of a multi-view model trained over past
views are stable, here we fine-tune their combination weights such that the
well-trained multi-view model is compatible with new views. This largely
overcomes the burden of learning new view functions and updating past view
functions. We theoretically examine convergence issues and the influence of
streaming views in the proposed algorithm. Experimental results on real-world
datasets suggest that studying the streaming views problem in multi-view
learning is significant and that the proposed algorithm can effectively handle
streaming views in different applications.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 02:37:03 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Xu",
"Chang",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Xu",
"Chao",
""
]
] | TITLE: Streaming View Learning
ABSTRACT: An underlying assumption in conventional multi-view learning algorithms is
that all views can be simultaneously accessed. However, due to various factors
when collecting and pre-processing data from different views, the streaming
view setting, in which views arrive in a streaming manner, is becoming more
common. By assuming that the subspaces of a multi-view model trained over past
views are stable, here we fine-tune their combination weights such that the
well-trained multi-view model is compatible with new views. This largely
overcomes the burden of learning new view functions and updating past view
functions. We theoretically examine convergence issues and the influence of
streaming views in the proposed algorithm. Experimental results on real-world
datasets suggest that studying the streaming views problem in multi-view
learning is significant and that the proposed algorithm can effectively handle
streaming views in different applications.
| no_new_dataset | 0.943348 |
1604.08500 | Zahra Roshan Zamir | Z. Roshan Zamir | Detection of epileptic seizure in EEG signals using linear least squares
preprocessing | Biological signal classification, Signal approximation, Feature
extraction, Data analysis, Linear least squares problems, EEG Seizure
detection | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An epileptic seizure is a transient event of abnormal excessive neuronal
discharge in the brain. This unwanted event can be obstructed by detection of
electrical changes in the brain that happen before the seizure takes place. The
automatic detection of seizures is necessary since the visual screening of EEG
recordings is a time consuming task and requires experts to improve the
diagnosis. Four linear least squares-based preprocessing models are proposed to
extract key features of an EEG signal in order to detect seizures. The first
two models are newly developed. The original signal (EEG) is approximated by a
sinusoidal curve. Its amplitude is formed by a polynomial function and compared
with the pre-developed spline function. Different statistical measures, namely
classification accuracy, true positive and negative rates, false positive and
negative rates and precision are utilized to assess the performance of the
proposed models. These metrics are derived from confusion matrices obtained
from classifiers. Different classifiers are used over the original dataset and
the set of extracted features. The proposed models significantly reduce the
dimension of the classification problem and the computational time while the
classification accuracy is improved in most cases. The first and third models
are promising feature extraction methods. Logistic, LazyIB1, LazyIB5 and J48
are the best classifiers. Their true positive and negative rates are $1$ while
false positive and negative rates are zero and the corresponding precision
values are $1$. Numerical results suggest that these models are robust and
efficient for detecting epileptic seizures.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 01:01:26 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Zamir",
"Z. Roshan",
""
]
] | TITLE: Detection of epileptic seizure in EEG signals using linear least squares
preprocessing
ABSTRACT: An epileptic seizure is a transient event of abnormal excessive neuronal
discharge in the brain. This unwanted event can be obstructed by detection of
electrical changes in the brain that happen before the seizure takes place. The
automatic detection of seizures is necessary since the visual screening of EEG
recordings is a time-consuming task and requires experts to improve the
diagnosis. Four linear least squares-based preprocessing models are proposed to
extract key features of an EEG signal in order to detect seizures. The first
two models are newly developed. The original signal (EEG) is approximated by a
sinusoidal curve. Its amplitude is formed by a polynomial function and compared
with the pre-developed spline function. Different statistical measures, namely
classification accuracy, true positive and negative rates, false positive and
negative rates and precision are utilized to assess the performance of the
proposed models. These metrics are derived from confusion matrices obtained
from classifiers. Different classifiers are used over the original dataset and
the set of extracted features. The proposed models significantly reduce the
dimension of the classification problem and the computational time while the
classification accuracy is improved in most cases. The first and third models
are promising feature extraction methods. Logistic, LazyIB1, LazyIB5 and J48
are the best classifiers. Their true positive and negative rates are $1$ while
false positive and negative rates are zero and the corresponding precision
values are $1$. Numerical results suggest that these models are robust and
efficient for detecting epileptic seizures.
| no_new_dataset | 0.943764 |
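As a rough, self-contained illustration of the linear least squares
preprocessing sketched in the abstract above (a sinusoid whose amplitude
follows a polynomial), the snippet below fits such a model to a signal segment.
All names, the sampling rate and the polynomial degree are assumptions made for
this example and are not taken from the paper.

import numpy as np

def fit_poly_amplitude_sinusoid(signal, freq_hz, fs_hz, degree=3):
    """Least-squares fit of a(t) * sin(2*pi*f*t), with a(t) a polynomial."""
    t = np.arange(len(signal)) / fs_hz
    carrier = np.sin(2.0 * np.pi * freq_hz * t)
    # The model is linear in the polynomial coefficients: columns are t^k * carrier.
    A = np.column_stack([(t ** k) * carrier for k in range(degree + 1)])
    coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
    approx = A @ coeffs
    # The coefficients and residual energy can serve as low-dimensional features.
    return coeffs, approx, signal - approx

# Toy usage on a synthetic segment.
rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(1024) / fs
x = (1.0 + 0.5 * t) * np.sin(2.0 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)
coeffs, approx, residual = fit_poly_amplitude_sinusoid(x, freq_hz=10.0, fs_hz=fs)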
1604.08561 | Ehsaneddin Asgari | Ehsaneddin Asgari and Mohammad R.K. Mofrad | Comparing Fifty Natural Languages and Twelve Genetic Languages Using
Word Embedding Language Divergence (WELD) as a Quantitative Measure of
Language Distance | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new measure of distance between languages based on word
embedding, called word embedding language divergence (WELD). WELD is defined as
the divergence between the unified similarity distributions of words between
languages.
Using such a measure, we perform language comparison for fifty natural
languages and twelve genetic languages. Our natural language dataset is a
collection of sentence-aligned parallel corpora from bible translations for
fifty languages spanning a variety of language families. Although we use
parallel corpora, which guarantees having the same content in all languages,
interestingly in many cases languages within the same family cluster together.
In addition to natural languages, we perform language comparison for the coding
regions in the genomes of 12 different organisms (4 plants, 6 animals, and two
human subjects). Our result confirms a significant high-level difference in the
genetic language model of humans/animals versus plants. The proposed method is
a step toward defining a quantitative measure of similarity between languages,
with applications in language classification, genre identification, dialect
identification, and evaluation of translations.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 19:10:47 GMT"
}
] | 2016-04-29T00:00:00 | [
[
"Asgari",
"Ehsaneddin",
""
],
[
"Mofrad",
"Mohammad R. K.",
""
]
] | TITLE: Comparing Fifty Natural Languages and Twelve Genetic Languages Using
Word Embedding Language Divergence (WELD) as a Quantitative Measure of
Language Distance
ABSTRACT: We introduce a new measure of distance between languages based on word
embedding, called word embedding language divergence (WELD). WELD is defined as
the divergence between the unified similarity distributions of words between
languages.
Using such a measure, we perform language comparison for fifty natural
languages and twelve genetic languages. Our natural language dataset is a
collection of sentence-aligned parallel corpora from bible translations for
fifty languages spanning a variety of language families. Although we use
parallel corpora, which guarantees having the same content in all languages,
interestingly in many cases languages within the same family cluster together.
In addition to natural languages, we perform language comparison for the coding
regions in the genomes of 12 different organisms (4 plants, 6 animals, and two
human subjects). Our result confirms a significant high-level difference in the
genetic language model of humans/animals versus plants. The proposed method is
a step toward defining a quantitative measure of similarity between languages,
with applications in language classification, genre identification, dialect
identification, and evaluation of translations.
| new_dataset | 0.957794 |
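A minimal sketch of a divergence between word-similarity distributions, in the
spirit of the measure described above: the cosine similarity, the histogram
binning and the Jensen-Shannon divergence are choices made for this example and
not necessarily the paper's exact construction; the embedding matrices are
hypothetical inputs.

import numpy as np
from scipy.spatial.distance import jensenshannon

def similarity_distribution(embeddings, bins=50):
    """Histogram of pairwise cosine similarities over an (n_words x dim) matrix."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T
    upper = sims[np.triu_indices_from(sims, k=1)]
    hist, _ = np.histogram(upper, bins=bins, range=(-1.0, 1.0))
    return hist / max(hist.sum(), 1)

def language_divergence(embeddings_a, embeddings_b, bins=50):
    """Divergence between two languages' word-similarity distributions."""
    p = similarity_distribution(embeddings_a, bins)
    q = similarity_distribution(embeddings_b, bins)
    return jensenshannon(p, q)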
1503.00024 | Sharan Vaswani | Sharan Vaswani, Laks.V.S. Lakshmanan and Mark Schmidt | Influence Maximization with Bandits | 12 pages | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of \emph{influence maximization}, the problem of
maximizing the number of people that become aware of a product by finding the
`best' set of `seed' users to expose the product to. Most prior work on this
topic assumes that we know the probability of each user influencing each other
user, or we have data that lets us estimate these influences. However, this
information is typically not initially available or is difficult to obtain. To
avoid this assumption, we adopt a combinatorial multi-armed bandit paradigm
that estimates the influence probabilities as we sequentially try different
seed sets. We establish bounds on the performance of this procedure under the
existing edge-level feedback as well as a novel and more realistic node-level
feedback. Beyond our theoretical results, we describe a practical
implementation and experimentally demonstrate its efficiency and effectiveness
on four real datasets.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 21:59:08 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Mar 2015 20:42:52 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Apr 2015 19:53:49 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Apr 2016 18:27:20 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Vaswani",
"Sharan",
""
],
[
"Lakshmanan",
"Laks. V. S.",
""
],
[
"Schmidt",
"Mark",
""
]
] | TITLE: Influence Maximization with Bandits
ABSTRACT: We consider the problem of \emph{influence maximization}, the problem of
maximizing the number of people that become aware of a product by finding the
`best' set of `seed' users to expose the product to. Most prior work on this
topic assumes that we know the probability of each user influencing each other
user, or we have data that lets us estimate these influences. However, this
information is typically not initially available or is difficult to obtain. To
avoid this assumption, we adopt a combinatorial multi-armed bandit paradigm
that estimates the influence probabilities as we sequentially try different
seed sets. We establish bounds on the performance of this procedure under the
existing edge-level feedback as well as a novel and more realistic node-level
feedback. Beyond our theoretical results, we describe a practical
implementation and experimentally demonstrate its efficiency and effectiveness
on four real datasets.
| no_new_dataset | 0.946001 |
1504.06243 | Weiyao Lin | Yang Shen, Weiyao Lin, Junchi Yan, Mingliang Xu, Jianxin Wu, Jingdong
Wang | Person Re-identification with Correspondence Structure Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of handling spatial misalignments due to
camera-view changes or human-pose variations in person re-identification. We
first introduce a boosting-based approach to learn a correspondence structure
which indicates the patch-wise matching probabilities between images from a
target camera pair. The learned correspondence structure can not only capture
the spatial correspondence pattern between cameras but also handle the
viewpoint or human-pose variation in individual images. We further introduce a
global-based matching process. It integrates a global matching constraint over
the learned correspondence structure to exclude cross-view misalignments during
the image patch matching process, hence achieving a more reliable matching
score between images. Experimental results on various datasets demonstrate the
effectiveness of our approach.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 16:24:43 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Shen",
"Yang",
""
],
[
"Lin",
"Weiyao",
""
],
[
"Yan",
"Junchi",
""
],
[
"Xu",
"Mingliang",
""
],
[
"Wu",
"Jianxin",
""
],
[
"Wang",
"Jingdong",
""
]
] | TITLE: Person Re-identification with Correspondence Structure Learning
ABSTRACT: This paper addresses the problem of handling spatial misalignments due to
camera-view changes or human-pose variations in person re-identification. We
first introduce a boosting-based approach to learn a correspondence structure
which indicates the patch-wise matching probabilities between images from a
target camera pair. The learned correspondence structure can not only capture
the spatial correspondence pattern between cameras but also handle the
viewpoint or human-pose variation in individual images. We further introduce a
global-based matching process. It integrates a global matching constraint over
the learned correspondence structure to exclude cross-view misalignments during
the image patch matching process, hence achieving a more reliable matching
score between images. Experimental results on various datasets demonstrate the
effectiveness of our approach.
| no_new_dataset | 0.956063 |
1511.04776 | Marc Goessling | Marc Goessling, Yali Amit | Mixtures of Sparse Autoregressive Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider high-dimensional distribution estimation through autoregressive
networks. By combining the concepts of sparsity, mixtures and parameter sharing
we obtain a simple model which is fast to train and which achieves
state-of-the-art or better results on several standard benchmark datasets.
Specifically, we use an L1-penalty to regularize the conditional distributions
and introduce a procedure for automatic parameter sharing between mixture
components. Moreover, we propose a simple distributed representation which
permits exact likelihood evaluations since the latent variables are interleaved
with the observable variables and can be easily integrated out. Our model
achieves excellent generalization performance and scales well to extremely high
dimensions.
| [
{
"version": "v1",
"created": "Sun, 15 Nov 2015 22:54:02 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Nov 2015 04:21:25 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jan 2016 05:01:11 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Apr 2016 23:12:32 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Goessling",
"Marc",
""
],
[
"Amit",
"Yali",
""
]
] | TITLE: Mixtures of Sparse Autoregressive Networks
ABSTRACT: We consider high-dimensional distribution estimation through autoregressive
networks. By combining the concepts of sparsity, mixtures and parameter sharing
we obtain a simple model which is fast to train and which achieves
state-of-the-art or better results on several standard benchmark datasets.
Specifically, we use an L1-penalty to regularize the conditional distributions
and introduce a procedure for automatic parameter sharing between mixture
components. Moreover, we propose a simple distributed representation which
permits exact likelihood evaluations since the latent variables are interleaved
with the observable variables and can be easily integrated out. Our model
achieves excellent generalization performance and scales well to extremely high
dimensions.
| no_new_dataset | 0.946695 |
1602.09069 | William Garrison III | William C. Garrison III and Adam Shull and Steven Myers and Adam J.
Lee | On the Practicality of Cryptographically Enforcing Dynamic Access
Control Policies in the Cloud (Extended Version) | 26 pages; extended version of the IEEE S&P paper | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to enforce robust and dynamic access controls on cloud-hosted
data while simultaneously ensuring confidentiality with respect to the cloud
itself is a clear goal for many users and organizations. To this end, there has
been much cryptographic research proposing the use of (hierarchical)
identity-based encryption, attribute-based encryption, predicate encryption,
functional encryption, and related technologies to perform robust and private
access control on untrusted cloud providers. However, the vast majority of this
work studies static models in which the access control policies being enforced
do not change over time. This is contrary to the needs of most practical
applications, which leverage dynamic data and/or policies. In this paper, we
show that the cryptographic enforcement of dynamic access controls on untrusted
platforms incurs computational costs that are likely prohibitive in practice.
Specifically, we develop lightweight constructions for enforcing role-based
access controls (i.e., $\mathsf{RBAC}_0$) over cloud-hosted files using
identity-based and traditional public-key cryptography. This is done under a
threat model as close as possible to the one assumed in the cryptographic
literature. We prove the correctness of these constructions, and leverage
real-world $\mathsf{RBAC}$ datasets and recent techniques developed by the
access control community to experimentally analyze, via simulation, their
associated computational costs. This analysis shows that supporting revocation,
file updates, and other state change functionality is likely to incur
prohibitive overheads in even minimally-dynamic, realistic scenarios. We
identify a number of bottlenecks in such systems, and fruitful areas for future
work that will lead to more natural and efficient constructions for the
cryptographic enforcement of dynamic access controls.
| [
{
"version": "v1",
"created": "Mon, 29 Feb 2016 17:54:49 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2016 05:42:55 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Apr 2016 20:11:45 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Garrison",
"William C.",
"III"
],
[
"Shull",
"Adam",
""
],
[
"Myers",
"Steven",
""
],
[
"Lee",
"Adam J.",
""
]
] | TITLE: On the Practicality of Cryptographically Enforcing Dynamic Access
Control Policies in the Cloud (Extended Version)
ABSTRACT: The ability to enforce robust and dynamic access controls on cloud-hosted
data while simultaneously ensuring confidentiality with respect to the cloud
itself is a clear goal for many users and organizations. To this end, there has
been much cryptographic research proposing the use of (hierarchical)
identity-based encryption, attribute-based encryption, predicate encryption,
functional encryption, and related technologies to perform robust and private
access control on untrusted cloud providers. However, the vast majority of this
work studies static models in which the access control policies being enforced
do not change over time. This is contrary to the needs of most practical
applications, which leverage dynamic data and/or policies. In this paper, we
show that the cryptographic enforcement of dynamic access controls on untrusted
platforms incurs computational costs that are likely prohibitive in practice.
Specifically, we develop lightweight constructions for enforcing role-based
access controls (i.e., $\mathsf{RBAC}_0$) over cloud-hosted files using
identity-based and traditional public-key cryptography. This is done under a
threat model as close as possible to the one assumed in the cryptographic
literature. We prove the correctness of these constructions, and leverage
real-world $\mathsf{RBAC}$ datasets and recent techniques developed by the
access control community to experimentally analyze, via simulation, their
associated computational costs. This analysis shows that supporting revocation,
file updates, and other state change functionality is likely to incur
prohibitive overheads in even minimally-dynamic, realistic scenarios. We
identify a number of bottlenecks in such systems, and fruitful areas for future
work that will lead to more natural and efficient constructions for the
cryptographic enforcement of dynamic access controls.
| no_new_dataset | 0.946745 |
1603.07054 | Dangwei Li | Dangwei Li, Zhang Zhang, Xiaotang Chen, Haibin Ling, Kaiqi Huang | A Richly Annotated Dataset for Pedestrian Attribute Recognition | 16 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to improve the dataset foundation for pedestrian
attribute recognition in real surveillance scenarios. Recognition of human
attributes, such as gender and clothes types, has great prospects in real
applications. However, the development of suitable benchmark datasets for
attribute recognition has lagged behind. Existing human attribute datasets
are collected from various sources or an integration of pedestrian
re-identification datasets. Such heterogeneous collection poses a big challenge
for developing high-quality fine-grained attribute recognition algorithms.
Furthermore, human attribute recognition is generally severely affected by
environmental or contextual factors, such as viewpoints, occlusions and body
parts, which existing attribute datasets barely take into account. To tackle
these problems, we build a Richly Annotated Pedestrian (RAP) dataset from real
multi-camera surveillance scenarios with long term collection, where data
samples are annotated with not only fine-grained human attributes but also
environmental and contextual factors. RAP has in total 41,585 pedestrian
samples, each of which is annotated with 72 attributes as well as viewpoints,
occlusions, and body part information. To our knowledge, the RAP dataset is the
largest pedestrian attribute dataset, which is expected to greatly promote the
study of large-scale attribute recognition systems. Furthermore, we empirically
analyze the effects of different environmental and contextual factors on
pedestrian attribute recognition. Experimental results demonstrate that
viewpoints, occlusions and body part information could substantially assist
attribute recognition in real applications.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 02:41:59 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2016 02:54:02 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Apr 2016 06:42:25 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Li",
"Dangwei",
""
],
[
"Zhang",
"Zhang",
""
],
[
"Chen",
"Xiaotang",
""
],
[
"Ling",
"Haibin",
""
],
[
"Huang",
"Kaiqi",
""
]
] | TITLE: A Richly Annotated Dataset for Pedestrian Attribute Recognition
ABSTRACT: In this paper, we aim to improve the dataset foundation for pedestrian
attribute recognition in real surveillance scenarios. Recognition of human
attributes, such as gender and clothes types, has great prospects in real
applications. However, the development of suitable benchmark datasets for
attribute recognition has lagged behind. Existing human attribute datasets
are collected from various sources or an integration of pedestrian
re-identification datasets. Such heterogeneous collection poses a big challenge
for developing high-quality fine-grained attribute recognition algorithms.
Furthermore, human attribute recognition is generally severely affected by
environmental or contextual factors, such as viewpoints, occlusions and body
parts, which existing attribute datasets barely take into account. To tackle
these problems, we build a Richly Annotated Pedestrian (RAP) dataset from real
multi-camera surveillance scenarios with long term collection, where data
samples are annotated with not only fine-grained human attributes but also
environmental and contextual factors. RAP has in total 41,585 pedestrian
samples, each of which is annotated with 72 attributes as well as viewpoints,
occlusions, and body part information. To our knowledge, the RAP dataset is the
largest pedestrian attribute dataset, which is expected to greatly promote the
study of large-scale attribute recognition systems. Furthermore, we empirically
analyze the effects of different environmental and contextual factors on
pedestrian attribute recognition. Experimental results demonstrate that
viewpoints, occlusions and body part information could substantially assist
attribute recognition in real applications.
| new_dataset | 0.96606 |
1604.03688 | Niall Robinson PhD | Niall H. Robinson, Rachel Prudden, Alberto Arribas | A Practical Approach to Spatiotemporal Data Compression | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets representing the world around us are becoming ever more unwieldy as
data volumes grow. This is largely due to increased measurement and modelling
resolution, but the problem is often exacerbated when data are stored at
spuriously high precisions. In an effort to facilitate analysis of these
datasets, computationally intensive calculations are increasingly being
performed on specialised remote servers before the reduced data are transferred
to the consumer. Due to bandwidth limitations, this often means data are
displayed as simple 2D data visualisations, such as scatter plots or images. We
present here a novel way to efficiently encode and transmit 4D data fields
on-demand so that they can be locally visualised and interrogated. This nascent
"4D video" format allows us to more flexibly move the boundary between data
server and consumer client. However, it has applications beyond purely
scientific visualisation, in the transmission of data to virtual and augmented
reality.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2016 08:33:36 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 07:47:54 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Robinson",
"Niall H.",
""
],
[
"Prudden",
"Rachel",
""
],
[
"Arribas",
"Alberto",
""
]
] | TITLE: A Practical Approach to Spatiotemporal Data Compression
ABSTRACT: Datasets representing the world around us are becoming ever more unwieldy as
data volumes grow. This is largely due to increased measurement and modelling
resolution, but the problem is often exacerbated when data are stored at
spuriously high precisions. In an effort to facilitate analysis of these
datasets, computationally intensive calculations are increasingly being
performed on specialised remote servers before the reduced data are transferred
to the consumer. Due to bandwidth limitations, this often means data are
displayed as simple 2D data visualisations, such as scatter plots or images. We
present here a novel way to efficiently encode and transmit 4D data fields
on-demand so that they can be locally visualised and interrogated. This nascent
"4D video" format allows us to more flexibly move the boundary between data
server and consumer client. However, it has applications beyond purely
scientific visualisation, in the transmission of data to virtual and augmented
reality.
| no_new_dataset | 0.94256 |
1604.06232 | Andrea Romanoni | Andrea Romanoni and Matteo Matteucci | Incremental Reconstruction of Urban Environments by Edge-Points Delaunay
Triangulation | Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International
Conference on (IROS) 2015. http://hdl.handle.net/11311/972021 | null | 10.1109/IROS.2015.7354012 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Urban reconstruction from a video captured by a surveying vehicle constitutes
a core module of automated mapping. When computational power represents a
limited resource and, a detailed map is not the primary goal, the
reconstruction can be performed incrementally, from a monocular video, carving
a 3D Delaunay triangulation of sparse points; this allows online incremental
mapping for tasks such as traversability analysis or obstacle avoidance. To
exploit the sharp edges of urban landscape, we propose to use a Delaunay
triangulation of Edge-Points, which are the 3D points corresponding to image
edges. These points constrain the edges of the 3D Delaunay triangulation to
real-world edges. Besides the use of the Edge-Points, a second contribution of
this paper is the Inverse Cone Heuristic that preemptively avoids the creation
of artifacts in the reconstructed manifold surface. We force the reconstruction
of a manifold surface since it makes it possible to apply computer graphics or
photometric refinement algorithms to the output mesh. We evaluated our approach
on four real sequences of the publicly available KITTI dataset by comparing the
incremental reconstruction against Velodyne measurements.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 09:59:42 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 13:11:03 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Romanoni",
"Andrea",
""
],
[
"Matteucci",
"Matteo",
""
]
] | TITLE: Incremental Reconstruction of Urban Environments by Edge-Points Delaunay
Triangulation
ABSTRACT: Urban reconstruction from a video captured by a surveying vehicle constitutes
a core module of automated mapping. When computational power represents a
limited resource and a detailed map is not the primary goal, the
reconstruction can be performed incrementally, from a monocular video, carving
a 3D Delaunay triangulation of sparse points; this allows online incremental
mapping for tasks such as traversability analysis or obstacle avoidance. To
exploit the sharp edges of urban landscape, we propose to use a Delaunay
triangulation of Edge-Points, which are the 3D points corresponding to image
edges. These points constrain the edges of the 3D Delaunay triangulation to
real-world edges. Besides the use of the Edge-Points, a second contribution of
this paper is the Inverse Cone Heuristic that preemptively avoids the creation
of artifacts in the reconstructed manifold surface. We force the reconstruction
of a manifold surface since it makes it possible to apply computer graphics or
photometric refinement algorithms to the output mesh. We evaluated our approach
on four real sequences of the publicly available KITTI dataset by comparing the
incremental reconstruction against Velodyne measurements.
| no_new_dataset | 0.950227 |
1604.07322 | Maria Torres Vega | Maria Torres Vega, Decebal Constantin Mocanu and Antonio Liotta | Predictive No-Reference Assessment of Video Quality | 13 pages, 8 figures, IEEE Selected Topics on Signal Processing | null | null | null | cs.MM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the various means to evaluate the quality of video streams,
No-Reference (NR) methods have low computational cost and may be executed on thin
clients. Thus, NR algorithms would be perfect candidates in cases of real-time
quality assessment, automated quality control and, particularly, in adaptive
mobile streaming. Yet, existing NR approaches are often inaccurate, in
comparison to Full-Reference (FR) algorithms, especially under lossy network
conditions. In this work, we present an NR method that combines machine
learning with simple NR metrics to achieve a quality index comparable in
accuracy to the Video Quality Metric (VQM) Full-Reference algorithm. Our method
is tested in an extensive dataset (960 videos), under lossy network conditions
and considering nine different machine learning algorithms. Overall, we achieve
an over 97% correlation with VQM, while allowing real-time assessment of video
quality of experience in realistic streaming scenarios.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 16:34:17 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 06:16:40 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Vega",
"Maria Torres",
""
],
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Liotta",
"Antonio",
""
]
] | TITLE: Predictive No-Reference Assessment of Video Quality
ABSTRACT: Among the various means to evaluate the quality of video streams,
No-Reference (NR) methods have low computational cost and may be executed on thin
clients. Thus, NR algorithms would be perfect candidates in cases of real-time
quality assessment, automated quality control and, particularly, in adaptive
mobile streaming. Yet, existing NR approaches are often inaccurate, in
comparison to Full-Reference (FR) algorithms, especially under lossy network
conditions. In this work, we present an NR method that combines machine
learning with simple NR metrics to achieve a quality index comparable in
accuracy to the Video Quality Metric (VQM) Full-Reference algorithm. Our method
is tested in an extensive dataset (960 videos), under lossy network conditions
and considering nine different machine learning algorithms. Overall, we achieve
an over 97% correlation with VQM, while allowing real-time assessment of video
quality of experience in realistic streaming scenarios.
| no_new_dataset | 0.949576 |
1604.08010 | Souad Chaabouni | Souad Chaabouni, Jenny Benois-Pineau, Ofer Hadar, Chokri Ben Amar | Deep Learning for Saliency Prediction in Natural Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this paper is the detection of salient areas in natural video
by using the new deep learning techniques. Salient patches in video frames are
predicted first. Then the predicted visual fixation maps are built upon them.
We design the deep architecture on the basis of CaffeNet implemented with Caffe
toolkit. We show that by changing the way data are selected for optimising the
network parameters, we can reduce the computation cost by up to 12 times. We
extend deep
learning approaches for saliency prediction in still images with RGB values to
specificity of video using the sensitivity of the human visual system to
residual motion. Furthermore, we complement primary colour pixel values with
contrast features proposed in classical visual attention prediction models. The
experiments are conducted on two publicly available datasets. The first is
IRCCYN video database containing 31 videos with a total of 7300 frames and eye
fixations of 37 subjects. The second one is HOLLYWOOD2, which provides 2517
movie clips with the eye fixations of 19 subjects. On the IRCCYN dataset, the
accuracy obtained is 89.51%. On the HOLLYWOOD2 dataset, results in predicting
the saliency of patches show an improvement of up to 2% compared with using RGB
values only; the resulting accuracy is 76.6%. The AUC metric, comparing
predicted saliency maps with visual fixation maps, shows an increase of up to
16% on a sample of video clips from this dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 10:34:21 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Chaabouni",
"Souad",
""
],
[
"Benois-Pineau",
"Jenny",
""
],
[
"Hadar",
"Ofer",
""
],
[
"Amar",
"Chokri Ben",
""
]
] | TITLE: Deep Learning for Saliency Prediction in Natural Video
ABSTRACT: The purpose of this paper is the detection of salient areas in natural video
by using the new deep learning techniques. Salient patches in video frames are
predicted first. Then the predicted visual fixation maps are built upon them.
We design the deep architecture on the basis of CaffeNet implemented with Caffe
toolkit. We show that by changing the way data are selected for optimising the
network parameters, we can reduce the computation cost by up to 12 times. We
extend deep
learning approaches for saliency prediction in still images with RGB values to
specificity of video using the sensitivity of the human visual system to
residual motion. Furthermore, we complement primary colour pixel values with
contrast features proposed in classical visual attention prediction models. The
experiments are conducted on two publicly available datasets. The first is
IRCCYN video database containing 31 videos with an overall amount of 7300
frames and eye fixations of 37 subjects. The second one is HOLLYWOOD2, which
provides 2517 movie clips with the eye fixations of 19 subjects. On the IRCCYN
dataset, the accuracy obtained is 89.51%. On the HOLLYWOOD2 dataset, results in
the prediction of patch saliency show an improvement of up to 2% with regard to
using RGB values only. The resulting accuracy is 76.6%. The AUC metric, comparing
predicted saliency maps with visual fixation maps, shows an increase of up to 16%
on a sample of video clips from this dataset.
| no_new_dataset | 0.945248 |
1604.08088 | Xirong Li | Xirong Li and Yujia Huo and Jieping Xu and Qin Jin | Detecting Violence in Video using Subclasses | null | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper attacks the challenging problem of violence detection in videos.
Different from existing works focusing on combining multi-modal features, we go
one step further by adding and exploiting subclasses visually related to
violence. We enrich the MediaEval 2015 violence dataset by \emph{manually}
labeling violence videos with respect to the subclasses. Such fine-grained
annotations not only help understand what have impeded previous efforts on
learning to fuse the multi-modal features, but also enhance the generalization
ability of the learned fusion to novel test data. The new subclass based
solution, with AP of 0.303 and P100 of 0.55 on the MediaEval 2015 test set,
outperforms several state-of-the-art alternatives. Notice that our solution
does not require fine-grained annotations on the test set, so it can be
directly applied on novel and fully unlabeled videos. Interestingly, our study
shows that motion-related features, though being an essential part of previous
systems, are dispensable.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 14:32:16 GMT"
}
] | 2016-04-28T00:00:00 | [
[
"Li",
"Xirong",
""
],
[
"Huo",
"Yujia",
""
],
[
"Xu",
"Jieping",
""
],
[
"Jin",
"Qin",
""
]
] | TITLE: Detecting Violence in Video using Subclasses
ABSTRACT: This paper attacks the challenging problem of violence detection in videos.
Different from existing works focusing on combining multi-modal features, we go
one step further by adding and exploiting subclasses visually related to
violence. We enrich the MediaEval 2015 violence dataset by \emph{manually}
labeling violence videos with respect to the subclasses. Such fine-grained
annotations not only help understand what have impeded previous efforts on
learning to fuse the multi-modal features, but also enhance the generalization
ability of the learned fusion to novel test data. The new subclass based
solution, with AP of 0.303 and P100 of 0.55 on the MediaEval 2015 test set,
outperforms several state-of-the-art alternatives. Notice that our solution
does not require fine-grained annotations on the test set, so it can be
directly applied on novel and fully unlabeled videos. Interestingly, our study
shows that motion-related features, though being an essential part of previous
systems, are dispensable.
| no_new_dataset | 0.947039 |
1503.01817 | Bart Thomee | Bart Thomee and David A. Shamma and Gerald Friedland and Benjamin
Elizalde and Karl Ni and Douglas Poland and Damian Borth and Li-Jia Li | YFCC100M: The New Data in Multimedia Research | null | Communications of the ACM, 59(2), pp. 64-73, 2016 | 10.1145/2812802 | null | cs.MM cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M),
the largest public multimedia collection that has ever been released. The
dataset contains a total of 100 million media objects, of which approximately
99.2 million are photos and 0.8 million are videos, all of which carry a
Creative Commons license. Each media object in the dataset is represented by
several pieces of metadata, e.g. Flickr identifier, owner name, camera, title,
tags, geo, media source. The collection provides a comprehensive snapshot of
how photos and videos were taken, described, and shared over the years, from
the inception of Flickr in 2004 until early 2014. In this article we explain
the rationale behind its creation, as well as the implications the dataset has
for science, research, engineering, and development. We further present several
new challenges in multimedia research that can now be expanded upon with our
dataset.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 23:43:42 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2016 20:10:14 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Thomee",
"Bart",
""
],
[
"Shamma",
"David A.",
""
],
[
"Friedland",
"Gerald",
""
],
[
"Elizalde",
"Benjamin",
""
],
[
"Ni",
"Karl",
""
],
[
"Poland",
"Douglas",
""
],
[
"Borth",
"Damian",
""
],
[
"Li",
"Li-Jia",
""
]
] | TITLE: YFCC100M: The New Data in Multimedia Research
ABSTRACT: We present the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M),
the largest public multimedia collection that has ever been released. The
dataset contains a total of 100 million media objects, of which approximately
99.2 million are photos and 0.8 million are videos, all of which carry a
Creative Commons license. Each media object in the dataset is represented by
several pieces of metadata, e.g. Flickr identifier, owner name, camera, title,
tags, geo, media source. The collection provides a comprehensive snapshot of
how photos and videos were taken, described, and shared over the years, from
the inception of Flickr in 2004 until early 2014. In this article we explain
the rationale behind its creation, as well as the implications the dataset has
for science, research, engineering, and development. We further present several
new challenges in multimedia research that can now be expanded upon with our
dataset.
| new_dataset | 0.948585 |
1511.06645 | Leonid Pishchulin | Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres,
Mykhaylo Andriluka, Peter Gehler, Bernt Schiele | DeepCut: Joint Subset Partition and Labeling for Multi Person Pose
Estimation | Accepted at IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the task of articulated human pose estimation of
multiple people in real world images. We propose an approach that jointly
solves the tasks of detection and pose estimation: it infers the number of
persons in a scene, identifies occluded body parts, and disambiguates body
parts between people in close proximity to each other. This joint formulation
is in contrast to previous strategies that address the problem by first
detecting people and subsequently estimating their body pose. We propose a
partitioning and labeling formulation of a set of body-part hypotheses
generated with CNN-based part detectors. Our formulation, an instance of an
integer linear program, implicitly performs non-maximum suppression on the set
of part candidates and groups them to form configurations of body parts
respecting geometric and appearance constraints. Experiments on four different
datasets demonstrate state-of-the-art results for both single person and multi
person pose estimation. Models and code available at
http://pose.mpi-inf.mpg.de.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 15:37:55 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2016 04:26:29 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Pishchulin",
"Leonid",
""
],
[
"Insafutdinov",
"Eldar",
""
],
[
"Tang",
"Siyu",
""
],
[
"Andres",
"Bjoern",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Gehler",
"Peter",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: DeepCut: Joint Subset Partition and Labeling for Multi Person Pose
Estimation
ABSTRACT: This paper considers the task of articulated human pose estimation of
multiple people in real world images. We propose an approach that jointly
solves the tasks of detection and pose estimation: it infers the number of
persons in a scene, identifies occluded body parts, and disambiguates body
parts between people in close proximity to each other. This joint formulation
is in contrast to previous strategies that address the problem by first
detecting people and subsequently estimating their body pose. We propose a
partitioning and labeling formulation of a set of body-part hypotheses
generated with CNN-based part detectors. Our formulation, an instance of an
integer linear program, implicitly performs non-maximum suppression on the set
of part candidates and groups them to form configurations of body parts
respecting geometric and appearance constraints. Experiments on four different
datasets demonstrate state-of-the-art results for both single person and multi
person pose estimation. Models and code available at
http://pose.mpi-inf.mpg.de.
| no_new_dataset | 0.95096 |
1511.07487 | Arkaitz Zubiaga | Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi,
Peter Tolmie | Analysing How People Orient to and Spread Rumours in Social Media by
Looking at Conversational Threads | null | null | 10.1371/journal.pone.0150989 | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | As breaking news unfolds people increasingly rely on social media to stay
abreast of the latest updates. The use of social media in such situations comes
with the caveat that new information being released piecemeal may encourage
rumours, many of which remain unverified long after their point of release.
Little is known, however, about the dynamics of the life cycle of a social
media rumour. In this paper we present a methodology that has enabled us to
collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets)
associated with 9 newsworthy events. We analyse this dataset to understand how
users spread, support, or deny rumours that are later proven true or false, by
distinguishing two levels of status in a rumour life cycle i.e., before and
after its veracity status is resolved. The identification of rumours associated
with each event, as well as the tweet that resolved each rumour as true or
false, was performed by a team of journalists who tracked the events in real
time. Our study shows that rumours that are ultimately proven true tend to be
resolved faster than those that turn out to be false. Whilst one can readily
see users denying rumours once they have been debunked, users appear to be less
capable of distinguishing true from false rumours when their veracity remains
in question. In fact, we show that the prevalent tendency for users is to
support every unverified rumour. We also analyse the role of different types of
users, finding that highly reputable users such as news organisations endeavour
to post well-grounded statements, which appear to be certain and accompanied by
evidence. Nevertheless, these often prove to be unverified pieces of
information that give rise to false rumours. Our study reinforces the need for
developing robust machine learning techniques that can provide assistance for
assessing the veracity of rumours.
| [
{
"version": "v1",
"created": "Mon, 23 Nov 2015 22:09:19 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2016 14:25:44 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Feb 2016 13:30:25 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Zubiaga",
"Arkaitz",
""
],
[
"Liakata",
"Maria",
""
],
[
"Procter",
"Rob",
""
],
[
"Hoi",
"Geraldine Wong Sak",
""
],
[
"Tolmie",
"Peter",
""
]
] | TITLE: Analysing How People Orient to and Spread Rumours in Social Media by
Looking at Conversational Threads
ABSTRACT: As breaking news unfolds people increasingly rely on social media to stay
abreast of the latest updates. The use of social media in such situations comes
with the caveat that new information being released piecemeal may encourage
rumours, many of which remain unverified long after their point of release.
Little is known, however, about the dynamics of the life cycle of a social
media rumour. In this paper we present a methodology that has enabled us to
collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets)
associated with 9 newsworthy events. We analyse this dataset to understand how
users spread, support, or deny rumours that are later proven true or false, by
distinguishing two levels of status in a rumour life cycle i.e., before and
after its veracity status is resolved. The identification of rumours associated
with each event, as well as the tweet that resolved each rumour as true or
false, was performed by a team of journalists who tracked the events in real
time. Our study shows that rumours that are ultimately proven true tend to be
resolved faster than those that turn out to be false. Whilst one can readily
see users denying rumours once they have been debunked, users appear to be less
capable of distinguishing true from false rumours when their veracity remains
in question. In fact, we show that the prevalent tendency for users is to
support every unverified rumour. We also analyse the role of different types of
users, finding that highly reputable users such as news organisations endeavour
to post well-grounded statements, which appear to be certain and accompanied by
evidence. Nevertheless, these often prove to be unverified pieces of
information that give rise to false rumours. Our study reinforces the need for
developing robust machine learning techniques that can provide assistance for
assessing the veracity of rumours.
| new_dataset | 0.942454 |
1512.01979 | Antonio Cicone | Antonio Cicone, Jingfang Liu, Haomin Zhou | Hyperspectral Chemical Plume Detection Algorithms Based On
Multidimensional Iterative Filtering Decomposition | null | null | 10.1098/rsta.2015.0196 | null | math.NA cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chemicals released in the air can be extremely dangerous for human beings and
the environment. Hyperspectral images can be used to identify chemical plumes,
however the task can be extremely challenging. Assuming we know a priori that
some chemical plume, with a known frequency spectrum, has been photographed
using a hyperspectral sensor, we can use standard techniques like the so called
matched filter or adaptive cosine estimator, plus a properly chosen threshold
value, to identify the position of the chemical plume. However, due to noise
and sensor faults, the accurate identification of chemical pixels is not easy
even in this apparently simple situation. In this paper we present a
post-processing tool that, in a completely adaptive and data-driven fashion,
allows one to improve the performance of any classification method in identifying
the boundaries of a plume. This is done using the Multidimensional Iterative
Filtering (MIF) algorithm (arXiv:1411.6051, arXiv:1507.07173), which is a
non-stationary signal decomposition method like the pioneering Empirical Mode
Decomposition (EMD) method. Moreover, based on the MIF technique, we also
propose a pre-processing method that allows one to decorrelate and mean-center a
hyperspectral dataset. The Cosine Similarity measure, which often fails in
practice, appears to become a successful and outperforming classifier when
equipped with such a pre-processing method. We show some examples of the proposed
methods when applied to real life problems.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 11:06:10 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Cicone",
"Antonio",
""
],
[
"Liu",
"Jingfang",
""
],
[
"Zhou",
"Haomin",
""
]
] | TITLE: Hyperspectral Chemical Plume Detection Algorithms Based On
Multidimensional Iterative Filtering Decomposition
ABSTRACT: Chemicals released in the air can be extremely dangerous for human beings and
the environment. Hyperspectral images can be used to identify chemical plumes,
however the task can be extremely challenging. Assuming we know a priori that
some chemical plume, with a known frequency spectrum, has been photographed
using a hyperspectral sensor, we can use standard techniques like the so called
matched filter or adaptive cosine estimator, plus a properly chosen threshold
value, to identify the position of the chemical plume. However, due to noise
and sensor faults, the accurate identification of chemical pixels is not easy
even in this apparently simple situation. In this paper we present a
post-processing tool that, in a completely adaptive and data-driven fashion,
allows one to improve the performance of any classification method in identifying
the boundaries of a plume. This is done using the Multidimensional Iterative
Filtering (MIF) algorithm (arXiv:1411.6051, arXiv:1507.07173), which is a
non-stationary signal decomposition method like the pioneering Empirical Mode
Decomposition (EMD) method. Moreover, based on the MIF technique, we also
propose a pre-processing method that allows one to decorrelate and mean-center a
hyperspectral dataset. The Cosine Similarity measure, which often fails in
practice, appears to become a successful and outperforming classifier when
equipped with such a pre-processing method. We show some examples of the proposed
methods when applied to real life problems.
| no_new_dataset | 0.944893 |
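The matched filter and adaptive cosine estimator named in the abstract above are standard detectors with well-known closed forms. The following minimal NumPy sketch shows how per-pixel detection scores could be computed before any MIF-based post-processing; the function and variable names, the regularisation constant, and the use of global background statistics are illustrative assumptions and are not taken from the paper.

import numpy as np

def plume_detection_scores(pixels, target):
    """Matched filter (MF) and adaptive cosine estimator (ACE) scores.

    pixels: (N, B) array of N pixel spectra with B bands.
    target: (B,) known plume signature.
    Returns two (N,) arrays of per-pixel detection scores.
    """
    mu = pixels.mean(axis=0)                                   # background mean
    X = pixels - mu
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularised covariance
    cov_inv = np.linalg.inv(cov)
    t = target - mu

    tCt = t @ cov_inv @ t
    xCt = X @ cov_inv @ t                                      # correlation with target
    xCx = np.einsum("ij,jk,ik->i", X, cov_inv, X)              # Mahalanobis energy

    mf = xCt / np.sqrt(tCt)                                    # matched filter
    ace = xCt ** 2 / (tCt * xCx)                               # adaptive cosine estimator
    return mf, ace

# Thresholding either score gives a binary plume mask, which a post-processing
# step such as the MIF-based one described above could then refine.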
1603.01090 | Janez \v{Z}erovnik | David Kaljun, Joze Petri\v{s}i\v{c}, Janez \v{Z}erovnik | Using Newton's method to model a spatial light distribution of a LED
with attached secondary optics | submitted to Journal of Mecanical enginering (Strojni\v{s}ki vestnik,
Ljubljana) | Journal of Mechanical Engineering 62(2016)5, 307-317 | 10.5545/sv-jme.2015.3234 | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In design of optical systems based on LED (Light emitting diode) technology,
a crucial task is to handle the unstructured data describing properties of
optical elements in standard formats. This leads to the problem of data fitting
within an appropriate model. Newton's method is used as an upgrade of the most
promising previously developed discrete optimization heuristics, showing
improvement in both performance and solution quality. Experiments also
indicate that combining an algorithm that finds promising initial
solutions, used as a preprocessor, with Newton's method may be a winning idea, at least
on some datasets of instances.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 13:23:24 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Kaljun",
"David",
""
],
[
"Petrišič",
"Joze",
""
],
[
"Žerovnik",
"Janez",
""
]
] | TITLE: Using Newton's method to model a spatial light distribution of a LED
with attached secondary optics
ABSTRACT: In design of optical systems based on LED (Light emitting diode) technology,
a crucial task is to handle the unstructured data describing properties of
optical elements in standard formats. This leads to the problem of data fitting
within an appropriate model. Newton's method is used as an upgrade of the most
promising previously developed discrete optimization heuristics, showing
improvement in both performance and solution quality. Experiments also
indicate that combining an algorithm that finds promising initial
solutions, used as a preprocessor, with Newton's method may be a winning idea, at least
on some datasets of instances.
| no_new_dataset | 0.941708 |
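As a rough illustration of the data-fitting step described above, the sketch below applies Newton's method to a generic least-squares model-fitting objective, with gradient and Hessian approximated by finite differences. The function names, the ridge term, and the example model in the comments are assumptions for illustration only and do not reproduce the paper's photometric model or the discrete heuristics it upgrades.

import numpy as np

def newton_fit(residual_fn, theta0, n_iter=20, eps=1e-5):
    """Fit model parameters by Newton's method on a least-squares objective.

    residual_fn(theta) -> vector of residuals model(x_i; theta) - y_i.
    Gradient and Hessian are approximated with central finite differences,
    so any smooth parametric model description can be plugged in.
    """
    theta = np.asarray(theta0, dtype=float)
    n = theta.size

    def objective(t):
        r = residual_fn(t)
        return 0.5 * float(np.dot(r, r))

    for _ in range(n_iter):
        grad = np.zeros(n)
        hess = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = eps
            grad[i] = (objective(theta + ei) - objective(theta - ei)) / (2 * eps)
            for j in range(i, n):
                ej = np.zeros(n); ej[j] = eps
                hess[i, j] = hess[j, i] = (
                    objective(theta + ei + ej) - objective(theta + ei - ej)
                    - objective(theta - ei + ej) + objective(theta - ei - ej)
                ) / (4 * eps * eps)
        # a small ridge keeps the Newton system solvable in flat regions
        theta = theta - np.linalg.solve(hess + 1e-9 * np.eye(n), grad)
    return theta

# Hypothetical usage: fit an intensity model a * cos(phi) ** c to measured
# angle/intensity arrays (phi, y):
#   theta = newton_fit(lambda t: t[0] * np.cos(phi) ** t[1] - y, [1.0, 2.0])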
1603.07236 | Dan Stowell | Dan Stowell, Veronica Morfi, Lisa F. Gill | Individual identity in songbirds: signal representations and metric
learning for locating the information in complex corvid calls | null | null | null | null | cs.SD | http://creativecommons.org/licenses/by/4.0/ | Bird calls range from simple tones to rich dynamic multi-harmonic structures.
The more complex calls are very poorly understood at present, such as those of
the scientifically important corvid family (jackdaws, crows, ravens, etc.).
Individual birds can recognise familiar individuals from calls, but where in
the signal is this identity encoded? We studied the question by applying a
combination of feature representations to a dataset of jackdaw calls, including
linear predictive coding (LPC) and adaptive discrete Fourier transform (aDFT).
We demonstrate through a classification paradigm that we can strongly
outperform a standard spectrogram representation for identifying individuals,
and we apply metric learning to determine which time-frequency regions
contribute most strongly to robust individual identification. Computational
methods can help to direct our search for understanding of these complex
biological signals.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 15:29:39 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2016 16:32:24 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Stowell",
"Dan",
""
],
[
"Morfi",
"Veronica",
""
],
[
"Gill",
"Lisa F.",
""
]
] | TITLE: Individual identity in songbirds: signal representations and metric
learning for locating the information in complex corvid calls
ABSTRACT: Bird calls range from simple tones to rich dynamic multi-harmonic structures.
The more complex calls are very poorly understood at present, such as those of
the scientifically important corvid family (jackdaws, crows, ravens, etc.).
Individual birds can recognise familiar individuals from calls, but where in
the signal is this identity encoded? We studied the question by applying a
combination of feature representations to a dataset of jackdaw calls, including
linear predictive coding (LPC) and adaptive discrete Fourier transform (aDFT).
We demonstrate through a classification paradigm that we can strongly
outperform a standard spectrogram representation for identifying individuals,
and we apply metric learning to determine which time-frequency regions
contribute most strongly to robust individual identification. Computational
methods can help to direct our search for understanding of these complex
biological signals.
| no_new_dataset | 0.906983 |
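Of the feature representations listed above, LPC is compact enough to sketch. The code below assumes the standard autocorrelation (Yule-Walker) formulation with illustrative frame and hop sizes; it does not reproduce the paper's aDFT features or its metric-learning stage, and the sign convention of the returned coefficients may differ from other LPC implementations.

import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_frame(frame, order=12):
    """LPC coefficients of one frame via the autocorrelation (Yule-Walker)
    method: solve the Toeplitz system R a = r."""
    frame = np.asarray(frame, dtype=float) * np.hanning(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    r[0] += 1e-8                     # guard against singular systems on silence
    return solve_toeplitz(r[:order], r[1:order + 1])

def lpc_features(signal, frame_len=1024, hop=512, order=12):
    """Stack per-frame LPC vectors into an (n_frames, order) feature matrix."""
    starts = range(0, len(signal) - frame_len, hop)
    return np.vstack([lpc_frame(signal[s:s + frame_len], order) for s in starts])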
1604.07528 | Tong Xiao | Tong Xiao, Hongsheng Li, Wanli Ouyang, Xiaogang Wang | Learning Deep Feature Representations with Domain Guided Dropout for
Person Re-identification | To appear in CVPR2016 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Learning generic and robust feature representations with data from multiple
domains for the same problem is of great value, especially for the problems
that have multiple datasets but none of them are large enough to provide
abundant data variations. In this work, we present a pipeline for learning deep
feature representations from multiple domains with Convolutional Neural
Networks (CNNs). When training a CNN with data from all the domains, some
neurons learn representations shared across several domains, while some others
are effective only for a specific one. Based on this important observation, we
propose a Domain Guided Dropout algorithm to improve the feature learning
procedure. Experiments show the effectiveness of our pipeline and the proposed
algorithm. Our methods on the person re-identification problem outperform
state-of-the-art methods on multiple datasets by large margins.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2016 05:39:53 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Xiao",
"Tong",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Learning Deep Feature Representations with Domain Guided Dropout for
Person Re-identification
ABSTRACT: Learning generic and robust feature representations with data from multiple
domains for the same problem is of great value, especially for the problems
that have multiple datasets but none of them are large enough to provide
abundant data variations. In this work, we present a pipeline for learning deep
feature representations from multiple domains with Convolutional Neural
Networks (CNNs). When training a CNN with data from all the domains, some
neurons learn representations shared across several domains, while some others
are effective only for a specific one. Based on this important observation, we
propose a Domain Guided Dropout algorithm to improve the feature learning
procedure. Experiments show the effectiveness of our pipeline and the proposed
algorithm. Our methods on the person re-identification problem outperform
state-of-the-art methods on multiple datasets by large margins.
| no_new_dataset | 0.950411 |
1604.07788 | Dong Zhang | Dong Zhang and Mubarak Shah | A Framework for Human Pose Estimation in Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a method to estimate a sequence of human poses in
unconstrained videos. We aim to demonstrate that by using temporal information,
the human pose estimation results can be improved over image based pose
estimation methods. In contrast to the commonly employed graph optimization
formulation, which is NP-hard and needs approximate solutions, we formulate
this problem into a unified two stage tree-based optimization problem for which
an efficient and exact solution exists. Although the proposed method finds an
exact solution, it does not sacrifice the ability to model the spatial and
temporal constraints between body parts in the frames; in fact it models the
{\em symmetric} parts better than the existing methods. The proposed method is
based on two main ideas: `Abstraction' and `Association' to enforce the intra-
and inter-frame body part constraints without inducing extra computational
complexity to the polynomial time solution. Using the idea of `Abstraction', a
new concept of `abstract body part' is introduced to conceptually combine the
symmetric body parts and model them in the tree based body part structure.
Using the idea of `Association', the optimal tracklets are generated for each
abstract body part, in order to enforce the spatiotemporal constraints between
body parts in adjacent frames. A sequence of the best poses is inferred from
the abstract body part tracklets through the tree-based optimization. Finally,
the poses are refined by limb alignment and refinement schemes. We evaluated
the proposed method on three publicly available video based human pose
estimation datasets, and obtained dramatically improved performance compared to
the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2016 18:45:25 GMT"
}
] | 2016-04-27T00:00:00 | [
[
"Zhang",
"Dong",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: A Framework for Human Pose Estimation in Videos
ABSTRACT: In this paper, we present a method to estimate a sequence of human poses in
unconstrained videos. We aim to demonstrate that by using temporal information,
the human pose estimation results can be improved over image based pose
estimation methods. In contrast to the commonly employed graph optimization
formulation, which is NP-hard and needs approximate solutions, we formulate
this problem into a unified two stage tree-based optimization problem for which
an efficient and exact solution exists. Although the proposed method finds an
exact solution, it does not sacrifice the ability to model the spatial and
temporal constraints between body parts in the frames; in fact it models the
{\em symmetric} parts better than the existing methods. The proposed method is
based on two main ideas: `Abstraction' and `Association' to enforce the intra-
and inter-frame body part constraints without inducing extra computational
complexity to the polynomial time solution. Using the idea of `Abstraction', a
new concept of `abstract body part' is introduced to conceptually combine the
symmetric body parts and model them in the tree based body part structure.
Using the idea of `Association', the optimal tracklets are generated for each
abstract body part, in order to enforce the spatiotemporal constraints between
body parts in adjacent frames. A sequence of the best poses is inferred from
the abstract body part tracklets through the tree-based optimization. Finally,
the poses are refined by limb alignment and refinement schemes. We evaluated
the proposed method on three publicly available video based human pose
estimation datasets, and obtained dramatically improved performance compared to
the state-of-the-art methods.
| no_new_dataset | 0.951323 |
1506.02565 | Seungjin Choi | Yong-Deok Kim, Taewoong Jang, Bohyung Han, and Seungjin Choi | Learning to Select Pre-Trained Deep Representations with Bayesian
Evidence Framework | Appearing in CVPR-2016 (oral presentation) | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Bayesian evidence framework to facilitate transfer learning from
pre-trained deep convolutional neural networks (CNNs). Our framework is
formulated on top of a least squares SVM (LS-SVM) classifier, which is simple
and fast in both training and testing, and achieves competitive performance in
practice. The regularization parameters in LS-SVM are estimated automatically
without grid search and cross-validation by maximizing evidence, which is a
useful measure to select the best performing CNN out of multiple candidates for
transfer learning; the evidence is optimized efficiently by employing Aitken's
delta-squared process, which accelerates convergence of fixed point update. The
proposed Bayesian evidence framework also provides a good solution to identify
the best ensemble of heterogeneous CNNs through a greedy algorithm. Our
Bayesian evidence framework for transfer learning is tested on 12 visual
recognition datasets and illustrates the state-of-the-art performance
consistently in terms of prediction accuracy and modeling efficiency.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 15:56:26 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2015 18:57:35 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Dec 2015 03:40:28 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Apr 2016 01:35:31 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Kim",
"Yong-Deok",
""
],
[
"Jang",
"Taewoong",
""
],
[
"Han",
"Bohyung",
""
],
[
"Choi",
"Seungjin",
""
]
] | TITLE: Learning to Select Pre-Trained Deep Representations with Bayesian
Evidence Framework
ABSTRACT: We propose a Bayesian evidence framework to facilitate transfer learning from
pre-trained deep convolutional neural networks (CNNs). Our framework is
formulated on top of a least squares SVM (LS-SVM) classifier, which is simple
and fast in both training and testing, and achieves competitive performance in
practice. The regularization parameters in LS-SVM are estimated automatically
without grid search and cross-validation by maximizing evidence, which is a
useful measure to select the best performing CNN out of multiple candidates for
transfer learning; the evidence is optimized efficiently by employing Aitken's
delta-squared process, which accelerates convergence of fixed point update. The
proposed Bayesian evidence framework also provides a good solution to identify
the best ensemble of heterogeneous CNNs through a greedy algorithm. Our
Bayesian evidence framework for transfer learning is tested on 12 visual
recognition datasets and illustrates the state-of-the-art performance
consistently in terms of prediction accuracy and modeling efficiency.
| no_new_dataset | 0.95096 |
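The LS-SVM base classifier used in the framework above has a well-known closed-form solution, sketched below with the regularization parameter gamma fixed by hand; the evidence maximisation that tunes gamma and selects among CNN representations is the paper's actual contribution and is not reproduced here. All names are illustrative.

import numpy as np

def lssvm_train(K, y, gamma=1.0):
    """Closed-form training of a least-squares SVM classifier.

    K: (N, N) kernel matrix of the training data; y: (N,) labels in {-1, +1}.
    Solves the standard LS-SVM linear system for the bias b and dual weights alpha.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]            # alpha, b

def lssvm_predict(K_test, alpha, b):
    """K_test: (M, N) kernel between test and training points."""
    return np.sign(K_test @ alpha + b)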
1510.02899 | Xirong Li | Masoud Mazloom and Xirong Li and Cees G. M. Snoek | TagBook: A Semantic Video Representation without Supervision for Event
Detection | accepted for publication as a regular paper in the IEEE Transactions
on Multimedia | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of event detection in video for scenarios where only
few, or even zero examples are available for training. For this challenging
setting, the prevailing solutions in the literature rely on a semantic video
representation obtained from thousands of pre-trained concept detectors.
Different from existing work, we propose a new semantic video representation
that is based on freely available social tagged videos only, without the need
for training any intermediate concept detectors. We introduce a simple
algorithm that propagates tags from a video's nearest neighbors, similar in
spirit to the ones used for image retrieval, but redesign it for video event
detection by including video source set refinement and varying the video tag
assignment. We call our approach TagBook and study its construction,
descriptiveness and detection performance on the TRECVID 2013 and 2014
multimedia event detection datasets and the Columbia Consumer Video dataset.
Despite its simple nature, the proposed TagBook video representation is
remarkably effective for few-example and zero-example event detection, even
outperforming very recent state-of-the-art alternatives building on supervised
representations.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2015 09:28:56 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Apr 2016 13:23:03 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Mazloom",
"Masoud",
""
],
[
"Li",
"Xirong",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: TagBook: A Semantic Video Representation without Supervision for Event
Detection
ABSTRACT: We consider the problem of event detection in video for scenarios where only
few, or even zero examples are available for training. For this challenging
setting, the prevailing solutions in the literature rely on a semantic video
representation obtained from thousands of pre-trained concept detectors.
Different from existing work, we propose a new semantic video representation
that is based on freely available social tagged videos only, without the need
for training any intermediate concept detectors. We introduce a simple
algorithm that propagates tags from a video's nearest neighbors, similar in
spirit to the ones used for image retrieval, but redesign it for video event
detection by including video source set refinement and varying the video tag
assignment. We call our approach TagBook and study its construction,
descriptiveness and detection performance on the TRECVID 2013 and 2014
multimedia event detection datasets and the Columbia Consumer Video dataset.
Despite its simple nature, the proposed TagBook video representation is
remarkably effective for few-example and zero-example event detection, even
outperforming very recent state-of-the-art alternatives building on supervised
representations.
| no_new_dataset | 0.950595 |
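A nearest-neighbour tag-propagation step of the kind described above can be sketched in a few lines. Cosine similarity, the neighbourhood size, and the similarity-weighted voting below are illustrative assumptions rather than the paper's exact source-set refinement and tag-assignment variants; the votes over a fixed tag vocabulary would form the TagBook-style representation.

import numpy as np
from collections import Counter

def propagate_tags(test_feature, source_features, source_tags, k=50):
    """Similarity-weighted tag votes from the k nearest socially tagged videos.

    source_features: (N, D) features of the tagged source videos.
    source_tags: list of N tag lists.  Returns a Counter mapping tag -> vote.
    """
    sims = source_features @ test_feature
    sims = sims / (np.linalg.norm(source_features, axis=1)
                   * np.linalg.norm(test_feature) + 1e-12)
    votes = Counter()
    for idx in np.argsort(-sims)[:k]:
        for tag in source_tags[idx]:
            votes[tag] += float(sims[idx])
    return votes

# Evaluating the votes over a fixed tag vocabulary gives a TagBook-like vector
# that can then be matched against an event's textual description.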
1511.05202 | Sean Welleck | Sean J. Welleck | Efficient AUC Optimization for Information Ranking Applications | 12 pages | ECIR 2016, LNCS 9626, pp.159-170, 2016 | 10.1007/978-3-319-30671-1_12 | null | cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adequate evaluation of an information retrieval system to estimate future
performance is a crucial task. Area under the ROC curve (AUC) is widely used to
evaluate the generalization of a retrieval system. However, the objective
function optimized in many retrieval systems is the error rate and not the AUC
value. This paper provides an efficient and effective non-linear approach to
optimize AUC using additive regression trees, with a special emphasis on the
use of multi-class AUC (MAUC) because multiple relevance levels are widely used
in many ranking applications. Compared to a conventional linear approach, the
performance of the non-linear approach is comparable on binary-relevance
benchmark datasets and is better on multi-relevance benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2015 22:12:00 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Nov 2015 21:28:00 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Apr 2016 23:42:09 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Welleck",
"Sean J.",
""
]
] | TITLE: Efficient AUC Optimization for Information Ranking Applications
ABSTRACT: Adequate evaluation of an information retrieval system to estimate future
performance is a crucial task. Area under the ROC curve (AUC) is widely used to
evaluate the generalization of a retrieval system. However, the objective
function optimized in many retrieval systems is the error rate and not the AUC
value. This paper provides an efficient and effective non-linear approach to
optimize AUC using additive regression trees, with a special emphasis on the
use of multi-class AUC (MAUC) because multiple relevance levels are widely used
in many ranking applications. Compared to a conventional linear approach, the
performance of the non-linear approach is comparable on binary-relevance
benchmark datasets and is better on multi-relevance benchmark datasets.
| no_new_dataset | 0.95018 |
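The quantity being optimised above, the area under the ROC curve, can be computed directly from ranking scores through the Wilcoxon-Mann-Whitney statistic; a small sketch (ignoring tied scores) follows, while the additive-regression-tree optimisation itself is not reproduced.

import numpy as np

def binary_auc(scores, labels):
    """Area under the ROC curve from ranking scores (Wilcoxon-Mann-Whitney).

    Equals the fraction of (relevant, non-relevant) pairs ranked correctly;
    tied scores would need average ranks, omitted here for brevity.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    rank_sum = ranks[labels == 1].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)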
1511.05641 | Tianqi Chen | Tianqi Chen and Ian Goodfellow and Jonathon Shlens | Net2Net: Accelerating Learning via Knowledge Transfer | ICLR 2016 submission | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce techniques for rapidly transferring the information stored in
one neural net into another neural net. The main purpose is to accelerate the
training of a significantly larger neural net. During real-world workflows, one
often trains very many different neural networks during the experimentation and
design process. This is a wasteful process in which each new model is trained
from scratch. Our Net2Net technique accelerates the experimentation process by
instantaneously transferring the knowledge from a previous network to each new
deeper or wider network. Our techniques are based on the concept of
function-preserving transformations between neural network specifications. This
differs from previous approaches to pre-training that altered the function
represented by a neural net when adding layers to it. Using our knowledge
transfer mechanism to add depth to Inception modules, we demonstrate a new
state of the art accuracy rating on the ImageNet dataset.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 02:09:20 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Nov 2015 19:07:40 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jan 2016 22:54:48 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Apr 2016 23:14:39 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Chen",
"Tianqi",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Shlens",
"Jonathon",
""
]
] | TITLE: Net2Net: Accelerating Learning via Knowledge Transfer
ABSTRACT: We introduce techniques for rapidly transferring the information stored in
one neural net into another neural net. The main purpose is to accelerate the
training of a significantly larger neural net. During real-world workflows, one
often trains very many different neural networks during the experimentation and
design process. This is a wasteful process in which each new model is trained
from scratch. Our Net2Net technique accelerates the experimentation process by
instantaneously transferring the knowledge from a previous network to each new
deeper or wider network. Our techniques are based on the concept of
function-preserving transformations between neural network specifications. This
differs from previous approaches to pre-training that altered the function
represented by a neural net when adding layers to it. Using our knowledge
transfer mechanism to add depth to Inception modules, we demonstrate a new
state of the art accuracy rating on the ImageNet dataset.
| no_new_dataset | 0.950457 |
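The function-preserving idea can be illustrated for a single fully connected layer: extra units are copies of existing ones, and the outgoing weights of each copied unit are split so the widened network initially computes the same function. The sketch below is a simplified interpretation of the widening operation with illustrative names, not the authors' implementation.

import numpy as np

def net2wider(W1, b1, W2, new_width, seed=None):
    """Function-preserving widening of one fully connected hidden layer.

    W1: (n_in, h) incoming weights, b1: (h,) biases, W2: (h, n_out) outgoing
    weights; new_width >= h. Extra units replicate randomly chosen existing
    units, and each replicated unit's outgoing weights are divided by its
    replication count, so the widened network computes the same function.
    """
    rng = np.random.default_rng(seed)
    h = W1.shape[1]
    mapping = np.concatenate([np.arange(h),
                              rng.integers(0, h, size=new_width - h)])
    counts = np.bincount(mapping, minlength=h)       # copies of each original unit

    W1_new = W1[:, mapping]
    b1_new = b1[mapping]
    W2_new = W2[mapping, :] / counts[mapping, None]  # split outgoing weights
    return W1_new, b1_new, W2_new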
1511.06343 | Ilya Loshchilov | Ilya Loshchilov and Frank Hutter | Online Batch Selection for Faster Training of Neural Networks | Workshop paper at ICLR 2016 | null | null | null | cs.LG cs.NE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are commonly trained using stochastic non-convex
optimization procedures, which are driven by gradient information estimated on
fractions (batches) of the dataset. While it is commonly accepted that batch
size is an important parameter for offline tuning, the benefits of online
selection of batches remain poorly understood. We investigate online batch
selection strategies for two state-of-the-art methods of stochastic
gradient-based optimization, AdaDelta and Adam. As the loss function to be
minimized for the whole dataset is an aggregation of loss functions of
individual datapoints, intuitively, datapoints with the greatest loss should be
considered (selected in a batch) more frequently. However, the limitations of
this intuition and the proper control of the selection pressure over time are
open questions. We propose a simple strategy where all datapoints are ranked
w.r.t. their latest known loss value and the probability to be selected decays
exponentially as a function of rank. Our experimental results on the MNIST
dataset suggest that selecting batches speeds up both AdaDelta and Adam by a
factor of about 5.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 20:24:09 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jan 2016 22:15:38 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Jan 2016 13:06:15 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Apr 2016 14:00:21 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Loshchilov",
"Ilya",
""
],
[
"Hutter",
"Frank",
""
]
] | TITLE: Online Batch Selection for Faster Training of Neural Networks
ABSTRACT: Deep neural networks are commonly trained using stochastic non-convex
optimization procedures, which are driven by gradient information estimated on
fractions (batches) of the dataset. While it is commonly accepted that batch
size is an important parameter for offline tuning, the benefits of online
selection of batches remain poorly understood. We investigate online batch
selection strategies for two state-of-the-art methods of stochastic
gradient-based optimization, AdaDelta and Adam. As the loss function to be
minimized for the whole dataset is an aggregation of loss functions of
individual datapoints, intuitively, datapoints with the greatest loss should be
considered (selected in a batch) more frequently. However, the limitations of
this intuition and the proper control of the selection pressure over time are
open questions. We propose a simple strategy where all datapoints are ranked
w.r.t. their latest known loss value and the probability to be selected decays
exponentially as a function of rank. Our experimental results on the MNIST
dataset suggest that selecting batches speeds up both AdaDelta and Adam by a
factor of about 5.
| no_new_dataset | 0.949809 |
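The selection strategy described above, where a datapoint's probability of being picked decays exponentially with its loss rank, can be sketched as follows; the parameterisation of the selection pressure s_e and the sampling without replacement are illustrative choices.

import numpy as np

def select_batch(latest_losses, batch_size, s_e=100.0, seed=None):
    """Sample a training batch with probability decaying exponentially in loss rank.

    latest_losses: (N,) last known loss of every training point; s_e controls
    the selection pressure (the top-ranked point is roughly s_e times more
    likely to be drawn than the bottom-ranked one).
    """
    rng = np.random.default_rng(seed)
    n = len(latest_losses)
    order = np.argsort(-np.asarray(latest_losses))   # rank 0 = largest loss
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(n)
    probs = np.exp(-np.log(s_e) * ranks / n)         # exponential decay with rank
    probs /= probs.sum()
    return rng.choice(n, size=batch_size, replace=False, p=probs)

# After each update, the losses of the points just used are refreshed and the
# ranking is recomputed (periodically in practice) before drawing the next batch.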
1512.08183 | Bofang Li | Bofang Li, Tao Liu, Xiaoyong Du, Deyuan Zhang, Zhe Zhao | Learning Document Embeddings by Predicting N-grams for Sentiment
Classification of Long Movie Reviews | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the loss of semantic information, bag-of-ngram based methods still
achieve state-of-the-art results for tasks such as sentiment classification of
long movie reviews. Many document embeddings methods have been proposed to
capture semantics, but they still can't outperform bag-of-ngram based methods
on this task. In this paper, we modify the architecture of the recently
proposed Paragraph Vector, allowing it to learn document vectors by predicting
not only words, but n-gram features as well. Our model is able to capture both
semantics and word order in documents while keeping the expressive power of
learned vectors. Experimental results on the IMDB movie review dataset show that
our model outperforms previous deep learning models and bag-of-ngram based
models due to the above advantages. More robust results are also obtained when
our model is combined with other models. The source code of our model will
also be published together with this paper.
| [
{
"version": "v1",
"created": "Sun, 27 Dec 2015 08:12:53 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Feb 2016 09:03:13 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Mar 2016 10:54:47 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Apr 2016 14:21:56 GMT"
},
{
"version": "v5",
"created": "Sat, 23 Apr 2016 16:00:48 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Li",
"Bofang",
""
],
[
"Liu",
"Tao",
""
],
[
"Du",
"Xiaoyong",
""
],
[
"Zhang",
"Deyuan",
""
],
[
"Zhao",
"Zhe",
""
]
] | TITLE: Learning Document Embeddings by Predicting N-grams for Sentiment
Classification of Long Movie Reviews
ABSTRACT: Despite the loss of semantic information, bag-of-ngram based methods still
achieve state-of-the-art results for tasks such as sentiment classification of
long movie reviews. Many document embeddings methods have been proposed to
capture semantics, but they still can't outperform bag-of-ngram based methods
on this task. In this paper, we modify the architecture of the recently
proposed Paragraph Vector, allowing it to learn document vectors by predicting
not only words, but n-gram features as well. Our model is able to capture both
semantics and word order in documents while keeping the expressive power of
learned vectors. Experimental results on the IMDB movie review dataset show that
our model outperforms previous deep learning models and bag-of-ngram based
models due to the above advantages. More robust results are also obtained when
our model is combined with other models. The source code of our model will
also be published together with this paper.
| no_new_dataset | 0.943504 |
1602.04259 | Viktoriya Krakovna | Viktoriya Krakovna, Moshe Looks | A Minimalistic Approach to Sum-Product Network Learning for Real
Applications | Accepted to ICLR 2016 workshop track | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sum-Product Networks (SPNs) are a class of expressive yet tractable
hierarchical graphical models. LearnSPN is a structure learning algorithm for
SPNs that uses hierarchical co-clustering to simultaneously identify similar
entities and similar features. The original LearnSPN algorithm assumes that all
the variables are discrete and there is no missing data. We introduce a
practical, simplified version of LearnSPN, MiniSPN, that runs faster and can
handle missing data and heterogeneous features common in real applications. We
demonstrate the performance of MiniSPN on standard benchmark datasets and on
two datasets from Google's Knowledge Graph exhibiting high missingness rates
and a mix of discrete and continuous features.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 23:11:05 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2016 22:37:52 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Apr 2016 23:38:43 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Krakovna",
"Viktoriya",
""
],
[
"Looks",
"Moshe",
""
]
] | TITLE: A Minimalistic Approach to Sum-Product Network Learning for Real
Applications
ABSTRACT: Sum-Product Networks (SPNs) are a class of expressive yet tractable
hierarchical graphical models. LearnSPN is a structure learning algorithm for
SPNs that uses hierarchical co-clustering to simultaneously identify similar
entities and similar features. The original LearnSPN algorithm assumes that all
the variables are discrete and there is no missing data. We introduce a
practical, simplified version of LearnSPN, MiniSPN, that runs faster and can
handle missing data and heterogeneous features common in real applications. We
demonstrate the performance of MiniSPN on standard benchmark datasets and on
two datasets from Google's Knowledge Graph exhibiting high missingness rates
and a mix of discrete and continuous features.
| no_new_dataset | 0.949856 |
1604.02898 | Jubin Johnson | Jubin Johnson, Ehsan Shahrian Varnousfaderani, Hisham Cholakkal, and
Deepu Rajan | Sparse Coding for Alpha Matting | To appear in IEEE Transactions on Image Processing | null | 10.1109/TIP.2016.2555705 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing color sampling based alpha matting methods use the compositing
equation to estimate alpha at a pixel from pairs of foreground (F) and
background (B) samples. The quality of the matte depends on the selected (F,B)
pairs. In this paper, the matting problem is reinterpreted as a sparse coding
of pixel features, wherein the sum of the codes gives the estimate of the alpha
matte from a set of unpaired F and B samples. A non-parametric probabilistic
segmentation provides a certainty measure on the pixel belonging to foreground
or background, based on which a dictionary is formed for use in sparse coding.
By removing the restriction to conform to (F,B) pairs, this method allows for
better alpha estimation from multiple F and B samples. The same framework is
extended to videos, where the requirement of temporal coherence is handled
effectively. Here, the dictionary is formed by samples from multiple frames. A
multi-frame graph model, as opposed to a single image as for image matting, is
proposed that can be solved efficiently in closed form. Quantitative and
qualitative evaluations on a benchmark dataset are provided to show that the
proposed method outperforms current state-of-the-art in image and video
matting.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 11:48:18 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Johnson",
"Jubin",
""
],
[
"Varnousfaderani",
"Ehsan Shahrian",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Rajan",
"Deepu",
""
]
] | TITLE: Sparse Coding for Alpha Matting
ABSTRACT: Existing color sampling based alpha matting methods use the compositing
equation to estimate alpha at a pixel from pairs of foreground (F) and
background (B) samples. The quality of the matte depends on the selected (F,B)
pairs. In this paper, the matting problem is reinterpreted as a sparse coding
of pixel features, wherein the sum of the codes gives the estimate of the alpha
matte from a set of unpaired F and B samples. A non-parametric probabilistic
segmentation provides a certainty measure on the pixel belonging to foreground
or background, based on which a dictionary is formed for use in sparse coding.
By removing the restriction to conform to (F,B) pairs, this method allows for
better alpha estimation from multiple F and B samples. The same framework is
extended to videos, where the requirement of temporal coherence is handled
effectively. Here, the dictionary is formed by samples from multiple frames. A
multi-frame graph model, as opposed to a single image as for image matting, is
proposed that can be solved efficiently in closed form. Quantitative and
qualitative evaluations on a benchmark dataset are provided to show that the
proposed method outperforms current state-of-the-art in image and video
matting.
| no_new_dataset | 0.944587 |
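The core idea above, estimating alpha from sparse codes over unpaired foreground and background samples, can be sketched with scikit-learn's generic sparse coder. The joint dictionary, the positivity constraint, and the code-mass ratio below are assumptions for illustration and differ from the paper's exact formulation and its probabilistic segmentation step.

import numpy as np
from sklearn.decomposition import SparseCoder

def estimate_alpha(pixel_feat, fg_samples, bg_samples, lam=0.1):
    """Alpha at one pixel as the share of sparse-code mass on foreground atoms.

    fg_samples: (Nf, D) foreground colour/feature samples; bg_samples: (Nb, D)
    background samples. The pixel is coded over the joint, unpaired dictionary.
    """
    D = np.vstack([fg_samples, bg_samples]).astype(float)
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
    coder = SparseCoder(dictionary=D, transform_algorithm="lasso_cd",
                        transform_alpha=lam, positive_code=True)
    code = coder.transform(np.asarray(pixel_feat, dtype=float)[None, :])[0]
    fg_mass = code[:len(fg_samples)].sum()
    return float(np.clip(fg_mass / (code.sum() + 1e-12), 0.0, 1.0))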
1604.05242 | Dinesh Govindaraj | Dinesh Govindaraj | Can Boosting with SVM as Weak Learners Help? | Work done in 2009 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object recognition in images involves identifying objects with partial
occlusions, viewpoint changes, varying illumination, cluttered backgrounds.
Recent work in object recognition uses machine learning techniques SVM-KNN,
Local Ensemble Kernel Learning, Multiple Kernel Learning. In this paper, we
want to utilize SVM as weak learners in AdaBoost. Experiments are done with
classifiers like nearest neighbor, k-nearest neighbor, Support vector
machines, Local learning (SVM-KNN) and AdaBoost. Models use Scale-Invariant
descriptors and Pyramid histogram of gradient descriptors. AdaBoost is
trained with a set of weak classifiers as SVMs, each with a kernel distance function
on different descriptors. Results show that AdaBoost with SVM outperforms other
methods on an Object Categorization dataset.
| [
{
"version": "v1",
"created": "Mon, 18 Apr 2016 17:05:00 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2016 23:03:27 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Govindaraj",
"Dinesh",
""
]
] | TITLE: Can Boosting with SVM as Weak Learners Help?
ABSTRACT: Object recognition in images involves identifying objects with partial
occlusions, viewpoint changes, varying illumination, cluttered backgrounds.
Recent work in object recognition uses machine learning techniques SVM-KNN,
Local Ensemble Kernel Learning, Multiple Kernel Learning. In this paper, we
want to utilize SVM as weak learners in AdaBoost. Experiments are done with
classifiers like nearest neighbor, k-nearest neighbor, Support vector
machines, Local learning (SVM-KNN) and AdaBoost. Models use Scale-Invariant
descriptors and Pyramid histogram of gradient descriptors. AdaBoost is
trained with a set of weak classifiers as SVMs, each with a kernel distance function
on different descriptors. Results show that AdaBoost with SVM outperforms other
methods on an Object Categorization dataset.
| no_new_dataset | 0.948489 |
1604.06832 | S Shankar | Sukrit Shankar, Duncan Robertson, Yani Ioannou, Antonio Criminisi,
Roberto Cipolla | Refining Architectures of Deep Convolutional Neural Networks | 9 pages, 6 figures, CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Convolutional Neural Networks (CNNs) have recently evinced immense
success for various image recognition tasks. However, a question of paramount
importance is somewhat unanswered in deep learning research - is the selected
CNN optimal for the dataset in terms of accuracy and model size? In this paper,
we intend to answer this question and introduce a novel strategy that alters
the architecture of a given CNN for a specified dataset, to potentially enhance
the original accuracy while possibly reducing the model size. We use two
operations for architecture refinement, viz. stretching and symmetrical
splitting. Our procedure starts with a pre-trained CNN for a given dataset, and
optimally decides the stretch and split factors across the network to refine
the architecture. We empirically demonstrate the necessity of the two
operations. We evaluate our approach on two natural scenes attributes datasets,
SUN Attributes and CAMIT-NSAD, with architectures of GoogleNet and VGG-11, that
are quite contrasting in their construction. We justify our choice of datasets,
and show that they are interestingly distinct from each other, and together
pose a challenge to our architectural refinement algorithm. Our results
substantiate the usefulness of the proposed method.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 22:39:55 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Shankar",
"Sukrit",
""
],
[
"Robertson",
"Duncan",
""
],
[
"Ioannou",
"Yani",
""
],
[
"Criminisi",
"Antonio",
""
],
[
"Cipolla",
"Roberto",
""
]
] | TITLE: Refining Architectures of Deep Convolutional Neural Networks
ABSTRACT: Deep Convolutional Neural Networks (CNNs) have recently evinced immense
success for various image recognition tasks. However, a question of paramount
importance is somewhat unanswered in deep learning research - is the selected
CNN optimal for the dataset in terms of accuracy and model size? In this paper,
we intend to answer this question and introduce a novel strategy that alters
the architecture of a given CNN for a specified dataset, to potentially enhance
the original accuracy while possibly reducing the model size. We use two
operations for architecture refinement, viz. stretching and symmetrical
splitting. Our procedure starts with a pre-trained CNN for a given dataset, and
optimally decides the stretch and split factors across the network to refine
the architecture. We empirically demonstrate the necessity of the two
operations. We evaluate our approach on two natural scenes attributes datasets,
SUN Attributes and CAMIT-NSAD, with architectures of GoogleNet and VGG-11, that
are quite contrasting in their construction. We justify our choice of datasets,
and show that they are interestingly distinct from each other, and together
pose a challenge to our architectural refinement algorithm. Our results
substantiate the usefulness of the proposed method.
| no_new_dataset | 0.948106 |
1604.06877 | Shangxuan Tian | Shangxuan Tian, Yifeng Pan, Chang Huang, Shijian Lu, Kai Yu, and Chew
Lim Tan | Text Flow: A Unified Text Detection System in Natural Scene Images | 9 pages, ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prevalent scene text detection approach follows four sequential steps
comprising character candidate detection, false character candidate removal,
text line extraction, and text line verification. However, errors occur and
accumulate throughout each of these sequential steps which often lead to low
detection performance. To address these issues, we propose a unified scene text
detection system, namely Text Flow, by utilizing the minimum cost (min-cost)
flow network model. With character candidates detected by cascade boosting, the
min-cost flow network model integrates the last three sequential steps into a
single process which solves the error accumulation problem at both character
level and text line level effectively. The proposed technique has been tested
on three public datasets, i.e., the ICDAR2011 dataset, the ICDAR2013 dataset and a
multilingual dataset and it outperforms the state-of-the-art methods on all
three datasets with much higher recall and F-score. The good performance on the
multilingual dataset shows that the proposed technique can be used for the
detection of texts in different languages.
| [
{
"version": "v1",
"created": "Sat, 23 Apr 2016 08:11:17 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Tian",
"Shangxuan",
""
],
[
"Pan",
"Yifeng",
""
],
[
"Huang",
"Chang",
""
],
[
"Lu",
"Shijian",
""
],
[
"Yu",
"Kai",
""
],
[
"Tan",
"Chew Lim",
""
]
] | TITLE: Text Flow: A Unified Text Detection System in Natural Scene Images
ABSTRACT: The prevalent scene text detection approach follows four sequential steps
comprising character candidate detection, false character candidate removal,
text line extraction, and text line verification. However, errors occur and
accumulate throughout each of these sequential steps which often lead to low
detection performance. To address these issues, we propose a unified scene text
detection system, namely Text Flow, by utilizing the minimum cost (min-cost)
flow network model. With character candidates detected by cascade boosting, the
min-cost flow network model integrates the last three sequential steps into a
single process which solves the error accumulation problem at both character
level and text line level effectively. The proposed technique has been tested
on three public datasets, i.e., ICDAR2011 dataset, ICDAR2013 dataset and a
multilingual dataset and it outperforms the state-of-the-art methods on all
three datasets with much higher recall and F-score. The good performance on the
multilingual dataset shows that the proposed technique can be used for the
detection of texts in different languages.
| no_new_dataset | 0.954478 |
1604.07093 | Yanwei Fu | Yanwei Fu, Leonid Sigal | Semi-supervised Vocabulary-informed Learning | 10 pages, Accepted by CVPR 2016 as an oral presentation | null | null | null | cs.CV cs.AI cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant progress in object categorization, in recent years, a
number of important challenges remain, mainly the ability to learn from limited
labeled data and the ability to recognize object classes within a large,
potentially open, set of labels. Zero-shot learning is one way of addressing
these challenges, but it has only been shown to work with limited-sized class
vocabularies and typically requires separation between supervised and
unsupervised classes, allowing the former to inform the latter but not vice versa.
We propose the notion of semi-supervised vocabulary-informed learning to
alleviate the above mentioned challenges and address problems of supervised,
zero-shot and open set recognition using a unified framework. Specifically, we
propose a maximum margin framework for semantic manifold-based recognition that
incorporates distance constraints from (both supervised and unsupervised)
vocabulary atoms, ensuring that labeled samples are projected closer to their
correct prototypes in the embedding space than to others. We show that the
resulting model shows improvements in supervised, zero-shot, and large open set
recognition, with up to a 310K class vocabulary on the AwA and ImageNet datasets.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2016 23:36:36 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Fu",
"Yanwei",
""
],
[
"Sigal",
"Leonid",
""
]
] | TITLE: Semi-supervised Vocabulary-informed Learning
ABSTRACT: Despite significant progress in object categorization, in recent years, a
number of important challenges remain, mainly the ability to learn from limited
labeled data and the ability to recognize object classes within a large,
potentially open, set of labels. Zero-shot learning is one way of addressing
these challenges, but it has only been shown to work with limited-sized class
vocabularies and typically requires separation between supervised and
unsupervised classes, allowing the former to inform the latter but not vice versa.
We propose the notion of semi-supervised vocabulary-informed learning to
alleviate the above mentioned challenges and address problems of supervised,
zero-shot and open set recognition using a unified framework. Specifically, we
propose a maximum margin framework for semantic manifold-based recognition that
incorporates distance constraints from (both supervised and unsupervised)
vocabulary atoms, ensuring that labeled samples are projected closer to their
correct prototypes in the embedding space than to others. We show that the
resulting model shows improvements in supervised, zero-shot, and large open set
recognition, with up to a 310K class vocabulary on the AwA and ImageNet datasets.
| no_new_dataset | 0.951142 |
1604.07202 | Mathura Bai Baikadolla | B.Mathura Bai, N.Mangathayaru, B.Padmaja Rani | An Approach to Find Missing Values in Medical Datasets | 7 pages,ACM Digital Library, ICEMIS September 2015 | null | 10.1145/2832987.2833083 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining medical datasets is a challenging problem before data mining
researchers as these datasets have several hidden challenges compared to
conventional datasets. Starting from the collection of samples through field
experiments and clinical trials to performing classification, there are numerous
challenges at every stage in the mining process. The preprocessing phase in the
mining process itself is a challenging issue when we work on medical datasets.
One of the prime challenges in mining medical datasets is handling missing
values, which is part of the preprocessing phase. In this paper, we address the
issue of handling missing values in a medical dataset consisting of categorical
attribute values. The main contribution of this research is to use the proposed
imputation measure to estimate and fix the missing values. We discuss a case
study to demonstrate the working of the proposed measure.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 11:16:26 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Bai",
"B. Mathura",
""
],
[
"Mangathayaru",
"N.",
""
],
[
"Rani",
"B. Padmaja",
""
]
] | TITLE: An Approach to Find Missing Values in Medical Datasets
ABSTRACT: Mining medical datasets is a challenging problem before data mining
researchers as these datasets have several hidden challenges compared to
conventional datasets. Starting from the collection of samples through field
experiments and clinical trials to performing classification, there are numerous
challenges at every stage in the mining process. The preprocessing phase in the
mining process itself is a challenging issue when we work on medical datasets.
One of the prime challenges in mining medical datasets is handling missing
values, which is part of the preprocessing phase. In this paper, we address the
issue of handling missing values in a medical dataset consisting of categorical
attribute values. The main contribution of this research is to use the proposed
imputation measure to estimate and fix the missing values. We discuss a case
study to demonstrate the working of the proposed measure.
| no_new_dataset | 0.95222 |
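The abstract above does not spell out the proposed imputation measure, so the sketch below only shows a common baseline for categorical attributes (class-conditional mode imputation) as a point of reference; the column names and toy data are hypothetical, and this is not the paper's method.

```python
# Baseline sketch: impute a missing categorical value with the most frequent
# value of that attribute within the same class. Reference point only; it is
# not the imputation measure proposed in the paper above.
import pandas as pd

df = pd.DataFrame({
    "class":   ["a", "a", "a", "b", "b", "b"],    # hypothetical class labels
    "symptom": ["x", None, "x", "y", "y", None],  # hypothetical categorical attribute
})

def impute_mode_by_class(frame, column, class_column="class"):
    mode_per_class = frame.groupby(class_column)[column].agg(
        lambda s: s.mode().iloc[0] if not s.mode().empty else None
    )
    filled = frame[column].fillna(frame[class_column].map(mode_per_class))
    return frame.assign(**{column: filled})

print(impute_mode_by_class(df, "symptom"))
```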
1604.07269 | Ilya Loshchilov | Ilya Loshchilov and Frank Hutter | CMA-ES for Hyperparameter Optimization of Deep Neural Networks | null | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperparameters of deep neural networks are often optimized by grid search,
random search or Bayesian optimization. As an alternative, we propose to use
the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known
for its state-of-the-art performance in derivative-free optimization. CMA-ES
has some useful invariance properties and is friendly to parallel evaluations
of solutions. We provide a toy example comparing CMA-ES and state-of-the-art
Bayesian optimization algorithms for tuning the hyperparameters of a
convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 14:17:08 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Loshchilov",
"Ilya",
""
],
[
"Hutter",
"Frank",
""
]
] | TITLE: CMA-ES for Hyperparameter Optimization of Deep Neural Networks
ABSTRACT: Hyperparameters of deep neural networks are often optimized by grid search,
random search or Bayesian optimization. As an alternative, we propose to use
the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known
for its state-of-the-art performance in derivative-free optimization. CMA-ES
has some useful invariance properties and is friendly to parallel evaluations
of solutions. We provide a toy example comparing CMA-ES and state-of-the-art
Bayesian optimization algorithms for tuning the hyperparameters of a
convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
| no_new_dataset | 0.951369 |
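As a rough illustration of the setup described above (not the authors' code), the sketch below uses the open-source `cma` package to tune two hypothetical hyperparameters, log learning rate and dropout, by minimizing a stand-in validation-error function; `evaluate_cnn` would be replaced by actual CNN training on MNIST, and the parallel evaluation over GPUs is omitted.

```python
# Minimal sketch of CMA-ES hyperparameter search with the `cma` package.
# `evaluate_cnn` is a hypothetical stand-in that should train a CNN with the
# decoded hyperparameters and return its validation error.
import cma

def evaluate_cnn(params):
    log_lr, dropout = params
    lr = 10.0 ** log_lr                              # decode from the search space
    dropout = min(max(dropout, 0.0), 0.9)
    # ... train the CNN with (lr, dropout) on MNIST and return validation error ...
    return (lr - 1e-3) ** 2 + (dropout - 0.5) ** 2   # dummy objective for the sketch

es = cma.CMAEvolutionStrategy([-3.0, 0.5], 0.5, {'popsize': 8})
while not es.stop():
    candidates = es.ask()                            # sample a population of hyperparameter vectors
    losses = [evaluate_cnn(c) for c in candidates]   # these could be evaluated in parallel
    es.tell(candidates, losses)                      # update the search distribution
print(es.result.xbest)                               # best hyperparameters found
```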
1604.07279 | Limin Wang | Limin Wang, Yu Qiao, Xiaoou Tang, Luc Van Gool | Actionness Estimation Using Hybrid Fully Convolutional Networks | accepted by CVPR16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Actionness was introduced to quantify the likelihood of containing a generic
action instance at a specific location. Accurate and efficient estimation of
actionness is important in video analysis and may benefit other relevant tasks
such as action recognition and action detection. This paper presents a new deep
architecture for actionness estimation, called hybrid fully convolutional
network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN
(M-FCN). These two FCNs leverage the strong capacity of deep models to estimate
actionness maps from the perspectives of static appearance and dynamic motion,
respectively. In addition, the fully convolutional nature of H-FCN allows it to
efficiently process videos with arbitrary sizes. Experiments are conducted on
the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the
effectiveness of H-FCN on actionness estimation, which demonstrate that our
method achieves superior performance to previous ones. Moreover, we apply the
estimated actionness maps on action proposal generation and action detection.
Our actionness maps advance the current state-of-the-art performance of these
tasks substantially.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 14:32:28 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Wang",
"Limin",
""
],
[
"Qiao",
"Yu",
""
],
[
"Tang",
"Xiaoou",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Actionness Estimation Using Hybrid Fully Convolutional Networks
ABSTRACT: Actionness was introduced to quantify the likelihood of containing a generic
action instance at a specific location. Accurate and efficient estimation of
actionness is important in video analysis and may benefit other relevant tasks
such as action recognition and action detection. This paper presents a new deep
architecture for actionness estimation, called hybrid fully convolutional
network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN
(M-FCN). These two FCNs leverage the strong capacity of deep models to estimate
actionness maps from the perspectives of static appearance and dynamic motion,
respectively. In addition, the fully convolutional nature of H-FCN allows it to
efficiently process videos with arbitrary sizes. Experiments are conducted on
the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the
effectiveness of H-FCN on actionness estimation, which demonstrate that our
method achieves superior performance to previous ones. Moreover, we apply the
estimated actionness maps on action proposal generation and action detection.
Our actionness maps advance the current state-of-the-art performance of these
tasks substantially.
| no_new_dataset | 0.947624 |
1604.07319 | Mehrdad Gangeh | Mehrdad J. Gangeh, Safaa M.A. Bedawi, Ali Ghodsi, Fakhri Karray | Semi-supervised Dictionary Learning Based on Hilbert-Schmidt
Independence Criterion | Accepted at International conference on Image analysis and
Recognition (ICIAR) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel semi-supervised dictionary learning and sparse
representation (SS-DLSR) is proposed. The proposed method benefits from the
supervisory information by learning the dictionary in a space where the
dependency between the data and class labels is maximized. This maximization is
performed using Hilbert-Schmidt independence criterion (HSIC). On the other
hand, the global distribution of the underlying manifolds was learned from the
unlabeled data by minimizing the distances between the unlabeled data and the
corresponding nearest labeled data in the space of the dictionary learned. The
proposed SS-DLSR algorithm has closed-form solutions for both the dictionary
and sparse coefficients, and therefore does not have to learn the two
iteratively and alternately as is common in the literature of the DLSR. This
makes the solution for the proposed algorithm very fast. The experiments
confirm the improvement in classification performance on benchmark datasets by
including the information from both labeled and unlabeled data, particularly
when there are many unlabeled data.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 16:25:38 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Gangeh",
"Mehrdad J.",
""
],
[
"Bedawi",
"Safaa M. A.",
""
],
[
"Ghodsi",
"Ali",
""
],
[
"Karray",
"Fakhri",
""
]
] | TITLE: Semi-supervised Dictionary Learning Based on Hilbert-Schmidt
Independence Criterion
ABSTRACT: In this paper, a novel semi-supervised dictionary learning and sparse
representation (SS-DLSR) is proposed. The proposed method benefits from the
supervisory information by learning the dictionary in a space where the
dependency between the data and class labels is maximized. This maximization is
performed using Hilbert-Schmidt independence criterion (HSIC). On the other
hand, the global distribution of the underlying manifolds was learned from the
unlabeled data by minimizing the distances between the unlabeled data and the
corresponding nearest labeled data in the space of the dictionary learned. The
proposed SS-DLSR algorithm has closed-form solutions for both the dictionary
and sparse coefficients, and therefore does not have to learn the two
iteratively and alternately as is common in the literature of the DLSR. This
makes the solution for the proposed algorithm very fast. The experiments
confirm the improvement in classification performance on benchmark datasets by
including the information from both labeled and unlabeled data, particularly
when there are many unlabeled data.
| no_new_dataset | 0.948728 |
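For readers unfamiliar with HSIC, the NumPy sketch below computes the standard biased empirical estimator of the dependence between a data kernel and a label kernel; it only illustrates the criterion being maximized and is not the SS-DLSR solver, and the toy data and kernel choices are assumptions.

```python
# Biased empirical HSIC between a data kernel K and a label kernel L:
# HSIC = trace(K H L H) / (n - 1)^2 with centering matrix H = I - (1/n) 11^T.
# Illustrates the dependence criterion only; not the SS-DLSR algorithm.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

X = np.random.randn(50, 10)                      # toy data
y = np.random.randint(0, 3, size=50)             # toy labels
Y = np.eye(3)[y]                                 # one-hot labels
print(hsic(rbf_kernel(X), Y @ Y.T))              # delta kernel on the labels
```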
1604.07335 | Bahadir Ozdemir | Bahadir Ozdemir and Larry S. Davis | Scalable Gaussian Processes for Supervised Hashing | 10 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a flexible procedure for large-scale image search by hash
functions with kernels. Our method treats binary codes and pairwise semantic
similarity as latent and observed variables, respectively, in a probabilistic
model based on Gaussian processes for binary classification. We present an
efficient inference algorithm with the sparse pseudo-input Gaussian process
(SPGP) model and parallelization. Experiments on three large-scale image
datasets demonstrate the effectiveness of the proposed hashing method, Gaussian
Process Hashing (GPH), for short binary codes and the datasets without
predefined classes in comparison to the state-of-the-art supervised hashing
methods.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 17:30:20 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Ozdemir",
"Bahadir",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Scalable Gaussian Processes for Supervised Hashing
ABSTRACT: We propose a flexible procedure for large-scale image search by hash
functions with kernels. Our method treats binary codes and pairwise semantic
similarity as latent and observed variables, respectively, in a probabilistic
model based on Gaussian processes for binary classification. We present an
efficient inference algorithm with the sparse pseudo-input Gaussian process
(SPGP) model and parallelization. Experiments on three large-scale image
datasets demonstrate the effectiveness of the proposed hashing method, Gaussian
Process Hashing (GPH), for short binary codes and the datasets without
predefined classes in comparison to the state-of-the-art supervised hashing
methods.
| no_new_dataset | 0.947914 |
1604.07339 | Ivan Bajic | Sayed Hossein Khatoonabadi, Ivan V. Bajic, Yufeng Shan | Compressed-domain visual saliency models: A comparative study | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational modeling of visual saliency has become an important research
problem in recent years, with applications in video quality estimation, video
compression, object tracking, retargeting, summarization, and so on. While most
visual saliency models for dynamic scenes operate on raw video, several models
have been developed for use with compressed-domain information such as motion
vectors and transform coefficients. This paper presents a comparative study of
eleven such models as well as two high-performing pixel-domain saliency models
on two eye-tracking datasets using several comparison metrics. The results
indicate that highly accurate saliency estimation is possible based only on a
partially decoded video bitstream. The strategies that have shown success in
compressed-domain saliency modeling are highlighted, and certain challenges are
identified as potential avenues for further improvement.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 17:39:25 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Khatoonabadi",
"Sayed Hossein",
""
],
[
"Bajic",
"Ivan V.",
""
],
[
"Shan",
"Yufeng",
""
]
] | TITLE: Compressed-domain visual saliency models: A comparative study
ABSTRACT: Computational modeling of visual saliency has become an important research
problem in recent years, with applications in video quality estimation, video
compression, object tracking, retargeting, summarization, and so on. While most
visual saliency models for dynamic scenes operate on raw video, several models
have been developed for use with compressed-domain information such as motion
vectors and transform coefficients. This paper presents a comparative study of
eleven such models as well as two high-performing pixel-domain saliency models
on two eye-tracking datasets using several comparison metrics. The results
indicate that highly accurate saliency estimation is possible based only on a
partially decoded video bitstream. The strategies that have shown success in
compressed-domain saliency modeling are highlighted, and certain challenges are
identified as potential avenues for further improvement.
| no_new_dataset | 0.94801 |
1604.07360 | Emily Hand | Emily M. Hand and Rama Chellappa | Attributes for Improved Attributes: A Multi-Task Network for Attribute
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attributes, or semantic features, have gained popularity in the past few
years in domains ranging from activity recognition in video to face
verification. Improving the accuracy of attribute classifiers is an important
first step in any application which uses these attributes. In most works to
date, attributes have been considered to be independent. However, we know this
not to be the case. Many attributes are very strongly related, such as heavy
makeup and wearing lipstick. We propose to take advantage of attribute
relationships in three ways: by using a multi-task deep convolutional neural
network (MCNN) sharing the lowest layers amongst all attributes, sharing the
higher layers for related attributes, and by building an auxiliary network on
top of the MCNN which utilizes the scores from all attributes to improve the
final classification of each attribute. We demonstrate the effectiveness of our
method by producing results on two challenging publicly available datasets.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 18:49:55 GMT"
}
] | 2016-04-26T00:00:00 | [
[
"Hand",
"Emily M.",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Attributes for Improved Attributes: A Multi-Task Network for Attribute
Classification
ABSTRACT: Attributes, or semantic features, have gained popularity in the past few
years in domains ranging from activity recognition in video to face
verification. Improving the accuracy of attribute classifiers is an important
first step in any application which uses these attributes. In most works to
date, attributes have been considered to be independent. However, we know this
not to be the case. Many attributes are very strongly related, such as heavy
makeup and wearing lipstick. We propose to take advantage of attribute
relationships in three ways: by using a multi-task deep convolutional neural
network (MCNN) sharing the lowest layers amongst all attributes, sharing the
higher layers for related attributes, and by building an auxiliary network on
top of the MCNN which utilizes the scores from all attributes to improve the
final classification of each attribute. We demonstrate the effectiveness of our
method by producing results on two challenging publicly available datasets.
| no_new_dataset | 0.947332 |
1307.3782 | Karim Ahmed | Karim M. Mahmoud | Handwritten Digits Recognition using Deep Convolutional Neural Network:
An Experimental Study using EBlearn | This paper has been withdrawn by the author due to some errors and
incomplete study | null | null | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, results of an experimental study of a deep convolutional neural
network architecture which can classify different handwritten digits using
EBLearn library are reported. The purpose of this neural network is to classify
input images into 10 different classes or digits (0-9) and to explore new
findings. The input dataset used consists of digit images of size 32x32 in
grayscale (MNIST dataset).
| [
{
"version": "v1",
"created": "Sun, 14 Jul 2013 21:03:39 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2016 16:05:33 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Apr 2016 18:45:01 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Mahmoud",
"Karim M.",
""
]
] | TITLE: Handwritten Digits Recognition using Deep Convolutional Neural Network:
An Experimental Study using EBlearn
ABSTRACT: In this paper, results of an experimental study of a deep convolutional neural
network architecture which can classify different handwritten digits using
EBLearn library are reported. The purpose of this neural network is to classify
input images into 10 different classes or digits (0-9) and to explore new
findings. The input dataset used consists of digit images of size 32x32 in
grayscale (MNIST dataset).
| no_new_dataset | 0.942981 |
1506.05690 | Diego Amancio Dr. | Filipi N. Silva, Diego R. Amancio, Maria Bardosova, Osvaldo N.
Oliveira Jr., Luciano da F. Costa | Using network science and text analytics to produce surveys in a
scientific topic | null | Journal of Informetrics 10 (2016) pp. 487-502 | 10.1016/j.joi.2016.03.008 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of science to understand its own structure is becoming popular, but
understanding the organization of knowledge areas is still limited because some
patterns are only discoverable with proper computational treatment of
large-scale datasets. In this paper, we introduce a network-based methodology
combined with text analytics to construct the taxonomy of science fields. The
methodology is illustrated with application to two topics: complex networks
(CN) and photonic crystals (PC). We built citation networks using data from the
Web of Science and used a community detection algorithm for partitioning to
obtain science maps of the fields considered. We also created an importance
index for text analytics in order to obtain keywords that define the
communities. A dendrogram of the relatedness among the subtopics was also
obtained. Among the interesting patterns that emerged from the analysis, we
highlight the identification of two well-defined communities in the PC area, which
is consistent with the known existence of two distinct communities of
researchers in the area: telecommunication engineers and physicists. With the
methodology, it was also possible to assess the interdisciplinary and time
evolution of subtopics defined by the keywords. The automatic tools described
here are potentially useful not only to provide an overview of scientific areas
but also to assist scientists in performing systematic research on a specific
topic.
| [
{
"version": "v1",
"created": "Thu, 18 Jun 2015 14:20:54 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2016 14:20:16 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Silva",
"Filipi N.",
""
],
[
"Amancio",
"Diego R.",
""
],
[
"Bardosova",
"Maria",
""
],
[
"Oliveira",
"Osvaldo N.",
"Jr."
],
[
"Costa",
"Luciano da F.",
""
]
] | TITLE: Using network science and text analytics to produce surveys in a
scientific topic
ABSTRACT: The use of science to understand its own structure is becoming popular, but
understanding the organization of knowledge areas is still limited because some
patterns are only discoverable with proper computational treatment of
large-scale datasets. In this paper, we introduce a network-based methodology
combined with text analytics to construct the taxonomy of science fields. The
methodology is illustrated with application to two topics: complex networks
(CN) and photonic crystals (PC). We built citation networks using data from the
Web of Science and used a community detection algorithm for partitioning to
obtain science maps of the fields considered. We also created an importance
index for text analytics in order to obtain keywords that define the
communities. A dendrogram of the relatedness among the subtopics was also
obtained. Among the interesting patterns that emerged from the analysis, we
highlight the identification of two well-defined communities in the PC area, which
is consistent with the known existence of two distinct communities of
researchers in the area: telecommunication engineers and physicists. With the
methodology, it was also possible to assess the interdisciplinary and time
evolution of subtopics defined by the keywords. The automatic tools described
here are potentially useful not only to provide an overview of scientific areas
but also to assist scientists in performing systematic research on a specific
topic.
| no_new_dataset | 0.948917 |
1511.04524 | Ziming Zhang | Ziming Zhang, Yuting Chen and Venkatesh Saligrama | Efficient Training of Very Deep Neural Networks for Supervised Hashing | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose training very deep neural networks (DNNs) for
supervised learning of hash codes. Existing methods in this context train
relatively "shallow" networks limited by the issues arising in back propagation
(e.g. vanishing gradients) as well as computational efficiency. We propose a
novel and efficient training algorithm inspired by alternating direction method
of multipliers (ADMM) that overcomes some of these limitations. Our method
decomposes the training process into independent layer-wise local updates
through auxiliary variables. Empirically we observe that our training algorithm
always converges and its computational complexity is linearly proportional to
the number of edges in the networks. Empirically we manage to train DNNs with
64 hidden layers and 1024 nodes per layer for supervised hashing in about 3
hours using a single GPU. Our proposed very deep supervised hashing (VDSH)
method significantly outperforms the state-of-the-art on several benchmark
datasets.
| [
{
"version": "v1",
"created": "Sat, 14 Nov 2015 07:35:01 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 21:49:21 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Chen",
"Yuting",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Efficient Training of Very Deep Neural Networks for Supervised Hashing
ABSTRACT: In this paper, we propose training very deep neural networks (DNNs) for
supervised learning of hash codes. Existing methods in this context train
relatively "shallow" networks limited by the issues arising in back propagation
(e.g. vanishing gradients) as well as computational efficiency. We propose a
novel and efficient training algorithm inspired by alternating direction method
of multipliers (ADMM) that overcomes some of these limitations. Our method
decomposes the training process into independent layer-wise local updates
through auxiliary variables. Empirically we observe that our training algorithm
always converges and its computational complexity is linearly proportional to
the number of edges in the networks. Empirically we manage to train DNNs with
64 hidden layers and 1024 nodes per layer for supervised hashing in about 3
hours using a single GPU. Our proposed very deep supervised hashing (VDSH)
method significantly outperforms the state-of-the-art on several benchmark
datasets.
| no_new_dataset | 0.947817 |
1511.06654 | Bing Wang | Bing Wang, Gang Wang, Kap Luk Chan and Li Wang | Tracklet Association by Online Target-Specific Metric Learning and
Coherent Dynamics Estimation | IEEE Transactions on Pattern Analysis and Machine Intelligence, in
press, 2016 | null | 10.1109/TPAMI.2016.2551245 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel method based on online target-specific
metric learning and coherent dynamics estimation for tracklet (track fragment)
association by network flow optimization in long-term multi-person tracking.
Our proposed framework aims to exploit appearance and motion cues to prevent
identity switches during tracking and to recover missed detections.
Furthermore, target-specific metrics (appearance cue) and motion dynamics
(motion cue) are proposed to be learned and estimated online, i.e. during the
tracking process. Our approach is effective even when such cues fail to
identify or follow the target due to occlusions or object-to-object
interactions. We also propose to learn the weights of these two tracking cues
to handle the difficult situations, such as severe occlusions and
object-to-object interactions effectively. Our method has been validated on
several public datasets and the experimental results show that it outperforms
several state-of-the-art tracking methods.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 15:48:21 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2016 03:53:35 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Wang",
"Bing",
""
],
[
"Wang",
"Gang",
""
],
[
"Chan",
"Kap Luk",
""
],
[
"Wang",
"Li",
""
]
] | TITLE: Tracklet Association by Online Target-Specific Metric Learning and
Coherent Dynamics Estimation
ABSTRACT: In this paper, we present a novel method based on online target-specific
metric learning and coherent dynamics estimation for tracklet (track fragment)
association by network flow optimization in long-term multi-person tracking.
Our proposed framework aims to exploit appearance and motion cues to prevent
identity switches during tracking and to recover missed detections.
Furthermore, target-specific metrics (appearance cue) and motion dynamics
(motion cue) are proposed to be learned and estimated online, i.e. during the
tracking process. Our approach is effective even when such cues fail to
identify or follow the target due to occlusions or object-to-object
interactions. We also propose to learn the weights of these two tracking cues
to handle the difficult situations, such as severe occlusions and
object-to-object interactions effectively. Our method has been validated on
several public datasets and the experimental results show that it outperforms
several state-of-the-art tracking methods.
| no_new_dataset | 0.953708 |
1512.01596 | Volodymyr Turchenko | Volodymyr Turchenko, Artur Luczak | Creation of a Deep Convolutional Auto-Encoder in Caffe | 9 pages, 7 figures, 5 tables, 34 references in the list; Added
references, corrected Table 3, changed several paragraphs in the text | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of a deep (stacked) convolutional auto-encoder in the Caffe
deep learning framework is presented in this paper. We describe simple
principles which we used to create this model in Caffe. The proposed model of
convolutional auto-encoder does not have pooling/unpooling layers yet. The
results of our experimental research show comparable accuracy of dimensionality
reduction in comparison with a classic auto-encoder on the example of MNIST
dataset.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 23:58:47 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 01:51:14 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Apr 2016 03:20:41 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Turchenko",
"Volodymyr",
""
],
[
"Luczak",
"Artur",
""
]
] | TITLE: Creation of a Deep Convolutional Auto-Encoder in Caffe
ABSTRACT: The development of a deep (stacked) convolutional auto-encoder in the Caffe
deep learning framework is presented in this paper. We describe simple
principles which we used to create this model in Caffe. The proposed model of
convolutional auto-encoder does not have pooling/unpooling layers yet. The
results of our experimental research show comparable accuracy of dimensionality
reduction in comparison with a classic auto-encoder on the example of MNIST
dataset.
| no_new_dataset | 0.954009 |
1601.01272 | Ke Tran | Ke Tran, Arianna Bisazza and Christof Monz | Recurrent Memory Networks for Language Modeling | 8 pages, 6 figures. Accepted at NAACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent Neural Networks (RNN) have obtained excellent result in many
natural language processing (NLP) tasks. However, understanding and
interpreting the source of this success remains a challenge. In this paper, we
propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only
amplifies the power of RNN but also facilitates our understanding of its
internal functioning and allows us to discover underlying patterns in data. We
demonstrate the power of RMN on language modeling and sentence completion
tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM)
network on three large German, Italian, and English datasets. Additionally we
perform in-depth analysis of various linguistic dimensions that RMN captures.
On Sentence Completion Challenge, for which it is essential to capture sentence
coherence, our RMN obtains 69.2% accuracy, surpassing the previous
state-of-the-art by a large margin.
| [
{
"version": "v1",
"created": "Wed, 6 Jan 2016 18:44:07 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2016 11:13:11 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Tran",
"Ke",
""
],
[
"Bisazza",
"Arianna",
""
],
[
"Monz",
"Christof",
""
]
] | TITLE: Recurrent Memory Networks for Language Modeling
ABSTRACT: Recurrent Neural Networks (RNN) have obtained excellent result in many
natural language processing (NLP) tasks. However, understanding and
interpreting the source of this success remains a challenge. In this paper, we
propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only
amplifies the power of RNN but also facilitates our understanding of its
internal functioning and allows us to discover underlying patterns in data. We
demonstrate the power of RMN on language modeling and sentence completion
tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM)
network on three large German, Italian, and English datasets. Additionally we
perform in-depth analysis of various linguistic dimensions that RMN captures.
On Sentence Completion Challenge, for which it is essential to capture sentence
coherence, our RMN obtains 69.2% accuracy, surpassing the previous
state-of-the-art by a large margin.
| no_new_dataset | 0.948251 |
1604.04004 | Samuel Dodge | Samuel Dodge and Lina Karam | Understanding How Image Quality Affects Deep Neural Networks | Final version will appear in IEEE Xplore in the Proceedings of the
Conference on the Quality of Multimedia Experience (QoMEX), June 6-8, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image quality is an important practical challenge that is often overlooked in
the design of machine vision systems. Commonly, machine vision systems are
trained and tested on high quality image datasets, yet in practical
applications the input images can not be assumed to be of high quality.
Recently, deep neural networks have obtained state-of-the-art performance on
many machine vision tasks. In this paper we provide an evaluation of 4
state-of-the-art deep neural network models for image classification under
quality distortions. We consider five types of quality distortions: blur,
noise, contrast, JPEG, and JPEG2000 compression. We show that the existing
networks are susceptible to these quality distortions, particularly to blur and
noise. These results enable future work in developing deep neural networks that
are more invariant to quality distortions.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2016 00:47:50 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 20:44:52 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Dodge",
"Samuel",
""
],
[
"Karam",
"Lina",
""
]
] | TITLE: Understanding How Image Quality Affects Deep Neural Networks
ABSTRACT: Image quality is an important practical challenge that is often overlooked in
the design of machine vision systems. Commonly, machine vision systems are
trained and tested on high quality image datasets, yet in practical
applications the input images can not be assumed to be of high quality.
Recently, deep neural networks have obtained state-of-the-art performance on
many machine vision tasks. In this paper we provide an evaluation of 4
state-of-the-art deep neural network models for image classification under
quality distortions. We consider five types of quality distortions: blur,
noise, contrast, JPEG, and JPEG2000 compression. We show that the existing
networks are susceptible to these quality distortions, particularly to blur and
noise. These results enable future work in developing deep neural networks that
are more invariant to quality distortions.
| no_new_dataset | 0.948822 |
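To make the evaluation protocol concrete, here is a small Pillow/NumPy sketch that produces blurred, noisy, and JPEG-compressed variants of an image before feeding them to a classifier; the distortion levels and the random test image are arbitrary examples, not the settings used in the paper.

```python
# Sketch: generate quality-distorted variants of an image (Gaussian blur,
# additive Gaussian noise, heavy JPEG compression) for robustness evaluation.
# Distortion levels are arbitrary examples, not the paper's settings.
import io
import numpy as np
from PIL import Image, ImageFilter

def distort(img, blur_radius=2.0, noise_std=10.0, jpeg_quality=10):
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))

    arr = np.asarray(img, dtype=np.float32)
    noisy_arr = np.clip(arr + np.random.normal(0.0, noise_std, arr.shape), 0, 255)
    noisy = Image.fromarray(noisy_arr.astype(np.uint8))

    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    jpeg = Image.open(io.BytesIO(buf.getvalue()))
    return blurred, noisy, jpeg

clean = Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
for variant in distort(clean):
    print(variant.size)   # each variant would then be passed to the classifier under test
```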
1604.06397 | Yang Wang | Yang Wang and Minh Hoai | Improving Human Action Recognition by Non-action Classification | appears in CVPR16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the task of recognizing human actions in realistic
video where human actions are dominated by irrelevant factors. We first study
the benefits of removing non-action video segments, which are the ones that do
not portray any human action. We then learn a non-action classifier and use it
to down-weight irrelevant video segments. The non-action classifier is trained
using ActionThread, a dataset with shot-level annotation for the occurrence or
absence of a human action. The non-action classifier can be used to identify
non-action shots with high precision and subsequently used to improve the
performance of action recognition systems.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 17:46:25 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2016 02:50:12 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Wang",
"Yang",
""
],
[
"Hoai",
"Minh",
""
]
] | TITLE: Improving Human Action Recognition by Non-action Classification
ABSTRACT: In this paper we consider the task of recognizing human actions in realistic
video where human actions are dominated by irrelevant factors. We first study
the benefits of removing non-action video segments, which are the ones that do
not portray any human action. We then learn a non-action classifier and use it
to down-weight irrelevant video segments. The non-action classifier is trained
using ActionThread, a dataset with shot-level annotation for the occurrence or
absence of a human action. The non-action classifier can be used to identify
non-action shots with high precision and subsequently used to improve the
performance of action recognition systems.
| new_dataset | 0.959421 |
1604.06570 | Jubin Johnson | Hisham Cholakkal, Jubin Johnson and Deepu Rajan | A Classifier-guided Approach for Top-down Salient Object Detection | To appear in Signal Processing: Image Communication, Elsevier.
Available online from April 2016 | null | 10.1016/j.image.2016.04.001 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a framework for top-down salient object detection that
incorporates a tightly coupled image classification module. The classifier is
trained on novel category-aware sparse codes computed on object dictionaries
used for saliency modeling. A misclassification indicates that the
corresponding saliency model is inaccurate. Hence, the classifier selects
images for which the saliency models need to be updated. The category-aware
sparse coding produces better image classification accuracy as compared to
conventional sparse coding with a reduced computational complexity. A
saliency-weighted max-pooling is proposed to improve image classification,
which is further used to refine the saliency maps. Experimental results on
Graz-02 and PASCAL VOC-07 datasets demonstrate the effectiveness of salient
object detection. Although the role of the classifier is to support salient
object detection, we evaluate its performance in image classification and also
illustrate the utility of thresholded saliency maps for image segmentation.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 08:43:34 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Cholakkal",
"Hisham",
""
],
[
"Johnson",
"Jubin",
""
],
[
"Rajan",
"Deepu",
""
]
] | TITLE: A Classifier-guided Approach for Top-down Salient Object Detection
ABSTRACT: We propose a framework for top-down salient object detection that
incorporates a tightly coupled image classification module. The classifier is
trained on novel category-aware sparse codes computed on object dictionaries
used for saliency modeling. A misclassification indicates that the
corresponding saliency model is inaccurate. Hence, the classifier selects
images for which the saliency models need to be updated. The category-aware
sparse coding produces better image classification accuracy as compared to
conventional sparse coding with a reduced computational complexity. A
saliency-weighted max-pooling is proposed to improve image classification,
which is further used to refine the saliency maps. Experimental results on
Graz-02 and PASCAL VOC-07 datasets demonstrate the effectiveness of salient
object detection. Although the role of the classifier is to support salient
object detection, we evaluate its performance in image classification and also
illustrate the utility of thresholded saliency maps for image segmentation.
| no_new_dataset | 0.949809 |
1604.06727 | Chee Chun Gan | Chee Chun Gan and Gerard Learmonth | An improved chromosome formulation for genetic algorithms applied to
variable selection with the inclusion of interaction terms | 20 pages, 4 figures, 4 tables, 2 appendices | null | null | null | stat.ML cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic algorithms are a well-known method for tackling the problem of
variable selection. As they are non-parametric and can use a large variety of
fitness functions, they are well-suited as a variable selection wrapper that
can be applied to many different models. In almost all cases, the chromosome
formulation used in these genetic algorithms consists of a binary vector of
length n for n potential variables indicating the presence or absence of the
corresponding variables. While the aforementioned chromosome formulation has
exhibited good performance for relatively small n, there are potential problems
when the size of n grows very large, especially when interaction terms are
considered. We introduce a modification to the standard chromosome formulation
that allows for better scalability and model sparsity when interaction terms
are included in the predictor search space. Experimental results show that the
indexed chromosome formulation demonstrates improved computational efficiency
and sparsity on high-dimensional datasets with interaction terms compared to
the standard chromosome formulation.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 16:14:55 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Gan",
"Chee Chun",
""
],
[
"Learmonth",
"Gerard",
""
]
] | TITLE: An improved chromosome formulation for genetic algorithms applied to
variable selection with the inclusion of interaction terms
ABSTRACT: Genetic algorithms are a well-known method for tackling the problem of
variable selection. As they are non-parametric and can use a large variety of
fitness functions, they are well-suited as a variable selection wrapper that
can be applied to many different models. In almost all cases, the chromosome
formulation used in these genetic algorithms consists of a binary vector of
length n for n potential variables indicating the presence or absence of the
corresponding variables. While the aforementioned chromosome formulation has
exhibited good performance for relatively small n, there are potential problems
when the size of n grows very large, especially when interaction terms are
considered. We introduce a modification to the standard chromosome formulation
that allows for better scalability and model sparsity when interaction terms
are included in the predictor search space. Experimental results show that the
indexed chromosome formulation demonstrates improved computational efficiency
and sparsity on high-dimensional datasets with interaction terms compared to
the standard chromosome formulation.
| no_new_dataset | 0.949201 |
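As a toy illustration of the contrast drawn in the abstract (and not the authors' exact encoding), the snippet below compares a binary-mask chromosome with a fixed-length indexed chromosome for selecting predictors from the space of main effects plus pairwise interaction terms; the sizes and probabilities are arbitrary.

```python
# Toy sketch contrasting a binary-mask chromosome with an indexed chromosome
# for variable selection over main effects plus pairwise interactions.
# The decoding is illustrative only, not the paper's exact formulation.
import itertools
import random

n = 20                                                              # number of base predictors
terms = list(range(n)) + list(itertools.combinations(range(n), 2))  # main + 2-way interaction terms

# Standard formulation: one bit per candidate term, so length grows as O(n^2).
binary_chromosome = [random.random() < 0.05 for _ in terms]
selected_binary = [t for t, bit in zip(terms, binary_chromosome) if bit]

# Indexed formulation: a fixed-length list of term indices, sparse by construction.
k = 8
indexed_chromosome = random.sample(range(len(terms)), k)
selected_indexed = [terms[i] for i in indexed_chromosome]

print(len(binary_chromosome), selected_binary, selected_indexed)
```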
1604.06730 | Chee Chun Gan | Chee Chun Gan and Gerard Learmonth | Developing an ICU scoring system with interaction terms using a genetic
algorithm | 21 pages, 6 tables, 2 appendices | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ICU mortality scoring systems attempt to predict patient mortality using
predictive models with various clinical predictors. Examples of such systems
are APACHE, SAPS and MPM. However, most such scoring systems do not actively
look for and include interaction terms, despite physicians intuitively taking
such interactions into account when making a diagnosis. One barrier to
including such terms in predictive models is the difficulty of using most
variable selection methods in high-dimensional datasets. A genetic algorithm
framework for variable selection with logistic regression models is used to
search for two-way interaction terms in a clinical dataset of adult ICU
patients, with separate models being built for each category of diagnosis upon
admittance to the ICU. The models had good discrimination across all
categories, with a weighted average AUC of 0.84 (>0.90 for several categories)
and the genetic algorithm was able to find several significant interaction
terms, which may be able to provide greater insight into mortality prediction
for health practitioners. The GA selected models had improved performance
against stepwise selection and random forest models, and provides greater
flexibility in terms of variable selection by being able to optimize over any
modeler-defined model performance metric instead of a specific variable
importance metric.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 16:20:29 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Gan",
"Chee Chun",
""
],
[
"Learmonth",
"Gerard",
""
]
] | TITLE: Developing an ICU scoring system with interaction terms using a genetic
algorithm
ABSTRACT: ICU mortality scoring systems attempt to predict patient mortality using
predictive models with various clinical predictors. Examples of such systems
are APACHE, SAPS and MPM. However, most such scoring systems do not actively
look for and include interaction terms, despite physicians intuitively taking
such interactions into account when making a diagnosis. One barrier to
including such terms in predictive models is the difficulty of using most
variable selection methods in high-dimensional datasets. A genetic algorithm
framework for variable selection with logistic regression models is used to
search for two-way interaction terms in a clinical dataset of adult ICU
patients, with separate models being built for each category of diagnosis upon
admittance to the ICU. The models had good discrimination across all
categories, with a weighted average AUC of 0.84 (>0.90 for several categories)
and the genetic algorithm was able to find several significant interaction
terms, which may be able to provide greater insight into mortality prediction
for health practitioners. The GA selected models had improved performance
against stepwise selection and random forest models, and provides greater
flexibility in terms of variable selection by being able to optimize over any
modeler-defined model performance metric instead of a specific variable
importance metric.
| no_new_dataset | 0.949342 |
1604.06737 | Cheng Guo | Cheng Guo and Felix Berkhahn | Entity Embeddings of Categorical Variables | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We map categorical variables in a function approximation problem into
Euclidean spaces, which are the entity embeddings of the categorical variables.
The mapping is learned by a neural network during the standard supervised
training process. Entity embedding not only reduces memory usage and speeds up
neural networks compared with one-hot encoding, but more importantly by mapping
similar values close to each other in the embedding space it reveals the
intrinsic properties of the categorical variables. We applied it successfully
in a recent Kaggle competition and were able to reach the third position with
relatively simple features. We further demonstrate in this paper that entity
embedding helps the neural network to generalize better when the data is sparse
and its statistics are unknown. Thus it is especially useful for datasets with lots
of high cardinality features, where other methods tend to overfit. We also
demonstrate that the embeddings obtained from the trained neural network boost
the performance of all tested machine learning methods considerably when used
as the input features instead. As entity embedding defines a distance measure
for categorical variables it can be used for visualizing categorical data and
for data clustering.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 16:34:30 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Guo",
"Cheng",
""
],
[
"Berkhahn",
"Felix",
""
]
] | TITLE: Entity Embeddings of Categorical Variables
ABSTRACT: We map categorical variables in a function approximation problem into
Euclidean spaces, which are the entity embeddings of the categorical variables.
The mapping is learned by a neural network during the standard supervised
training process. Entity embedding not only reduces memory usage and speeds up
neural networks compared with one-hot encoding, but more importantly by mapping
similar values close to each other in the embedding space it reveals the
intrinsic properties of the categorical variables. We applied it successfully
in a recent Kaggle competition and were able to reach the third position with
relatively simple features. We further demonstrate in this paper that entity
embedding helps the neural network to generalize better when the data is sparse
and its statistics are unknown. Thus it is especially useful for datasets with lots
of high cardinality features, where other methods tend to overfit. We also
demonstrate that the embeddings obtained from the trained neural network boost
the performance of all tested machine learning methods considerably when used
as the input features instead. As entity embedding defines a distance measure
for categorical variables it can be used for visualizing categorical data and
for data clustering.
| no_new_dataset | 0.94868 |
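The PyTorch sketch below shows the general idea of an entity embedding for one categorical column combined with numeric features; the layer sizes, embedding dimension, and single-output head are arbitrary choices, and this is not the authors' competition model.

```python
# Sketch of an entity-embedding layer for one categorical feature, learned
# jointly with the rest of the network. Sizes are illustrative only.
import torch
import torch.nn as nn

class EntityEmbeddingNet(nn.Module):
    def __init__(self, n_categories, emb_dim=8, n_numeric=5):
        super().__init__()
        self.emb = nn.Embedding(n_categories, emb_dim)   # one row per category value
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim + n_numeric, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, cat_idx, numeric):
        x = torch.cat([self.emb(cat_idx), numeric], dim=1)
        return self.mlp(x)

model = EntityEmbeddingNet(n_categories=1000)
out = model(torch.randint(0, 1000, (4,)), torch.randn(4, 5))
print(out.shape)                                         # torch.Size([4, 1])
# After training, the rows of model.emb.weight are the learned entity
# embeddings and can be visualized, clustered, or fed to other models.
```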
1604.06743 | Li Zhou | Li Zhou and Emma Brunskill | Latent Contextual Bandits and their Application to Personalized
Recommendations for New Users | 25th International Joint Conference on Artificial Intelligence (IJCAI
2016) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized recommendations for new users, also known as the cold-start
problem, can be formulated as a contextual bandit problem. Existing contextual
bandit algorithms generally rely on features alone to capture user variability.
Such methods are inefficient in learning new users' interests. In this paper we
propose Latent Contextual Bandits. We consider both the benefit of leveraging a
set of learned latent user classes for new users, and how we can learn such
latent classes from prior users. We show that our approach achieves a better
regret bound than existing algorithms. We also demonstrate the benefit of our
approach using a large real world dataset and a preliminary user study.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 16:47:04 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Zhou",
"Li",
""
],
[
"Brunskill",
"Emma",
""
]
] | TITLE: Latent Contextual Bandits and their Application to Personalized
Recommendations for New Users
ABSTRACT: Personalized recommendations for new users, also known as the cold-start
problem, can be formulated as a contextual bandit problem. Existing contextual
bandit algorithms generally rely on features alone to capture user variability.
Such methods are inefficient in learning new users' interests. In this paper we
propose Latent Contextual Bandits. We consider both the benefit of leveraging a
set of learned latent user classes for new users, and how we can learn such
latent classes from prior users. We show that our approach achieves a better
regret bound than existing algorithms. We also demonstrate the benefit of our
approach using a large real world dataset and a preliminary user study.
| no_new_dataset | 0.950595 |
1604.06751 | Mazdak Fatahi | Mazdak Fatahi, Mahmood Ahmadi, Mahyar Shahsavari, Arash Ahmadi and
Philippe Devienne | evt_MNIST: A spike based version of traditional MNIST | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benchmarks and datasets have an important role in the evaluation of machine learning
algorithms and neural network implementations. Traditional datasets for images
such as MNIST are applied to evaluate the efficiency of different training
algorithms in neural networks. This demand is different in Spiking Neural
Networks (SNN) as they require spiking inputs. It is widely believed that, in the
biological cortex, the timing of spikes is irregular. Poisson distributions
provide adequate descriptions of the irregularity in generating appropriate
spikes. Here, we introduce a spike-based version of MNIST (the handwritten digits
dataset), using a Poisson distribution, and show the Poissonian property of the
generated streams. We introduce a new version of evt_MNIST which can be used
for neural network evaluation.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 17:06:31 GMT"
}
] | 2016-04-25T00:00:00 | [
[
"Fatahi",
"Mazdak",
""
],
[
"Ahmadi",
"Mahmood",
""
],
[
"Shahsavari",
"Mahyar",
""
],
[
"Ahmadi",
"Arash",
""
],
[
"Devienne",
"Philippe",
""
]
] | TITLE: evt_MNIST: A spike based version of traditional MNIST
ABSTRACT: Benchmarks and datasets have an important role in the evaluation of machine learning
algorithms and neural network implementations. Traditional datasets for images
such as MNIST are applied to evaluate the efficiency of different training
algorithms in neural networks. This demand is different in Spiking Neural
Networks (SNN) as they require spiking inputs. It is widely believed that, in the
biological cortex, the timing of spikes is irregular. Poisson distributions
provide adequate descriptions of the irregularity in generating appropriate
spikes. Here, we introduce a spike-based version of MNIST (the handwritten digits
dataset), using a Poisson distribution, and show the Poissonian property of the
generated streams. We introduce a new version of evt_MNIST which can be used
for neural network evaluation.
| new_dataset | 0.962285 |
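A minimal NumPy sketch of the rate-coding idea behind such a conversion: pixel intensities set the rates of independent Poisson spike trains. The maximum rate, time step, duration, and random image are assumptions for illustration, not the released evt_MNIST parameters.

```python
# Sketch: convert pixel intensities into Poisson spike trains by rate coding.
# The maximum rate, time step, and duration are illustrative choices, not the
# evt_MNIST settings.
import numpy as np

def poisson_spikes(image, max_rate_hz=100.0, dt=1e-3, duration=0.35, rng=None):
    rng = rng or np.random.default_rng(0)
    rates = image.astype(float).ravel() / 255.0 * max_rate_hz   # firing rate per pixel (Hz)
    n_steps = int(round(duration / dt))
    # In each time bin a pixel spikes independently with probability rate * dt.
    return rng.random((n_steps, rates.size)) < rates * dt       # boolean spike raster

image = np.random.randint(0, 256, size=(28, 28))                # stand-in for an MNIST digit
spikes = poisson_spikes(image)
print(spikes.shape, int(spikes.sum()))                          # (350, 784) and the total spike count
```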