id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1701.03940 | Rafael Pinto | Rafael Pinto, Paulo Engel | Scalable and Incremental Learning of Gaussian Mixture Models | 13 pages, 1 figure, submitted for peer-review. arXiv admin note:
substantial text overlap with arXiv:1506.04422 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a fast and scalable algorithm for incremental learning of
Gaussian mixture models. By performing rank-one updates on its precision
matrices and determinants, its asymptotic time complexity is $\mathcal{O}(NKD^2)$
for $N$ data points, $K$ Gaussian components and $D$ dimensions. The resulting
algorithm can be applied to high dimensional tasks, and this is confirmed by
applying it to the classification datasets MNIST and CIFAR-10. Additionally, in
order to show the algorithm's applicability to function approximation and
control tasks, it is applied to three reinforcement learning tasks and its
data-efficiency is evaluated.
| [
{
"version": "v1",
"created": "Sat, 14 Jan 2017 16:15:44 GMT"
}
] | 2017-01-17T00:00:00 | [
[
"Pinto",
"Rafael",
""
],
[
"Engel",
"Paulo",
""
]
] | TITLE: Scalable and Incremental Learning of Gaussian Mixture Models
ABSTRACT: This work presents a fast and scalable algorithm for incremental learning of
Gaussian mixture models. By performing rank-one updates on its precision
matrices and determinants, its asymptotic time complexity is $\mathcal{O}(NKD^2)$
for $N$ data points, $K$ Gaussian components and $D$ dimensions. The resulting
algorithm can be applied to high dimensional tasks, and this is confirmed by
applying it to the classification datasets MNIST and CIFAR-10. Additionally, in
order to show the algorithm's applicability to function approximation and
control tasks, it is applied to three reinforcement learning tasks and its
data-efficiency is evaluated.
| no_new_dataset | 0.947039 |
1701.04273 | Hosein Azarbonyad | Hosein Azarbonyad and Mostafa Dehghani and Tom Kenter and Maarten Marx
and Jaap Kamps and Maarten de Rijke | Hierarchical Re-estimation of Topic Models for Measuring Topical
Diversity | Proceedings of the 39th European Conference on Information Retrieval
(ECIR2017) | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A high degree of topical diversity is often considered to be an important
characteristic of interesting text documents. A recent proposal for measuring
topical diversity identifies three elements for assessing diversity: words,
topics, and documents as collections of words. Topic models play a central role
in this approach. Using standard topic models for measuring diversity of
documents is suboptimal due to generality and impurity. General topics only
include common information from a background corpus and are assigned to most of
the documents in the collection. Impure topics contain words that are not
related to the topic; impurity lowers the interpretability of topic models and
impure topics are likely to get assigned to documents erroneously. We propose a
hierarchical re-estimation approach for topic models to combat generality and
impurity; the proposed approach operates at three levels: words, topics, and
documents. Our re-estimation approach for measuring documents' topical
diversity outperforms the state of the art on the PubMed dataset, which is commonly
used for diversity experiments.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2017 12:59:47 GMT"
}
] | 2017-01-17T00:00:00 | [
[
"Azarbonyad",
"Hosein",
""
],
[
"Dehghani",
"Mostafa",
""
],
[
"Kenter",
"Tom",
""
],
[
"Marx",
"Maarten",
""
],
[
"Kamps",
"Jaap",
""
],
[
"de Rijke",
"Maarten",
""
]
] | TITLE: Hierarchical Re-estimation of Topic Models for Measuring Topical
Diversity
ABSTRACT: A high degree of topical diversity is often considered to be an important
characteristic of interesting text documents. A recent proposal for measuring
topical diversity identifies three elements for assessing diversity: words,
topics, and documents as collections of words. Topic models play a central role
in this approach. Using standard topic models for measuring diversity of
documents is suboptimal due to generality and impurity. General topics only
include common information from a background corpus and are assigned to most of
the documents in the collection. Impure topics contain words that are not
related to the topic; impurity lowers the interpretability of topic models and
impure topics are likely to get assigned to documents erroneously. We propose a
hierarchical re-estimation approach for topic models to combat generality and
impurity; the proposed approach operates at three levels: words, topics, and
documents. Our re-estimation approach for measuring documents' topical
diversity outperforms the state of the art on the PubMed dataset, which is commonly
used for diversity experiments.
| no_new_dataset | 0.954816 |
1701.04355 | Hadrien Bertrand | Hadrien Bertrand, Matthieu Perrot, Roberto Ardon, Isabelle Bloch | Classification of MRI data using Deep Learning and Gaussian
Process-based Model Selection | Accepted at ISBI 2017 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classification of MRI images according to the anatomical field of view is
a necessary task to solve when faced with the increasing quantity of medical
images. In parallel, advances in deep learning make it a suitable tool for
computer vision problems. Using a common architecture (such as AlexNet)
provides quite good results, but not sufficient for clinical use. Improving the
model is not an easy task, due to the large number of hyper-parameters
governing both the architecture and the training of the network, and to the
limited understanding of their relevance. Since an exhaustive search is not
tractable, we propose to optimize the network first by random search, and then
by an adaptive search based on Gaussian Processes and Probability of
Improvement. Applying this method on a large and varied MRI dataset, we show a
substantial improvement between the baseline network and the final one (up to
20\% for the most difficult classes).
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2017 17:02:31 GMT"
}
] | 2017-01-17T00:00:00 | [
[
"Bertrand",
"Hadrien",
""
],
[
"Perrot",
"Matthieu",
""
],
[
"Ardon",
"Roberto",
""
],
[
"Bloch",
"Isabelle",
""
]
] | TITLE: Classification of MRI data using Deep Learning and Gaussian
Process-based Model Selection
ABSTRACT: The classification of MRI images according to the anatomical field of view is
a necessary task to solve when faced with the increasing quantity of medical
images. In parallel, advances in deep learning make it a suitable tool for
computer vision problems. Using a common architecture (such as AlexNet)
provides quite good results, but not sufficient for clinical use. Improving the
model is not an easy task, due to the large number of hyper-parameters
governing both the architecture and the training of the network, and to the
limited understanding of their relevance. Since an exhaustive search is not
tractable, we propose to optimize the network first by random search, and then
by an adaptive search based on Gaussian Processes and Probability of
Improvement. Applying this method on a large and varied MRI dataset, we show a
substantial improvement between the baseline network and the final one (up to
20\% for the most difficult classes).
| no_new_dataset | 0.94887 |
1609.03323 | Julius Hannink | Julius Hannink, Thomas Kautz, Cristian F. Pasluosta, Karl-G\"unter
Ga{\ss}mann, Jochen Klucken, Bjoern M. Eskofier | Sensor-based Gait Parameter Extraction with Deep Convolutional Neural
Networks | in IEEE Journal of Biomedical and Health Informatics (2016) | null | 10.1109/JBHI.2016.2636456 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measurement of stride-related, biomechanical parameters is the common
rationale for objective gait impairment scoring. State-of-the-art double
integration approaches to extract these parameters from inertial sensor data
are, however, limited in their clinical applicability due to the underlying
assumptions. To overcome this, we present a method to translate the abstract
information provided by wearable sensors to context-related expert features
based on deep convolutional neural networks. Regarding mobile gait analysis,
this enables integration-free and data-driven extraction of a set of 8
spatio-temporal stride parameters. To this end, two modelling approaches are
compared: A combined network estimating all parameters of interest and an
ensemble approach that spawns less complex networks for each parameter
individually. The ensemble approach is outperforming the combined modelling in
the current application. On a clinically relevant and publicly available
benchmark dataset, we estimate stride length, width and medio-lateral change in
foot angle up to ${-0.15\pm6.09}$ cm, ${-0.09\pm4.22}$ cm and ${0.13 \pm
3.78^\circ}$ respectively. Stride, swing and stance time as well as heel and
toe contact times are estimated up to ${\pm 0.07}$, ${\pm0.05}$, ${\pm 0.07}$,
${\pm0.07}$ and ${\pm0.12}$ s respectively. This is comparable to, and in parts
outperforms or defines, the state of the art. Our results further indicate that
the proposed change in methodology could substitute assumption-driven
double-integration methods and enable mobile assessment of spatio-temporal
stride parameters in clinically critical situations as e.g. in the case of
spastic gait impairments.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 09:33:57 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2016 10:56:32 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Jan 2017 12:30:39 GMT"
}
] | 2017-01-16T00:00:00 | [
[
"Hannink",
"Julius",
""
],
[
"Kautz",
"Thomas",
""
],
[
"Pasluosta",
"Cristian F.",
""
],
[
"Gaßmann",
"Karl-Günter",
""
],
[
"Klucken",
"Jochen",
""
],
[
"Eskofier",
"Bjoern M.",
""
]
] | TITLE: Sensor-based Gait Parameter Extraction with Deep Convolutional Neural
Networks
ABSTRACT: Measurement of stride-related, biomechanical parameters is the common
rationale for objective gait impairment scoring. State-of-the-art double
integration approaches to extract these parameters from inertial sensor data
are, however, limited in their clinical applicability due to the underlying
assumptions. To overcome this, we present a method to translate the abstract
information provided by wearable sensors to context-related expert features
based on deep convolutional neural networks. Regarding mobile gait analysis,
this enables integration-free and data-driven extraction of a set of 8
spatio-temporal stride parameters. To this end, two modelling approaches are
compared: A combined network estimating all parameters of interest and an
ensemble approach that spawns less complex networks for each parameter
individually. The ensemble approach is outperforming the combined modelling in
the current application. On a clinically relevant and publicly available
benchmark dataset, we estimate stride length, width and medio-lateral change in
foot angle up to ${-0.15\pm6.09}$ cm, ${-0.09\pm4.22}$ cm and ${0.13 \pm
3.78^\circ}$ respectively. Stride, swing and stance time as well as heel and
toe contact times are estimated up to ${\pm 0.07}$, ${\pm0.05}$, ${\pm 0.07}$,
${\pm0.07}$ and ${\pm0.12}$ s respectively. This is comparable to, and in parts
outperforms or defines, the state of the art. Our results further indicate that
the proposed change in methodology could substitute assumption-driven
double-integration methods and enable mobile assessment of spatio-temporal
stride parameters in clinically critical situations as e.g. in the case of
spastic gait impairments.
| no_new_dataset | 0.9455 |
1701.03102 | Xiang Xiang | Xiang Xiang, Trac D. Tran | Linear Disentangled Representation Learning for Facial Actions | Codes available at https://github.com/eglxiang/icassp15_emotion and
https://github.com/eglxiang/FacialAU. arXiv admin note: text overlap with
arXiv:1410.1606 | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Limited annotated data available for the recognition of facial expression and
action units embarrasses the training of deep networks, which can learn
disentangled invariant features. However, a linear model with just several
parameters normally is not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of explicit
decoupling which is practically inexact. Instead, we exploit the low-rank
property across frames to subtract the underlying neutral faces which are
modeled jointly with sparse representation on the action components with group
sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot
automatic method on raw face videos performs as competitively as SRC applied on
manually prepared action components and performs even better than SRC in terms
of true positive rate. We apply the model to the even more challenging task of
facial action unit recognition, verified on the MPI Face Video Database
(MPI-VDB) achieving a decent performance. All the programs and data have been
made publicly available.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 16:34:29 GMT"
}
] | 2017-01-16T00:00:00 | [
[
"Xiang",
"Xiang",
""
],
[
"Tran",
"Trac D.",
""
]
] | TITLE: Linear Disentangled Representation Learning for Facial Actions
ABSTRACT: Limited annotated data available for the recognition of facial expression and
action units embarrasses the training of deep networks, which can learn
disentangled invariant features. However, a linear model with just several
parameters normally is not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of explicit
decoupling which is practically inexact. Instead, we exploit the low-rank
property across frames to subtract the underlying neutral faces which are
modeled jointly with sparse representation on the action components with group
sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot
automatic method on raw face videos performs as competitively as SRC applied on
manually prepared action components and performs even better than SRC in terms
of true positive rate. We apply the model to the even more challenging task of
facial action unit recognition, verified on the MPI Face Video Database
(MPI-VDB) achieving a decent performance. All the programs and data have been
made publicly available.
| no_new_dataset | 0.947088 |
1701.03551 | Liang Lin | Keze Wang and Dongyu Zhang and Ya Li and Ruimao Zhang and Liang Lin | Cost-Effective Active Learning for Deep Image Classification | Accepted by IEEE Transactions on Circuits and Systems for Video
Technology (TCSVT) 2016 | null | 10.1109/TCSVT.2016.2589879 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent successes in learning-based image classification, however, heavily
rely on a large number of annotated training samples, which may require
considerable human effort. In this paper, we propose a novel active learning
framework, which is capable of building a competitive classifier with optimal
feature representation via a limited amount of labeled training instances in an
incremental learning manner. Our approach advances the existing active learning
methods in two aspects. First, we incorporate deep convolutional neural
networks into active learning. Through the properly designed framework, the
feature representation and the classifier can be simultaneously updated with
progressively annotated informative samples. Second, we present a
cost-effective sample selection strategy to improve the classification
performance with less manual annotations. Unlike traditional methods focusing
on only the uncertain samples of low prediction confidence, we especially
discover the large amount of high confidence samples from the unlabeled set for
feature learning. Specifically, these high confidence samples are automatically
selected and iteratively assigned pseudo-labels. We thus call our framework
"Cost-Effective Active Learning" (CEAL) standing for the two
advantages. Extensive experiments demonstrate that the proposed CEAL framework
can achieve promising results on two challenging image classification datasets,
i.e., face recognition on CACD database [1] and object categorization on
Caltech-256 [2].
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2017 03:07:45 GMT"
}
] | 2017-01-16T00:00:00 | [
[
"Wang",
"Keze",
""
],
[
"Zhang",
"Dongyu",
""
],
[
"Li",
"Ya",
""
],
[
"Zhang",
"Ruimao",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Cost-Effective Active Learning for Deep Image Classification
ABSTRACT: Recent successes in learning-based image classification, however, heavily
rely on a large number of annotated training samples, which may require
considerable human effort. In this paper, we propose a novel active learning
framework, which is capable of building a competitive classifier with optimal
feature representation via a limited amount of labeled training instances in an
incremental learning manner. Our approach advances the existing active learning
methods in two aspects. First, we incorporate deep convolutional neural
networks into active learning. Through the properly designed framework, the
feature representation and the classifier can be simultaneously updated with
progressively annotated informative samples. Second, we present a
cost-effective sample selection strategy to improve the classification
performance with less manual annotations. Unlike traditional methods focusing
on only the uncertain samples of low prediction confidence, we especially
discover the large amount of high confidence samples from the unlabeled set for
feature learning. Specifically, these high confidence samples are automatically
selected and iteratively assigned pseudo-labels. We thus call our framework
"Cost-Effective Active Learning" (CEAL) standing for the two
advantages. Extensive experiments demonstrate that the proposed CEAL framework
can achieve promising results on two challenging image classification datasets,
i.e., face recognition on CACD database [1] and object categorization on
Caltech-256 [2].
| no_new_dataset | 0.947914 |
1701.03682 | Emrah Budur | Priyank Mathur, Arkajyoti Misra, Emrah Budur | LIDE: Language Identification from Text Documents | null | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increase in the use of microblogging came along with the rapid growth on
short linguistic data. On the other hand deep learning is considered to be the
new frontier to extract meaningful information out of large amount of raw data
in an automated manner. In this study, we engaged these two emerging fields to
come up with a robust language identifier on demand, namely Language
Identification Engine (LIDE). As a result, we achieved 95.12% accuracy in
Discriminating between Similar Languages (DSL) Shared Task 2015 dataset, which
is comparable to the maximum reported accuracy of 95.54% achieved so far.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2017 14:20:06 GMT"
}
] | 2017-01-16T00:00:00 | [
[
"Mathur",
"Priyank",
""
],
[
"Misra",
"Arkajyoti",
""
],
[
"Budur",
"Emrah",
""
]
] | TITLE: LIDE: Language Identification from Text Documents
ABSTRACT: The increase in the use of microblogging came along with the rapid growth of
short linguistic data. On the other hand, deep learning is considered to be the
new frontier to extract meaningful information out of large amounts of raw data
in an automated manner. In this study, we engaged these two emerging fields to
come up with a robust language identifier on demand, namely Language
Identification Engine (LIDE). As a result, we achieved 95.12% accuracy in
Discriminating between Similar Languages (DSL) Shared Task 2015 dataset, which
is comparable to the maximum reported accuracy of 95.54% achieved so far.
| no_new_dataset | 0.939025 |
1607.07695 | Itir Onal Ertugrul | Itir Onal Ertugrul, Mete Ozay, Fatos Tunay Yarman Vural | Hierarchical Multi-resolution Mesh Networks for Brain Decoding | 18 pages | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new framework, called Hierarchical Multi-resolution Mesh
Networks (HMMNs), which establishes a set of brain networks at multiple time
resolutions of fMRI signal to represent the underlying cognitive process. The
suggested framework, first, decomposes the fMRI signal into various frequency
subbands using wavelet transforms. Then, a brain network, called mesh network,
is formed at each subband by ensembling a set of local meshes. The locality
around each anatomic region is defined with respect to a neighborhood system
based on functional connectivity. The arc weights of a mesh are estimated by
ridge regression formed among the average region time series. In the final
step, the adjacency matrices of mesh networks obtained at different subbands
are ensembled for brain decoding under a hierarchical learning architecture,
called fuzzy stacked generalization (FSG). Our results on the Human Connectome
Project task-fMRI dataset reflect that the suggested HMMN model can
successfully discriminate tasks by extracting complementary information
obtained from mesh arc weights of multiple subbands. We study the topological
properties of the mesh networks at different resolutions using the network
measures, namely, node degree, node strength, betweenness centrality and global
efficiency; and investigate the connectivity of anatomic regions, during a
cognitive task. We observe significant variations among the network topologies
obtained for different subbands. We also analyze the diversity properties of
classifier ensemble, trained by the mesh networks in multiple subbands and
observe that the classifiers in the ensemble collaborate with each other to
fuse the complementary information freed at each subband. We conclude that the
fMRI data, recorded during a cognitive task, embed diverse information across
the anatomic regions at each resolution.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2016 17:26:31 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jan 2017 20:42:47 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Ertugrul",
"Itir Onal",
""
],
[
"Ozay",
"Mete",
""
],
[
"Vural",
"Fatos Tunay Yarman",
""
]
] | TITLE: Hierarchical Multi-resolution Mesh Networks for Brain Decoding
ABSTRACT: We propose a new framework, called Hierarchical Multi-resolution Mesh
Networks (HMMNs), which establishes a set of brain networks at multiple time
resolutions of fMRI signal to represent the underlying cognitive process. The
suggested framework, first, decomposes the fMRI signal into various frequency
subbands using wavelet transforms. Then, a brain network, called mesh network,
is formed at each subband by ensembling a set of local meshes. The locality
around each anatomic region is defined with respect to a neighborhood system
based on functional connectivity. The arc weights of a mesh are estimated by
ridge regression formed among the average region time series. In the final
step, the adjacency matrices of mesh networks obtained at different subbands
are ensembled for brain decoding under a hierarchical learning architecture,
called fuzzy stacked generalization (FSG). Our results on the Human Connectome
Project task-fMRI dataset reflect that the suggested HMMN model can
successfully discriminate tasks by extracting complementary information
obtained from mesh arc weights of multiple subbands. We study the topological
properties of the mesh networks at different resolutions using the network
measures, namely, node degree, node strength, betweenness centrality and global
efficiency; and investigate the connectivity of anatomic regions, during a
cognitive task. We observe significant variations among the network topologies
obtained for different subbands. We also analyze the diversity properties of
classifier ensemble, trained by the mesh networks in multiple subbands and
observe that the classifiers in the ensemble collaborate with each other to
fuse the complementary information freed at each subband. We conclude that the
fMRI data, recorded during a cognitive task, embed diverse information across
the anatomic regions at each resolution.
| no_new_dataset | 0.952794 |
1612.05476 | Paul Swoboda | Paul Swoboda, Carsten Rother, Hassan Abu Alhaija, Dagmar Kainmueller,
Bogdan Savchynskyy | A Study of Lagrangean Decompositions and Dual Ascent Solvers for Graph
Matching | Added acknowledgments | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the quadratic assignment problem, in computer vision also known as
graph matching. Two leading solvers for this problem optimize the Lagrange
decomposition duals with sub-gradient and dual ascent (also known as message
passing) updates. We explore this direction further and propose several additional
Lagrangean relaxations of the graph matching problem along with corresponding
algorithms, which are all based on a common dual ascent framework. Our
extensive empirical evaluation gives several theoretical insights and suggests
a new state-of-the-art any-time solver for the considered problem. Our
improvement over state-of-the-art is particularly visible on a new dataset with
large-scale sparse problem instances containing more than 500 graph nodes each.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 14:14:42 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2017 11:51:00 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Swoboda",
"Paul",
""
],
[
"Rother",
"Carsten",
""
],
[
"Alhaija",
"Hassan Abu",
""
],
[
"Kainmueller",
"Dagmar",
""
],
[
"Savchynskyy",
"Bogdan",
""
]
] | TITLE: A Study of Lagrangean Decompositions and Dual Ascent Solvers for Graph
Matching
ABSTRACT: We study the quadratic assignment problem, in computer vision also known as
graph matching. Two leading solvers for this problem optimize the Lagrange
decomposition duals with sub-gradient and dual ascent (also known as message
passing) updates. We explore this direction further and propose several additional
Lagrangean relaxations of the graph matching problem along with corresponding
algorithms, which are all based on a common dual ascent framework. Our
extensive empirical evaluation gives several theoretical insights and suggests
a new state-of-the-art any-time solver for the considered problem. Our
improvement over state-of-the-art is particularly visible on a new dataset with
large-scale sparse problem instances containing more than 500 graph nodes each.
| new_dataset | 0.954858 |
1701.02291 | Tapabrata Ghosh | Tapabrata Ghosh | QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures | Updated once | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present QuickNet, a fast and accurate network architecture that is both
faster and significantly more accurate than other fast deep architectures like
SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making
it more memory efficient. We do this by making two major modifications to the
reference Darknet model (Redmon et al, 2015): 1) The use of depthwise separable
convolutions and 2) The use of parametric rectified linear units. We make the
observation that parametric rectified linear units are computationally
equivalent to leaky rectified linear units at test time and the observation
that separable convolutions can be interpreted as a compressed Inception
network (Chollet, 2016). Using these observations, we derive a network
architecture, which we call QuickNet, that is both faster and more accurate
than previous models. Our architecture provides at least four major advantages:
(1) A smaller model size, which is more tenable on memory constrained systems;
(2) A significantly faster network which is more tenable on computationally
constrained systems; (3) A high accuracy of 95.7 percent on the CIFAR-10
Dataset, which outperforms all but one result published so far, although we note
that our works are orthogonal approaches and can be combined; (4) Orthogonality
to previous model compression approaches allowing for further speed gains to be
realized.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2017 18:29:07 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2017 07:44:17 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Ghosh",
"Tapabrata",
""
]
] | TITLE: QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures
ABSTRACT: We present QuickNet, a fast and accurate network architecture that is both
faster and significantly more accurate than other fast deep architectures like
SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making
it more memory efficient. We do this by making two major modifications to the
reference Darknet model (Redmon et al, 2015): 1) The use of depthwise separable
convolutions and 2) The use of parametric rectified linear units. We make the
observation that parametric rectified linear units are computationally
equivalent to leaky rectified linear units at test time and the observation
that separable convolutions can be interpreted as a compressed Inception
network (Chollet, 2016). Using these observations, we derive a network
architecture, which we call QuickNet, that is both faster and more accurate
than previous models. Our architecture provides at least four major advantages:
(1) A smaller model size, which is more tenable on memory constrained systems;
(2) A significantly faster network which is more tenable on computationally
constrained systems; (3) A high accuracy of 95.7 percent on the CIFAR-10
Dataset, which outperforms all but one result published so far, although we note
that our works are orthogonal approaches and can be combined; (4) Orthogonality
to previous model compression approaches allowing for further speed gains to be
realized.
| no_new_dataset | 0.948917 |
1701.03129 | Besat Kassaie | Besat Kassaie | De-identification In practice | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report our effort to identify sensitive information, a subset of the data
items listed by HIPAA (Health Insurance Portability and Accountability Act), from
medical text using the recent advances in natural language processing and
machine learning techniques. We represent the words with high dimensional
continuous vectors learned by a variant of Word2Vec called Continuous Bag of
Words (CBOW). We feed the word vectors into a simple neural network with a Long
Short-Term Memory (LSTM) architecture. Without any attempts to extract manually
crafted features and considering that our medical dataset is too small to be
fed into a neural network, we obtained promising results. The results thrilled us
to think about the larger scale of the project with precise parameter tuning
and other possible improvements.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 19:22:56 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Kassaie",
"Besat",
""
]
] | TITLE: De-identification In practice
ABSTRACT: We report our effort to identify sensitive information, a subset of the data
items listed by HIPAA (Health Insurance Portability and Accountability Act), from
medical text using the recent advances in natural language processing and
machine learning techniques. We represent the words with high dimensional
continuous vectors learned by a variant of Word2Vec called Continuous Bag of
Words (CBOW). We feed the word vectors into a simple neural network with a Long
Short-Term Memory (LSTM) architecture. Without any attempts to extract manually
crafted features and considering that our medical dataset is too small to be
fed into a neural network, we obtained promising results. The results thrilled us
to think about the larger scale of the project with precise parameter tuning
and other possible improvements.
| no_new_dataset | 0.947478 |
1701.03151 | Mengtian Li | Mengtian Li and Daniel Huber | Guaranteed Parameter Estimation for Discrete Energy Minimization | WACV 2017: IEEE Winter Conference on Applications of Computer Vision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural learning, a method to estimate the parameters for discrete energy
minimization, has been proven to be effective in solving computer vision
problems, especially in 3D scene parsing. As the complexity of the models
increases, structural learning algorithms turn to approximate inference to
retain tractability. Unfortunately, such methods often fail because the
approximation can be arbitrarily poor. In this work, we propose a method to
overcome this limitation through exploiting the properties of the joint problem
of training time inference and learning. With the help of the learning
framework, we transform the inapproximable inference problem into a polynomial
time solvable one, thereby enabling tractable exact inference while still
allowing an arbitrary graph structure and full potential interactions. Our
learning algorithm is guaranteed to return a solution with a bounded error to
the global optimal within the feasible parameter space. We demonstrate the
effectiveness of this method on two point cloud scene parsing datasets. Our
approach runs much faster and solves a problem that is intractable for
previous, well-known approaches.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 20:41:14 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Li",
"Mengtian",
""
],
[
"Huber",
"Daniel",
""
]
] | TITLE: Guaranteed Parameter Estimation for Discrete Energy Minimization
ABSTRACT: Structural learning, a method to estimate the parameters for discrete energy
minimization, has been proven to be effective in solving computer vision
problems, especially in 3D scene parsing. As the complexity of the models
increases, structural learning algorithms turn to approximate inference to
retain tractability. Unfortunately, such methods often fail because the
approximation can be arbitrarily poor. In this work, we propose a method to
overcome this limitation through exploiting the properties of the joint problem
of training time inference and learning. With the help of the learning
framework, we transform the inapproximable inference problem into a polynomial
time solvable one, thereby enabling tractable exact inference while still
allowing an arbitrary graph structure and full potential interactions. Our
learning algorithm is guaranteed to return a solution with a bounded error to
the global optimal within the feasible parameter space. We demonstrate the
effectiveness of this method on two point cloud scene parsing datasets. Our
approach runs much faster and solves a problem that is intractable for
previous, well-known approaches.
| no_new_dataset | 0.947284 |
1701.03281 | Tao Wei | Tao Wei, Changhu Wang, Chang Wen Chen | Modularized Morphing of Neural Networks | 12 pages, 6 figures, Under review as a conference paper at ICLR 2017 | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we study the problem of network morphism, an effective learning
scheme to morph a well-trained neural network to a new one with the network
function completely preserved. Different from existing work where basic
morphing types on the layer level were addressed, we target the central
problem of network morphism at a higher level, i.e., how a convolutional layer
can be morphed into an arbitrary module of a neural network. To simplify the
representation of a network, we abstract a module as a graph with blobs as
vertices and convolutional layers as edges, based on which the morphing process
is able to be formulated as a graph transformation problem. Two atomic morphing
operations are introduced to compose the graphs, based on which modules are
classified into two families, i.e., simple morphable modules and complex
modules. We present practical morphing solutions for both of these two
families, and prove that any reasonable module can be morphed from a single
convolutional layer. Extensive experiments have been conducted based on the
state-of-the-art ResNet on benchmark datasets, and the effectiveness of the
proposed solution has been verified.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2017 09:48:53 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Wei",
"Tao",
""
],
[
"Wang",
"Changhu",
""
],
[
"Chen",
"Chang Wen",
""
]
] | TITLE: Modularized Morphing of Neural Networks
ABSTRACT: In this work we study the problem of network morphism, an effective learning
scheme to morph a well-trained neural network to a new one with the network
function completely preserved. Different from existing work where basic
morphing types on the layer level were addressed, we target the central
problem of network morphism at a higher level, i.e., how a convolutional layer
can be morphed into an arbitrary module of a neural network. To simplify the
representation of a network, we abstract a module as a graph with blobs as
vertices and convolutional layers as edges, based on which the morphing process
is able to be formulated as a graph transformation problem. Two atomic morphing
operations are introduced to compose the graphs, based on which modules are
classified into two families, i.e., simple morphable modules and complex
modules. We present practical morphing solutions for both of these two
families, and prove that any reasonable module can be morphed from a single
convolutional layer. Extensive experiments have been conducted based on the
state-of-the-art ResNet on benchmark datasets, and the effectiveness of the
proposed solution has been verified.
| no_new_dataset | 0.946547 |
1701.03439 | Ruotian Luo | Ruotian Luo, Gregory Shakhnarovich | Comprehension-guided referring expressions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider generation and comprehension of natural language referring
expression for objects in an image. Unlike generic "image captioning" which
lacks natural standard evaluation criteria, quality of a referring expression
may be measured by the receiver's ability to correctly infer which object is
being described. Following this intuition, we propose two approaches to utilize
models trained for comprehension task to generate better expressions. First, we
use a comprehension module trained on human-generated expressions, as a
"critic" of referring expression generator. The comprehension module serves as
a differentiable proxy of human evaluation, providing training signal to the
generation module. Second, we use the comprehension module in a
generate-and-rerank pipeline, which chooses from candidate expressions
generated by a model according to their performance on the comprehension task.
We show that both approaches lead to improved referring expression generation
on multiple benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2017 18:03:52 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Luo",
"Ruotian",
""
],
[
"Shakhnarovich",
"Gregory",
""
]
] | TITLE: Comprehension-guided referring expressions
ABSTRACT: We consider generation and comprehension of natural language referring
expression for objects in an image. Unlike generic "image captioning" which
lacks natural standard evaluation criteria, quality of a referring expression
may be measured by the receiver's ability to correctly infer which object is
being described. Following this intuition, we propose two approaches to utilize
models trained for comprehension task to generate better expressions. First, we
use a comprehension module trained on human-generated expressions, as a
"critic" of referring expression generator. The comprehension module serves as
a differentiable proxy of human evaluation, providing training signal to the
generation module. Second, we use the comprehension module in a
generate-and-rerank pipeline, which chooses from candidate expressions
generated by a model according to their performance on the comprehension task.
We show that both approaches lead to improved referring expression generation
on multiple benchmark datasets.
| no_new_dataset | 0.947039 |
1701.03441 | Fathi Salem | Yuzhen Lu and Fathi M. Salem | Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural
Networks | 5 pages, 4 Figures, 3 Tables. arXiv admin note: substantial text
overlap with arXiv:1612.03707 | null | null | null | cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The standard LSTM recurrent neural networks, while very powerful in long-range
dependency sequence applications, have a highly complex structure and a relatively
large number of (adaptive) parameters. In this work, we present an empirical comparison
between the standard LSTM recurrent neural network architecture and three new
parameter-reduced variants obtained by eliminating combinations of the input
signal, bias, and hidden unit signals from individual gating signals. The
experiments on two sequence datasets show that the three new variants, called
simply as LSTM1, LSTM2, and LSTM3, can achieve comparable performance to the
standard LSTM model with fewer (adaptive) parameters.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2017 18:12:05 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Lu",
"Yuzhen",
""
],
[
"Salem",
"Fathi M.",
""
]
] | TITLE: Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural
Networks
ABSTRACT: The standard LSTM recurrent neural networks, while very powerful in long-range
dependency sequence applications, have a highly complex structure and a relatively
large number of (adaptive) parameters. In this work, we present an empirical comparison
between the standard LSTM recurrent neural network architecture and three new
parameter-reduced variants obtained by eliminating combinations of the input
signal, bias, and hidden unit signals from individual gating signals. The
experiments on two sequence datasets show that the three new variants, called
simply as LSTM1, LSTM2, and LSTM3, can achieve comparable performance to the
standard LSTM model with fewer (adaptive) parameters.
| no_new_dataset | 0.953144 |
1701.03452 | Fathi Salem | Joel Heck and Fathi M. Salem | Simplified Minimal Gated Unit Variations for Recurrent Neural Networks | 5 pages, 3 Figures, 5 Tables | null | null | null | cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks with various types of hidden units have been used
to solve a diverse range of problems involving sequence data. Two of the most
recent proposals, gated recurrent units (GRU) and minimal gated units (MGU),
have shown comparable promising results on example public datasets. In this
paper, we introduce three model variants of the minimal gated unit (MGU) which
further simplify that design by reducing the number of parameters in the
forget-gate dynamic equation. These three model variants, referred to simply as
MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset
and from the Reuters Newswire Topics (RNT) dataset. The new models have shown
similar accuracy to the MGU model while using fewer parameters and thus
lowering training expense. One model variant, namely MGU2, performed better
than MGU on the datasets considered, and thus may be used as an alternate to
MGU or GRU in recurrent neural networks.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2017 18:52:31 GMT"
}
] | 2017-01-13T00:00:00 | [
[
"Heck",
"Joel",
""
],
[
"Salem",
"Fathi M.",
""
]
] | TITLE: Simplified Minimal Gated Unit Variations for Recurrent Neural Networks
ABSTRACT: Recurrent neural networks with various types of hidden units have been used
to solve a diverse range of problems involving sequence data. Two of the most
recent proposals, gated recurrent units (GRU) and minimal gated units (MGU),
have shown comparable promising results on example public datasets. In this
paper, we introduce three model variants of the minimal gated unit (MGU) which
further simplify that design by reducing the number of parameters in the
forget-gate dynamic equation. These three model variants, referred to simply as
MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset
and from the Reuters Newswire Topics (RNT) dataset. The new models have shown
similar accuracy to the MGU model while using fewer parameters and thus
lowering training expense. One model variant, namely MGU2, performed better
than MGU on the datasets considered, and thus may be used as an alternate to
MGU or GRU in recurrent neural networks.
| no_new_dataset | 0.956594 |
1609.07197 | Shyam Upadhyay | Shyam Upadhyay and Ming-Wei Chang | Annotating Derivations: A New Evaluation Strategy and Dataset for
Algebra Word Problems | EACL 2017 long paper | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new evaluation for automatic solvers for algebra word problems,
which can identify mistakes that existing evaluations overlook. Our proposal is
to evaluate such solvers using derivations, which reflect how an equation
system was constructed from the word problem. To accomplish this, we develop an
algorithm for checking the equivalence between two derivations, and show how
derivation annotations can be semi-automatically added to existing datasets.
To make our experiments more comprehensive, we include the derivation
annotation for DRAW-1K, a new dataset containing 1000 general algebra word
problems. In our experiments, we found that the annotated derivations enable a
more accurate evaluation of automatic solvers than previously used metrics. We
release derivation annotations for over 2300 algebra word problems for future
evaluations.
| [
{
"version": "v1",
"created": "Fri, 23 Sep 2016 00:38:59 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 20:05:38 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Upadhyay",
"Shyam",
""
],
[
"Chang",
"Ming-Wei",
""
]
] | TITLE: Annotating Derivations: A New Evaluation Strategy and Dataset for
Algebra Word Problems
ABSTRACT: We propose a new evaluation for automatic solvers for algebra word problems,
which can identify mistakes that existing evaluations overlook. Our proposal is
to evaluate such solvers using derivations, which reflect how an equation
system was constructed from the word problem. To accomplish this, we develop an
algorithm for checking the equivalence between two derivations, and show how
derivation annotations can be semi-automatically added to existing datasets.
To make our experiments more comprehensive, we include the derivation
annotation for DRAW-1K, a new dataset containing 1000 general algebra word
problems. In our experiments, we found that the annotated derivations enable a
more accurate evaluation of automatic solvers than previously used metrics. We
release derivation annotations for over 2300 algebra word problems for future
evaluations.
| new_dataset | 0.957991 |
1701.02829 | Chenglong Li | Chenglong Li, Guizhao Wang, Yunpeng Ma, Aihua Zheng, Bin Luo, and Jin
Tang | A Unified RGB-T Saliency Detection Benchmark: Dataset, Baselines,
Analysis and A Novel Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant progress, image saliency detection still remains a
challenging task in complex scenes and environments. Integrating multiple
different but complementary cues, like RGB and Thermal (RGB-T), may be an
effective way for boosting saliency detection performance. The current research
in this direction, however, is limited by the lack of a comprehensive
benchmark. This work contributes such a RGB-T image dataset, which includes 821
spatially aligned RGB-T image pairs and their ground truth annotations for
saliency detection purposes. The image pairs are recorded with high diversity
under different scenes and environmental conditions, and we annotate 11
challenges on these image pairs for performing the challenge-sensitive analysis
for different saliency detection algorithms. We also implement 3 kinds of
baseline methods with different modality inputs to provide a comprehensive
comparison platform.
With this benchmark, we propose a novel approach, multi-task manifold ranking
with cross-modality consistency, for RGB-T saliency detection. In particular,
we introduce a weight for each modality to describe the reliability, and
integrate them into the graph-based manifold ranking algorithm to achieve
adaptive fusion of different source data. Moreover, we incorporate the
cross-modality consistent constraints to integrate different modalities
collaboratively. For the optimization, we design an efficient algorithm to
iteratively solve several subproblems with closed-form solutions. Extensive
experiments against other baseline methods on the newly created benchmark
demonstrate the effectiveness of the proposed approach, and we also provide
basic insights and potential future research directions for RGB-T saliency
detection.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 02:38:23 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Li",
"Chenglong",
""
],
[
"Wang",
"Guizhao",
""
],
[
"Ma",
"Yunpeng",
""
],
[
"Zheng",
"Aihua",
""
],
[
"Luo",
"Bin",
""
],
[
"Tang",
"Jin",
""
]
] | TITLE: A Unified RGB-T Saliency Detection Benchmark: Dataset, Baselines,
Analysis and A Novel Approach
ABSTRACT: Despite significant progress, image saliency detection still remains a
challenging task in complex scenes and environments. Integrating multiple
different but complementary cues, like RGB and Thermal (RGB-T), may be an
effective way for boosting saliency detection performance. The current research
in this direction, however, is limited by the lack of a comprehensive
benchmark. This work contributes such a RGB-T image dataset, which includes 821
spatially aligned RGB-T image pairs and their ground truth annotations for
saliency detection purposes. The image pairs are recorded with high diversity
under different scenes and environmental conditions, and we annotate 11
challenges on these image pairs for performing the challenge-sensitive analysis
for different saliency detection algorithms. We also implement 3 kinds of
baseline methods with different modality inputs to provide a comprehensive
comparison platform.
With this benchmark, we propose a novel approach, multi-task manifold ranking
with cross-modality consistency, for RGB-T saliency detection. In particular,
we introduce a weight for each modality to describe the reliability, and
integrate them into the graph-based manifold ranking algorithm to achieve
adaptive fusion of different source data. Moreover, we incorporate the
cross-modality consistent constraints to integrate different modalities
collaboratively. For the optimization, we design an efficient algorithm to
iteratively solve several subproblems with closed-form solutions. Extensive
experiments against other baseline methods on the newly created benchmark
demonstrate the effectiveness of the proposed approach, and we also provide
basic insights and potential future research directions for RGB-T saliency
detection.
| new_dataset | 0.967163 |
1701.02892 | Xiaowei Zhang | Xiaowei Zhang and Chi Xu and Yu Zhang and Tingshao Zhu and Li Cheng | Multivariate Regression with Grossly Corrupted Observations: A Robust
Approach and its Applications | null | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the problem of multivariate linear regression where a
portion of the observations is grossly corrupted or is missing, and the
magnitudes and locations of such occurrences are unknown a priori. To deal
with this problem, we propose a new approach by explicitly considering the error
source as well as its sparseness nature. An interesting property of our
approach lies in its ability of allowing individual regression output elements
or tasks to possess their unique noise levels. Moreover, despite working with a
non-smooth optimization problem, our approach still guarantees to converge to
its optimal solution. Experiments on synthetic data demonstrate the
competitiveness of our approach compared with existing multivariate regression
models. In addition, empirically our approach has been validated with very
promising results on two exemplar real-world applications: The first concerns
the prediction of \textit{Big-Five} personality based on user behaviors at
social network sites (SNSs), while the second is 3D human hand pose estimation
from depth images. The implementation of our approach and comparison methods as
well as the involved datasets are made publicly available in support of the
open-source and reproducible research initiatives.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 08:52:53 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Zhang",
"Xiaowei",
""
],
[
"Xu",
"Chi",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhu",
"Tingshao",
""
],
[
"Cheng",
"Li",
""
]
] | TITLE: Multivariate Regression with Grossly Corrupted Observations: A Robust
Approach and its Applications
ABSTRACT: This paper studies the problem of multivariate linear regression where a
portion of the observations is grossly corrupted or is missing, and the
magnitudes and locations of such occurrences are unknown a priori. To deal
with this problem, we propose a new approach by explicitly considering the error
source as well as its sparseness nature. An interesting property of our
approach lies in its ability of allowing individual regression output elements
or tasks to possess their unique noise levels. Moreover, despite working with a
non-smooth optimization problem, our approach still guarantees to converge to
its optimal solution. Experiments on synthetic data demonstrate the
competitiveness of our approach compared with existing multivariate regression
models. In addition, empirically our approach has been validated with very
promising results on two exemplar real-world applications: The first concerns
the prediction of \textit{Big-Five} personality based on user behaviors at
social network sites (SNSs), while the second is 3D human hand pose estimation
from depth images. The implementation of our approach and comparison methods as
well as the involved datasets are made publicly available in support of the
open-source and reproducible research initiatives.
| no_new_dataset | 0.942823 |
1701.03041 | Matthew Veres | Matthew Veres, Medhat Moussa, Graham W. Taylor | Modeling Grasp Motor Imagery through Deep Conditional Generative Models | Accepted for publication in Robotics and Automation Letters (RA-L) | null | null | null | cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grasping is a complex process involving knowledge of the object, the
surroundings, and of oneself. While humans are able to integrate and process
all of the sensory information required for performing this task, equipping
machines with this capability is an extremely challenging endeavor. In this
paper, we investigate how deep learning techniques can allow us to translate
high-level concepts such as motor imagery to the problem of robotic grasp
synthesis. We explore a paradigm based on generative models for learning
integrated object-action representations, and demonstrate its capacity for
capturing and generating multimodal, multi-finger grasp configurations on a
simulated grasping dataset.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 16:20:39 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Veres",
"Matthew",
""
],
[
"Moussa",
"Medhat",
""
],
[
"Taylor",
"Graham W.",
""
]
] | TITLE: Modeling Grasp Motor Imagery through Deep Conditional Generative Models
ABSTRACT: Grasping is a complex process involving knowledge of the object, the
surroundings, and of oneself. While humans are able to integrate and process
all of the sensory information required for performing this task, equipping
machines with this capability is an extremely challenging endeavor. In this
paper, we investigate how deep learning techniques can allow us to translate
high-level concepts such as motor imagery to the problem of robotic grasp
synthesis. We explore a paradigm based on generative models for learning
integrated object-action representations, and demonstrate its capacity for
capturing and generating multimodal, multi-finger grasp configurations on a
simulated grasping dataset.
| no_new_dataset | 0.945601 |
1701.03051 | Venkata Naveen Reddy Chedeti | Tapan Sahni, Chinmay Chandak, Naveen Reddy Chedeti, Manish Singh | Efficient Twitter Sentiment Classification using Subjective Distant
Supervision | null | null | null | null | cs.SI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As microblogging services like Twitter are becoming more and more influential
in today's globalised world, their facets like sentiment analysis are being
extensively studied. We are no longer constrained by our own opinions; others'
opinions and sentiments play a huge role in shaping our perspective. In this
paper, we build on previous works on Twitter sentiment analysis using Distant
Supervision. The existing approach requires huge computational resources for
analysing a large number of tweets. In this paper, we propose techniques to speed
up the computation process for sentiment analysis. We use tweet subjectivity to
select the right training samples. We also introduce the concept of EFWS
(Effective Word Score) of a tweet that is derived from polarity scores of
frequently used words, which is an additional heuristic that can be used to
speed up the sentiment classification with standard machine learning
algorithms. We performed our experiments using 1.6 million tweets. Experimental
evaluations show that our proposed technique is more efficient and has higher
accuracy compared to previously proposed methods. We achieve overall accuracies
of around 80% (EFWS heuristic gives an accuracy around 85%) on a training
dataset of 100K tweets, which is half the size of the dataset used for the
baseline model. The accuracy of our proposed model is 2-3% higher than the
baseline model, and the model effectively trains at twice the speed of the
baseline model.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 16:39:04 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Sahni",
"Tapan",
""
],
[
"Chandak",
"Chinmay",
""
],
[
"Chedeti",
"Naveen Reddy",
""
],
[
"Singh",
"Manish",
""
]
] | TITLE: Efficient Twitter Sentiment Classification using Subjective Distant
Supervision
ABSTRACT: As microblogging services like Twitter are becoming more and more influential
in today's globalised world, their facets like sentiment analysis are being
extensively studied. We are no longer constrained by our own opinions; others'
opinions and sentiments play a huge role in shaping our perspective. In this
paper, we build on previous works on Twitter sentiment analysis using Distant
Supervision. The existing approach requires huge computational resources for
analysing a large number of tweets. In this paper, we propose techniques to speed
up the computation process for sentiment analysis. We use tweet subjectivity to
select the right training samples. We also introduce the concept of EFWS
(Effective Word Score) of a tweet that is derived from polarity scores of
frequently used words, which is an additional heuristic that can be used to
speed up the sentiment classification with standard machine learning
algorithms. We performed our experiments using 1.6 million tweets. Experimental
evaluations show that our proposed technique is more efficient and has higher
accuracy compared to previously proposed methods. We achieve overall accuracies
of around 80% (EFWS heuristic gives an accuracy around 85%) on a training
dataset of 100K tweets, which is half the size of the dataset used for the
baseline model. The accuracy of our proposed model is 2-3% higher than the
baseline model, and the model effectively trains at twice the speed of the
baseline model.
| no_new_dataset | 0.948537 |
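A hedged sketch of an "effective word score" style heuristic in the spirit of the abstract above: average the polarity of a tweet's lexicon words and only fall back to a trained classifier when the heuristic is not confident. The lexicon, threshold, and fallback model are illustrative assumptions, not the authors' exact formulation.

# Hypothetical EFWS-style heuristic for fast sentiment classification.
def effective_word_score(tweet, polarity_lexicon):
    scores = [polarity_lexicon[w] for w in tweet.lower().split() if w in polarity_lexicon]
    return sum(scores) / len(scores) if scores else 0.0

def classify(tweet, polarity_lexicon, fallback_model, threshold=0.5):
    score = effective_word_score(tweet, polarity_lexicon)
    if abs(score) >= threshold:
        return "positive" if score > 0 else "negative"
    return fallback_model(tweet)   # e.g. an SVM or logistic regression classifier

lexicon = {"love": 1.0, "great": 0.8, "hate": -1.0, "awful": -0.9}
print(classify("i love this phone", lexicon, lambda t: "negative"))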
1701.03091 | Besat Kassaie | Besat Kassaie | SPARQL over GraphX | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability of the RDF data model to link data from heterogeneous domains has
led to an explosive growth of RDF data. Consequently, evaluating SPARQL queries over
large RDF data has been crucial for the semantic web community. However, due to
the graph nature of RDF data, evaluating SPARQL queries in relational databases
and common data-parallel systems requires many joins and is inefficient. On
the other hand, the enormity of datasets that are graph in nature such as
social network data, has led the database community to develop graph-parallel
processing systems to support iterative graph computations efficiently. In this
work we take advantage of the graph representation of RDF data and exploit
GraphX, a new graph processing system based on Spark. We propose a subgraph
matching algorithm, compatible with the GraphX programming model to evaluate
SPARQL queries. Experiments are performed to show the system's scalability
in handling large datasets.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2017 18:38:16 GMT"
}
] | 2017-01-12T00:00:00 | [
[
"Kassaie",
"Besat",
""
]
] | TITLE: SPARQL over GraphX
ABSTRACT: The ability of the RDF data model to link data from heterogeneous domains has
led to an explosive growth of RDF data. Consequently, evaluating SPARQL queries over
large RDF data has been crucial for the semantic web community. However, due to
the graph nature of RDF data, evaluating SPARQL queries in relational databases
and common data-parallel systems requires many joins and is inefficient. On
the other hand, the enormity of datasets that are graph in nature such as
social network data, has led the database community to develop graph-parallel
processing systems to support iterative graph computations efficiently. In this
work we take advantage of the graph representation of RDF data and exploit
GraphX, a new graph processing system based on Spark. We propose a subgraph
matching algorithm, compatible with the GraphX programming model to evaluate
SPARQL queries. Experiments are performed to show the system's scalability
in handling large datasets.
| no_new_dataset | 0.943452 |
1503.06666 | David Martins de Matos | Francisco Raposo, Ricardo Ribeiro, David Martins de Matos | Using Generic Summarization to Improve Music Information Retrieval Tasks | 24 pages, 10 tables; Submitted to IEEE/ACM Transactions on Audio,
Speech and Language Processing | IEEE/ACM Transactions on Audio, Speech and Language Processing,
vol. 24, n. 6, March 2016 | 10.1109/TASLP.2016.2541299 | null | cs.IR cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to satisfy processing time constraints, many MIR tasks process only
a segment of the whole music signal. This practice may lead to decreasing
performance, since the most important information for the tasks may not be in
those processed segments. In this paper, we leverage generic summarization
algorithms, previously applied to text and speech summarization, to summarize
items in music datasets. These algorithms build summaries that are both
concise and diverse by selecting appropriate segments from the input signal,
which makes them good candidates to summarize music as well. We evaluate the
summarization process on binary and multiclass music genre classification
tasks, by comparing the performance obtained using summarized datasets against
the performances obtained using continuous segments (which is the traditional
method used for addressing the previously mentioned time constraints) and full
songs of the same original dataset. We show that GRASSHOPPER, LexRank, LSA,
MMR, and a Support Sets-based Centrality model improve classification
performance when compared to selected 30-second baselines. We also show that
summarized datasets lead to a classification performance that is not
statistically significantly different from that obtained using full songs.
Furthermore, we argue for the advantages of sharing summarized datasets for future MIR
research.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 14:48:24 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Dec 2015 18:38:22 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2016 16:24:42 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Raposo",
"Francisco",
""
],
[
"Ribeiro",
"Ricardo",
""
],
[
"de Matos",
"David Martins",
""
]
] | TITLE: Using Generic Summarization to Improve Music Information Retrieval Tasks
ABSTRACT: In order to satisfy processing time constraints, many MIR tasks process only
a segment of the whole music signal. This practice may lead to decreasing
performance, since the most important information for the tasks may not be in
those processed segments. In this paper, we leverage generic summarization
algorithms, previously applied to text and speech summarization, to summarize
items in music datasets. These algorithms build summaries that are both
concise and diverse by selecting appropriate segments from the input signal,
which makes them good candidates to summarize music as well. We evaluate the
summarization process on binary and multiclass music genre classification
tasks, by comparing the performance obtained using summarized datasets against
the performances obtained using continuous segments (which is the traditional
method used for addressing the previously mentioned time constraints) and full
songs of the same original dataset. We show that GRASSHOPPER, LexRank, LSA,
MMR, and a Support Sets-based Centrality model improve classification
performance when compared to selected 30-second baselines. We also show that
summarized datasets lead to a classification performance that is not
statistically significantly different from that obtained using full songs.
Furthermore, we argue for the advantages of sharing summarized datasets for future MIR
research.
| no_new_dataset | 0.949106 |
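One of the summarizers named in the abstract above is MMR; the following is a minimal, query-free sketch of MMR segment selection that scores relevance against the document centroid. The segment features and the lambda trade-off value are assumptions for illustration, not the paper's configuration.

# Illustrative MMR selection over per-segment feature vectors.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr_select(segments, k=3, lam=0.7):
    centroid = segments.mean(axis=0)
    selected, remaining = [], list(range(len(segments)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            redundancy = max((cosine(segments[i], segments[j]) for j in selected), default=0.0)
            return lam * cosine(segments[i], centroid) - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

features = np.random.rand(20, 12)   # e.g. averaged spectral features per short segment
print(mmr_select(features, k=4))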
1506.07401 | Yang Wang | Yang Wang, Dong Zhou, Armin Bunde, and Shlomo Havlin | Testing reanalysis datasets in Antarctica: Trends, persistence
properties and trend significance | 8 pages, 5 figures | Journal of Geophysical Research: Atmosphere, 121 (21):
12839-12855, 2016 | 10.1002/2016JD024864 | null | physics.ao-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reanalysis datasets provide very important sources for investigating the
climate dynamics and climate changes in Antarctica. In this paper, three major
reanalysis data are compared with Antarctic station data over the last 35
years: the National Centers for Environmental Prediction and the National
Center for Atmospheric Research reanalysis (NCEP1), NCEP-DOE Reanalysis 2
(NCEP2), and the European Centre for Medium-Range Weather Forecasts Interim
Re-Analysis (ERA-Interim). In our assessment, we compare the linear trends, the
fluctuations around the trends, the persistence properties and the significance
level of warming trends in the reanalysis data with the observational ones. We
find that NCEP1 and NCEP2 show spurious warming trends in all parts of
Antarctica except the Peninsula, while ERA-Interim is quite reliable except at
Amundsen-Scott. To investigate the persistence of the data sets, we consider
the lag-1 autocorrelation $C(1)$ and the Hurst exponent. While $C(1)$ varies
quite erratically in different stations, the Hurst exponent shows similar
patterns all over Antarctica. Regarding the significance of the trends, NCEP1
and NCEP2 differ considerably from the observational datasets by strongly
exaggerating the warming trends. In contrast, ERA-Interim gives reliable
results at most stations except at Amundsen-Scott where it shows a significant
cooling trend.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 14:51:40 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 16:50:26 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Wang",
"Yang",
""
],
[
"Zhou",
"Dong",
""
],
[
"Bunde",
"Armin",
""
],
[
"Havlin",
"Shlomo",
""
]
] | TITLE: Testing reanalysis datasets in Antarctica: Trends, persistence
properties and trend significance
ABSTRACT: The reanalysis datasets provide very important sources for investigating the
climate dynamics and climate changes in Antarctica. In this paper, three major
reanalysis data are compared with Antarctic station data over the last 35
years: the National Centers for Environmental Prediction and the National
Center for Atmospheric Research reanalysis (NCEP1), NCEP-DOE Reanalysis 2
(NCEP2), and the European Centre for Medium-Range Weather Forecasts Interim
Re-Analysis (ERA-Interim). In our assessment, we compare the linear trends, the
fluctuations around the trends, the persistence properties and the significance
level of warming trends in the reanalysis data with the observational ones. We
find that NCEP1 and NCEP2 show spurious warming trends in all parts of
Antarctica except the Peninsula, while ERA-Interim is quite reliable except at
Amundsen-Scott. To investigate the persistence of the data sets, we consider
the lag-1 autocorrelation $C(1)$ and the Hurst exponent. While $C(1)$ varies
quite erratically in different stations, the Hurst exponent shows similar
patterns all over Antarctica. Regarding the significance of the trends, NCEP1
and NCEP2 differ considerably from the observational datasets by strongly
exaggerating the warming trends. In contrast, ERA-Interim gives reliable
results at most stations except at Amundsen-Scott where it shows a significant
cooling trend.
| no_new_dataset | 0.946001 |
1602.08680 | Shangwen Li | Shangwen Li, Sanjay Purushotham, Chen Chen, Yuzhuo Ren, and C.-C. Jay
Kuo | Measuring and Predicting Tag Importance for Image Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Textual data such as tags and sentence descriptions are combined with visual
cues to reduce the semantic gap for image retrieval applications in today's
Multimodal Image Retrieval (MIR) systems. However, all tags are treated as
equally important in these systems, which may result in misalignment between
visual and textual modalities during MIR training. This will further lead to
degenerated retrieval performance at query time. To address this issue, we
investigate the problem of tag importance prediction, where the goal is to
automatically predict the tag importance and use it in image retrieval. To
achieve this, we first propose a method to measure the relative importance of
object and scene tags from image sentence descriptions. Using this as the
ground truth, we present a tag importance prediction model to jointly exploit
visual, semantic and context cues. The Structural Support Vector Machine (SSVM)
formulation is adopted to ensure efficient training of the prediction model.
Then, the Canonical Correlation Analysis (CCA) is employed to learn the
relation between image visual features and tag importance to obtain robust
retrieval performance. Experimental results on three real-world datasets show a
significant performance improvement of the proposed MIR with Tag Importance
Prediction (MIR/TIP) system over other MIR systems.
| [
{
"version": "v1",
"created": "Sun, 28 Feb 2016 07:38:25 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 18:13:21 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Jan 2017 22:32:36 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Li",
"Shangwen",
""
],
[
"Purushotham",
"Sanjay",
""
],
[
"Chen",
"Chen",
""
],
[
"Ren",
"Yuzhuo",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] | TITLE: Measuring and Predicting Tag Importance for Image Retrieval
ABSTRACT: Textual data such as tags and sentence descriptions are combined with visual
cues to reduce the semantic gap for image retrieval applications in today's
Multimodal Image Retrieval (MIR) systems. However, all tags are treated as
equally important in these systems, which may result in misalignment between
visual and textual modalities during MIR training. This will further lead to
degenerated retrieval performance at query time. To address this issue, we
investigate the problem of tag importance prediction, where the goal is to
automatically predict the tag importance and use it in image retrieval. To
achieve this, we first propose a method to measure the relative importance of
object and scene tags from image sentence descriptions. Using this as the
ground truth, we present a tag importance prediction model to jointly exploit
visual, semantic and context cues. The Structural Support Vector Machine (SSVM)
formulation is adopted to ensure efficient training of the prediction model.
Then, the Canonical Correlation Analysis (CCA) is employed to learn the
relation between image visual features and tag importance to obtain robust
retrieval performance. Experimental results on three real-world datasets show a
significant performance improvement of the proposed MIR with Tag Importance
Prediction (MIR/TIP) system over other MIR systems.
| no_new_dataset | 0.948585 |
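A small sketch of the CCA step described in the abstract above, assuming scikit-learn's CCA: learn a shared space between image visual features and tag-importance vectors, then retrieve by distance in that space. The dimensions and data below are dummy placeholders, not the paper's features.

# Hedged sketch: cross-modal retrieval via Canonical Correlation Analysis.
import numpy as np
from sklearn.cross_decomposition import CCA

n, d_visual, d_tags = 500, 128, 50
X_visual = np.random.rand(n, d_visual)        # image visual features (placeholder)
Y_tags = np.random.rand(n, d_tags)            # tag-importance vectors (placeholder)

cca = CCA(n_components=16)
cca.fit(X_visual, Y_tags)
X_c, Y_c = cca.transform(X_visual, Y_tags)    # projections into the shared space

query = cca.transform(np.random.rand(1, d_visual))[0]        # project a query image
ranking = np.argsort(np.linalg.norm(Y_c - query, axis=1))    # rank database items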
1609.09430 | Shawn Hershey | Shawn Hershey, Sourish Chaudhuri, Daniel P. W. Ellis, Jort F. Gemmeke,
Aren Jansen, R. Channing Moore, Manoj Plakal, Devin Platt, Rif A. Saurous,
Bryan Seybold, Malcolm Slaney, Ron J. Weiss, Kevin Wilson | CNN Architectures for Large-Scale Audio Classification | Accepted for publication at ICASSP 2017 Changes: Added definitions of
mAP, AUC, and d-prime. Updated mAP/AUC/d-prime numbers for Audio Set based on
changes of latest Audio Set revision. Changed wording to fit 4 page limit
with new additions | null | null | null | cs.SD cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNNs) have proven very effective in image
classification and show promise for audio. We use various CNN architectures to
classify the soundtracks of a dataset of 70M training videos (5.24 million
hours) with 30,871 video-level labels. We examine fully connected Deep Neural
Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We
investigate varying the size of both training set and label vocabulary, finding
that analogs of the CNNs used in image classification do well on our audio
classification task, and larger training and label sets help up to a point. A
model using embeddings from these classifiers does much better than raw
features on the Audio Set [5] Acoustic Event Detection (AED) classification
task.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2016 17:04:50 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 18:06:51 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Hershey",
"Shawn",
""
],
[
"Chaudhuri",
"Sourish",
""
],
[
"Ellis",
"Daniel P. W.",
""
],
[
"Gemmeke",
"Jort F.",
""
],
[
"Jansen",
"Aren",
""
],
[
"Moore",
"R. Channing",
""
],
[
"Plakal",
"Manoj",
""
],
[
"Platt",
"Devin",
""
],
[
"Saurous",
"Rif A.",
""
],
[
"Seybold",
"Bryan",
""
],
[
"Slaney",
"Malcolm",
""
],
[
"Weiss",
"Ron J.",
""
],
[
"Wilson",
"Kevin",
""
]
] | TITLE: CNN Architectures for Large-Scale Audio Classification
ABSTRACT: Convolutional Neural Networks (CNNs) have proven very effective in image
classification and show promise for audio. We use various CNN architectures to
classify the soundtracks of a dataset of 70M training videos (5.24 million
hours) with 30,871 video-level labels. We examine fully connected Deep Neural
Networks (DNNs), AlexNet [1], VGG [2], Inception [3], and ResNet [4]. We
investigate varying the size of both training set and label vocabulary, finding
that analogs of the CNNs used in image classification do well on our audio
classification task, and larger training and label sets help up to a point. A
model using embeddings from these classifiers does much better than raw
features on the Audio Set [5] Acoustic Event Detection (AED) classification
task.
| no_new_dataset | 0.939803 |
1612.06549 | Heike Adel | Heike Adel and Hinrich Sch\"utze | Exploring Different Dimensions of Attention for Uncertainty Detection | accepted at EACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks with attention have proven effective for many natural
language processing tasks. In this paper, we develop attention mechanisms for
uncertainty detection. In particular, we generalize standardly used attention
mechanisms by introducing external attention and sequence-preserving attention.
These novel architectures differ from standard approaches in that they use
external resources to compute attention weights and preserve sequence
information. We compare them to other configurations along different dimensions
of attention. Our novel architectures set the new state of the art on a
Wikipedia benchmark dataset and perform similarly to the state-of-the-art model
on a biomedical benchmark which uses a large set of linguistic features.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 08:49:59 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 14:56:03 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Adel",
"Heike",
""
],
[
"Schütze",
"Hinrich",
""
]
] | TITLE: Exploring Different Dimensions of Attention for Uncertainty Detection
ABSTRACT: Neural networks with attention have proven effective for many natural
language processing tasks. In this paper, we develop attention mechanisms for
uncertainty detection. In particular, we generalize commonly used attention
mechanisms by introducing external attention and sequence-preserving attention.
These novel architectures differ from standard approaches in that they use
external resources to compute attention weights and preserve sequence
information. We compare them to other configurations along different dimensions
of attention. Our novel architectures set the new state of the art on a
Wikipedia benchmark dataset and perform similarly to the state-of-the-art model
on a biomedical benchmark which uses a large set of linguistic features.
| no_new_dataset | 0.953188 |
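An illustrative sketch, not the paper's architecture, of "external" attention in the sense of the abstract above: attention weights come from an outside resource (here a tiny lexicon of uncertainty cue words) rather than from the hidden states themselves, and are used to pool token representations before classification. The lexicon and temperature are assumptions.

# Toy external-attention pooling over token representations.
import numpy as np

def external_attention_pool(tokens, hidden_states, cue_lexicon, temperature=1.0):
    # hidden_states: (seq_len, dim) token representations from any encoder.
    cues = np.array([cue_lexicon.get(t.lower(), 0.0) for t in tokens])
    weights = np.exp(cues / temperature)
    weights /= weights.sum()                   # softmax over external cue scores
    return weights @ hidden_states             # weighted sum, shape (dim,)

lexicon = {"might": 1.5, "possibly": 2.0, "suggests": 1.0}
tokens = "the results might possibly indicate a trend".split()
H = np.random.rand(len(tokens), 64)
pooled = external_attention_pool(tokens, H, lexicon)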
1612.06825 | Le Hou | Veda Murthy, Le Hou, Dimitris Samaras, Tahsin M. Kurc, Joel H. Saltz | Center-Focusing Multi-task CNN with Injected Features for Classification
of Glioma Nuclear Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifying the various shapes and attributes of a glioma cell nucleus is
crucial for diagnosis and understanding the disease. We investigate automated
classification of glioma nuclear shapes and visual attributes using
Convolutional Neural Networks (CNNs) on pathology images of automatically
segmented nuclei. We propose three methods that improve the performance of a
previously-developed semi-supervised CNN. First, we propose a method that
allows the CNN to focus on the most important part of an image: the image's
center containing the nucleus. Second, we inject (concatenate) pre-extracted
VGG features into an intermediate layer of our Semi-Supervised CNN so that
during training, the CNN can learn a set of complementary features. Third, we
separate the losses of the two groups of target classes (nuclear shapes and
attributes) into a single-label loss and a multi-label loss so that the prior
knowledge of inter-label exclusiveness can be incorporated. On a dataset of
2078 images, the proposed methods combined reduce the error rate of attribute
and shape classification by 21.54% and 15.07% respectively compared to the
existing state-of-the-art method on the same dataset.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 19:54:37 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 18:44:32 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Murthy",
"Veda",
""
],
[
"Hou",
"Le",
""
],
[
"Samaras",
"Dimitris",
""
],
[
"Kurc",
"Tahsin M.",
""
],
[
"Saltz",
"Joel H.",
""
]
] | TITLE: Center-Focusing Multi-task CNN with Injected Features for Classification
of Glioma Nuclear Images
ABSTRACT: Classifying the various shapes and attributes of a glioma cell nucleus is
crucial for diagnosis and understanding the disease. We investigate automated
classification of glioma nuclear shapes and visual attributes using
Convolutional Neural Networks (CNNs) on pathology images of automatically
segmented nuclei. We propose three methods that improve the performance of a
previously-developed semi-supervised CNN. First, we propose a method that
allows the CNN to focus on the most important part of an image: the image's
center containing the nucleus. Second, we inject (concatenate) pre-extracted
VGG features into an intermediate layer of our Semi-Supervised CNN so that
during training, the CNN can learn a set of complementary features. Third, we
separate the losses of the two groups of target classes (nuclear shapes and
attributes) into a single-label loss and a multi-label loss so that the prior
knowledge of inter-label exclusiveness can be incorporated. On a dataset of
2078 images, the proposed methods combined reduce the error rate of attribute
and shape classification by 21.54% and 15.07% respectively compared to the
existing state-of-the-art method on the same dataset.
| no_new_dataset | 0.944125 |
1701.02485 | Uzair Nadeem | Syed Afaq Ali Shah, Uzair Nadeem, Mohammed Bennamoun, Ferdous Sohel,
Roberto Togneri | Efficient Image Set Classification using Linear Regression based Image
Reconstruction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel image set classification technique using linear regression
models. Downsampled gallery image sets are interpreted as subspaces of a high
dimensional space to avoid the computationally expensive training step. We
estimate regression models for each test image using the class specific gallery
subspaces. Images of the test set are then reconstructed using the regression
models. Based on the minimum reconstruction error between the reconstructed and
the original images, a weighted voting strategy is used to classify the test
set. We performed extensive evaluation on the benchmark UCSD/Honda, CMU Mobo
and YouTube Celebrity datasets for face classification, and ETH-80 dataset for
object classification. The results demonstrate that by using only a small
amount of training data, our technique achieved competitive classification
accuracy and superior computational speed compared with the state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2017 09:17:29 GMT"
}
] | 2017-01-11T00:00:00 | [
[
"Shah",
"Syed Afaq Ali",
""
],
[
"Nadeem",
"Uzair",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Sohel",
"Ferdous",
""
],
[
"Togneri",
"Roberto",
""
]
] | TITLE: Efficient Image Set Classification using Linear Regression based Image
Reconstruction
ABSTRACT: We propose a novel image set classification technique using linear regression
models. Downsampled gallery image sets are interpreted as subspaces of a high
dimensional space to avoid the computationally expensive training step. We
estimate regression models for each test image using the class specific gallery
subspaces. Images of the test set are then reconstructed using the regression
models. Based on the minimum reconstruction error between the reconstructed and
the original images, a weighted voting strategy is used to classify the test
set. We performed extensive evaluation on the benchmark UCSD/Honda, CMU Mobo
and YouTube Celebrity datasets for face classification, and ETH-80 dataset for
object classification. The results demonstrate that by using only a small
amount of training data, our technique achieved competitive classification
accuracy and superior computational speed compared with the state-of-the-art
methods.
| no_new_dataset | 0.956513 |
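A minimal sketch of classification by class-specific linear-regression reconstruction, in the spirit of the abstract above: each gallery set spans a subspace, every test image is reconstructed from each class's gallery by least squares, and weighted votes go to the class with the smallest reconstruction error. Downsampling, the vote weighting, and the toy data are assumptions.

# Illustrative image set classification via reconstruction error.
import numpy as np

def reconstruction_error(y, G):
    # G: (pixels, n_gallery_images) for one class; y: (pixels,) test image.
    beta, *_ = np.linalg.lstsq(G, y, rcond=None)
    return np.linalg.norm(y - G @ beta)

def classify_image_set(test_set, gallery_by_class):
    votes = {c: 0.0 for c in gallery_by_class}
    for y in test_set:
        errors = {c: reconstruction_error(y, G) for c, G in gallery_by_class.items()}
        best = min(errors, key=errors.get)
        votes[best] += 1.0 / (errors[best] + 1e-8)   # weight votes by confidence
    return max(votes, key=votes.get)

gallery = {c: np.random.rand(100, 30) for c in ["subjectA", "subjectB"]}
test_images = [np.random.rand(100) for _ in range(10)]
print(classify_image_set(test_images, gallery))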
1510.00012 | Jianbo Ye | Jianbo Ye, Panruo Wu, James Z. Wang and Jia Li | Fast Discrete Distribution Clustering Using Wasserstein Barycenter with
Sparse Support | double-column, 17 pages, 3 figures, 5 tables. English usage improved | null | null | null | stat.CO cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a variety of research areas, the weighted bag of vectors and the histogram
are widely used descriptors for complex objects. Both can be expressed as
discrete distributions. D2-clustering pursues the minimum total within-cluster
variation for a set of discrete distributions subject to the
Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue,
the bottleneck being the computation of a centroid distribution, called
Wasserstein barycenter, that minimizes its sum of squared distances to the
cluster members. In this paper, we develop a modified Bregman ADMM approach for
computing the approximate discrete Wasserstein barycenter of large clusters. In
the case when the support points of the barycenters are unknown and have low
cardinality, our method achieves high accuracy empirically at a much reduced
computational cost. The strengths and weaknesses of our method and its
alternatives are examined through experiments, and we recommend scenarios for
their respective usage. Moreover, we develop both serial and parallelized
versions of the algorithm. By experimenting with large-scale data, we
demonstrate the computational efficiency of the new methods and investigate
their convergence properties and numerical stability. The clustering results
obtained on several datasets in different domains are highly competitive in
comparison with some widely used methods in the corresponding areas.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 20:10:59 GMT"
},
{
"version": "v2",
"created": "Sun, 8 May 2016 22:40:26 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2016 23:41:22 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Jan 2017 18:14:20 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Ye",
"Jianbo",
""
],
[
"Wu",
"Panruo",
""
],
[
"Wang",
"James Z.",
""
],
[
"Li",
"Jia",
""
]
] | TITLE: Fast Discrete Distribution Clustering Using Wasserstein Barycenter with
Sparse Support
ABSTRACT: In a variety of research areas, the weighted bag of vectors and the histogram
are widely used descriptors for complex objects. Both can be expressed as
discrete distributions. D2-clustering pursues the minimum total within-cluster
variation for a set of discrete distributions subject to the
Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue,
the bottleneck being the computation of a centroid distribution, called
Wasserstein barycenter, that minimizes its sum of squared distances to the
cluster members. In this paper, we develop a modified Bregman ADMM approach for
computing the approximate discrete Wasserstein barycenter of large clusters. In
the case when the support points of the barycenters are unknown and have low
cardinality, our method achieves high accuracy empirically at a much reduced
computational cost. The strengths and weaknesses of our method and its
alternatives are examined through experiments, and we recommend scenarios for
their respective usage. Moreover, we develop both serial and parallelized
versions of the algorithm. By experimenting with large-scale data, we
demonstrate the computational efficiency of the new methods and investigate
their convergence properties and numerical stability. The clustering results
obtained on several datasets in different domains are highly competitive in
comparison with some widely used methods in the corresponding areas.
| no_new_dataset | 0.946547 |
1602.03966 | Yongkun Li | Pengpeng Zhao, Yongkun Li, Hong Xie, Zhiyong Wu, Yinlong Xu, John C.
S. Lui | Measuring and Maximizing Influence via Random Walk in Social Activity
Networks | 19 pages | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the popularity of OSNs, finding a set of most influential users (or
nodes) so as to trigger the largest influence cascade is of significance. For
example, companies may take advantage of the "word-of-mouth" effect to trigger
a large cascade of purchases by offering free samples/discounts to those most
influential users. This task is usually modeled as an influence maximization
problem, and it has been widely studied in the past decade. However,
considering that users in OSNs may participate in various kinds of online
activities, e.g., giving ratings to products, joining discussion groups, etc.,
influence diffusion through online activities becomes even more significant.
In this paper, we study the impact of online activities by formulating the
influence maximization problem for social-activity networks (SANs) containing
both users and online activities. To address the computation challenge, we
define an influence centrality via random walks to measure influence, then use
the Monte Carlo framework to efficiently estimate the centrality in SANs.
Furthermore, we develop a greedy-based algorithm with two novel optimization
techniques to find the most influential users. By conducting extensive
experiments with real-world datasets, we show our approach is more efficient
than the state-of-the-art algorithm IMM [17] when handling a large
number of online activities.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 05:40:25 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2017 14:01:32 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Zhao",
"Pengpeng",
""
],
[
"Li",
"Yongkun",
""
],
[
"Xie",
"Hong",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Xu",
"Yinlong",
""
],
[
"Lui",
"John C. S.",
""
]
] | TITLE: Measuring and Maximizing Influence via Random Walk in Social Activity
Networks
ABSTRACT: With the popularity of OSNs, finding a set of most influential users (or
nodes) so as to trigger the largest influence cascade is of significance. For
example, companies may take advantage of the "word-of-mouth" effect to trigger
a large cascade of purchases by offering free samples/discounts to those most
influential users. This task is usually modeled as an influence maximization
problem, and it has been widely studied in the past decade. However,
considering that users in OSNs may participate in various kinds of online
activities, e.g., giving ratings to products, joining discussion groups, etc.,
influence diffusion through online activities becomes even more significant.
In this paper, we study the impact of online activities by formulating the
influence maximization problem for social-activity networks (SANs) containing
both users and online activities. To address the computation challenge, we
define an influence centrality via random walks to measure influence, then use
the Monte Carlo framework to efficiently estimate the centrality in SANs.
Furthermore, we develop a greedy-based algorithm with two novel optimization
techniques to find the most influential users. By conducting extensive
experiments with real-world datasets, we show our approach is more efficient
than the state-of-the-art algorithm IMM [17] when handling a large
number of online activities.
| no_new_dataset | 0.947039 |
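A hedged sketch of estimating a random-walk centrality by Monte Carlo simulation, the general strategy mentioned in the abstract above; the paper's exact influence centrality definition and greedy seed selection are not reproduced here. The graph and walk parameters are toy assumptions.

# Toy Monte Carlo estimate of random-walk visit frequencies on a SAN-like graph.
import random
from collections import Counter

def monte_carlo_centrality(adj, n_walks=1000, walk_len=10):
    # adj: dict node -> list of neighbours (users and activities mixed).
    visits = Counter()
    nodes = list(adj)
    for _ in range(n_walks):
        v = random.choice(nodes)
        for _ in range(walk_len):
            visits[v] += 1
            if not adj[v]:
                break
            v = random.choice(adj[v])
    total = sum(visits.values())
    return {u: visits[u] / total for u in nodes}

graph = {1: [2, 3], 2: [1, 3], 3: [1], 4: [3]}
print(monte_carlo_centrality(graph))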
1603.02617 | Rigas Kouskouridas | Caner Sahin, Rigas Kouskouridas and Tae-Kyun Kim | Iterative Hough Forest with Histogram of Control Points for 6 DoF Object
Registration from Depth Images | IROS 2016 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art techniques proposed for 6D object pose recovery depend on
occlusion-free point clouds to accurately register objects in 3D space. To
reduce this dependency, we introduce a novel architecture called Iterative
Hough Forest with Histogram of Control Points that is capable of estimating
occluded and cluttered objects' 6D pose given a candidate 2D bounding box. Our
Iterative Hough Forest is learnt using patches extracted only from the positive
samples. These patches are represented with Histogram of Control Points (HoCP),
a "scale-variant" implicit volumetric description, which we derive from
recently introduced Implicit B-Splines (IBS). The rich discriminative
information provided by this scale-variance is leveraged during inference,
where the initial pose estimation of the object is iteratively refined based on
more discriminative control points by using our Iterative Hough Forest. We
conduct experiments on several test objects of a publicly available dataset to
test our architecture and to compare with the state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 18:33:44 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2017 12:43:53 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Sahin",
"Caner",
""
],
[
"Kouskouridas",
"Rigas",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: Iterative Hough Forest with Histogram of Control Points for 6 DoF Object
Registration from Depth Images
ABSTRACT: State-of-the-art techniques proposed for 6D object pose recovery depend on
occlusion-free point clouds to accurately register objects in 3D space. To
reduce this dependency, we introduce a novel architecture called Iterative
Hough Forest with Histogram of Control Points that is capable of estimating
occluded and cluttered objects' 6D pose given a candidate 2D bounding box. Our
Iterative Hough Forest is learnt using patches extracted only from the positive
samples. These patches are represented with Histogram of Control Points (HoCP),
a "scale-variant" implicit volumetric description, which we derive from
recently introduced Implicit B-Splines (IBS). The rich discriminative
information provided by this scale-variance is leveraged during inference,
where the initial pose estimation of the object is iteratively refined based on
more discriminative control points by using our Iterative Hough Forest. We
conduct experiments on several test objects of a publicly available dataset to
test our architecture and to compare with the state-of-the-art.
| no_new_dataset | 0.949576 |
1609.00680 | Jinbo Xu | Sheng Wang, Siqi Sun, Zhen Li, Renyu Zhang and Jinbo Xu | Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep
Learning Model | null | PLoS Comput Biol 13(1): e1005324, 2017 | 10.1371/journal.pcbi.1005324 | null | q-bio.BM cs.LG q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently exciting progress has been made on protein contact prediction, but
the predicted contacts for proteins without many sequence homologs are still of
low quality and not very useful for de novo structure prediction. This paper
presents a new deep learning method that predicts contacts by integrating both
evolutionary coupling (EC) and sequence conservation information through an
ultra-deep neural network formed by two deep residual networks. This deep
neural network allows us to model very complex sequence-contact relationships as
well as long-range inter-contact correlation. Our method greatly outperforms
existing contact prediction methods and leads to much more accurate
contact-assisted protein folding. Tested on three datasets of 579 proteins, the
average top L long-range prediction accuracy obtained our method, the
representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21
and 0.30, respectively; the average top L/10 long-range accuracy of our method,
CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding
using our predicted contacts as restraints can yield correct folds (i.e.,
TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and
CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively.
Further, our contact-assisted models have much better quality than
template-based models. Using our predicted contacts as restraints, we can (ab
initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast,
when the training proteins of our method are used as templates, homology
modeling can only do so for 10 of them. One interesting finding is that even if
we do not train our prediction models with any membrane proteins, our method
works very well on membrane protein prediction. Finally, in recent blind CAMEO
benchmark our method successfully folded 5 test proteins with a novel fold.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 17:41:54 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2016 15:39:23 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2016 03:09:45 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Sep 2016 23:08:52 GMT"
},
{
"version": "v5",
"created": "Mon, 7 Nov 2016 06:01:32 GMT"
},
{
"version": "v6",
"created": "Sun, 27 Nov 2016 22:32:50 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Wang",
"Sheng",
""
],
[
"Sun",
"Siqi",
""
],
[
"Li",
"Zhen",
""
],
[
"Zhang",
"Renyu",
""
],
[
"Xu",
"Jinbo",
""
]
] | TITLE: Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep
Learning Model
ABSTRACT: Recently exciting progress has been made on protein contact prediction, but
the predicted contacts for proteins without many sequence homologs are still of
low quality and not very useful for de novo structure prediction. This paper
presents a new deep learning method that predicts contacts by integrating both
evolutionary coupling (EC) and sequence conservation information through an
ultra-deep neural network formed by two deep residual networks. This deep
neural network allows us to model very complex sequence-contact relationships as
well as long-range inter-contact correlation. Our method greatly outperforms
existing contact prediction methods and leads to much more accurate
contact-assisted protein folding. Tested on three datasets of 579 proteins, the
average top L long-range prediction accuracy obtained by our method, the
representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21
and 0.30, respectively; the average top L/10 long-range accuracy of our method,
CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding
using our predicted contacts as restraints can yield correct folds (i.e.,
TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and
CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively.
Further, our contact-assisted models have much better quality than
template-based models. Using our predicted contacts as restraints, we can (ab
initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast,
when the training proteins of our method are used as templates, homology
modeling can only do so for 10 of them. One interesting finding is that even if
we do not train our prediction models with any membrane proteins, our method
works very well on membrane protein prediction. Finally, in recent blind CAMEO
benchmark our method successfully folded 5 test proteins with a novel fold.
| no_new_dataset | 0.948585 |
1612.00775 | Christopher Beckham | Christopher Beckham, Christopher Pal | A simple squared-error reformulation for ordinal classification | v1: Camera-ready abstract for NIPS for Health Workshop (2016) v2:
Clean-up of some sections, added appendix section where we briefly explore
optimisation of quadratic weighted kappa (QWK) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore ordinal classification (in the context of deep
neural networks) through a simple modification of the squared error loss which
allows it not only to be sensitive to class ordering, but also allows
the possibility of having a discrete probability distribution over the classes.
Our formulation is based on the use of a softmax hidden layer, which has
received relatively little attention in the literature. We empirically evaluate
its performance on the Kaggle diabetic retinopathy dataset, an ordinal and
high-resolution dataset and show that it outperforms all of the baselines
employed.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 17:57:04 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2017 16:04:38 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Beckham",
"Christopher",
""
],
[
"Pal",
"Christopher",
""
]
] | TITLE: A simple squared-error reformulation for ordinal classification
ABSTRACT: In this paper, we explore ordinal classification (in the context of deep
neural networks) through a simple modification of the squared error loss which
allows it not only to be sensitive to class ordering, but also allows
the possibility of having a discrete probability distribution over the classes.
Our formulation is based on the use of a softmax hidden layer, which has
received relatively little attention in the literature. We empirically evaluate
its performance on the Kaggle diabetic retinopathy dataset, an ordinal and
high-resolution dataset and show that it outperforms all of the baselines
employed.
| no_new_dataset | 0.9462 |
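A hedged numpy sketch of the reformulation described above: keep a softmax output layer, so a full distribution over ordinal grades is available, and penalise the squared error between the distribution's expected grade and the true grade. This illustrates the loss only, under stated assumptions, not the paper's full training setup.

# Illustrative expected-grade squared-error loss on top of a softmax layer.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ordinal_squared_error(logits, targets):
    # logits: (batch, K); targets: integer grades in {0, ..., K-1}.
    probs = softmax(logits)
    grades = np.arange(logits.shape[1])
    expected = probs @ grades              # E[grade] under the softmax distribution
    return np.mean((expected - targets) ** 2)

logits = np.random.randn(8, 5)
targets = np.random.randint(0, 5, size=8)
print(ordinal_squared_error(logits, targets))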
1701.00495 | Ariel Ephrat | Ariel Ephrat and Shmuel Peleg | Vid2speech: Speech Reconstruction from Silent Video | Accepted for publication at ICASSP 2017 | null | null | null | cs.CV cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speechreading is a notoriously difficult task for humans to perform. In this
paper we present an end-to-end model based on a convolutional neural network
(CNN) for generating an intelligible acoustic speech signal from silent video
frames of a speaking person. The proposed CNN generates sound features for each
frame based on its neighboring frames. Waveforms are then synthesized from the
learned speech features to produce intelligible speech. We show that by
leveraging the automatic feature learning capabilities of a CNN, we can obtain
state-of-the-art word intelligibility on the GRID dataset, and show promising
results for learning out-of-vocabulary (OOV) words.
| [
{
"version": "v1",
"created": "Mon, 2 Jan 2017 19:00:22 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2017 17:35:17 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Ephrat",
"Ariel",
""
],
[
"Peleg",
"Shmuel",
""
]
] | TITLE: Vid2speech: Speech Reconstruction from Silent Video
ABSTRACT: Speechreading is a notoriously difficult task for humans to perform. In this
paper we present an end-to-end model based on a convolutional neural network
(CNN) for generating an intelligible acoustic speech signal from silent video
frames of a speaking person. The proposed CNN generates sound features for each
frame based on its neighboring frames. Waveforms are then synthesized from the
learned speech features to produce intelligible speech. We show that by
leveraging the automatic feature learning capabilities of a CNN, we can obtain
state-of-the-art word intelligibility on the GRID dataset, and show promising
results for learning out-of-vocabulary (OOV) words.
| no_new_dataset | 0.955569 |
1701.01811 | Filippos Kokkinos | Filippos Kokkinos, Alexandros Potamianos | Structural Attention Neural Networks for improved sentiment analysis | Submitted to EACL2017 for review | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a tree-structured attention neural network for sentences and
small phrases and apply it to the problem of sentiment classification. Our
model expands the current recursive models by incorporating structural
information around a node of a syntactic tree using both bottom-up and top-down
information propagation. Also, the model utilizes structural attention to
identify the most salient representations during the construction of the
syntactic tree. To our knowledge, the proposed models achieve state of the art
performance on the Stanford Sentiment Treebank dataset.
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2017 09:58:49 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Kokkinos",
"Filippos",
""
],
[
"Potamianos",
"Alexandros",
""
]
] | TITLE: Structural Attention Neural Networks for improved sentiment analysis
ABSTRACT: We introduce a tree-structured attention neural network for sentences and
small phrases and apply it to the problem of sentiment classification. Our
model expands the current recursive models by incorporating structural
information around a node of a syntactic tree using both bottom-up and top-down
information propagation. Also, the model utilizes structural attention to
identify the most salient representations during the construction of the
syntactic tree. To our knowledge, the proposed models achieve state-of-the-art
performance on the Stanford Sentiment Treebank dataset.
| no_new_dataset | 0.949669 |
1701.01854 | Mohaddeseh Bastan | Mohaddeseh Bastan, Shahram Khadivi, Mohammad Mehdi Homayounpour | Neural Machine Translation on Scarce-Resource Condition: A case-study on
Persian-English | 6 pages, Submitted in ICEE 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Machine Translation (NMT) is a new approach for Machine Translation
(MT), and due to its success, it has attracted the attention of many researchers
in the field. In this paper, we study the NMT model on the Persian-English
language pair, analyzing the model and investigating its appropriateness for
scarce-resource scenarios, the situation that exists for Persian-centered
translation systems. We adjust the model for the Persian language and find the
best parameters and hyperparameters for two tasks: translation and
transliteration. We also apply preprocessing to the Persian dataset,
which yields an improvement of about one point in BLEU score. Also, we
have modified the loss function to enhance the word alignment of the model.
This new loss function yields a total improvement of 1.87 BLEU points in
translation quality.
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2017 16:27:44 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Bastan",
"Mohaddeseh",
""
],
[
"Khadivi",
"Shahram",
""
],
[
"Homayounpour",
"Mohammad Mehdi",
""
]
] | TITLE: Neural Machine Translation on Scarce-Resource Condition: A case-study on
Persian-English
ABSTRACT: Neural Machine Translation (NMT) is a new approach for Machine Translation
(MT), and due to its success, it has attracted the attention of many researchers
in the field. In this paper, we study the NMT model on the Persian-English
language pair, analyzing the model and investigating its appropriateness for
scarce-resource scenarios, the situation that exists for Persian-centered
translation systems. We adjust the model for the Persian language and find the
best parameters and hyperparameters for two tasks: translation and
transliteration. We also apply preprocessing to the Persian dataset,
which yields an improvement of about one point in BLEU score. Also, we
have modified the loss function to enhance the word alignment of the model.
This new loss function yields a total improvement of 1.87 BLEU points in
translation quality.
| no_new_dataset | 0.95222 |
1701.01875 | Zeshan Hussain | Hardie Cate, Fahim Dalvi, and Zeshan Hussain | Sign Language Recognition Using Temporal Classification | 5 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Devices like the Myo armband available in the market today enable us to
collect data about the position of a user's hands and fingers over time. We can
use these technologies for sign language translation since each sign is roughly
a combination of gestures across time. In this work, we utilize a dataset
collected by a group at the University of South Wales, which contains
parameters, such as hand position, hand rotation, and finger bend, for 95
unique signs. For each input stream representing a sign, we predict which sign
class this stream falls into. We begin by implementing baseline SVM and
logistic regression models, which perform reasonably well on high quality data.
Lower quality data requires a more sophisticated approach, so we explore
different methods in temporal classification, including long short term memory
architectures and sequential pattern mining methods.
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2017 20:09:52 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Cate",
"Hardie",
""
],
[
"Dalvi",
"Fahim",
""
],
[
"Hussain",
"Zeshan",
""
]
] | TITLE: Sign Language Recognition Using Temporal Classification
ABSTRACT: Devices like the Myo armband available in the market today enable us to
collect data about the position of a user's hands and fingers over time. We can
use these technologies for sign language translation since each sign is roughly
a combination of gestures across time. In this work, we utilize a dataset
collected by a group at the University of South Wales, which contains
parameters, such as hand position, hand rotation, and finger bend, for 95
unique signs. For each input stream representing a sign, we predict which sign
class this stream falls into. We begin by implementing baseline SVM and
logistic regression models, which perform reasonably well on high quality data.
Lower quality data requires a more sophisticated approach, so we explore
different methods in temporal classification, including long short term memory
architectures and sequential pattern mining methods.
| no_new_dataset | 0.942454 |
1701.01876 | Zeshan Hussain | Hardie Cate, Fahim Dalvi, and Zeshan Hussain | DeepFace: Face Generation using Deep Learning | 8 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use CNNs to build a system that both classifies images of faces based on a
variety of different facial attributes and generates new faces given a set of
desired facial characteristics. After introducing the problem and providing
context in the first section, we discuss recent work related to image
generation in Section 2. In Section 3, we describe the methods used to
fine-tune our CNN and generate new images using a novel approach inspired by a
Gaussian mixture model. In Section 4, we discuss our working dataset and
describe our preprocessing steps and handling of facial attributes. Finally, in
Sections 5, 6 and 7, we explain our experiments and results and conclude in the
following section. Our classification system has 82\% test accuracy.
Furthermore, our generation pipeline successfully creates well-formed faces.
| [
{
"version": "v1",
"created": "Sat, 7 Jan 2017 20:22:05 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Cate",
"Hardie",
""
],
[
"Dalvi",
"Fahim",
""
],
[
"Hussain",
"Zeshan",
""
]
] | TITLE: DeepFace: Face Generation using Deep Learning
ABSTRACT: We use CNNs to build a system that both classifies images of faces based on a
variety of different facial attributes and generates new faces given a set of
desired facial characteristics. After introducing the problem and providing
context in the first section, we discuss recent work related to image
generation in Section 2. In Section 3, we describe the methods used to
fine-tune our CNN and generate new images using a novel approach inspired by a
Gaussian mixture model. In Section 4, we discuss our working dataset and
describe our preprocessing steps and handling of facial attributes. Finally, in
Sections 5, 6 and 7, we explain our experiments and results and conclude in the
following section. Our classification system has 82\% test accuracy.
Furthermore, our generation pipeline successfully creates well-formed faces.
| no_new_dataset | 0.729279 |
1701.01908 | Fan Xu | Fan Xu, Mingwen Wang and Maoxi Li | Sentence-level dialects identification in the greater China region | 12 | International Journal on Natural Language Computing (IJNLC) Vol.
5, No.6, December 2016 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying the different varieties of the same language is more challenging
than identifying unrelated languages. In this paper, we propose an approach
to discriminate language varieties or dialects of Mandarin Chinese for the
Mainland China, Hong Kong, Taiwan, Macao, Malaysia and Singapore, a.k.a., the
Greater China Region (GCR). When applied to the dialects identification of the
GCR, we find that the commonly used character-level or word-level uni-gram
feature is not very effective since there exist several specific problems such
as the ambiguity and context-dependent characteristic of words in the dialects
of the GCR. To overcome these challenges, we use not only the general features
like character-level n-gram, but also many new word-level features, including
PMI-based and word alignment-based features. A series of evaluation results on
both the news and open-domain dataset from Wikipedia show the effectiveness of
the proposed approach.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2017 03:13:37 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Xu",
"Fan",
""
],
[
"Wang",
"Mingwen",
""
],
[
"Li",
"Maoxi",
""
]
] | TITLE: Sentence-level dialects identification in the greater China region
ABSTRACT: Identifying the different varieties of the same language is more challenging
than identifying unrelated languages. In this paper, we propose an approach
to discriminate language varieties or dialects of Mandarin Chinese for the
Mainland China, Hong Kong, Taiwan, Macao, Malaysia and Singapore, a.k.a., the
Greater China Region (GCR). When applied to the dialects identification of the
GCR, we find that the commonly used character-level or word-level uni-gram
feature is not very efficient since there exist several specific problems such
as the ambiguity and context-dependent characteristic of words in the dialects
of the GCR. To overcome these challenges, we use not only the general features
like character-level n-gram, but also many new word-level features, including
PMI-based and word alignment-based features. A series of evaluation results on
both the news and open-domain dataset from Wikipedia show the effectiveness of
the proposed approach.
| no_new_dataset | 0.955486 |
1701.01932 | Andrea Baraldi | Andrea Baraldi, Michael Laurence Humber, Dirk Tiede and Stefan Lang | Stage 4 validation of the Satellite Image Automatic Mapper lightweight
computer program for Earth observation Level 2 product generation, Part 2
Validation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The European Space Agency (ESA) defines an Earth Observation (EO) Level 2
product as a multispectral (MS) image corrected for geometric, atmospheric,
adjacency and topographic effects, stacked with its scene classification map
(SCM) whose legend includes quality layers such as cloud and cloud-shadow. No
ESA EO Level 2 product has ever been systematically generated at the ground
segment. To contribute toward filling an information gap from EO big sensory
data to the ESA EO Level 2 product, a Stage 4 validation (Val) of an off the
shelf Satellite Image Automatic Mapper (SIAM) lightweight computer program for
prior knowledge based MS color naming was conducted by independent means. A
time-series of annual Web Enabled Landsat Data (WELD) image composites of the
conterminous U.S. (CONUS) was selected as input dataset. The annual SIAM WELD
maps of the CONUS were validated in comparison with the U.S. National Land
Cover Data (NLCD) 2006 map. These test and reference maps share the same
spatial resolution and spatial extent, but their map legends are not the same
and must be harmonized. For the sake of readability this paper is split into
two. The previous Part 1 Theory provided the multidisciplinary background of a
priori color naming. The present Part 2 Validation presents and discusses Stage
4 Val results collected from the test SIAM WELD map time series and the
reference NLCD map by an original protocol for wall to wall thematic map
quality assessment without sampling, where the test and reference map legends
can differ, in agreement with Part 1. Conclusions are that the SIAM-WELD
maps instantiate a Level 2 SCM product whose legend is the FAO Land Cover
Classification System (LCCS) taxonomy at the Dichotomous Phase (DP) Level 1
vegetation/nonvegetation, Level 2 terrestrial/aquatic or superior LCCS level.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2017 09:35:30 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Baraldi",
"Andrea",
""
],
[
"Humber",
"Michael Laurence",
""
],
[
"Tiede",
"Dirk",
""
],
[
"Lang",
"Stefan",
""
]
] | TITLE: Stage 4 validation of the Satellite Image Automatic Mapper lightweight
computer program for Earth observation Level 2 product generation, Part 2
Validation
ABSTRACT: The European Space Agency (ESA) defines an Earth Observation (EO) Level 2
product as a multispectral (MS) image corrected for geometric, atmospheric,
adjacency and topographic effects, stacked with its scene classification map
(SCM) whose legend includes quality layers such as cloud and cloud-shadow. No
ESA EO Level 2 product has ever been systematically generated at the ground
segment. To contribute toward filling an information gap from EO big sensory
data to the ESA EO Level 2 product, a Stage 4 validation (Val) of an off the
shelf Satellite Image Automatic Mapper (SIAM) lightweight computer program for
prior knowledge based MS color naming was conducted by independent means. A
time-series of annual Web Enabled Landsat Data (WELD) image composites of the
conterminous U.S. (CONUS) was selected as input dataset. The annual SIAM WELD
maps of the CONUS were validated in comparison with the U.S. National Land
Cover Data (NLCD) 2006 map. These test and reference maps share the same
spatial resolution and spatial extent, but their map legends are not the same
and must be harmonized. For the sake of readability this paper is split into
two. The previous Part 1 Theory provided the multidisciplinary background of a
priori color naming. The present Part 2 Validation presents and discusses Stage
4 Val results collected from the test SIAM WELD map time series and the
reference NLCD map by an original protocol for wall to wall thematic map
quality assessment without sampling, where the test and reference map legends
can differ, in agreement with Part 1. Conclusions are that the SIAM-WELD
maps instantiate a Level 2 SCM product whose legend is the FAO Land Cover
Classification System (LCCS) taxonomy at the Dichotomous Phase (DP) Level 1
vegetation/nonvegetation, Level 2 terrestrial/aquatic or superior LCCS level.
| no_new_dataset | 0.960137 |
1701.02030 | Timotheos Aslanidis | Timotheos Aslanidis and Stavros Birmpilis | An open shop approach in approximating optimal data transmission
duration in WDM networks | 9 pages, 5 figures, Second International Conference on Computer
Science, Information Technology and Applications (CSITA 2016) | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past decade Optical WDM Networks (Wavelength Division Multiplexing)
have been used quite often, especially as far as broadband applications are
concerned. Message packets transmitted through such networks can be interrupted
using time slots in order to maximize network usage and minimize the time
required for all messages to reach their destination. However, preempting a
packet will result in time cost. The problem of scheduling message packets
through such a network is referred to as PBS and is known to be NP-Hard. In
this paper we have reduced PBS to Open Shop Scheduling and designed variations
of polynomially solvable instances of Open Shop to approximate PBS. We have
combined these variations and called the induced algorithm HSA (Hybridic
Scheduling Algorithm). We ran experiments to establish the efficiency of HSA
and found that in all datasets used it produces schedules very close to the
optimal. To further establish HSA's efficiency we ran tests to compare it to
SGA, another algorithm which when tested in the past has yielded excellent
results.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2017 22:35:38 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Aslanidis",
"Timotheos",
""
],
[
"Birmpilis",
"Stavros",
""
]
] | TITLE: An open shop approach in approximating optimal data transmission
duration in WDM networks
ABSTRACT: In the past decade Optical WDM Networks (Wavelength Division Multiplexing)
have been used quite often, especially as far as broadband applications are
concerned. Message packets transmitted through such networks can be interrupted
using time slots in order to maximize network usage and minimize the time
required for all messages to reach their destination. However, preempting a
packet will result in time cost. The problem of scheduling message packets
through such a network is referred to as PBS and is known to be NP-Hard. In
this paper we have reduced PBS to Open Shop Scheduling and designed variations
of polynomially solvable instances of Open Shop to approximate PBS. We have
combined these variations and called the induced algorithm HSA (Hybridic
Scheduling Algorithm). We ran experiments to establish the efficiency of HSA
and found that in all datasets used it produces schedules very close to the
optimal. To further establish HSA's efficiency we ran tests to compare it to
SGA, another algorithm which when tested in the past has yielded excellent
results.
| no_new_dataset | 0.946399 |
1701.02166 | Rigas Kouskouridas | Caner Sahin, Rigas Kouskouridas and Tae-Kyun Kim | A Learning-based Variable Size Part Extraction Architecture for 6D
Object Pose Recovery in Depth | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art techniques for 6D object pose recovery depend on
occlusion-free point clouds to accurately register objects in 3D space. To deal
with this shortcoming, we introduce a novel architecture called Iterative Hough
Forest with Histogram of Control Points that is capable of estimating the 6D
pose of occluded and cluttered objects given a candidate 2D bounding box. Our
Iterative Hough Forest (IHF) is learnt using parts extracted only from the
positive samples. These parts are represented with Histogram of Control Points
(HoCP), a "scale-variant" implicit volumetric description, which we derive from
recently introduced Implicit B-Splines (IBS). The rich discriminative
information provided by the scale-variant HoCP features is leveraged during
inference. An automatic variable size part extraction framework iteratively
refines the object's initial pose that is roughly aligned due to the extraction
of coarsest parts, the ones occupying the largest area in image pixels. The
iterative refinement is accomplished based on finer (smaller) parts that are
represented with more discriminative control point descriptors by using our
Iterative Hough Forest. Experiments conducted on a publicly available dataset
report that our approach show better registration performance than the
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2017 13:20:32 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Sahin",
"Caner",
""
],
[
"Kouskouridas",
"Rigas",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: A Learning-based Variable Size Part Extraction Architecture for 6D
Object Pose Recovery in Depth
ABSTRACT: State-of-the-art techniques for 6D object pose recovery depend on
occlusion-free point clouds to accurately register objects in 3D space. To deal
with this shortcoming, we introduce a novel architecture called Iterative Hough
Forest with Histogram of Control Points that is capable of estimating the 6D
pose of occluded and cluttered objects given a candidate 2D bounding box. Our
Iterative Hough Forest (IHF) is learnt using parts extracted only from the
positive samples. These parts are represented with Histogram of Control Points
(HoCP), a "scale-variant" implicit volumetric description, which we derive from
recently introduced Implicit B-Splines (IBS). The rich discriminative
information provided by the scale-variant HoCP features is leveraged during
inference. An automatic variable size part extraction framework iteratively
refines the object's initial pose that is roughly aligned due to the extraction
of coarsest parts, the ones occupying the largest area in image pixels. The
iterative refinement is accomplished based on finer (smaller) parts that are
represented with more discriminative control point descriptors by using our
Iterative Hough Forest. Experiments conducted on a publicly available dataset
report that our approach shows better registration performance than the
state-of-the-art methods.
| no_new_dataset | 0.950503 |
1701.02243 | Marco Gramaglia | Marco Gramaglia, Marco Fiore, Alberto Tarable, Albert Banchs | $k^{\tau,\epsilon}$-anonymity: Towards Privacy-Preserving Publishing of
Spatiotemporal Trajectory Data | null | null | null | null | cs.CY cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile network operators can track subscribers via passive or active
monitoring of device locations. The recorded trajectories offer an
unprecedented outlook on the activities of large user populations, which
enables developing new networking solutions and services, and scaling up
studies across research disciplines. Yet, the disclosure of individual
trajectories raises significant privacy concerns: thus, these data are often
protected by restrictive non-disclosure agreements that limit their
availability and impede potential usages. In this paper, we contribute to the
development of technical solutions to the problem of privacy-preserving
publishing of spatiotemporal trajectories of mobile subscribers. We propose an
algorithm that generalizes the data so that they satisfy
$k^{\tau,\epsilon}$-anonymity, an original privacy criterion that thwarts
attacks on trajectories. Evaluations with real-world datasets demonstrate that
our algorithm attains its objective while retaining a substantial level of
accuracy in the data. Our work is a step forward in the direction of open,
privacy-preserving datasets of spatiotemporal trajectories.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2017 16:24:32 GMT"
}
] | 2017-01-10T00:00:00 | [
[
"Gramaglia",
"Marco",
""
],
[
"Fiore",
"Marco",
""
],
[
"Tarable",
"Alberto",
""
],
[
"Banchs",
"Albert",
""
]
] | TITLE: $k^{\tau,\epsilon}$-anonymity: Towards Privacy-Preserving Publishing of
Spatiotemporal Trajectory Data
ABSTRACT: Mobile network operators can track subscribers via passive or active
monitoring of device locations. The recorded trajectories offer an
unprecedented outlook on the activities of large user populations, which
enables developing new networking solutions and services, and scaling up
studies across research disciplines. Yet, the disclosure of individual
trajectories raises significant privacy concerns: thus, these data are often
protected by restrictive non-disclosure agreements that limit their
availability and impede potential usages. In this paper, we contribute to the
development of technical solutions to the problem of privacy-preserving
publishing of spatiotemporal trajectories of mobile subscribers. We propose an
algorithm that generalizes the data so that they satisfy
$k^{\tau,\epsilon}$-anonymity, an original privacy criterion that thwarts
attacks on trajectories. Evaluations with real-world datasets demonstrate that
our algorithm attains its objective while retaining a substantial level of
accuracy in the data. Our work is a step forward in the direction of open,
privacy-preserving datasets of spatiotemporal trajectories.
| no_new_dataset | 0.946646 |
1604.03901 | Weifeng Chen | Weifeng Chen, Zhao Fu, Dawei Yang, Jia Deng | Single-Image Depth Perception in the Wild | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies single-image depth perception in the wild, i.e.,
recovering depth from a single image taken in unconstrained settings. We
introduce a new dataset "Depth in the Wild" consisting of images in the wild
annotated with relative depth between pairs of random points. We also propose a
new algorithm that learns to estimate metric depth using annotations of
relative depth. Compared to the state of the art, our algorithm is simpler and
performs better. Experiments show that our algorithm, combined with existing
RGB-D data and our new relative depth annotations, significantly improves
single-image depth perception in the wild.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2016 18:19:35 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2017 16:05:35 GMT"
}
] | 2017-01-09T00:00:00 | [
[
"Chen",
"Weifeng",
""
],
[
"Fu",
"Zhao",
""
],
[
"Yang",
"Dawei",
""
],
[
"Deng",
"Jia",
""
]
] | TITLE: Single-Image Depth Perception in the Wild
ABSTRACT: This paper studies single-image depth perception in the wild, i.e.,
recovering depth from a single image taken in unconstrained settings. We
introduce a new dataset "Depth in the Wild" consisting of images in the wild
annotated with relative depth between pairs of random points. We also propose a
new algorithm that learns to estimate metric depth using annotations of
relative depth. Compared to the state of the art, our algorithm is simpler and
performs better. Experiments show that our algorithm, combined with existing
RGB-D data and our new relative depth annotations, significantly improves
single-image depth perception in the wild.
| new_dataset | 0.960137 |
1609.05695 | Mengnan Shi | Mengnan Shi, Fei Qin, Qixiang Ye, Zhenjun Han, Jianbin Jiao | A scalable convolutional neural network for task-specified scenarios via
knowledge distillation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the redundancy in convolutional neural networks,
which scales with the complexity of vision tasks. Considering that many
front-end visual systems are interested in only a limited range of visual
targets, the removing of task-specified network redundancy can promote a wide
range of potential applications. We propose a task-specified knowledge
distillation algorithm to derive a simplified model with pre-set computation
cost and minimized accuracy loss, which suits the resource-constrained front-end
systems well. Experiments on the MNIST and CIFAR10 datasets demonstrate the
feasibility of the proposed approach as well as the existence of task-specified
redundancy.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 12:43:32 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2017 13:57:47 GMT"
}
] | 2017-01-09T00:00:00 | [
[
"Shi",
"Mengnan",
""
],
[
"Qin",
"Fei",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Han",
"Zhenjun",
""
],
[
"Jiao",
"Jianbin",
""
]
] | TITLE: A scalable convolutional neural network for task-specified scenarios via
knowledge distillation
ABSTRACT: In this paper, we explore the redundancy in convolutional neural networks,
which scales with the complexity of vision tasks. Considering that many
front-end visual systems are interested in only a limited range of visual
targets, the removing of task-specified network redundancy can promote a wide
range of potential applications. We propose a task-specified knowledge
distillation algorithm to derive a simplified model with pre-set computation
cost and minimized accuracy loss, which suits the resource-constrained front-end
systems well. Experiments on the MNIST and CIFAR10 datasets demonstrate the
feasibility of the proposed approach as well as the existence of task-specified
redundancy.
| no_new_dataset | 0.948394 |
1701.01480 | Yi-Ling Chen | Yi-Ling Chen, Tzu-Wei Huang, Kai-Han Chang, Yu-Chen Tsai, Hwann-Tzong
Chen, Bing-Yu Chen | Quantitative Analysis of Automatic Image Cropping Algorithms: A Dataset
and Comparative Study | The dataset presented in this article can be found on Github at
https://github.com/yiling-chen/flickr-cropping-dataset | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic photo cropping is an important tool for improving visual quality of
digital photos without resorting to tedious manual selection. Traditionally,
photo cropping is accomplished by determining the best proposal window through
visual quality assessment or saliency detection. In essence, the performance of
an image cropper highly depends on the ability to correctly rank a number of
visually similar proposal windows. Despite the ranking nature of automatic
photo cropping, little attention has been paid to learning-to-rank algorithms
in tackling such a problem. In this work, we conduct an extensive study on
traditional approaches as well as ranking-based croppers trained on various
image features. In addition, a new dataset consisting of high quality cropping
and pairwise ranking annotations is presented to evaluate the performance of
various baselines. The experimental results on the new dataset provide useful
insights into the design of better photo cropping algorithms.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2017 21:22:22 GMT"
}
] | 2017-01-09T00:00:00 | [
[
"Chen",
"Yi-Ling",
""
],
[
"Huang",
"Tzu-Wei",
""
],
[
"Chang",
"Kai-Han",
""
],
[
"Tsai",
"Yu-Chen",
""
],
[
"Chen",
"Hwann-Tzong",
""
],
[
"Chen",
"Bing-Yu",
""
]
] | TITLE: Quantitative Analysis of Automatic Image Cropping Algorithms: A Dataset
and Comparative Study
ABSTRACT: Automatic photo cropping is an important tool for improving visual quality of
digital photos without resorting to tedious manual selection. Traditionally,
photo cropping is accomplished by determining the best proposal window through
visual quality assessment or saliency detection. In essence, the performance of
an image cropper highly depends on the ability to correctly rank a number of
visually similar proposal windows. Despite the ranking nature of automatic
photo cropping, little attention has been paid to learning-to-rank algorithms
in tackling such a problem. In this work, we conduct an extensive study on
traditional approaches as well as ranking-based croppers trained on various
image features. In addition, a new dataset consisting of high quality cropping
and pairwise ranking annotations is presented to evaluate the performance of
various baselines. The experimental results on the new dataset provide useful
insights into the design of better photo cropping algorithms.
| new_dataset | 0.959116 |
1701.01565 | Edison Marrese-Taylor | Edison Marrese-Taylor, Yutaka Matsuo | Replication issues in syntax-based aspect extraction for opinion mining | Accepted in the EACL 2017 SRW | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these to be important
threats to the validity of research in the field, specifically when compared to
other problems in NLP where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
think has a key role in helping researchers to understand the meaning of the
state-of-the-art better and to generate continuous advances.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2017 08:18:38 GMT"
}
] | 2017-01-09T00:00:00 | [
[
"Marrese-Taylor",
"Edison",
""
],
[
"Matsuo",
"Yutaka",
""
]
] | TITLE: Replication issues in syntax-based aspect extraction for opinion mining
ABSTRACT: Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these to be important
threats to the validity of research in the field, specifically when compared to
other problems in NLP where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
think has a key role in helping researchers to understand the meaning of the
state-of-the-art better and to generate continuous advances.
| no_new_dataset | 0.944842 |
1701.01692 | Eshed Ohn-Bar | Eshed Ohn-Bar and Mohan M. Trivedi | To Boost or Not to Boost? On the Limits of Boosted Trees for Object
Detection | ICPR, December 2016. Added WIDER FACE test results (Fig. 5) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We aim to study the modeling limitations of the commonly employed boosted
decision trees classifier. Inspired by the success of large, data-hungry visual
recognition models (e.g. deep convolutional neural networks), this paper
focuses on the relationship between modeling capacity of the weak learners,
dataset size, and dataset properties. A set of novel experiments on the Caltech
Pedestrian Detection benchmark results in the best known performance among
non-CNN techniques while operating at fast run-time speed. Furthermore, the
performance is on par with deep architectures (9.71% log-average miss rate),
while using only HOG+LUV channels as features. The conclusions from this study
are shown to generalize over different object detection domains as demonstrated
on the FDDB face detection benchmark (93.37% accuracy). Despite the impressive
performance, this study reveals the limited modeling capacity of the common
boosted trees model, motivating a need for architectural changes in order to
compete with multi-level and very deep architectures.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2017 16:51:32 GMT"
}
] | 2017-01-09T00:00:00 | [
[
"Ohn-Bar",
"Eshed",
""
],
[
"Trivedi",
"Mohan M.",
""
]
] | TITLE: To Boost or Not to Boost? On the Limits of Boosted Trees for Object
Detection
ABSTRACT: We aim to study the modeling limitations of the commonly employed boosted
decision trees classifier. Inspired by the success of large, data-hungry visual
recognition models (e.g. deep convolutional neural networks), this paper
focuses on the relationship between modeling capacity of the weak learners,
dataset size, and dataset properties. A set of novel experiments on the Caltech
Pedestrian Detection benchmark results in the best known performance among
non-CNN techniques while operating at fast run-time speed. Furthermore, the
performance is on par with deep architectures (9.71% log-average miss rate),
while using only HOG+LUV channels as features. The conclusions from this study
are shown to generalize over different object detection domains as demonstrated
on the FDDB face detection benchmark (93.37% accuracy). Despite the impressive
performance, this study reveals the limited modeling capacity of the common
boosted trees model, motivating a need for architectural changes in order to
compete with multi-level and very deep architectures.
| no_new_dataset | 0.946349 |
1504.07469 | Ariel Ephrat | Yair Poleg, Ariel Ephrat, Shmuel Peleg, Chetan Arora | Compact CNN for Indexing Egocentric Videos | null | IEEE WACV'16, March 2016, pp. 1-9 | 10.1109/WACV.2016.7477708 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While egocentric video is becoming increasingly popular, browsing it is very
difficult. In this paper we present a compact 3D Convolutional Neural Network
(CNN) architecture for long-term activity recognition in egocentric videos.
Recognizing long-term activities enables us to temporally segment (index) long
and unstructured egocentric videos. Existing methods for this task are based on
hand tuned features derived from visible objects, location of hands, as well as
optical flow.
Given a sparse optical flow volume as input, our CNN classifies the camera
wearer's activity. We obtain classification accuracy of 89%, which outperforms
the current state-of-the-art by 19%. Additional evaluation is performed on an
extended egocentric video dataset, classifying twice as many categories as the
current state-of-the-art. Furthermore, our CNN is able to recognize
whether a video is egocentric or not with 99.2% accuracy, up by 24% from
current state-of-the-art. To better understand what the network actually
learns, we propose a novel visualization of CNN kernels as flow fields.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 13:41:16 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Nov 2015 21:13:18 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Poleg",
"Yair",
""
],
[
"Ephrat",
"Ariel",
""
],
[
"Peleg",
"Shmuel",
""
],
[
"Arora",
"Chetan",
""
]
] | TITLE: Compact CNN for Indexing Egocentric Videos
ABSTRACT: While egocentric video is becoming increasingly popular, browsing it is very
difficult. In this paper we present a compact 3D Convolutional Neural Network
(CNN) architecture for long-term activity recognition in egocentric videos.
Recognizing long-term activities enables us to temporally segment (index) long
and unstructured egocentric videos. Existing methods for this task are based on
hand tuned features derived from visible objects, location of hands, as well as
optical flow.
Given a sparse optical flow volume as input, our CNN classifies the camera
wearer's activity. We obtain classification accuracy of 89%, which outperforms
the current state-of-the-art by 19%. Additional evaluation is performed on an
extended egocentric video dataset, classifying twice as many categories as the
current state-of-the-art. Furthermore, our CNN is able to recognize
whether a video is egocentric or not with 99.2% accuracy, up by 24% from
current state-of-the-art. To better understand what the network actually
learns, we propose a novel visualization of CNN kernels as flow fields.
| no_new_dataset | 0.950595 |
1604.02316 | Willem Sanberg | Willem P. Sanberg, Gijs Dubbelman, Peter H.N. de With | Free-Space Detection with Self-Supervised and Online Trained Fully
Convolutional Networks | version as accepted at IS&T Electronic Imaging - Autonomous Vehicles
and Machines Conference (San Francisco USA, January 2017); updated with two
additional robustness experiments and formatted in conference style; 8 pages,
public data available | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, vision-based Advanced Driver Assist Systems have gained broad
interest. In this work, we investigate free-space detection, for which we
propose to employ a Fully Convolutional Network (FCN). We show that this FCN
can be trained in a self-supervised manner and achieve similar results compared
to training on manually annotated data, thereby reducing the need for large
manually annotated training sets. To this end, our self-supervised training
relies on a stereo-vision disparity system, to automatically generate (weak)
training labels for the color-based FCN. Additionally, our self-supervised
training facilitates online training of the FCN instead of offline.
Consequently, given that the applied FCN is relatively small, the free-space
analysis becomes highly adaptive to any traffic scene that the vehicle
encounters. We have validated our algorithm using publicly available data and
on a new challenging benchmark dataset that is released with this paper.
Experiments show that the online training boosts performance by 5% when
compared to offline training, both for Fmax and AP.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 11:54:40 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2017 13:59:30 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Sanberg",
"Willem P.",
""
],
[
"Dubbelman",
"Gijs",
""
],
[
"de With",
"Peter H. N.",
""
]
] | TITLE: Free-Space Detection with Self-Supervised and Online Trained Fully
Convolutional Networks
ABSTRACT: Recently, vision-based Advanced Driver Assist Systems have gained broad
interest. In this work, we investigate free-space detection, for which we
propose to employ a Fully Convolutional Network (FCN). We show that this FCN
can be trained in a self-supervised manner and achieve similar results compared
to training on manually annotated data, thereby reducing the need for large
manually annotated training sets. To this end, our self-supervised training
relies on a stereo-vision disparity system, to automatically generate (weak)
training labels for the color-based FCN. Additionally, our self-supervised
training facilitates online training of the FCN instead of offline.
Consequently, given that the applied FCN is relatively small, the free-space
analysis becomes highly adaptive to any traffic scene that the vehicle
encounters. We have validated our algorithm using publicly available data and
on a new challenging benchmark dataset that is released with this paper.
Experiments show that the online training boosts performance by 5% when
compared to offline training, both for Fmax and AP.
| new_dataset | 0.960731 |
1610.05653 | Luca Remaggi | Luca Remaggi and Philip J. B. Jackson and Philip Coleman and Wenwu
Wang | Acoustic Reflector Localization: Novel Image Source Reversion and Direct
Localization Methods | null | IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 25, no. 2, pp. 296-309, February 2017 | 10.1109/TASLP.2016.2633802 | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acoustic reflector localization is an important issue in audio signal
processing, with direct applications in spatial audio, scene reconstruction,
and source separation. Several methods have recently been proposed to estimate
the 3D positions of acoustic reflectors given room impulse responses (RIRs). In
this article, we categorize these methods as "image-source reversion", which
localizes the image source before finding the reflector position, and "direct
localization", which localizes the reflector without intermediate steps. We
present five new contributions. First, an onset detector, called the clustered
dynamic programming projected phase-slope algorithm, is proposed to
automatically extract the time of arrival for early reflections within the RIRs
of a compact microphone array. Second, we propose an image-source reversion
method that uses the RIRs from a single loudspeaker. It is constructed by
combining an image source locator (the image source direction and range (ISDAR)
algorithm), and a reflector locator (using the loudspeaker-image bisection
(LIB) algorithm). Third, two variants of it, exploiting multiple loudspeakers,
are proposed. Fourth, we present a direct localization method, the ellipsoid
tangent sample consensus (ETSAC), exploiting ellipsoid properties to localize
the reflector. Finally, systematic experiments on simulated and measured RIRs
are presented, comparing the proposed methods with the state-of-the-art. ETSAC
generates lower errors than the alternative methods across our
datasets. Nevertheless, the ISDAR-LIB combination performs well and has a run
time 200 times faster than ETSAC.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 14:48:06 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2017 13:34:14 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Remaggi",
"Luca",
""
],
[
"Jackson",
"Philip J. B.",
""
],
[
"Coleman",
"Philip",
""
],
[
"Wang",
"Wenwu",
""
]
] | TITLE: Acoustic Reflector Localization: Novel Image Source Reversion and Direct
Localization Methods
ABSTRACT: Acoustic reflector localization is an important issue in audio signal
processing, with direct applications in spatial audio, scene reconstruction,
and source separation. Several methods have recently been proposed to estimate
the 3D positions of acoustic reflectors given room impulse responses (RIRs). In
this article, we categorize these methods as "image-source reversion", which
localizes the image source before finding the reflector position, and "direct
localization", which localizes the reflector without intermediate steps. We
present five new contributions. First, an onset detector, called the clustered
dynamic programming projected phase-slope algorithm, is proposed to
automatically extract the time of arrival for early reflections within the RIRs
of a compact microphone array. Second, we propose an image-source reversion
method that uses the RIRs from a single loudspeaker. It is constructed by
combining an image source locator (the image source direction and range (ISDAR)
algorithm), and a reflector locator (using the loudspeaker-image bisection
(LIB) algorithm). Third, two variants of it, exploiting multiple loudspeakers,
are proposed. Fourth, we present a direct localization method, the ellipsoid
tangent sample consensus (ETSAC), exploiting ellipsoid properties to localize
the reflector. Finally, systematic experiments on simulated and measured RIRs
are presented, comparing the proposed methods with the state-of-the-art. ETSAC
generates lower errors than the alternative methods across our
datasets. Nevertheless, the ISDAR-LIB combination performs well and has a run
time 200 times faster than ETSAC.
| no_new_dataset | 0.949201 |
1701.01142 | Anastasios Karakostas | Anastasios Karakostas, Alexia Briassouli, Konstantinos Avgerinakis,
Ioannis Kompatsiaris, Magda Tsolaki | The Dem@Care Experiments and Datasets: a Technical Report | 4 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of Dem@Care is the development of a complete system providing
personal health services to people with dementia, as well as medical
professionals and caregivers, by using a multitude of sensors, for
context-aware, multi-parametric monitoring of lifestyle, ambient environment,
and health parameters. Multi-sensor data analysis, combined with intelligent
decision making mechanisms, will allow an accurate representation of the
person's current status and will provide the appropriate feedback, both to the
person and the associated caregivers, enhancing the standard clinical workflow.
Within the project framework, several data collection activities have taken
place to assist technical development and evaluation tasks. In all these
activities, particular attention has been paid to adhere to ethical guidelines
and preserve the participants' privacy. This technical report briefly describes
(a) the main objectives of the project, (b) the main ethical principles and
(c) the datasets that have been already created.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2016 19:43:18 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Karakostas",
"Anastasios",
""
],
[
"Briassouli",
"Alexia",
""
],
[
"Avgerinakis",
"Konstantinos",
""
],
[
"Kompatsiaris",
"Ioannis",
""
],
[
"Tsolaki",
"Magda",
""
]
] | TITLE: The Dem@Care Experiments and Datasets: a Technical Report
ABSTRACT: The objective of Dem@Care is the development of a complete system providing
personal health services to people with dementia, as well as medical
professionals and caregivers, by using a multitude of sensors, for
context-aware, multi-parametric monitoring of lifestyle, ambient environment,
and health parameters. Multi-sensor data analysis, combined with intelligent
decision making mechanisms, will allow an accurate representation of the
person's current status and will provide the appropriate feedback, both to the
person and the associated caregivers, enhancing the standard clinical workflow.
Within the project framework, several data collection activities have taken
place to assist technical development and evaluation tasks. In all these
activities, particular attention has been paid to adhere to ethical guidelines
and preserve the participants' privacy. This technical report briefly describes
(a) the main objectives of the project, (b) the main ethical principles and
(c) the datasets that have been already created.
| no_new_dataset | 0.917598 |
1701.01218 | Mohamed Elhoseiny Mohamed Elhoseiny | Mohamed Elhoseiny and Ahmed Elgammal | Overlapping Cover Local Regression Machines | Long Article with more experiments and analysis of conference paper
"Overlapping Domain Cover for Scalable and Accurate Regression Kernel
Machines", presented orally 2015 at the British Machine Vision Conference
2015 (BMVC) | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Overlapping Domain Cover (ODC) notion for kernel machines, as
a set of overlapping subsets of the data that covers the entire training set
and optimized to be as spatially cohesive as possible. We show how this notion
benefits local kernel machines for regression in terms of speed while
minimizing the prediction error. We propose an
efficient ODC framework, which is applicable to various regression models and
in particular reduces the complexity of Twin Gaussian Processes (TGP)
regression from cubic to quadratic. Our notion is also applicable to several
kernel methods (e.g., Gaussian Process Regression(GPR) and IWTGP regression, as
shown in our experiments). We also theoretically justified the idea behind our
method to improve local prediction by the overlapping cover. We validated and
analyzed our method on three benchmark human pose estimation datasets and
interesting findings are discussed.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2017 06:04:53 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Overlapping Cover Local Regression Machines
ABSTRACT: We present the Overlapping Domain Cover (ODC) notion for kernel machines, as
a set of overlapping subsets of the data that covers the entire training set
and optimized to be spatially cohesive as possible. We show how this notion
benefit the speed of local kernel machines for regression in terms of both
speed while achieving while minimizing the prediction error. We propose an
efficient ODC framework, which is applicable to various regression models and
in particular reduces the complexity of Twin Gaussian Processes (TGP)
regression from cubic to quadratic. Our notion is also applicable to several
kernel methods (e.g., Gaussian Process Regression(GPR) and IWTGP regression, as
shown in our experiments). We also theoretically justified the idea behind our
method to improve local prediction by the overlapping cover. We validated and
analyzed our method on three benchmark human pose estimation datasets and
interesting findings are discussed.
| no_new_dataset | 0.950041 |
1701.01232 | Dinusha Vatsalan | Dinusha Vatsalan, Peter Christen, and Erhard Rahm | Scalable Multi-Database Privacy-Preserving Record Linkage using Counting
Bloom Filters | This is an extended version of an article published in IEEE ICDM
International Workshop on Privacy and Discrimination in Data Mining (PDDM)
2016 - Scalable privacy-preserving linking of multiple databases using
counting Bloom filters | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy-preserving record linkage (PPRL) aims at integrating sensitive
information from multiple disparate databases of different organizations. PPRL
approaches are increasingly required in real-world application areas such as
healthcare, national security, and business. Previous approaches have mostly
focused on linking only two databases as well as the use of a dedicated linkage
unit. Scaling PPRL to more databases (multi-party PPRL) is an open challenge
since privacy threats as well as the computation and communication costs for
record linkage increase significantly with the number of databases. We thus
propose the use of a new encoding method of sensitive data based on Counting
Bloom Filters (CBF) to improve privacy for multi-party PPRL. We also
investigate optimizations to reduce communication and computation costs for
CBF-based multi-party PPRL with and without the use of a dedicated linkage
unit. Empirical evaluations conducted with real datasets show the viability of
the proposed approaches and demonstrate their scalability, linkage quality, and
privacy protection.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2017 07:57:55 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Vatsalan",
"Dinusha",
""
],
[
"Christen",
"Peter",
""
],
[
"Rahm",
"Erhard",
""
]
] | TITLE: Scalable Multi-Database Privacy-Preserving Record Linkage using Counting
Bloom Filters
ABSTRACT: Privacy-preserving record linkage (PPRL) aims at integrating sensitive
information from multiple disparate databases of different organizations. PPRL
approaches are increasingly required in real-world application areas such as
healthcare, national security, and business. Previous approaches have mostly
focused on linking only two databases as well as the use of a dedicated linkage
unit. Scaling PPRL to more databases (multi-party PPRL) is an open challenge
since privacy threats as well as the computation and communication costs for
record linkage increase significantly with the number of databases. We thus
propose the use of a new encoding method of sensitive data based on Counting
Bloom Filters (CBF) to improve privacy for multi-party PPRL. We also
investigate optimizations to reduce communication and computation costs for
CBF-based multi-party PPRL with and without the use of a dedicated linkage
unit. Empirical evaluations conducted with real datasets show the viability of
the proposed approaches and demonstrate their scalability, linkage quality, and
privacy protection.
| no_new_dataset | 0.944689 |
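The record above relies on encoding sensitive values into Counting Bloom Filters (CBFs) for multi-party privacy-preserving record linkage. As a purely illustrative aside (not the authors' exact encoding: the q-gram scheme, filter size, and hash construction below are assumptions), a minimal Python sketch of a CBF over character q-grams looks like this:

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: each position stores a count instead of a bit."""
    def __init__(self, size=1000, num_hashes=10):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = [0] * size

    def _positions(self, item):
        # Derive num_hashes positions from salted SHA-1 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.counts[pos] += 1

def encode_value(value, q=2):
    """Encode a sensitive string as the CBF of its character q-grams (assumed scheme)."""
    padded = f"_{value}_"
    cbf = CountingBloomFilter()
    for i in range(len(padded) - q + 1):
        cbf.add(padded[i:i + q])
    return cbf

# Similar values share many q-grams, so their count vectors overlap,
# which is what makes approximate (privacy-preserving) matching possible.
a = encode_value("johnsmith")
b = encode_value("jonsmith")
overlap = sum(min(x, y) for x, y in zip(a.counts, b.counts))
```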
1701.01250 | Jun Wang | Jun Wang and Qiang Tang | A Probabilistic View of Neighborhood-based Recommendation Methods | accepted by: ICDM 2016 - IEEE International Conference on Data Mining
series (ICDM) workshop CLOUDMINE, 7 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A probabilistic graphical model is an elegant framework to compactly present
complex real-world observations by modeling uncertainty and logical flow
(conditionally independent factors). In this paper, we present a probabilistic
framework of neighborhood-based recommendation methods (PNBM) in which
similarity is regarded as an unobserved factor. Thus, PNBM leads the estimation
of user preference to maximizing a posterior over similarity. We further
introduce a novel multi-layer similarity descriptor which models and learns the
joint influence of various features under PNBM, and name the new framework
MPNBM. Empirical results on real-world datasets show that MPNBM allows very
accurate estimation of user preferences.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2017 08:53:02 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Wang",
"Jun",
""
],
[
"Tang",
"Qiang",
""
]
] | TITLE: A Probabilistic View of Neighborhood-based Recommendation Methods
ABSTRACT: A probabilistic graphical model is an elegant framework to compactly present
complex real-world observations by modeling uncertainty and logical flow
(conditionally independent factors). In this paper, we present a probabilistic
framework of neighborhood-based recommendation methods (PNBM) in which
similarity is regarded as an unobserved factor. Thus, PNBM leads the estimation
of user preference to maximizing a posterior over similarity. We further
introduce a novel multi-layer similarity descriptor which models and learns the
joint influence of various features under PNBM, and name the new framework
MPNBM. Empirical results on real-world datasets show that MPNBM allows very
accurate estimation of user preferences.
| no_new_dataset | 0.943764 |
1701.01276 | Dominik Kowald | Dominik Kowald, Subhash Pujari, Elisabeth Lex | Temporal Effects on Hashtag Reuse in Twitter: A Cognitive-Inspired
Hashtag Recommendation Approach | Accepted at WWW 2017 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashtags have become a powerful tool in social platforms such as Twitter to
categorize and search for content, and to spread short messages across members
of the social network. In this paper, we study temporal hashtag usage practices
in Twitter with the aim of designing a cognitive-inspired hashtag
recommendation algorithm we call BLLi,s. Our main idea is to incorporate the
effect of time on (i) individual hashtag reuse (i.e., reusing own hashtags),
and (ii) social hashtag reuse (i.e., reusing hashtags, which has been
previously used by a followee) into a predictive model. For this, we turn to
the Base-Level Learning (BLL) equation from the cognitive architecture ACT-R,
which accounts for the time-dependent decay of item exposure in human memory.
We validate BLLi,s using two crawled Twitter datasets in two evaluation
scenarios: firstly, only temporal usage patterns of past hashtag assignments
are utilized and secondly, these patterns are combined with a content-based
analysis of the current tweet. In both scenarios, we find not only that
temporal effects play an important role for both individual and social hashtag
reuse but also that BLLi,s provides significantly better prediction accuracy
and ranking results than current state-of-the-art hashtag recommendation
methods.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2017 11:07:16 GMT"
}
] | 2017-01-06T00:00:00 | [
[
"Kowald",
"Dominik",
""
],
[
"Pujari",
"Subhash",
""
],
[
"Lex",
"Elisabeth",
""
]
] | TITLE: Temporal Effects on Hashtag Reuse in Twitter: A Cognitive-Inspired
Hashtag Recommendation Approach
ABSTRACT: Hashtags have become a powerful tool in social platforms such as Twitter to
categorize and search for content, and to spread short messages across members
of the social network. In this paper, we study temporal hashtag usage practices
in Twitter with the aim of designing a cognitive-inspired hashtag
recommendation algorithm we call BLLi,s. Our main idea is to incorporate the
effect of time on (i) individual hashtag reuse (i.e., reusing own hashtags),
and (ii) social hashtag reuse (i.e., reusing hashtags, which has been
previously used by a followee) into a predictive model. For this, we turn to
the Base-Level Learning (BLL) equation from the cognitive architecture ACT-R,
which accounts for the time-dependent decay of item exposure in human memory.
We validate BLLi,s using two crawled Twitter datasets in two evaluation
scenarios: firstly, only temporal usage patterns of past hashtag assignments
are utilized and secondly, these patterns are combined with a content-based
analysis of the current tweet. In both scenarios, we find not only that
temporal effects play an important role for both individual and social hashtag
reuse but also that BLLi,s provides significantly better prediction accuracy
and ranking results than current state-of-the-art hashtag recommendation
methods.
| no_new_dataset | 0.949809 |
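The record above builds on the Base-Level Learning (BLL) equation from ACT-R, which scores an item as the logarithm of a power-law-decayed sum over its past exposures. The following is an illustrative sketch only; the decay value d = 0.5 is the conventional ACT-R default and an assumption here, not necessarily the paper's setting:

```python
import math

def bll_activation(reference_times, now, d=0.5):
    """Base-Level Learning: B = ln(sum_j (now - t_j)^(-d)) over past uses of an item.

    reference_times: timestamps of the user's (or followees') past uses of a hashtag.
    d: power-law decay exponent; 0.5 is the conventional ACT-R default (assumed).
    """
    ages = [now - t for t in reference_times if now > t]
    if not ages:
        return float("-inf")  # hashtag never seen before
    return math.log(sum(age ** (-d) for age in ages))

# Hashtags used recently and frequently get higher activation and thus rank higher.
score = bll_activation(reference_times=[10.0, 50.0, 95.0], now=100.0)
```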
1612.01848 | Aaditya Prakash | Aaditya Prakash, Siyuan Zhao, Sadid A. Hasan, Vivek Datla, Kathy Lee,
Ashequl Qadir, Joey Liu, Oladimeji Farri | Condensed Memory Networks for Clinical Diagnostic Inferencing | Accepted to AAAI 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diagnosis of a clinical condition is a challenging task, which often requires
significant medical investigation. Previous work related to diagnostic
inferencing problems mostly consider multivariate observational data (e.g.
physiological signals, lab tests etc.). In contrast, we explore the problem
using free-text medical notes recorded in an electronic health record (EHR).
Complex tasks like these can benefit from structured knowledge bases, but those
are not scalable. We instead exploit raw text from Wikipedia as a knowledge
source. Memory networks have been demonstrated to be effective in tasks which
require comprehension of free-form text. They use the final iteration of the
learned representation to predict probable classes. We introduce condensed
memory neural networks (C-MemNNs), a novel model with iterative condensation of
memory representations that preserves the hierarchy of features in the memory.
Experiments on the MIMIC-III dataset show that the proposed model outperforms
other variants of memory networks to predict the most probable diagnoses given
a complex clinical scenario.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 15:15:27 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2017 20:41:20 GMT"
}
] | 2017-01-05T00:00:00 | [
[
"Prakash",
"Aaditya",
""
],
[
"Zhao",
"Siyuan",
""
],
[
"Hasan",
"Sadid A.",
""
],
[
"Datla",
"Vivek",
""
],
[
"Lee",
"Kathy",
""
],
[
"Qadir",
"Ashequl",
""
],
[
"Liu",
"Joey",
""
],
[
"Farri",
"Oladimeji",
""
]
] | TITLE: Condensed Memory Networks for Clinical Diagnostic Inferencing
ABSTRACT: Diagnosis of a clinical condition is a challenging task, which often requires
significant medical investigation. Previous work related to diagnostic
inferencing problems mostly considers multivariate observational data (e.g.
physiological signals, lab tests etc.). In contrast, we explore the problem
using free-text medical notes recorded in an electronic health record (EHR).
Complex tasks like these can benefit from structured knowledge bases, but those
are not scalable. We instead exploit raw text from Wikipedia as a knowledge
source. Memory networks have been demonstrated to be effective in tasks which
require comprehension of free-form text. They use the final iteration of the
learned representation to predict probable classes. We introduce condensed
memory neural networks (C-MemNNs), a novel model with iterative condensation of
memory representations that preserves the hierarchy of features in the memory.
Experiments on the MIMIC-III dataset show that the proposed model outperforms
other variants of memory networks to predict the most probable diagnoses given
a complex clinical scenario.
| no_new_dataset | 0.949295 |
1701.00831 | Alessandro Rossi | Marco Gori, Marco Maggini, Alessandro Rossi | Collapsing of dimensionality | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze a new approach to Machine Learning coming from a modification of
classical regularization networks by casting the process in the time dimension,
leading to a sort of collapse of dimensionality in the problem of learning the
model parameters. This approach allows the definition of a online learning
algorithm that progressively accumulates the knowledge provided in the input
trajectory. The regularization principle leads to a solution based on a
dynamical system that is paired with a procedure to develop a graph structure
that stores the input regularities acquired from the temporal evolution. We
report an extensive experimental exploration on the behavior of the parameter
of the proposed model and an evaluation on an artificial dataset.
| [
{
"version": "v1",
"created": "Tue, 3 Jan 2017 20:54:52 GMT"
}
] | 2017-01-05T00:00:00 | [
[
"Gori",
"Marco",
""
],
[
"Maggini",
"Marco",
""
],
[
"Rossi",
"Alessandro",
""
]
] | TITLE: Collapsing of dimensionality
ABSTRACT: We analyze a new approach to Machine Learning coming from a modification of
classical regularization networks by casting the process in the time dimension,
leading to a sort of collapse of dimensionality in the problem of learning the
model parameters. This approach allows the definition of an online learning
algorithm that progressively accumulates the knowledge provided in the input
trajectory. The regularization principle leads to a solution based on a
dynamical system that is paired with a procedure to develop a graph structure
that stores the input regularities acquired from the temporal evolution. We
report an extensive experimental exploration on the behavior of the parameter
of the proposed model and an evaluation on an artificial dataset.
| no_new_dataset | 0.947186 |
1701.00893 | Jorge Luis Rivero | Jorge Luis Rivero P\'erez, Bernardete Ribeiro, Kadir Hector Ortiz | A Comparison of Algorithms for Intrusion Detection on Batch and Data
Stream Environments | in Spanish | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrusion detection in computer networks has some deficiencies from the machine
learning perspective, given the nature of the application. The principal
problem is the modest performance of detection systems based on learning
algorithms under the constraints imposed by real environments. This article
focuses on the machine learning approach to network intrusion detection in
batch and data stream environments. First, we propose and describe three
variants of KDD99 dataset preprocessing, including attribute selection.
Secondly, a thorough experimentation is performed, evaluating and comparing
representative batch learning algorithms on the variants obtained from KDD99
preprocessing. Finally, since network traffic is a constant data stream that
can present concept drift with a high rate of false positives, and since there
is little research addressing intrusion detection in streaming environments, we
also compare various representative data stream classification algorithms.
This research allows determining the algorithms that perform better on the
proposed variants of KDD99 for both batch and data stream environments.
| [
{
"version": "v1",
"created": "Wed, 4 Jan 2017 03:55:55 GMT"
}
] | 2017-01-05T00:00:00 | [
[
"Pérez",
"Jorge Luis Rivero",
""
],
[
"Ribeiro",
"Bernardete",
""
],
[
"Ortiz",
"Kadir Hector",
""
]
] | TITLE: A Comparison of Algorithms for Intrusion Detection on Batch and Data
Stream Environments
ABSTRACT: Intrusion detection in computer networks has some deficiencies from the machine
learning perspective, given the nature of the application. The principal
problem is the modest performance of detection systems based on learning
algorithms under the constraints imposed by real environments. This article
focuses on the machine learning approach to network intrusion detection in
batch and data stream environments. First, we propose and describe three
variants of KDD99 dataset preprocessing, including attribute selection.
Secondly, a thorough experimentation is performed, evaluating and comparing
representative batch learning algorithms on the variants obtained from KDD99
preprocessing. Finally, since network traffic is a constant data stream that
can present concept drift with a high rate of false positives, and since there
is little research addressing intrusion detection in streaming environments, we
also compare various representative data stream classification algorithms.
This research allows determining the algorithms that perform better on the
proposed variants of KDD99 for both batch and data stream environments.
| no_new_dataset | 0.947866 |
1701.00903 | Lakshmi Narasimhan Govindarajan | Li Liu and Yongzhong Yang and Lakshmi Narasimhan Govindarajan and Shu
Wang and Bin Hu and Li Cheng and David S. Rosenblum | An Interval-Based Bayesian Generative Model for Human Complex Activity
Recognition | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex activity recognition is challenging due to the inherent uncertainty
and diversity of performing a complex activity. Normally, each instance of a
complex activity has its own configuration of atomic actions and their temporal
dependencies. We propose in this paper an atomic action-based Bayesian model
that constructs Allen's interval relation networks to characterize complex
activities with structural varieties in a probabilistic generative way: By
introducing latent variables from the Chinese restaurant process, our approach
is able to capture all possible styles of a particular complex activity as a
unique set of distributions over atomic actions and relations. We also show
that local temporal dependencies can be retained and are globally consistent in
the resulting interval network. Moreover, network structure can be learned from
empirical data. A new dataset of complex hand activities has been constructed
and made publicly available, which is much larger in size than any existing
dataset. Empirical evaluations on benchmark datasets as well as our in-house
dataset demonstrate the competitiveness of our approach.
| [
{
"version": "v1",
"created": "Wed, 4 Jan 2017 05:53:46 GMT"
}
] | 2017-01-05T00:00:00 | [
[
"Liu",
"Li",
""
],
[
"Yang",
"Yongzhong",
""
],
[
"Govindarajan",
"Lakshmi Narasimhan",
""
],
[
"Wang",
"Shu",
""
],
[
"Hu",
"Bin",
""
],
[
"Cheng",
"Li",
""
],
[
"Rosenblum",
"David S.",
""
]
] | TITLE: An Interval-Based Bayesian Generative Model for Human Complex Activity
Recognition
ABSTRACT: Complex activity recognition is challenging due to the inherent uncertainty
and diversity of performing a complex activity. Normally, each instance of a
complex activity has its own configuration of atomic actions and their temporal
dependencies. We propose in this paper an atomic action-based Bayesian model
that constructs Allen's interval relation networks to characterize complex
activities with structural varieties in a probabilistic generative way: By
introducing latent variables from the Chinese restaurant process, our approach
is able to capture all possible styles of a particular complex activity as a
unique set of distributions over atomic actions and relations. We also show
that local temporal dependencies can be retained and are globally consistent in
the resulting interval network. Moreover, network structure can be learned from
empirical data. A new dataset of complex hand activities has been constructed
and made publicly available, which is much larger in size than any existing
dataset. Empirical evaluations on benchmark datasets as well as our in-house
dataset demonstrate the competitiveness of our approach.
| new_dataset | 0.959307 |
1701.01094 | Karamjit Singh | Karamjit Singh, Garima Gupta, Gautam Shroff, and Puneet Agarwal | Minimally-Supervised Attribute Fusion for Data Lakes | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aggregate analysis, such as comparing country-wise sales versus global market
share across product categories, is often complicated by the unavailability of
common join attributes, e.g., category, across diverse datasets from different
geographies or retail chains, even after disparate data is technically ingested
into a common data lake. Sometimes this is a missing data issue, while in other
cases it may be inherent, e.g., the records in different geographical databases
may actually describe different product 'SKUs', or follow different norms for
categorization. Record linkage techniques can be used to automatically map
products in different data sources to a common set of global attributes,
thereby enabling federated aggregation joins to be performed. Traditional
record-linkage techniques are typically unsupervised, relying on textual
similarity features across attributes to estimate matches. In this paper, we
present an ensemble model combining minimal supervision using Bayesian network
models together with unsupervised textual matching for automating such
'attribute fusion'. We present results of our approach on a large volume of
real-life data from a market-research scenario and compare with a standard
record matching algorithm. Finally we illustrate how attribute fusion using
machine learning could be included as a data-lake management feature,
especially as our approach also provides confidence values for matches,
enabling human intervention, if required.
| [
{
"version": "v1",
"created": "Wed, 4 Jan 2017 18:19:19 GMT"
}
] | 2017-01-05T00:00:00 | [
[
"Singh",
"Karamjit",
""
],
[
"Gupta",
"Garima",
""
],
[
"Shroff",
"Gautam",
""
],
[
"Agarwal",
"Puneet",
""
]
] | TITLE: Minimally-Supervised Attribute Fusion for Data Lakes
ABSTRACT: Aggregate analysis, such as comparing country-wise sales versus global market
share across product categories, is often complicated by the unavailability of
common join attributes, e.g., category, across diverse datasets from different
geographies or retail chains, even after disparate data is technically ingested
into a common data lake. Sometimes this is a missing data issue, while in other
cases it may be inherent, e.g., the records in different geographical databases
may actually describe different product 'SKUs', or follow different norms for
categorization. Record linkage techniques can be used to automatically map
products in different data sources to a common set of global attributes,
thereby enabling federated aggregation joins to be performed. Traditional
record-linkage techniques are typically unsupervised, relying on textual
similarity features across attributes to estimate matches. In this paper, we
present an ensemble model combining minimal supervision using Bayesian network
models together with unsupervised textual matching for automating such
'attribute fusion'. We present results of our approach on a large volume of
real-life data from a market-research scenario and compare with a standard
record matching algorithm. Finally we illustrate how attribute fusion using
machine learning could be included as a data-lake management feature,
especially as our approach also provides confidence values for matches,
enabling human intervention, if required.
| no_new_dataset | 0.947962 |
1604.02646 | Biswajit Paria | Biswajit Paria, Vikas Reddy, Anirban Santara, Pabitra Mitra | Visualization Regularizers for Neural Network based Image Recognition | null | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of deep neural networks is mostly due to their ability to learn
meaningful features from the data. Features learned in the hidden layers of
deep neural networks trained in computer vision tasks have been shown to be
similar to mid-level vision features. We leverage this fact in this work and
propose the visualization regularizer for image tasks. The proposed
regularization technique enforces smoothness of the features learned by hidden
nodes and turns out to be a special case of Tikhonov regularization. We achieve
higher classification accuracy as compared to existing regularizers such as the
L2 norm regularizer and dropout, on benchmark datasets without changing the
training computational complexity.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2016 07:02:40 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2016 14:38:38 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Jan 2017 10:07:22 GMT"
}
] | 2017-01-04T00:00:00 | [
[
"Paria",
"Biswajit",
""
],
[
"Reddy",
"Vikas",
""
],
[
"Santara",
"Anirban",
""
],
[
"Mitra",
"Pabitra",
""
]
] | TITLE: Visualization Regularizers for Neural Network based Image Recognition
ABSTRACT: The success of deep neural networks is mostly due to their ability to learn
meaningful features from the data. Features learned in the hidden layers of
deep neural networks trained in computer vision tasks have been shown to be
similar to mid-level vision features. We leverage this fact in this work and
propose the visualization regularizer for image tasks. The proposed
regularization technique enforces smoothness of the features learned by hidden
nodes and turns out to be a special case of Tikhonov regularization. We achieve
higher classification accuracy as compared to existing regularizers such as the
L2 norm regularizer and dropout, on benchmark datasets without changing the
training computational complexity.
| no_new_dataset | 0.950869 |
1607.06997 | Xiangyun Zhao | Xiangyun Zhao, Xiaodan Liang, Luoqi Liu, Teng Li, Yugang Han, Nuno
Vasconcelos, Shuicheng Yan | Peak-Piloted Deep Network for Facial Expression Recognition | Published in ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective functions for training of deep networks for face-related
recognition tasks, such as facial expression recognition (FER), usually
consider each sample independently. In this work, we present a novel
peak-piloted deep network (PPDN) that uses a sample with peak expression (easy
sample) to supervise the intermediate feature responses for a sample of
non-peak expression (hard sample) of the same type and from the same subject.
The expression evolving process from non-peak expression to peak expression can
thus be implicitly embedded in the network to achieve the invariance to
expression intensities. A special purpose back-propagation procedure, peak
gradient suppression (PGS), is proposed for network training. It drives the
intermediate-layer feature responses of non-peak expression samples towards
those of the corresponding peak expression samples, while avoiding the inverse.
This avoids degrading the recognition capability for samples of peak expression
due to interference from their non-peak expression counterparts. Extensive
comparisons on two popular FER datasets, Oulu-CASIA and CK+, demonstrate the
superiority of the PPDN over state-of-the-art FER methods, as well as the
advantages of both the network structure and the optimization strategy.
Moreover, it is shown that PPDN is a general architecture, extensible to other
tasks by proper definition of peak and non-peak samples. This is validated by
experiments that show state-of-the-art performance on pose-invariant face
recognition, using the Multi-PIE dataset.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 04:26:41 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2017 08:19:24 GMT"
}
] | 2017-01-04T00:00:00 | [
[
"Zhao",
"Xiangyun",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Liu",
"Luoqi",
""
],
[
"Li",
"Teng",
""
],
[
"Han",
"Yugang",
""
],
[
"Vasconcelos",
"Nuno",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Peak-Piloted Deep Network for Facial Expression Recognition
ABSTRACT: Objective functions for training of deep networks for face-related
recognition tasks, such as facial expression recognition (FER), usually
consider each sample independently. In this work, we present a novel
peak-piloted deep network (PPDN) that uses a sample with peak expression (easy
sample) to supervise the intermediate feature responses for a sample of
non-peak expression (hard sample) of the same type and from the same subject.
The expression evolving process from non-peak expression to peak expression can
thus be implicitly embedded in the network to achieve the invariance to
expression intensities. A special purpose back-propagation procedure, peak
gradient suppression (PGS), is proposed for network training. It drives the
intermediate-layer feature responses of non-peak expression samples towards
those of the corresponding peak expression samples, while avoiding the inverse.
This avoids degrading the recognition capability for samples of peak expression
due to interference from their non-peak expression counterparts. Extensive
comparisons on two popular FER datasets, Oulu-CASIA and CK+, demonstrate the
superiority of the PPDN over state-of-the-art FER methods, as well as the
advantages of both the network structure and the optimization strategy.
Moreover, it is shown that PPDN is a general architecture, extensible to other
tasks by proper definition of peak and non-peak samples. This is validated by
experiments that show state-of-the-art performance on pose-invariant face
recognition, using the Multi-PIE dataset.
| no_new_dataset | 0.947527 |
1701.00576 | Huijia Wu | Huijia Wu, Jiajun Zhang, Chengqing Zong | Shortcut Sequence Tagging | 10 pages. arXiv admin note: text overlap with arXiv:1610.03167 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep stacked RNNs are usually hard to train. Adding shortcut connections
across different layers is a common way to ease the training of stacked
networks. However, extra shortcuts make the recurrent step more complicated. To
simplify the stacked architecture, we propose a framework called shortcut block,
which is a marriage of the gating mechanism and shortcuts, while discarding the
self-connected part in LSTM cell. We present extensive empirical experiments
showing that this design makes training easy and improves generalization. We
propose various shortcut block topologies and compositions to explore its
effectiveness. Based on this architecture, we obtain a 6% relative
improvement over the state-of-the-art on the CCGbank supertagging dataset. We
also get comparable results on the POS tagging task.
| [
{
"version": "v1",
"created": "Tue, 3 Jan 2017 04:15:51 GMT"
}
] | 2017-01-04T00:00:00 | [
[
"Wu",
"Huijia",
""
],
[
"Zhang",
"Jiajun",
""
],
[
"Zong",
"Chengqing",
""
]
] | TITLE: Shortcut Sequence Tagging
ABSTRACT: Deep stacked RNNs are usually hard to train. Adding shortcut connections
across different layers is a common way to ease the training of stacked
networks. However, extra shortcuts make the recurrent step more complicated. To
simplify the stacked architecture, we propose a framework called shortcut block,
which is a marriage of the gating mechanism and shortcuts, while discarding the
self-connected part in LSTM cell. We present extensive empirical experiments
showing that this design makes training easy and improves generalization. We
propose various shortcut block topologies and compositions to explore its
effectiveness. Based on this architecture, we obtain a 6% relative
improvement over the state-of-the-art on the CCGbank supertagging dataset. We
also get comparable results on the POS tagging task.
| no_new_dataset | 0.946941 |
1701.00595 | Saeid Hosseini | Saeid Hosseini, Hongzhi Yin, Xiaofang Zhou, Shazia Sadiq | Leveraging Multi-aspect Time-related Influence in Location
Recommendation | null | null | null | null | cs.CY cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point-Of-Interest (POI) recommendation aims to mine a user's visiting history
and find her/his potentially preferred places. Although location recommendation
methods have been studied and improved pervasively, the challenges w.r.t.
employing various influences, including the temporal aspect, still remain.
Inspired by the fact that time includes numerous granular slots (e.g. minute,
hour, day, week, etc.), in this paper, we define a new problem to perform
recommendation through exploiting all diversified temporal factors. In
particular, we argue that most existing methods only focus on a limited number
of time-related features and neglect others. Furthermore, considering a
specific granularity (e.g. time of a day) in recommendation cannot always apply
to each user or each dataset. To address the challenges, we propose a
probabilistic generative model, named Multi-aspect Time-related Influence
(MATI), to promote POI recommendation. We also develop a novel optimization
algorithm based on Expectation Maximization (EM). Our MATI model first
detects a user's temporal multivariate orientation using her check-in log in
Location-based Social Networks (LBSNs). It then performs recommendation using
temporal correlations between the user and proposed locations. Our method is
adaptable to various types of recommendation systems and can work efficiently
at multiple time-scales. Extensive experimental results on two large-scale LBSN
datasets verify the effectiveness of our method over other competitors.
| [
{
"version": "v1",
"created": "Tue, 3 Jan 2017 06:50:50 GMT"
}
] | 2017-01-04T00:00:00 | [
[
"Hosseini",
"Saeid",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Zhou",
"Xiaofang",
""
],
[
"Sadiq",
"Shazia",
""
]
] | TITLE: Leveraging Multi-aspect Time-related Influence in Location
Recommendation
ABSTRACT: Point-Of-Interest (POI) recommendation aims to mine a user's visiting history
and find her/his potentially preferred places. Although location recommendation
methods have been studied and improved pervasively, the challenges w.r.t.
employing various influences, including the temporal aspect, still remain.
Inspired by the fact that time includes numerous granular slots (e.g. minute,
hour, day, week, etc.), in this paper, we define a new problem to perform
recommendation through exploiting all diversified temporal factors. In
particular, we argue that most existing methods only focus on a limited number
of time-related features and neglect others. Furthermore, considering a
specific granularity (e.g. time of a day) in recommendation cannot always apply
to each user or each dataset. To address the challenges, we propose a
probabilistic generative model, named Multi-aspect Time-related Influence
(MATI), to promote POI recommendation. We also develop a novel optimization
algorithm based on Expectation Maximization (EM). Our MATI model first
detects a user's temporal multivariate orientation using her check-in log in
Location-based Social Networks (LBSNs). It then performs recommendation using
temporal correlations between the user and proposed locations. Our method is
adaptable to various types of recommendation systems and can work efficiently
at multiple time-scales. Extensive experimental results on two large-scale LBSN
datasets verify the effectiveness of our method over other competitors.
| no_new_dataset | 0.946843 |
1607.04579 | Bo Dai | Bo Dai, Niao He, Yunpeng Pan, Byron Boots, Le Song | Learning from Conditional Distributions via Dual Embeddings | 24 pages, 11 figures | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many machine learning tasks, such as learning with invariance and policy
evaluation in reinforcement learning, can be characterized as problems of
learning from conditional distributions. In such problems, each sample $x$
itself is associated with a conditional distribution $p(z|x)$ represented by
samples $\{z_i\}_{i=1}^M$, and the goal is to learn a function $f$ that links
these conditional distributions to target values $y$. These learning problems
become very challenging when we only have limited samples or in the extreme
case only one sample from each conditional distribution. Commonly used
approaches either assume that $z$ is independent of $x$, or require an
overwhelmingly large number of samples from each conditional distribution.
To address these challenges, we propose a novel approach which employs a new
min-max reformulation of the learning from conditional distribution problem.
With such new reformulation, we only need to deal with the joint distribution
$p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and
establish theoretical sample complexity for such problems. Finally, our
numerical experiments on both synthetic and real-world datasets show that the
proposed approach can significantly improve over the existing algorithms.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2016 16:56:22 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Dec 2016 06:54:37 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Dai",
"Bo",
""
],
[
"He",
"Niao",
""
],
[
"Pan",
"Yunpeng",
""
],
[
"Boots",
"Byron",
""
],
[
"Song",
"Le",
""
]
] | TITLE: Learning from Conditional Distributions via Dual Embeddings
ABSTRACT: Many machine learning tasks, such as learning with invariance and policy
evaluation in reinforcement learning, can be characterized as problems of
learning from conditional distributions. In such problems, each sample $x$
itself is associated with a conditional distribution $p(z|x)$ represented by
samples $\{z_i\}_{i=1}^M$, and the goal is to learn a function $f$ that links
these conditional distributions to target values $y$. These learning problems
become very challenging when we only have limited samples or in the extreme
case only one sample from each conditional distribution. Commonly used
approaches either assume that $z$ is independent of $x$, or require an
overwhelmingly large number of samples from each conditional distribution.
To address these challenges, we propose a novel approach which employs a new
min-max reformulation of the learning from conditional distribution problem.
With such new reformulation, we only need to deal with the joint distribution
$p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and
establish theoretical sample complexity for such problems. Finally, our
numerical experiments on both synthetic and real-world datasets show that the
proposed approach can significantly improve over the existing algorithms.
| no_new_dataset | 0.941007 |
1612.02287 | Frank Michel | Frank Michel, Alexander Kirillov, Eric Brachmann, Alexander Krull,
Stefan Gumhold, Bogdan Savchynskyy, Carsten Rother | Global Hypothesis Generation for 6D Object Pose Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the task of estimating the 6D pose of a known 3D object
from a single RGB-D image. Most modern approaches solve this task in three
steps: i) Compute local features; ii) Generate a pool of pose-hypotheses; iii)
Select and refine a pose from the pool. This work focuses on the second step.
While all existing approaches generate the hypotheses pool via local reasoning,
e.g. RANSAC or Hough-voting, we are the first to show that global reasoning is
beneficial at this stage. In particular, we formulate a novel fully-connected
Conditional Random Field (CRF) that outputs a very small number of
pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian,
we give a new and efficient two-step optimization procedure, with some
guarantees for optimality. We utilize our global hypotheses generation
procedure to produce results that exceed state-of-the-art for the challenging
"Occluded Object Dataset".
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 15:23:12 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2016 08:50:37 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Jan 2017 09:09:03 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Michel",
"Frank",
""
],
[
"Kirillov",
"Alexander",
""
],
[
"Brachmann",
"Eric",
""
],
[
"Krull",
"Alexander",
""
],
[
"Gumhold",
"Stefan",
""
],
[
"Savchynskyy",
"Bogdan",
""
],
[
"Rother",
"Carsten",
""
]
] | TITLE: Global Hypothesis Generation for 6D Object Pose Estimation
ABSTRACT: This paper addresses the task of estimating the 6D pose of a known 3D object
from a single RGB-D image. Most modern approaches solve this task in three
steps: i) Compute local features; ii) Generate a pool of pose-hypotheses; iii)
Select and refine a pose from the pool. This work focuses on the second step.
While all existing approaches generate the hypotheses pool via local reasoning,
e.g. RANSAC or Hough-voting, we are the first to show that global reasoning is
beneficial at this stage. In particular, we formulate a novel fully-connected
Conditional Random Field (CRF) that outputs a very small number of
pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian,
we give a new and efficient two-step optimization procedure, with some
guarantees for optimality. We utilize our global hypotheses generation
procedure to produce results that exceed state-of-the-art for the challenging
"Occluded Object Dataset".
| no_new_dataset | 0.94699 |
1612.05322 | Yutong Zheng | Yutong Zheng, Chenchen Zhu, Khoa Luu, Chandrasekhar Bhagavatula, T.
Hoang Ngan Le, Marios Savvides | Towards a Deep Learning Framework for Unconstrained Face Detection | Accepted by BTAS 2016. arXiv admin note: substantial text overlap
with arXiv:1606.05413 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust face detection is one of the most important pre-processing steps to
support facial expression analysis, facial landmarking, face recognition, pose
estimation, building of 3D facial models, etc. Although this topic has been
intensely studied for decades, it is still challenging due to numerous variants
of face images in real-world scenarios. In this paper, we present a novel
approach named Multiple Scale Faster Region-based Convolutional Neural Network
(MS-FRCNN) to robustly detect human facial regions from images collected under
various challenging conditions, e.g. large occlusions, extremely low
resolutions, facial expressions, strong illumination variations, etc. The
proposed approach is benchmarked on two challenging face detection databases,
i.e. the Wider Face database and the Face Detection Dataset and Benchmark
(FDDB), and compared against recent other face detection methods, e.g.
Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Chanel Features,
HeadHunter, Multi-view Face Detection, Cascade CNN, etc. The experimental
results show that our proposed approach consistently achieves highly
competitive results with the state-of-the-art performance against other recent
face detection methods.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 00:34:06 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jan 2017 18:06:49 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Zheng",
"Yutong",
""
],
[
"Zhu",
"Chenchen",
""
],
[
"Luu",
"Khoa",
""
],
[
"Bhagavatula",
"Chandrasekhar",
""
],
[
"Le",
"T. Hoang Ngan",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: Towards a Deep Learning Framework for Unconstrained Face Detection
ABSTRACT: Robust face detection is one of the most important pre-processing steps to
support facial expression analysis, facial landmarking, face recognition, pose
estimation, building of 3D facial models, etc. Although this topic has been
intensely studied for decades, it is still challenging due to numerous variants
of face images in real-world scenarios. In this paper, we present a novel
approach named Multiple Scale Faster Region-based Convolutional Neural Network
(MS-FRCNN) to robustly detect human facial regions from images collected under
various challenging conditions, e.g. large occlusions, extremely low
resolutions, facial expressions, strong illumination variations, etc. The
proposed approach is benchmarked on two challenging face detection databases,
i.e. the Wider Face database and the Face Detection Dataset and Benchmark
(FDDB), and compared against other recent face detection methods, e.g.
Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Channel Features,
HeadHunter, Multi-view Face Detection, Cascade CNN, etc. The experimental
results show that our proposed approach consistently achieves highly
competitive results with the state-of-the-art performance against other recent
face detection methods.
| no_new_dataset | 0.944382 |
1701.00040 | Emmanuel Osegi | E.N. Osegi | p-DLA: A Predictive System Model for Onshore Oil and Gas Pipeline
Dataset Classification and Monitoring - Part 1 | Working Paper | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the rise in militant activity and rogue behaviour in oil and gas regions
around the world, oil pipeline disturbances are on the increase, leading to
huge losses to multinational operators and the countries where such facilities
exist. However, this situation can be averted if adequate predictive monitoring
schemes are put in place. We propose, in the first part of this paper, an
artificial intelligence predictive monitoring system capable of predictive
classification and pattern recognition of pipeline datasets. The predictive
system is based on a highly sparse predictive Deviant Learning Algorithm
(p-DLA) designed to synthesize a sequence of memory predictive clusters for
eventual monitoring, control and decision making. The DLA (p-DLA) is compared
with a popular machine learning algorithm, the Long Short-Term Memory (LSTM)
which is based on a temporal version of the standard feed-forward
back-propagation trained artificial neural networks (ANNs). The results of
simulations study show impressive results and validates the sparse memory
predictive approach which favours the sub-synthesis of a highly compressed and
low dimensional knowledge discovery and information prediction scheme. It also
shows that the proposed new approach is competitive with a well-known and
proven AI approach such as the LSTM.
| [
{
"version": "v1",
"created": "Sat, 31 Dec 2016 00:40:17 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Osegi",
"E. N.",
""
]
] | TITLE: p-DLA: A Predictive System Model for Onshore Oil and Gas Pipeline
Dataset Classification and Monitoring - Part 1
ABSTRACT: With the rise in militant activity and rogue behaviour in oil and gas regions
around the world, oil pipeline disturbances are on the increase, leading to
huge losses to multinational operators and the countries where such facilities
exist. However, this situation can be averted if adequate predictive monitoring
schemes are put in place. We propose, in the first part of this paper, an
artificial intelligence predictive monitoring system capable of predictive
classification and pattern recognition of pipeline datasets. The predictive
system is based on a highly sparse predictive Deviant Learning Algorithm
(p-DLA) designed to synthesize a sequence of memory predictive clusters for
eventual monitoring, control and decision making. The DLA (p-DLA) is compared
with a popular machine learning algorithm, the Long Short-Term Memory (LSTM),
which is based on a temporal version of the standard feed-forward
back-propagation trained artificial neural networks (ANNs). The results of the
simulation study are impressive and validate the sparse memory predictive
approach, which favours the sub-synthesis of a highly compressed and
low dimensional knowledge discovery and information prediction scheme. It also
shows that the proposed new approach is competitive with a well-known and
proven AI approach such as the LSTM.
| no_new_dataset | 0.947721 |
1701.00077 | Pietro Hiram Guzzi | Pietro Hiram Guzzi, Giuseppe Agapito, Marianna Milano, Mario Cannataro | Learning Weighted Association Rules in Human Phenotype Ontology | null | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Human Phenotype Ontology (HPO) is a structured repository of concepts
(HPO Terms) that are associated with one or more diseases. The process of
association is referred to as annotation. The relevance and the specificity of
both HPO terms and annotations are evaluated by a measure defined as
Information Content (IC). The analysis of annotated data is thus an important
challenge for bioinformatics. There exist different approaches of analysis.
From those, the use of Association Rules (AR) may provide useful knowledge, and
it has been used in some applications, e.g. improving the quality of
annotations. Nevertheless, classical association rule algorithms take into
account neither the source of annotation nor its importance, yielding
candidate rules with low IC. This paper presents HPO-Miner (Human
Phenotype Ontology-based Weighted Association Rules), a methodology for
extracting Weighted Association Rules. HPO-Miner can extract relevant rules
from a biological point of view. A case study on the use of HPO-Miner on publicly
available HPO annotation datasets is used to demonstrate the effectiveness of
our methodology.
| [
{
"version": "v1",
"created": "Sat, 31 Dec 2016 09:19:52 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Guzzi",
"Pietro Hiram",
""
],
[
"Agapito",
"Giuseppe",
""
],
[
"Milano",
"Marianna",
""
],
[
"Cannataro",
"Mario",
""
]
] | TITLE: Learning Weighted Association Rules in Human Phenotype Ontology
ABSTRACT: The Human Phenotype Ontology (HPO) is a structured repository of concepts
(HPO Terms) that are associated with one or more diseases. The process of
association is referred to as annotation. The relevance and the specificity of
both HPO terms and annotations are evaluated by a measure defined as
Information Content (IC). The analysis of annotated data is thus an important
challenge for bioinformatics. There exist different approaches of analysis.
From those, the use of Association Rules (AR) may provide useful knowledge, and
it has been used in some applications, e.g. improving the quality of
annotations. Nevertheless, classical association rule algorithms take into
account neither the source of annotation nor its importance, yielding
candidate rules with low IC. This paper presents HPO-Miner (Human
Phenotype Ontology-based Weighted Association Rules), a methodology for
extracting Weighted Association Rules. HPO-Miner can extract relevant rules
from a biological point of view. A case study on the use of HPO-Miner on publicly
available HPO annotation datasets is used to demonstrate the effectiveness of
our methodology.
| no_new_dataset | 0.950915 |
1701.00142 | Helge Rhodin | Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov,
Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt | EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
(Extended Abstract) | Short version of a SIGGRAPH Asia 2016 paper arXiv:1609.07306,
presented at EPIC@ECCV16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Marker-based and marker-less optical skeletal motion-capture methods use an
outside-in arrangement of cameras placed around a scene, with viewpoints
converging on the center. They often create discomfort due to the marker suits
that may be required, and their recording volume is severely restricted and often
constrained to indoor scenes with controlled backgrounds. We therefore propose
a new method for real-time, marker-less and egocentric motion capture which
estimates the full-body skeleton pose from a lightweight stereo pair of fisheye
cameras that are attached to a helmet or virtual-reality headset. It combines
the strength of a new generative pose estimation framework for fisheye views
with a ConvNet-based body-part detector trained on a new automatically
annotated and augmented dataset. Our inside-in method captures full-body motion
in general indoor and outdoor scenes, and also crowded scenes.
| [
{
"version": "v1",
"created": "Sat, 31 Dec 2016 16:49:39 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Rhodin",
"Helge",
""
],
[
"Richardt",
"Christian",
""
],
[
"Casas",
"Dan",
""
],
[
"Insafutdinov",
"Eldar",
""
],
[
"Shafiei",
"Mohammad",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
(Extended Abstract)
ABSTRACT: Marker-based and marker-less optical skeletal motion-capture methods use an
outside-in arrangement of cameras placed around a scene, with viewpoints
converging on the center. They often create discomfort due to the marker suits
that may be required, and their recording volume is severely restricted and often
constrained to indoor scenes with controlled backgrounds. We therefore propose
a new method for real-time, marker-less and egocentric motion capture which
estimates the full-body skeleton pose from a lightweight stereo pair of fisheye
cameras that are attached to a helmet or virtual-reality headset. It combines
the strength of a new generative pose estimation framework for fisheye views
with a ConvNet-based body-part detector trained on a new automatically
annotated and augmented dataset. Our inside-in method captures full-body motion
in general indoor and outdoor scenes, and also crowded scenes.
| new_dataset | 0.814274 |
1701.00185 | Jiaming Xu | Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, Jun Zhao,
Bo Xu | Self-Taught Convolutional Neural Networks for Short Text Clustering | 33 pages, accepted for publication in Neural Networks | null | 10.1016/j.neunet.2016.12.008 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Short text clustering is a challenging problem due to its sparseness of text
representation. Here we propose a flexible Self-Taught Convolutional neural
network framework for Short Text Clustering (dubbed STC^2), which can flexibly
and successfully incorporate more useful semantic features and learn non-biased
deep text representation in an unsupervised manner. In our framework, the
original raw text features are firstly embedded into compact binary codes by
using an existing unsupervised dimensionality reduction method. Then, word
embeddings are explored and fed into convolutional neural networks to learn
deep feature representations, meanwhile the output units are used to fit the
pre-trained binary codes in the training process. Finally, we get the optimal
clusters by employing K-means to cluster the learned representations. Extensive
experimental results demonstrate that the proposed framework is effective,
flexible, and outperforms several popular clustering methods when tested on three
public short text datasets.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 2017 01:57:59 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Xu",
"Jiaming",
""
],
[
"Xu",
"Bo",
""
],
[
"Wang",
"Peng",
""
],
[
"Zheng",
"Suncong",
""
],
[
"Tian",
"Guanhua",
""
],
[
"Zhao",
"Jun",
""
],
[
"Xu",
"Bo",
""
]
] | TITLE: Self-Taught Convolutional Neural Networks for Short Text Clustering
ABSTRACT: Short text clustering is a challenging problem due to its sparseness of text
representation. Here we propose a flexible Self-Taught Convolutional neural
network framework for Short Text Clustering (dubbed STC^2), which can flexibly
and successfully incorporate more useful semantic features and learn non-biased
deep text representation in an unsupervised manner. In our framework, the
original raw text features are firstly embedded into compact binary codes by
using an existing unsupervised dimensionality reduction method. Then, word
embeddings are explored and fed into convolutional neural networks to learn
deep feature representations, meanwhile the output units are used to fit the
pre-trained binary codes in the training process. Finally, we get the optimal
clusters by employing K-means to cluster the learned representations. Extensive
experimental results demonstrate that the proposed framework is effective,
flexible, and outperforms several popular clustering methods when tested on three
public short text datasets.
| no_new_dataset | 0.947769 |
1701.00199 | Aidong Lu | Kodzo Wegba, Aidong Lu, Yuemeng Li, and Wencheng Wang | Interactive Movie Recommendation Through Latent Semantic Analysis and
Storytelling | 10 pages | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation has become one of the most important components of online
services for improving sales records; however, visualization work for online
recommendation is still very limited. This paper presents an interactive
recommendation approach with the following two components. First, rating
records are the most widely used data for online recommendation, but they are
often processed in high-dimensional spaces that can not be easily understood or
interacted with. We propose a Latent Semantic Model (LSM) that captures the
statistical features of semantic concepts on 2D domains and abstracts user
preferences for personal recommendation. Second, we propose an interactive
recommendation approach through a storytelling mechanism for promoting the
communication between the user and the recommendation system. Our approach
emphasizes interactivity, explicit user input, and the conveying of semantic information;
thus it can be used by general users without any knowledge of recommendation or
visualization algorithms. We validate our model with data statistics and
demonstrate our approach with case studies from the MovieLens100K dataset. Our
approaches of latent semantic analysis and interactive recommendation can also
be extended to other network-based visualization applications, including
various online recommendation systems.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 2017 04:52:37 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Wegba",
"Kodzo",
""
],
[
"Lu",
"Aidong",
""
],
[
"Li",
"Yuemeng",
""
],
[
"Wang",
"Wencheng",
""
]
] | TITLE: Interactive Movie Recommendation Through Latent Semantic Analysis and
Storytelling
ABSTRACT: Recommendation has become one of the most important components of online
services for improving sales records; however, visualization work for online
recommendation is still very limited. This paper presents an interactive
recommendation approach with the following two components. First, rating
records are the most widely used data for online recommendation, but they are
often processed in high-dimensional spaces that can not be easily understood or
interacted with. We propose a Latent Semantic Model (LSM) that captures the
statistical features of semantic concepts on 2D domains and abstracts user
preferences for personal recommendation. Second, we propose an interactive
recommendation approach through a storytelling mechanism for promoting the
communication between the user and the recommendation system. Our approach
emphasizes interactivity, explicit user input, and the conveying of semantic information;
thus it can be used by general users without any knowledge of recommendation or
visualization algorithms. We validate our model with data statistics and
demonstrate our approach with case studies from the MovieLens100K dataset. Our
approaches of latent semantic analysis and interactive recommendation can also
be extended to other network-based visualization applications, including
various online recommendation systems.
| no_new_dataset | 0.9463 |
1701.00334 | Mehdi Moussaid | Mehdi Moussaid and Kyanoush Seyed Yahosseini | Can simple transmission chains foster collective intelligence in
binary-choice tasks? | null | PLoS ONE 11(11): e0167223 (2016) | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many social systems, groups of individuals can find remarkably efficient
solutions to complex cognitive problems, sometimes even outperforming a single
expert. The success of the group, however, crucially depends on how the
judgments of the group members are aggregated to produce the collective answer.
A large variety of such aggregation methods have been described in the
literature, such as averaging the independent judgments, relying on the
majority or setting up a group discussion. In the present work, we introduce a
novel approach for aggregating judgments - the transmission chain - which has
not yet been consistently evaluated in the context of collective intelligence.
In a transmission chain, all group members have access to a unique collective
solution and can improve it sequentially. Over repeated improvements, the
collective solution that emerges reflects the judgments of every group member.
We address the question of whether such a transmission chain can foster
collective intelligence for binary-choice problems. In a series of numerical
simulations, we explore the impact of various factors on the performance of the
transmission chain, such as the group size, the model parameters, and the
structure of the population. The performance of this method is compared to
those of the majority rule and the confidence-weighted majority. Finally, we
rely on two existing datasets of individuals performing a series of binary
decisions to evaluate the expected performances of the three methods
empirically. We find that the parameter space where the transmission chain has
the best performance rarely appears in real datasets. We conclude that the
transmission chain is best suited for other types of problems, such as those
that have cumulative properties.
| [
{
"version": "v1",
"created": "Mon, 2 Jan 2017 08:32:08 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Moussaid",
"Mehdi",
""
],
[
"Yahosseini",
"Kyanoush Seyed",
""
]
] | TITLE: Can simple transmission chains foster collective intelligence in
binary-choice tasks?
ABSTRACT: In many social systems, groups of individuals can find remarkably efficient
solutions to complex cognitive problems, sometimes even outperforming a single
expert. The success of the group, however, crucially depends on how the
judgments of the group members are aggregated to produce the collective answer.
A large variety of such aggregation methods have been described in the
literature, such as averaging the independent judgments, relying on the
majority or setting up a group discussion. In the present work, we introduce a
novel approach for aggregating judgments - the transmission chain - which has
not yet been consistently evaluated in the context of collective intelligence.
In a transmission chain, all group members have access to a unique collective
solution and can improve it sequentially. Over repeated improvements, the
collective solution that emerges reflects the judgments of every group member.
We address the question of whether such a transmission chain can foster
collective intelligence for binary-choice problems. In a series of numerical
simulations, we explore the impact of various factors on the performance of the
transmission chain, such as the group size, the model parameters, and the
structure of the population. The performance of this method is compared to
those of the majority rule and the confidence-weighted majority. Finally, we
rely on two existing datasets of individuals performing a series of binary
decisions to evaluate the expected performances of the three methods
empirically. We find that the parameter space where the transmission chain has
the best performance rarely appears in real datasets. We conclude that the
transmission chain is best suited for other types of problems, such as those
that have cumulative properties.
| no_new_dataset | 0.939192 |
1701.00449 | Hamid Tizhoosh | Morteza Babaie, H.R. Tizhoosh, Shujin Zhu, M.E. Shiri | Retrieving Similar X-Ray Images from Big Image Data Using Radon Barcodes
with Single Projections | Accepted for publication in ICPRAM 2017: The International Conference
on Pattern Recognition Applications and Methods, Porto, Portugal, 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The idea of Radon barcodes (RBC) has been introduced recently. In this paper,
we propose a content-based image retrieval approach for big datasets based on
Radon barcodes. Our method (Single Projection Radon Barcode, or SP-RBC) uses
only a few Radon single projections for each image as global features that can
serve as a basis for weak learners. This is our most important contribution in
this work, which improves the results of the RBC considerably. As a matter of
fact, only one projection of an image, as short as a single SURF feature
vector, can already achieve acceptable results. Nevertheless, using multiple
projections in a long vector will not deliver anticipated improvements. To
exploit the information inherent in each projection, our method uses the
outcome of each projection separately and then applies more precise local
search on the small subset of retrieved images. We have tested our method using
the IRMA 2009 dataset with 14,400 x-ray images as part of the imageCLEF initiative.
Our approach leads to a substantial decrease in the error rate in comparison
with other non-learning methods.
| [
{
"version": "v1",
"created": "Mon, 2 Jan 2017 17:00:53 GMT"
}
] | 2017-01-03T00:00:00 | [
[
"Babaie",
"Morteza",
""
],
[
"Tizhoosh",
"H. R.",
""
],
[
"Zhu",
"Shujin",
""
],
[
"Shiri",
"M. E.",
""
]
] | TITLE: Retrieving Similar X-Ray Images from Big Image Data Using Radon Barcodes
with Single Projections
ABSTRACT: The idea of Radon barcodes (RBC) has been introduced recently. In this paper,
we propose a content-based image retrieval approach for big datasets based on
Radon barcodes. Our method (Single Projection Radon Barcode, or SP-RBC) uses
only a few Radon single projections for each image as global features that can
serve as a basis for weak learners. This is our most important contribution in
this work, which improves the results of the RBC considerably. As a matter of
fact, only one projection of an image, as short as a single SURF feature
vector, can already achieve acceptable results. Nevertheless, using multiple
projections in a long vector will not deliver anticipated improvements. To
exploit the information inherent in each projection, our method uses the
outcome of each projection separately and then applies more precise local
search on the small subset of retrieved images. We have tested our method using
the IRMA 2009 dataset with 14,400 x-ray images as part of the imageCLEF initiative.
Our approach leads to a substantial decrease in the error rate in comparison
with other non-learning methods.
| no_new_dataset | 0.951323 |
1612.01756 | Francesco Cricri | Francesco Cricri, Xingyang Ni, Mikko Honkala, Emre Aksu, Moncef
Gabbouj | Video Ladder Networks | This version extends the paper accepted at the NIPS 2016 workshop on
ML for Spatiotemporal Forecasting, with more details and more experimental
results | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Video Ladder Network (VLN) for efficiently generating future
video frames. VLN is a neural encoder-decoder model augmented at all layers by
both recurrent and feedforward lateral connections. At each layer, these
connections form a lateral recurrent residual block, where the feedforward
connection represents a skip connection and the recurrent connection represents
the residual. Thanks to the recurrent connections, the decoder can exploit
temporal summaries generated from all layers of the encoder. This way, the top
layer is relieved from the pressure of modeling lower-level spatial and
temporal details. Furthermore, we extend the basic version of VLN to
incorporate ResNet-style residual blocks in the encoder and decoder, which help
improve the prediction results. VLN is trained in a self-supervised regime on
the Moving MNIST dataset, achieving competitive results while having very
simple structure and providing fast inference.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 11:15:28 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 11:35:22 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Dec 2016 09:01:02 GMT"
}
] | 2017-01-02T00:00:00 | [
[
"Cricri",
"Francesco",
""
],
[
"Ni",
"Xingyang",
""
],
[
"Honkala",
"Mikko",
""
],
[
"Aksu",
"Emre",
""
],
[
"Gabbouj",
"Moncef",
""
]
] | TITLE: Video Ladder Networks
ABSTRACT: We present the Video Ladder Network (VLN) for efficiently generating future
video frames. VLN is a neural encoder-decoder model augmented at all layers by
both recurrent and feedforward lateral connections. At each layer, these
connections form a lateral recurrent residual block, where the feedforward
connection represents a skip connection and the recurrent connection represents
the residual. Thanks to the recurrent connections, the decoder can exploit
temporal summaries generated from all layers of the encoder. This way, the top
layer is relieved from the pressure of modeling lower-level spatial and
temporal details. Furthermore, we extend the basic version of VLN to
incorporate ResNet-style residual blocks in the encoder and decoder, which help
improve the prediction results. VLN is trained in a self-supervised regime on
the Moving MNIST dataset, achieving competitive results while having very
simple structure and providing fast inference.
| no_new_dataset | 0.948489 |
1612.08714 | Andreas Henelius | Andreas Henelius, Kai Puolam\"aki, Henrik Bostr\"om, Panagiotis
Papapetrou | Clustering with Confidence: Finding Clusters with Statistical Guarantees | 30 pages, 5 figures, 5 tables. Added URL to the source code | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is a widely used unsupervised learning method for finding
structure in the data. However, the resulting clusters are typically presented
without any guarantees on their robustness; slightly changing the used data
sample or re-running a clustering algorithm involving some stochastic component
may lead to completely different clusters. There is, hence, a need for
techniques that can quantify the instability of the generated clusters. In this
study, we propose a technique for quantifying the instability of a clustering
solution and for finding robust clusters, termed core clusters, which
correspond to clusters where the co-occurrence probability of each data item
within a cluster is at least $1 - \alpha$. We demonstrate how solving the core
clustering problem is linked to finding the largest maximal cliques in a graph.
We show that the method can be used with both clustering and classification
algorithms. The proposed method is tested on both simulated and real datasets.
The results show that the obtained clusters indeed meet the guarantees on
robustness.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 19:39:23 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Dec 2016 17:56:48 GMT"
}
] | 2017-01-02T00:00:00 | [
[
"Henelius",
"Andreas",
""
],
[
"Puolamäki",
"Kai",
""
],
[
"Boström",
"Henrik",
""
],
[
"Papapetrou",
"Panagiotis",
""
]
] | TITLE: Clustering with Confidence: Finding Clusters with Statistical Guarantees
ABSTRACT: Clustering is a widely used unsupervised learning method for finding
structure in the data. However, the resulting clusters are typically presented
without any guarantees on their robustness; slightly changing the used data
sample or re-running a clustering algorithm involving some stochastic component
may lead to completely different clusters. There is, hence, a need for
techniques that can quantify the instability of the generated clusters. In this
study, we propose a technique for quantifying the instability of a clustering
solution and for finding robust clusters, termed core clusters, which
correspond to clusters where the co-occurrence probability of each data item
within a cluster is at least $1 - \alpha$. We demonstrate how solving the core
clustering problem is linked to finding the largest maximal cliques in a graph.
We show that the method can be used with both clustering and classification
algorithms. The proposed method is tested on both simulated and real datasets.
The results show that the obtained clusters indeed meet the guarantees on
robustness.
| no_new_dataset | 0.953319 |
1612.09368 | Dongxiao Yu | Na Wang, Dongxiao Yu, Hai Jin, Chen Qian, Xia Xie, Qiang-Sheng Hua | Parallel Algorithms for Core Maintenance in Dynamic Graphs | 11 pages,9 figures,1 table | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper initiates the studies of parallel algorithms for core maintenance
in dynamic graphs. The core number is a fundamental index reflecting the
cohesiveness of a graph, which is widely used in large-scale graph analytics.
The core maintenance problem requires updating the core numbers of vertices
after a set of edges and vertices are inserted into or deleted from the graph.
We investigate the parallelism in the core update process when multiple edges
and vertices are inserted or deleted. Specifically, we discover a structure
called superior edge set, the insertion or deletion of edges in which can be
processed in parallel. Based on the structure of superior edge set, efficient
parallel algorithms are then devised for incremental and decremental core
maintenance respectively. To the best of our knowledge, the proposed algorithms
are the first parallel ones for the fundamental core maintenance problem. The
algorithms show a significant speedup in the processing time compared with
previous results that sequentially handle edge and vertex insertions/deletions.
Finally, extensive experiments are conducted on different types of real-world
and synthetic datasets, and the results illustrate the efficiency, stability
and scalability of the proposed algorithms.
| [
{
"version": "v1",
"created": "Fri, 30 Dec 2016 02:01:33 GMT"
}
] | 2017-01-02T00:00:00 | [
[
"Wang",
"Na",
""
],
[
"Yu",
"Dongxiao",
""
],
[
"Jin",
"Hai",
""
],
[
"Qian",
"Chen",
""
],
[
"Xie",
"Xia",
""
],
[
"Hua",
"Qiang-Sheng",
""
]
] | TITLE: Parallel Algorithms for Core Maintenance in Dynamic Graphs
ABSTRACT: This paper initiates the study of parallel algorithms for core maintenance
in dynamic graphs. The core number is a fundamental index reflecting the
cohesiveness of a graph, which is widely used in large-scale graph analytics.
The core maintenance problem requires updating the core numbers of vertices
after a set of edges and vertices are inserted into or deleted from the graph.
We investigate the parallelism in the core update process when multiple edges
and vertices are inserted or deleted. Specifically, we discover a structure
called the superior edge set, within which the insertion or deletion of edges can be
processed in parallel. Based on the structure of the superior edge set, efficient
parallel algorithms are then devised for incremental and decremental core
maintenance respectively. To the best of our knowledge, the proposed algorithms
are the first parallel ones for the fundamental core maintenance problem. The
algorithms show a significant speedup in the processing time compared with
previous results that sequentially handle edge and vertex insertions/deletions.
Finally, extensive experiments are conducted on different types of real-world
and synthetic datasets, and the results illustrate the efficiency, stability
and scalability of the proposed algorithms.
| no_new_dataset | 0.948155 |
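For context, the quantity being maintained above is the core number from the static k-core decomposition. The sketch below computes core numbers from scratch with the classical peeling procedure; it is not the paper's parallel maintenance algorithm, and the simple min-search makes it quadratic rather than linear.

```python
# Sketch: static core-number computation by peeling minimum-degree vertices.
from collections import defaultdict

def core_numbers(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core, level = {}, 0
    while remaining:
        v = min(remaining, key=deg.get)   # vertex of minimum residual degree
        level = max(level, deg[v])        # the peeling level never decreases
        core[v] = level
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

if __name__ == "__main__":
    # triangle with a pendant: vertex 4 gets core 1, the triangle vertices get core 2
    print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
```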
1612.09401 | Pichao Wang | Pichao Wang and Wanqing Li and Chuankun Li and Yonghong Hou | Action Recognition Based on Joint Trajectory Maps with Convolutional
Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (ConvNets) have recently shown promising
performance in many computer vision tasks, especially image-based recognition.
How to effectively apply ConvNets to sequence-based data is still an open
problem. This paper proposes an effective yet simple method to represent
spatio-temporal information carried in $3D$ skeleton sequences into three $2D$
images by encoding the joint trajectories and their dynamics into color
distribution in the images, referred to as Joint Trajectory Maps (JTM), and
adopts ConvNets to learn the discriminative features for human action
recognition. Such an image-based representation enables us to fine-tune
existing ConvNets models for the classification of skeleton sequences without
training the networks afresh. The three JTMs are generated in three orthogonal
planes and provide complementary information to each other. The final
recognition is further improved through multiplicative score fusion of the three
JTMs. The proposed method was evaluated on four public benchmark datasets, the
large NTU RGB+D Dataset, MSRC-12 Kinect Gesture Dataset (MSRC-12), G3D Dataset
and UTD Multimodal Human Action Dataset (UTD-MHAD) and achieved the
state-of-the-art results.
| [
{
"version": "v1",
"created": "Fri, 30 Dec 2016 06:32:38 GMT"
}
] | 2017-01-02T00:00:00 | [
[
"Wang",
"Pichao",
""
],
[
"Li",
"Wanqing",
""
],
[
"Li",
"Chuankun",
""
],
[
"Hou",
"Yonghong",
""
]
] | TITLE: Action Recognition Based on Joint Trajectory Maps with Convolutional
Neural Networks
ABSTRACT: Convolutional Neural Networks (ConvNets) have recently shown promising
performance in many computer vision tasks, especially image-based recognition.
How to effectively apply ConvNets to sequence-based data is still an open
problem. This paper proposes an effective yet simple method to represent
spatio-temporal information carried in $3D$ skeleton sequences into three $2D$
images by encoding the joint trajectories and their dynamics into color
distribution in the images, referred to as Joint Trajectory Maps (JTM), and
adopts ConvNets to learn the discriminative features for human action
recognition. Such an image-based representation enables us to fine-tune
existing ConvNets models for the classification of skeleton sequences without
training the networks afresh. The three JTMs are generated in three orthogonal
planes and provide complementary information to each other. The final
recognition is further improved through multiplicative score fusion of the three
JTMs. The proposed method was evaluated on four public benchmark datasets, the
large NTU RGB+D Dataset, MSRC-12 Kinect Gesture Dataset (MSRC-12), G3D Dataset
and UTD Multimodal Human Action Dataset (UTD-MHAD) and achieved the
state-of-the-art results.
| no_new_dataset | 0.948442 |
1612.06083 | Yannis Papanikolaou | Yannis Papanikolaou, Ioannis Katakis, Grigorios Tsoumakas | Hierarchical Partitioning of the Output Space in Multi-label Data | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchy Of Multi-label classifiers (HOMER) is a multi-label learning
algorithm that breaks the initial learning task into several easier sub-tasks by
first constructing a hierarchy of labels from a given label set and secondly
employing a given base multi-label classifier (MLC) to the resulting
sub-problems. The primary goal is to effectively address class imbalance and
scalability issues that often arise in real-world multi-label classification
problems. In this work, we present the general setup for a HOMER model and a
simple extension of the algorithm that is suited for MLCs that output rankings.
Furthermore, we provide a detailed analysis of the properties of the algorithm,
in terms of both effectiveness and computational complexity. A secondary
contribution involves the presentation of a balanced variant of the k-means
algorithm, which serves in the first step of the label hierarchy construction.
We conduct extensive experiments on six real-world datasets, studying
empirically HOMER's parameters and providing examples of instantiations of the
algorithm with different clustering approaches and MLCs. The empirical results
demonstrate a significant improvement over the given base MLC.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 09:08:59 GMT"
}
] | 2016-12-31T00:00:00 | [
[
"Papanikolaou",
"Yannis",
""
],
[
"Katakis",
"Ioannis",
""
],
[
"Tsoumakas",
"Grigorios",
""
]
] | TITLE: Hierarchical Partitioning of the Output Space in Multi-label Data
ABSTRACT: Hierarchy Of Multi-label classifiers (HOMER) is a multi-label learning
algorithm that breaks the initial learning task into several easier sub-tasks by
first constructing a hierarchy of labels from a given label set and secondly
employing a given base multi-label classifier (MLC) to the resulting
sub-problems. The primary goal is to effectively address class imbalance and
scalability issues that often arise in real-world multi-label classification
problems. In this work, we present the general setup for a HOMER model and a
simple extension of the algorithm that is suited for MLCs that output rankings.
Furthermore, we provide a detailed analysis of the properties of the algorithm,
in terms of both effectiveness and computational complexity. A secondary
contribution involves the presentation of a balanced variant of the k-means
algorithm, which serves in the first step of the label hierarchy construction.
We conduct extensive experiments on six real-world datasets, studying
empirically HOMER's parameters and providing examples of instantiations of the
algorithm with different clustering approaches and MLCs. The empirical results
demonstrate a significant improvement over the given base MLC.
| no_new_dataset | 0.947235 |
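As a rough illustration of the first step described above, the sketch below builds a label hierarchy by recursively clustering each label's sample-indicator vector. It uses ordinary k-means rather than the balanced variant proposed in the paper, and all function names, parameters, and stopping rules are illustrative assumptions.

```python
# Sketch: HOMER-style label hierarchy via recursive clustering of label vectors.
import numpy as np
from sklearn.cluster import KMeans

def build_label_hierarchy(Y, labels=None, k=3, max_leaf=4):
    # Y: (n_samples, n_labels) binary indicator matrix
    if labels is None:
        labels = list(range(Y.shape[1]))
    if len(labels) <= max_leaf:
        return labels                              # leaf node: a small label group
    vecs = Y[:, labels].T                          # describe each label by the samples it tags
    assign = KMeans(n_clusters=min(k, len(labels)), n_init=10,
                    random_state=0).fit_predict(vecs)
    groups = [[lab for lab, a in zip(labels, assign) if a == c]
              for c in range(assign.max() + 1)]
    groups = [g for g in groups if g]
    if len(groups) <= 1:                           # degenerate split, stop recursing
        return labels
    return [build_label_hierarchy(Y, g, k, max_leaf) for g in groups]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = (rng.random((300, 12)) < 0.2).astype(int)
    print(build_label_hierarchy(Y))
```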
1411.3406 | Thomas Goldstein | Tom Goldstein, Christoph Studer, Richard Baraniuk | A Field Guide to Forward-Backward Splitting with a FASTA Implementation | null | null | null | null | cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-differentiable and constrained optimization play a key role in machine
learning, signal and image processing, communications, and beyond. For
high-dimensional minimization problems involving large datasets or many
unknowns, the forward-backward splitting method provides a simple, practical
solver. Despite its apparent simplicity, the performance of the
forward-backward splitting is highly sensitive to implementation details.
This article is an introductory review of forward-backward splitting with a
special emphasis on practical implementation concerns. Issues like stepsize
selection, acceleration, stopping conditions, and initialization are
considered. Numerical experiments are used to compare the effectiveness of
different approaches.
Many variations of forward-backward splitting are implemented in the solver
FASTA (short for Fast Adaptive Shrinkage/Thresholding Algorithm). FASTA
provides a simple interface for applying forward-backward splitting to a broad
range of problems.
| [
{
"version": "v1",
"created": "Thu, 13 Nov 2014 00:38:52 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Nov 2014 22:34:37 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Jan 2015 02:56:53 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jan 2016 23:52:27 GMT"
},
{
"version": "v5",
"created": "Mon, 15 Feb 2016 23:24:09 GMT"
},
{
"version": "v6",
"created": "Wed, 28 Dec 2016 03:25:36 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Goldstein",
"Tom",
""
],
[
"Studer",
"Christoph",
""
],
[
"Baraniuk",
"Richard",
""
]
] | TITLE: A Field Guide to Forward-Backward Splitting with a FASTA Implementation
ABSTRACT: Non-differentiable and constrained optimization play a key role in machine
learning, signal and image processing, communications, and beyond. For
high-dimensional minimization problems involving large datasets or many
unknowns, the forward-backward splitting method provides a simple, practical
solver. Despite its apparent simplicity, the performance of the
forward-backward splitting is highly sensitive to implementation details.
This article is an introductory review of forward-backward splitting with a
special emphasis on practical implementation concerns. Issues like stepsize
selection, acceleration, stopping conditions, and initialization are
considered. Numerical experiments are used to compare the effectiveness of
different approaches.
Many variations of forward-backward splitting are implemented in the solver
FASTA (short for Fast Adaptive Shrinkage/Thresholding Algorithm). FASTA
provides a simple interface for applying forward-backward splitting to a broad
range of problems.
| no_new_dataset | 0.943452 |
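A bare-bones instance of the method the article reviews, applied to the LASSO: the forward step is a gradient step on the smooth least-squares term and the backward step is the l1 proximal operator (soft thresholding). Stepsize choice, acceleration, and stopping rules, the practical concerns the article focuses on, are fixed naively here; this is a hedged sketch, not FASTA itself.

```python
# Sketch: plain forward-backward splitting for min_x 0.5*||Ax - b||^2 + mu*||x||_1.
import numpy as np

def shrink(x, t):
    # proximal operator of t*||.||_1 (soft thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs_lasso(A, b, mu, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    tau = 1.0 / L                             # a safe constant stepsize
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # forward (explicit gradient) step
        x = shrink(x - tau * grad, tau * mu)  # backward (proximal) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = fbs_lasso(A, b, mu=0.5)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```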
1512.02325 | Wei Liu | Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott
Reed, Cheng-Yang Fu, Alexander C. Berg | SSD: Single Shot MultiBox Detector | ECCV 2016 | null | 10.1007/978-3-319-46448-0_2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state-of-the-art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd .
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 04:46:38 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2016 21:17:34 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2016 18:31:25 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Nov 2016 09:54:02 GMT"
},
{
"version": "v5",
"created": "Thu, 29 Dec 2016 19:05:11 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Liu",
"Wei",
""
],
[
"Anguelov",
"Dragomir",
""
],
[
"Erhan",
"Dumitru",
""
],
[
"Szegedy",
"Christian",
""
],
[
"Reed",
"Scott",
""
],
[
"Fu",
"Cheng-Yang",
""
],
[
"Berg",
"Alexander C.",
""
]
] | TITLE: SSD: Single Shot MultiBox Detector
ABSTRACT: We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stage
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For $300\times
300$ input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan
X and for $500\times 500$ input, SSD achieves 75.1% mAP, outperforming a
comparable state-of-the-art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd .
| no_new_dataset | 0.949201 |
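A small sketch of the default-box construction described in the abstract: box centers tile one feature map and several aspect ratios are instantiated per location. The scale and aspect-ratio values below are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch: SSD-style default (anchor) boxes for a single square feature map.
import numpy as np

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx = (j + 0.5) / fmap_size          # centers tile the feature map
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                w = scale * np.sqrt(ar)         # width grows with the aspect ratio
                h = scale / np.sqrt(ar)         # height shrinks accordingly
                boxes.append([cx, cy, w, h])    # (cx, cy, w, h), normalized to [0, 1]
    return np.array(boxes)

if __name__ == "__main__":
    boxes = default_boxes(fmap_size=8, scale=0.2)
    print(boxes.shape)   # (8*8*3, 4) default boxes for this feature map
```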
1601.00025 | Mohamed Elhoseiny Mohamed Elhoseiny | Mohamed Elhoseiny, Ahmed Elgammal, Babak Saleh | Write a Classifier: Predicting Visual Classifiers from Unstructured Text | (TPAMI) Transactions on Pattern Analysis and Machine Intelligence
2017 | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People typically learn through exposure to visual concepts associated with
linguistic descriptions. For instance, teaching visual object categories to
children is often accompanied by descriptions in text or speech. In a machine
learning context, these observations motivates us to ask whether this learning
process could be computationally modeled to learn visual classifiers. More
specifically, the main question of this work is how to utilize purely textual
description of visual classes with no training images, to learn explicit visual
classifiers for them. We propose and investigate two baseline formulations,
based on regression and domain transfer, that predict a linear classifier.
Then, we propose a new constrained optimization formulation that combines a
regression function and a knowledge transfer function with additional
constraints to predict the parameters of a linear classifier. We also propose
generic kernelized models where a kernel classifier is predicted in the form
defined by the representer theorem. The kernelized models allow defining and
utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in
the visual space and text space, respectively. We finally propose a kernel
function between unstructured text descriptions that builds on distributional
semantics, which shows an advantage in our setting and could be useful for
other applications. We applied all the studied models to predict visual
classifiers on two fine-grained and challenging categorization datasets (CU
Birds and Flower Datasets), and the results indicate successful predictions of
our final model over several baselines that we designed.
| [
{
"version": "v1",
"created": "Thu, 31 Dec 2015 22:23:34 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2016 02:13:59 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Elgammal",
"Ahmed",
""
],
[
"Saleh",
"Babak",
""
]
] | TITLE: Write a Classifier: Predicting Visual Classifiers from Unstructured Text
ABSTRACT: People typically learn through exposure to visual concepts associated with
linguistic descriptions. For instance, teaching visual object categories to
children is often accompanied by descriptions in text or speech. In a machine
learning context, these observations motivates us to ask whether this learning
process could be computationally modeled to learn visual classifiers. More
specifically, the main question of this work is how to utilize purely textual
description of visual classes with no training images, to learn explicit visual
classifiers for them. We propose and investigate two baseline formulations,
based on regression and domain transfer, that predict a linear classifier.
Then, we propose a new constrained optimization formulation that combines a
regression function and a knowledge transfer function with additional
constraints to predict the parameters of a linear classifier. We also propose
generic kernelized models where a kernel classifier is predicted in the form
defined by the representer theorem. The kernelized models allow defining and
utilizing any two RKHS (Reproducing Kernel Hilbert Space) kernel functions in
the visual space and text space, respectively. We finally propose a kernel
function between unstructured text descriptions that builds on distributional
semantics, which shows an advantage in our setting and could be useful for
other applications. We applied all the studied models to predict visual
classifiers on two fine-grained and challenging categorization datasets (CU
Birds and Flower Datasets), and the results indicate successful predictions of
our final model over several baselines that we designed.
| no_new_dataset | 0.949012 |
1604.00758 | Richard Darst | Richard K. Darst, Clara Granell, Alex Arenas, Sergio G\'omez, Jari
Saram\"aki and Santo Fortunato | Detection of timescales in evolving complex systems | 17 pages, 7 figures | Scientific Reports 6 (2016) 39713 | 10.1038/srep39713 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most complex systems are intrinsically dynamic in nature. The evolution of a
dynamic complex system is typically represented as a sequence of snapshots,
where each snapshot describes the configuration of the system at a particular
instant of time. Then, one may directly follow how the snapshots evolve in
time, or aggregate the snapshots within some time intervals to form
representative "slices" of the evolution of the system configuration. This is
often done with constant intervals, whose duration is based on arguments on the
nature of the system and of its dynamics. A more refined approach would be to
consider the rate of activity in the system to perform a separation of
timescales. However, an even better alternative would be to define dynamic
intervals that match the evolution of the system's configuration. To this end,
we propose a method that aims at detecting evolutionary changes in the
configuration of a complex system, and generates intervals accordingly. We show
that evolutionary timescales can be identified by looking for peaks in the
similarity between the sets of events on consecutive time intervals of data.
Tests on simple toy models reveal that the technique is able to detect
evolutionary timescales of time-varying data both when the evolution is smooth
as well as when it changes sharply. This is further corroborated by analyses of
several real datasets. Our method is scalable to extremely large datasets and
is computationally efficient. This allows a quick, parameter-free detection of
multiple timescales in the evolution of a complex system.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 07:06:54 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Darst",
"Richard K.",
""
],
[
"Granell",
"Clara",
""
],
[
"Arenas",
"Alex",
""
],
[
"Gómez",
"Sergio",
""
],
[
"Saramäki",
"Jari",
""
],
[
"Fortunato",
"Santo",
""
]
] | TITLE: Detection of timescales in evolving complex systems
ABSTRACT: Most complex systems are intrinsically dynamic in nature. The evolution of a
dynamic complex system is typically represented as a sequence of snapshots,
where each snapshot describes the configuration of the system at a particular
instant of time. Then, one may directly follow how the snapshots evolve in
time, or aggregate the snapshots within some time intervals to form
representative "slices" of the evolution of the system configuration. This is
often done with constant intervals, whose duration is based on arguments on the
nature of the system and of its dynamics. A more refined approach would be to
consider the rate of activity in the system to perform a separation of
timescales. However, an even better alternative would be to define dynamic
intervals that match the evolution of the system's configuration. To this end,
we propose a method that aims at detecting evolutionary changes in the
configuration of a complex system, and generates intervals accordingly. We show
that evolutionary timescales can be identified by looking for peaks in the
similarity between the sets of events on consecutive time intervals of data.
Tests on simple toy models reveal that the technique is able to detect
evolutionary timescales of time-varying data both when the evolution is smooth
as well as when it changes sharply. This is further corroborated by analyses of
several real datasets. Our method is scalable to extremely large datasets and
is computationally efficient. This allows a quick, parameter-free detection of
multiple timescales in the evolution of a complex system.
| no_new_dataset | 0.941815 |
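The key signal described in the abstract, similarity between the sets of events in consecutive time intervals with peaks marking evolutionary timescales, can be sketched as follows. The Jaccard measure and the naive local-peak rule are assumptions; the paper's exact similarity and peak-detection choices may differ.

```python
# Sketch: similarity profile over consecutive event intervals and its local peaks.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def similarity_profile(event_intervals):
    # event_intervals: list of collections of event ids, one per consecutive interval
    return [jaccard(event_intervals[t], event_intervals[t + 1])
            for t in range(len(event_intervals) - 1)]

def local_peaks(profile):
    return [t for t in range(1, len(profile) - 1)
            if profile[t] >= profile[t - 1] and profile[t] >= profile[t + 1]]

if __name__ == "__main__":
    intervals = [{1, 2, 3}, {1, 2, 3, 4}, {1, 2, 4}, {7, 8}, {7, 8, 9}]
    prof = similarity_profile(intervals)
    print(prof, local_peaks(prof))
```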
1612.00338 | Zohre Kohan | Zohreh Kohan, Hamidreza Farhidzadeh, Reza Azmi, Behrouz Gholizadeh | Hippocampus Temporal Lobe Epilepsy Detection using a Combination of
Shape-based Features and Spherical Harmonics Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the temporal lobe epilepsy detection approaches are based on
hippocampus deformation and use complicated features, so detection requires
complicated feature extraction and pre-processing tasks. In this
paper, a new detection method based on shape-based features and spherical
harmonics is proposed that can analyze hippocampus shape anomalies and
detect asymmetry. The method consists of two main parts: (1) shape feature
extraction, and (2) image classification. For evaluation, the HFH database,
which is publicly available in this field, is used. Nine different geometric
features and 256 spherical harmonic features are introduced; eighteen of them,
which detect hippocampal asymmetry significantly on a randomly selected subset
of the dataset, are then selected. A support vector machine (SVM) classifier was
then employed to classify the remaining images of the dataset into normal and
epileptic images using the selected features. On a dataset of 25 images, 12
images were used for feature extraction and the remaining 13 for classification.
The results show that the proposed method has accuracy, specificity and
sensitivity of, respectively, 84%, 100%, and 80%. Therefore, the proposed
approach shows acceptable results and is straightforward; moreover, complicated
pre-processing steps are omitted compared to other methods.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 16:27:59 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2016 00:18:26 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Kohan",
"Zohreh",
""
],
[
"Farhidzadeh",
"Hamidreza",
""
],
[
"Azmi",
"Reza",
""
],
[
"Gholizadeh",
"Behrouz",
""
]
] | TITLE: Hippocampus Temporal Lobe Epilepsy Detection using a Combination of
Shape-based Features and Spherical Harmonics Representation
ABSTRACT: Most of the temporal lobe epilepsy detection approaches are based on
hippocampus deformation and use complicated features, so detection requires
complicated feature extraction and pre-processing tasks. In this
paper, a new detection method based on shape-based features and spherical
harmonics is proposed that can analyze hippocampus shape anomalies and
detect asymmetry. The method consists of two main parts: (1) shape feature
extraction, and (2) image classification. For evaluation, the HFH database,
which is publicly available in this field, is used. Nine different geometric
features and 256 spherical harmonic features are introduced; eighteen of them,
which detect hippocampal asymmetry significantly on a randomly selected subset
of the dataset, are then selected. A support vector machine (SVM) classifier was
then employed to classify the remaining images of the dataset into normal and
epileptic images using the selected features. On a dataset of 25 images, 12
images were used for feature extraction and the remaining 13 for classification.
The results show that the proposed method has accuracy, specificity and
sensitivity of, respectively, 84%, 100%, and 80%. Therefore, the proposed
approach shows acceptable results and is straightforward; moreover, complicated
pre-processing steps are omitted compared to other methods.
| no_new_dataset | 0.950319 |
1612.05310 | Luis Gerardo Mojica de la Vega | Luis Gerardo Mojica | Modeling Trolling in Social Media Conversations | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media websites, electronic newspapers and Internet forums allow
visitors to leave comments for others to read and interact. This exchange is
not free from participants with malicious intentions, who troll others by
posting messages that are intended to be provocative, offensive, or menacing.
With the goal of facilitating the computational modeling of trolling, we
propose a trolling categorization that is novel in the sense that it allows
comment-based analysis from both the trolls' and the responders' perspectives,
characterizing these two perspectives using four aspects, namely, the troll's
intention and his intention disclosure, as well as the responder's
interpretation of the troll's intention and her response strategy. Using this
categorization, we annotate and release a dataset containing excerpts of Reddit
conversations involving suspected trolls and their interactions with other
users. Finally, we identify the difficult-to-classify cases in our corpus and
suggest potential solutions for them.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 23:41:13 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2016 16:36:17 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Mojica",
"Luis Gerardo",
""
]
] | TITLE: Modeling Trolling in Social Media Conversations
ABSTRACT: Social media websites, electronic newspapers and Internet forums allow
visitors to leave comments for others to read and interact. This exchange is
not free from participants with malicious intentions, who troll others by
posting messages that are intended to be provocative, offensive, or menacing.
With the goal of facilitating the computational modeling of trolling, we
propose a trolling categorization that is novel in the sense that it allows
comment-based analysis from both the trolls' and the responders' perspectives,
characterizing these two perspectives using four aspects, namely, the troll's
intention and his intention disclosure, as well as the responder's
interpretation of the troll's intention and her response strategy. Using this
categorization, we annotate and release a dataset containing excerpts of Reddit
conversations involving suspected trolls and their interactions with other
users. Finally, we identify the difficult-to-classify cases in our corpus and
suggest potential solutions for them.
| new_dataset | 0.95594 |
1612.07976 | Kuniaki Saito Saito Kuniaki | Kuniaki Saito, Yusuke Mukuta, Yoshitaka Ushiku, Tatsuya Harada | DeMIAN: Deep Modality Invariant Adversarial Network | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obtaining common representations from different modalities is important in
that they are interchangeable with each other in a classification problem. For
example, we can train a classifier on image features in the common
representations and apply it to the testing of the text features in the
representations. Existing multi-modal representation learning methods mainly
aim to extract rich information from paired samples and train a classifier by
the corresponding labels; however, collecting paired samples and their labels
simultaneously involves high labor costs. Addressing paired modal samples
without their labels and single modal data with their labels independently is
much easier than addressing labeled multi-modal data. To obtain the common
representations under such a situation, we propose to make the distributions
over different modalities similar in the learned representations, namely
modality-invariant representations. In particular, we propose a novel algorithm
for modality-invariant representation learning, named Deep Modality Invariant
Adversarial Network (DeMIAN), which utilizes the idea of Domain Adaptation
(DA). Using the modality-invariant representations learned by DeMIAN, we
achieved better classification accuracy than with the state-of-the-art methods,
especially for some benchmark datasets of zero-shot learning.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2016 14:07:01 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2016 02:29:15 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Saito",
"Kuniaki",
""
],
[
"Mukuta",
"Yusuke",
""
],
[
"Ushiku",
"Yoshitaka",
""
],
[
"Harada",
"Tatsuya",
""
]
] | TITLE: DeMIAN: Deep Modality Invariant Adversarial Network
ABSTRACT: Obtaining common representations from different modalities is important in
that they are interchangeable with each other in a classification problem. For
example, we can train a classifier on image features in the common
representations and apply it to the testing of the text features in the
representations. Existing multi-modal representation learning methods mainly
aim to extract rich information from paired samples and train a classifier by
the corresponding labels; however, collecting paired samples and their labels
simultaneously involves high labor costs. Addressing paired modal samples
without their labels and single modal data with their labels independently is
much easier than addressing labeled multi-modal data. To obtain the common
representations under such a situation, we propose to make the distributions
over different modalities similar in the learned representations, namely
modality-invariant representations. In particular, we propose a novel algorithm
for modality-invariant representation learning, named Deep Modality Invariant
Adversarial Network (DeMIAN), which utilizes the idea of Domain Adaptation
(DA). Using the modality-invariant representations learned by DeMIAN, we
achieved better classification accuracy than with the state-of-the-art methods,
especially for some benchmark datasets of zero-shot learning.
| no_new_dataset | 0.945349 |
1612.09007 | Huan Song | Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan
Natesan Ramamurthy, Andreas Spanias | A Deep Learning Approach To Multiple Kernel Fusion | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel fusion is a popular and effective approach for combining multiple
features that characterize different aspects of data. Traditional approaches
for Multiple Kernel Learning (MKL) attempt to learn the parameters for
combining the kernels through sophisticated optimization procedures. In this
paper, we propose an alternative approach that creates dense embeddings for
data using the kernel similarities and adopts a deep neural network
architecture for fusing the embeddings. In order to improve the effectiveness
of this network, we introduce the kernel dropout regularization strategy
coupled with the use of an expanded set of composition kernels. Experimental
results on a real-world activity recognition dataset show that the proposed
architecture is effective in fusing kernels and achieves state-of-the-art
performance.
| [
{
"version": "v1",
"created": "Wed, 28 Dec 2016 23:43:27 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Song",
"Huan",
""
],
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Sattigeri",
"Prasanna",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Spanias",
"Andreas",
""
]
] | TITLE: A Deep Learning Approach To Multiple Kernel Fusion
ABSTRACT: Kernel fusion is a popular and effective approach for combining multiple
features that characterize different aspects of data. Traditional approaches
for Multiple Kernel Learning (MKL) attempt to learn the parameters for
combining the kernels through sophisticated optimization procedures. In this
paper, we propose an alternative approach that creates dense embeddings for
data using the kernel similarities and adopts a deep neural network
architecture for fusing the embeddings. In order to improve the effectiveness
of this network, we introduce the kernel dropout regularization strategy
coupled with the use of an expanded set of composition kernels. Experimental
results on a real-world activity recognition dataset show that the proposed
architecture is effective in fusing kernels and achieves state-of-the-art
performance.
| no_new_dataset | 0.950732 |
1612.09155 | Xiaoyang Chen | Xiaoyang Chen, Hongwei Huo, Jun Huan and Jeffrey Scott Vitter | MSQ-Index: A Succinct Index for Fast Graph Similarity Search | prepare to submit | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph similarity search has received considerable attention in many
applications, such as bioinformatics, data mining, pattern recognition, and
social networks. Existing methods for this problem have limited scalability
because of the huge amount of memory they consume when handling very large
graph databases with millions or billions of graphs.
In this paper, we study the problem of graph similarity search under the
graph edit distance constraint. We present a space-efficient index structure
based upon the q-gram tree that incorporates succinct data structures and
hybrid encoding to achieve improved query time performance with minimal space
usage. Specifically, the space usage of our index requires only 5%-15% of the
previous state-of-the-art indexing size on the tested data while at the same
time achieving 2-3 times acceleration in query time with small data sets. We
also boost the query performance by augmenting the global filter with range
search, which allows us to perform a query in a reduced region. In addition, we
propose two effective filters that combine degree structures and label
structures. Extensive experiments demonstrate that our proposed approach is
superior in space and competitive in filtering to the state-of-the-art
approaches. To the best of our knowledge, our index is the first in-memory
index for this problem that successfully scales to cope with the large dataset
of 25 million chemical structure graphs from the PubChem dataset.
| [
{
"version": "v1",
"created": "Thu, 29 Dec 2016 14:23:46 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Chen",
"Xiaoyang",
""
],
[
"Huo",
"Hongwei",
""
],
[
"Huan",
"Jun",
""
],
[
"Vitter",
"Jeffrey Scott",
""
]
] | TITLE: MSQ-Index: A Succinct Index for Fast Graph Similarity Search
ABSTRACT: Graph similarity search has received considerable attention in many
applications, such as bioinformatics, data mining, pattern recognition, and
social networks. Existing methods for this problem have limited scalability
because of the huge amount of memory they consume when handling very large
graph databases with millions or billions of graphs.
In this paper, we study the problem of graph similarity search under the
graph edit distance constraint. We present a space-efficient index structure
based upon the q-gram tree that incorporates succinct data structures and
hybrid encoding to achieve improved query time performance with minimal space
usage. Specifically, the space usage of our index requires only 5%-15% of the
previous state-of-the-art indexing size on the tested data while at the same
time achieving 2-3 times acceleration in query time with small data sets. We
also boost the query performance by augmenting the global filter with range
search, which allows us to perform a query in a reduced region. In addition, we
propose two effective filters that combine degree structures and label
structures. Extensive experiments demonstrate that our proposed approach is
superior in space and competitive in filtering to the state-of-the-art
approaches. To the best of our knowledge, our index is the first in-memory
index for this problem that successfully scales to cope with the large dataset
of 25 million chemical structure graphs from the PubChem dataset.
| no_new_dataset | 0.942082 |
1612.09283 | Ping Li | Ping Li | Generalized Intersection Kernel | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the very recent line of work on the ``generalized min-max'' (GMM)
kernel, this study proposes the ``generalized intersection'' (GInt) kernel and
the related ``normalized generalized min-max'' (NGMM) kernel. In computer
vision, the (histogram) intersection kernel has been popular, and the GInt
kernel generalizes it to data which can have both negative and positive
entries. Through an extensive empirical classification study on 40 datasets
from the UCI repository, we are able to show that this (tuning-free) GInt
kernel performs fairly well.
The empirical results also demonstrate that the NGMM kernel typically
outperforms the GInt kernel. Interestingly, the NGMM kernel has another
interpretation --- it is the ``asymmetrically transformed'' version of the GInt
kernel, based on the idea of ``asymmetric hashing''. Just like the GMM kernel,
the NGMM kernel can be efficiently linearized through (e.g.,) generalized
consistent weighted sampling (GCWS), as empirically validated in our study.
Owing to the discrete nature of hashed values, it also provides a scheme for
approximate near neighbor search.
| [
{
"version": "v1",
"created": "Thu, 29 Dec 2016 20:40:52 GMT"
}
] | 2016-12-30T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: Generalized Intersection Kernel
ABSTRACT: Following the very recent line of work on the ``generalized min-max'' (GMM)
kernel, this study proposes the ``generalized intersection'' (GInt) kernel and
the related ``normalized generalized min-max'' (NGMM) kernel. In computer
vision, the (histogram) intersection kernel has been popular, and the GInt
kernel generalizes it to data which can have both negative and positive
entries. Through an extensive empirical classification study on 40 datasets
from the UCI repository, we are able to show that this (tuning-free) GInt
kernel performs fairly well.
The empirical results also demonstrate that the NGMM kernel typically
outperforms the GInt kernel. Interestingly, the NGMM kernel has another
interpretation --- it is the ``asymmetrically transformed'' version of the GInt
kernel, based on the idea of ``asymmetric hashing''. Just like the GMM kernel,
the NGMM kernel can be efficiently linearized through (e.g.,) generalized
consistent weighted sampling (GCWS), as empirically validated in our study.
Owing to the discrete nature of hashed values, it also provides a scheme for
approximate near neighbor search.
| no_new_dataset | 0.940517 |
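A hedged reading of the kernels discussed above, for vectors with mixed signs: each coordinate is split into its positive and negative parts, the generalized intersection is the sum of coordinate-wise minima on the transformed vectors, and the normalized min-max value divides by the sum of maxima. The exact definitions in the paper may differ in detail, so treat this as an illustrative sketch.

```python
# Sketch: two-way transform plus GInt (sum of minima) and NGMM (min/max ratio).
import numpy as np

def two_way(x):
    # split each coordinate into its positive and negative parts
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])

def gint(u, v):
    tu, tv = two_way(u), two_way(v)
    return np.minimum(tu, tv).sum()

def ngmm(u, v):
    tu, tv = two_way(u), two_way(v)
    denom = np.maximum(tu, tv).sum()
    return np.minimum(tu, tv).sum() / denom if denom > 0 else 0.0

if __name__ == "__main__":
    u = np.array([1.0, -2.0, 0.5])
    v = np.array([0.5, -1.0, -0.5])
    print(gint(u, v), ngmm(u, v))
```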
1602.04058 | Maroussia Favre | M. Favre and A. Wittwer and H.R. Heinimann and V.I. Yukalov and D.
Sornette | Quantum decision theory in simple risky choices | null | PLoS ONE 2016 11(12): e0168045 | 10.1371/journal.pone.0168045 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum decision theory (QDT) is a recently developed theory of decision
making based on the mathematics of Hilbert spaces, a framework known in physics
for its application to quantum mechanics. This framework formalizes the concept
of uncertainty and other effects that are particularly manifest in cognitive
processes, which makes it well suited for the study of decision making. QDT
describes a decision maker's choice as a stochastic event occurring with a
probability that is the sum of an objective utility factor and a subjective
attraction factor. QDT offers a prediction for the average effect of
subjectivity on decision makers, the quarter law. We examine individual and
aggregated (group) data, and find that the results are in good agreement with
the quarter law at the level of groups. At the individual level, it appears
that the quarter law could be refined in order to reflect individual
characteristics. This article revisits the formalism of QDT along a concrete
example and offers a practical guide to researchers who are interested in
applying QDT to a dataset of binary lotteries in the domain of gains.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 13:57:13 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Dec 2016 09:05:14 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Favre",
"M.",
""
],
[
"Wittwer",
"A.",
""
],
[
"Heinimann",
"H. R.",
""
],
[
"Yukalov",
"V. I.",
""
],
[
"Sornette",
"D.",
""
]
] | TITLE: Quantum decision theory in simple risky choices
ABSTRACT: Quantum decision theory (QDT) is a recently developed theory of decision
making based on the mathematics of Hilbert spaces, a framework known in physics
for its application to quantum mechanics. This framework formalizes the concept
of uncertainty and other effects that are particularly manifest in cognitive
processes, which makes it well suited for the study of decision making. QDT
describes a decision maker's choice as a stochastic event occurring with a
probability that is the sum of an objective utility factor and a subjective
attraction factor. QDT offers a prediction for the average effect of
subjectivity on decision makers, the quarter law. We examine individual and
aggregated (group) data, and find that the results are in good agreement with
the quarter law at the level of groups. At the individual level, it appears
that the quarter law could be refined in order to reflect individual
characteristics. This article revisits the formalism of QDT along a concrete
example and offers a practical guide to researchers who are interested in
applying QDT to a dataset of binary lotteries in the domain of gains.
| no_new_dataset | 0.944022 |
1603.07120 | Amir Shahroudy | Amir Shahroudy, Tian-Tsong Ng, Yihong Gong, Gang Wang | Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single modality action recognition on RGB or depth sequences has been
extensively explored recently. It is generally accepted that each of these two
modalities has different strengths and limitations for the task of action
recognition. Therefore, analysis of the RGB+D videos can help us to better
study the complementary properties of these two types of modalities and achieve
higher levels of performance. In this paper, we propose a new deep autoencoder
based shared-specific feature factorization network to separate input
multimodal signals into a hierarchy of components. Further, based on the
structure of the features, a structured sparsity learning machine is proposed
which utilizes mixed norms to apply regularization within components and group
selection between them for better classification performance. Our experimental
results show the effectiveness of our cross-modality feature analysis framework
by achieving state-of-the-art accuracy for action classification on five
challenging benchmark datasets.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 10:22:12 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Dec 2016 05:31:52 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Shahroudy",
"Amir",
""
],
[
"Ng",
"Tian-Tsong",
""
],
[
"Gong",
"Yihong",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos
ABSTRACT: Single modality action recognition on RGB or depth sequences has been
extensively explored recently. It is generally accepted that each of these two
modalities has different strengths and limitations for the task of action
recognition. Therefore, analysis of the RGB+D videos can help us to better
study the complementary properties of these two types of modalities and achieve
higher levels of performance. In this paper, we propose a new deep autoencoder
based shared-specific feature factorization network to separate input
multimodal signals into a hierarchy of components. Further, based on the
structure of the features, a structured sparsity learning machine is proposed
which utilizes mixed norms to apply regularization within components and group
selection between them for better classification performance. Our experimental
results show the effectiveness of our cross-modality feature analysis framework
by achieving state-of-the-art accuracy for action classification on five
challenging benchmark datasets.
| no_new_dataset | 0.943971 |
1605.09507 | Yoonchang Han | Yoonchang Han, Jaehun Kim, Kyogu Lee | Deep convolutional neural networks for predominant instrument
recognition in polyphonic music | 13 pages, 7 figures, accepted for publication in IEEE/ACM
Transactions on Audio, Speech, and Language Processing on 16-Nov-2016. This
is initial submission version. Fully edited version is available at
http://ieeexplore.ieee.org/document/7755799/ | Published in: IEEE/ACM Transactions on Audio, Speech, and Language
Processing ( Volume: 25, Issue: 1, Jan. 2017 ) Page(s): 208 - 221 | 10.1109/TASLP.2016.2632307 | null | cs.SD cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying musical instruments in polyphonic music recordings is a
challenging but important problem in the field of music information retrieval.
It enables music search by instrument, helps recognize musical genres, or can
make music transcription easier and more accurate. In this paper, we present a
convolutional neural network framework for predominant instrument recognition
in real-world polyphonic music. We train our network from fixed-length music
excerpts with a single-labeled predominant instrument and estimate an arbitrary
number of predominant instruments from an audio signal with a variable length.
To obtain the audio-excerpt-wise result, we aggregate multiple outputs from
sliding windows over the test audio. In doing so, we investigated two different
aggregation methods: one takes the average for each instrument and the other
takes the instrument-wise sum followed by normalization. In addition, we
conducted extensive experiments on several important factors that affect the
performance, including analysis window size, identification threshold, and
activation functions for neural networks to find the optimal set of parameters.
Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we
found that convolutional neural networks are more robust than conventional
methods that exploit spectral features and source separation with support
vector machines. Experimental results showed that the proposed convolutional
network architecture obtained an F1 measure of 0.602 for micro and 0.503 for
macro, respectively, achieving 19.6% and 16.4% in performance improvement
compared with other state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 07:11:18 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2016 08:54:57 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2016 12:29:26 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Han",
"Yoonchang",
""
],
[
"Kim",
"Jaehun",
""
],
[
"Lee",
"Kyogu",
""
]
] | TITLE: Deep convolutional neural networks for predominant instrument
recognition in polyphonic music
ABSTRACT: Identifying musical instruments in polyphonic music recordings is a
challenging but important problem in the field of music information retrieval.
It enables music search by instrument, helps recognize musical genres, or can
make music transcription easier and more accurate. In this paper, we present a
convolutional neural network framework for predominant instrument recognition
in real-world polyphonic music. We train our network from fixed-length music
excerpts with a single-labeled predominant instrument and estimate an arbitrary
number of predominant instruments from an audio signal with a variable length.
To obtain the audio-excerpt-wise result, we aggregate multiple outputs from
sliding windows over the test audio. In doing so, we investigated two different
aggregation methods: one takes the average for each instrument and the other
takes the instrument-wise sum followed by normalization. In addition, we
conducted extensive experiments on several important factors that affect the
performance, including analysis window size, identification threshold, and
activation functions for neural networks to find the optimal set of parameters.
Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we
found that convolutional neural networks are more robust than conventional
methods that exploit spectral features and source separation with support
vector machines. Experimental results showed that the proposed convolutional
network architecture obtained an F1 measure of 0.602 for micro and 0.503 for
macro, respectively, achieving 19.6% and 16.4% in performance improvement
compared with other state-of-the-art algorithms.
| no_new_dataset | 0.946547 |
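The two aggregation rules compared in the abstract can be sketched directly: average the per-window instrument probabilities, or sum them per instrument and normalize, then threshold to obtain the predominant instruments. Array shapes, the normalization by the maximum, and the threshold value are assumptions made for illustration.

```python
# Sketch: aggregating sliding-window outputs into audio-excerpt-level predictions.
import numpy as np

def aggregate(window_probs, method="average"):
    # window_probs: (n_windows, n_instruments) sigmoid outputs from sliding windows
    if method == "average":
        return window_probs.mean(axis=0)
    if method == "sum_norm":
        s = window_probs.sum(axis=0)
        return s / s.max() if s.max() > 0 else s
    raise ValueError(method)

def predict_instruments(window_probs, threshold=0.5, method="average"):
    # indices of instruments whose aggregated activation exceeds the threshold
    return np.where(aggregate(window_probs, method) >= threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    probs = rng.random((20, 11))     # 20 windows, 11 instruments
    print(predict_instruments(probs, threshold=0.6, method="sum_norm"))
```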
1606.07253 | Liuhao Ge | Liuhao Ge, Hui Liang, Junsong Yuan, Daniel Thalmann | Robust 3D Hand Pose Estimation in Single Depth Images: from Single-View
CNN to Multi-View CNNs | 9 pages, 9 figures, published at Computer Vision and Pattern
Recognition (CVPR) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Articulated hand pose estimation plays an important role in human-computer
interaction. Despite the recent progress, the accuracy of existing methods is
still not satisfactory, partially due to the difficulty of the embedded
high-dimensional and non-linear regression problem. Different from the existing
discriminative methods that regress for the hand pose with a single depth
image, we propose to first project the query depth image onto three orthogonal
planes and utilize these multi-view projections to regress for 2D heat-maps
which estimate the joint positions on each plane. These multi-view heat-maps
are then fused to produce final 3D hand pose estimation with learned pose
priors. Experiments show that the proposed method largely outperforms
state-of-the-art on a challenging dataset. Moreover, a cross-dataset experiment
also demonstrates the good generalization ability of the proposed method.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 10:00:03 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Dec 2016 09:15:42 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Dec 2016 14:22:54 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Ge",
"Liuhao",
""
],
[
"Liang",
"Hui",
""
],
[
"Yuan",
"Junsong",
""
],
[
"Thalmann",
"Daniel",
""
]
] | TITLE: Robust 3D Hand Pose Estimation in Single Depth Images: from Single-View
CNN to Multi-View CNNs
ABSTRACT: Articulated hand pose estimation plays an important role in human-computer
interaction. Despite the recent progress, the accuracy of existing methods is
still not satisfactory, partially due to the difficulty of the embedded
high-dimensional and non-linear regression problem. Different from the existing
discriminative methods that regress for the hand pose with a single depth
image, we propose to first project the query depth image onto three orthogonal
planes and utilize these multi-view projections to regress for 2D heat-maps
which estimate the joint positions on each plane. These multi-view heat-maps
are then fused to produce final 3D hand pose estimation with learned pose
priors. Experiments show that the proposed method largely outperforms
state-of-the-art on a challenging dataset. Moreover, a cross-dataset experiment
also demonstrates the good generalization ability of the proposed method.
| no_new_dataset | 0.946547 |
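A toy version of the multi-view projection step described above: a normalized 3D hand point cloud is projected onto the three orthogonal planes as coarse 2D occupancy maps that a CNN could consume. The resolution, normalization, and binary occupancy encoding are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: projecting a 3D point cloud onto the xy, yz and xz planes.
import numpy as np

def project_three_views(points, res=64):
    # points: (N, 3) array with coordinates normalized to [0, 1]^3
    views = {"xy": (0, 1), "yz": (1, 2), "xz": (0, 2)}
    maps = {}
    for name, (a, b) in views.items():
        img = np.zeros((res, res))
        u = np.clip((points[:, a] * (res - 1)).astype(int), 0, res - 1)
        v = np.clip((points[:, b] * (res - 1)).astype(int), 0, res - 1)
        img[v, u] = 1.0                  # mark occupied cells on this plane
        maps[name] = img
    return maps

if __name__ == "__main__":
    pts = np.random.rand(500, 3)
    maps = project_three_views(pts)
    print({k: int(m.sum()) for k, m in maps.items()})
```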
1610.03670 | Qi Dong | Qi Dong, Shaogang Gong, Xiatian Zhu | Multi-Task Curriculum Transfer Deep Learning of Clothing Attributes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognising detailed clothing characteristics (fine-grained attributes) in
unconstrained images of people in-the-wild is a challenging task for computer
vision, especially when there is only limited training data from the wild
whilst most data available for model learning are captured in well-controlled
environments using fashion models (well lit, no background clutter, frontal
view, high-resolution). In this work, we develop a deep learning framework
capable of model transfer learning from well-controlled shop clothing images
collected from web retailers to in-the-wild images from the street.
Specifically, we formulate a novel Multi-Task Curriculum Transfer (MTCT) deep
learning method to explore multiple sources of different types of web
annotations with multi-labelled fine-grained attributes. Our multi-task loss
function is designed to extract more discriminative representations in training
by jointly learning all attributes, and our curriculum strategy exploits the
staged easy-to-complex transfer learning motivated by cognitive studies. We
demonstrate the advantages of the MTCT model over the state-of-the-art methods
on the X-Domain benchmark, a large scale clothing attribute dataset. Moreover,
we show that the MTCT model has a notable advantage over contemporary models
when the training data size is small.
| [
{
"version": "v1",
"created": "Wed, 12 Oct 2016 11:17:16 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2016 12:11:55 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Oct 2016 10:32:54 GMT"
},
{
"version": "v4",
"created": "Sun, 25 Dec 2016 23:43:22 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Dong",
"Qi",
""
],
[
"Gong",
"Shaogang",
""
],
[
"Zhu",
"Xiatian",
""
]
] | TITLE: Multi-Task Curriculum Transfer Deep Learning of Clothing Attributes
ABSTRACT: Recognising detailed clothing characteristics (fine-grained attributes) in
unconstrained images of people in-the-wild is a challenging task for computer
vision, especially when there is only limited training data from the wild
whilst most data available for model learning are captured in well-controlled
environments using fashion models (well lit, no background clutter, frontal
view, high-resolution). In this work, we develop a deep learning framework
capable of model transfer learning from well-controlled shop clothing images
collected from web retailers to in-the-wild images from the street.
Specifically, we formulate a novel Multi-Task Curriculum Transfer (MTCT) deep
learning method to explore multiple sources of different types of web
annotations with multi-labelled fine-grained attributes. Our multi-task loss
function is designed to extract more discriminative representations in training
by jointly learning all attributes, and our curriculum strategy exploits the
staged easy-to-complex transfer learning motivated by cognitive studies. We
demonstrate the advantages of the MTCT model over the state-of-the-art methods
on the X-Domain benchmark, a large scale clothing attribute dataset. Moreover,
we show that the MTCT model has a notable advantage over contemporary models
when the training data size is small.
| no_new_dataset | 0.947866 |
1612.06007 | Ahmed Alaa | Ahmed M. Alaa and Mihaela van der Schaar | A Hidden Absorbing Semi-Markov Model for Informatively Censored Temporal
Data: Learning and Inference | null | null | null | null | cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling continuous-time physiological processes that manifest a patient's
evolving clinical states is a key step in approaching many problems in
healthcare. In this paper, we develop the Hidden Absorbing Semi-Markov Model
(HASMM): a versatile probabilistic model that is capable of capturing the
modern electronic health record (EHR) data. Unlike existing models, an HASMM
accommodates irregularly sampled, temporally correlated, and informatively
censored physiological data, and can describe non-stationary clinical state
transitions. Learning an HASMM from the EHR data is achieved via a novel
forward-filtering backward-sampling Monte-Carlo EM algorithm that exploits the
knowledge of the end-point clinical outcomes (informative censoring) in the EHR
data, and implements the E-step by sequentially sampling the patients' clinical
states in the reverse-time direction while conditioning on the future states.
Real-time inferences are drawn via a forward-filtering algorithm that operates
on a virtually constructed discrete-time embedded Markov chain that mirrors the
patient's continuous-time state trajectory. We demonstrate the diagnostic and
prognostic utility of the HASMM in a critical care prognosis setting using a
real-world dataset for patients admitted to the Ronald Reagan UCLA Medical
Center.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 23:02:02 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Dec 2016 13:44:59 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Alaa",
"Ahmed M.",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] | TITLE: A Hidden Absorbing Semi-Markov Model for Informatively Censored Temporal
Data: Learning and Inference
ABSTRACT: Modeling continuous-time physiological processes that manifest a patient's
evolving clinical states is a key step in approaching many problems in
healthcare. In this paper, we develop the Hidden Absorbing Semi-Markov Model
(HASMM): a versatile probabilistic model that is capable of capturing the
modern electronic health record (EHR) data. Unlike existing models, an HASMM
accommodates irregularly sampled, temporally correlated, and informatively
censored physiological data, and can describe non-stationary clinical state
transitions. Learning an HASMM from the EHR data is achieved via a novel
forward-filtering backward-sampling Monte-Carlo EM algorithm that exploits the
knowledge of the end-point clinical outcomes (informative censoring) in the EHR
data, and implements the E-step by sequentially sampling the patients' clinical
states in the reverse-time direction while conditioning on the future states.
Real-time inferences are drawn via a forward-filtering algorithm that operates
on a virtually constructed discrete-time embedded Markov chain that mirrors the
patient's continuous-time state trajectory. We demonstrate the diagnostic and
prognostic utility of the HASMM in a critical care prognosis setting using a
real-world dataset for patients admitted to the Ronald Reagan UCLA Medical
Center.
| no_new_dataset | 0.951323 |
1612.08102 | Xintao Wu | Yuemeng Li, Xintao Wu, Aidong Lu | On Spectral Analysis of Directed Signed Graphs | 10 pages | null | null | null | cs.SI cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been shown that the adjacency eigenspace of a network contains key
information of its underlying structure. However, there has been no study on
spectral analysis of the adjacency matrices of directed signed graphs. In this
paper, we derive theoretical approximations of spectral projections from such
directed signed networks using matrix perturbation theory. We use the derived
theoretical results to study the influences of negative intra-cluster and
inter-cluster directed edges on node spectral projections. We then develop a spectral
clustering based graph partition algorithm, SC-DSG, and conduct evaluations on
both synthetic and real datasets. Both theoretical analysis and empirical
evaluation demonstrate the effectiveness of the proposed algorithm.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2016 21:20:55 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Li",
"Yuemeng",
""
],
[
"Wu",
"Xintao",
""
],
[
"Lu",
"Aidong",
""
]
] | TITLE: On Spectral Analysis of Directed Signed Graphs
ABSTRACT: It has been shown that the adjacency eigenspace of a network contains key
information of its underlying structure. However, there has been no study on
spectral analysis of the adjacency matrices of directed signed graphs. In this
paper, we derive theoretical approximations of spectral projections from such
directed signed networks using matrix perturbation theory. We use the derived
theoretical results to study the influences of negative intra-cluster and
inter-cluster directed edges on node spectral projections. We then develop a spectral
clustering based graph partition algorithm, SC-DSG, and conduct evaluations on
both synthetic and real datasets. Both theoretical analysis and empirical
evaluation demonstrate the effectiveness of the proposed algorithm.
| no_new_dataset | 0.944587 |
1612.08169 | Kaihua Zhang | Kaihua Zhang and Xuejun Li and Qingshan Liu | Unsupervised Video Segmentation via Spatio-Temporally Nonlocal
Appearance Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video object segmentation is challenging due to factors such as fast
motion, cluttered backgrounds, arbitrary object appearance variation and shape
deformation. Most existing methods only explore appearance information between
two consecutive frames, and thus do not make full use of the useful long-term
nonlocal information that is helpful to make the learned appearance stable, and
hence they tend to fail when the targets suffer from large viewpoint changes
and significant non-rigid deformations. In this paper, we propose a simple yet
effective approach to mine the long-term spatio-temporally nonlocal appearance
information for unsupervised video segmentation. The motivation of our
algorithm comes from the spatio-temporal nonlocality of the region appearance
reoccurrence in a video. Specifically, we first generate a set of superpixels
to represent the foreground and background, and then update the appearance of
each superpixel with its long-term spatio-temporally nonlocal counterparts
generated by the approximate nearest neighbor search method with the efficient
KD-tree algorithm. Then, with the updated appearances, we formulate a
spatio-temporal graphical model comprised of the superpixel label consistency
potentials. Finally, we generate the segmentation by optimizing the graphical
model via iteratively updating the appearance model and estimating the labels.
Extensive evaluations on the SegTrack and YouTube-Objects datasets demonstrate
the effectiveness of the proposed method, which performs favorably against some
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 24 Dec 2016 12:04:31 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Zhang",
"Kaihua",
""
],
[
"Li",
"Xuejun",
""
],
[
"Liu",
"Qingshan",
""
]
] | TITLE: Unsupervised Video Segmentation via Spatio-Temporally Nonlocal
Appearance Learning
ABSTRACT: Video object segmentation is challenging due to factors such as fast
motion, cluttered backgrounds, arbitrary object appearance variation and shape
deformation. Most existing methods only explore appearance information between
two consecutive frames, and thus do not make full use of the useful long-term
nonlocal information that is helpful to make the learned appearance stable, and
hence they tend to fail when the targets suffer from large viewpoint changes
and significant non-rigid deformations. In this paper, we propose a simple yet
effective approach to mine the long-term spatio-temporally nonlocal appearance
information for unsupervised video segmentation. The motivation of our
algorithm comes from the spatio-temporal nonlocality of the region appearance
reoccurrence in a video. Specifically, we first generate a set of superpixels
to represent the foreground and background, and then update the appearance of
each superpixel with its long-term spatio-temporally nonlocal counterparts
generated by the approximate nearest neighbor search method with the efficient
KD-tree algorithm. Then, with the updated appearances, we formulate a
spatio-temporal graphical model comprised of the superpixel label consistency
potentials. Finally, we generate the segmentation by optimizing the graphical
model via iteratively updating the appearance model and estimating the labels.
Extensive evaluations on the SegTrack and YouTube-Objects datasets demonstrate
the effectiveness of the proposed method, which performs favorably against some
state-of-the-art methods.
| no_new_dataset | 0.950227 |
1612.08242 | Joseph Redmon | Joseph Redmon, Ali Farhadi | YOLO9000: Better, Faster, Stronger | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce YOLO9000, a state-of-the-art, real-time object detection system
that can detect over 9000 object categories. First we propose various
improvements to the YOLO detection method, both novel and drawn from prior
work. The improved model, YOLOv2, is state-of-the-art on standard detection
tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At
40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like
Faster RCNN with ResNet and SSD while still running significantly faster.
Finally we propose a method to jointly train on object detection and
classification. Using this method we train YOLO9000 simultaneously on the COCO
detection dataset and the ImageNet classification dataset. Our joint training
allows YOLO9000 to predict detections for object classes that don't have
labelled detection data. We validate our approach on the ImageNet detection
task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite
only having detection data for 44 of the 200 classes. On the 156 classes not in
COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes;
it predicts detections for more than 9000 different object categories. And it
still runs in real-time.
| [
{
"version": "v1",
"created": "Sun, 25 Dec 2016 07:21:38 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Redmon",
"Joseph",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: YOLO9000: Better, Faster, Stronger
ABSTRACT: We introduce YOLO9000, a state-of-the-art, real-time object detection system
that can detect over 9000 object categories. First we propose various
improvements to the YOLO detection method, both novel and drawn from prior
work. The improved model, YOLOv2, is state-of-the-art on standard detection
tasks like PASCAL VOC and COCO. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At
40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like
Faster RCNN with ResNet and SSD while still running significantly faster.
Finally we propose a method to jointly train on object detection and
classification. Using this method we train YOLO9000 simultaneously on the COCO
detection dataset and the ImageNet classification dataset. Our joint training
allows YOLO9000 to predict detections for object classes that don't have
labelled detection data. We validate our approach on the ImageNet detection
task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite
only having detection data for 44 of the 200 classes. On the 156 classes not in
COCO, YOLO9000 gets 16.0 mAP. But YOLO can detect more than just 200 classes;
it predicts detections for more than 9000 different object categories. And it
still runs in real-time.
| no_new_dataset | 0.941331 |