id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.04369 | Ragav Venkatesan | Ragav Venkatesan and Vijetha Gattupalli and Baoxin Li | Neural Dataset Generality | Long version of the paper accepted at IEEE International Conference
on Image Processing 2016 | null | 10.1109/ICIP.2016.7532315 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Often the filters learned by Convolutional Neural Networks (CNNs) from
different datasets appear similar. This is prominent in the first few layers.
This similarity of filters is being exploited for the purposes of transfer
learning and some studies have been made to analyse such transferability of
features. This is also being used as an initialization technique for different
tasks in the same dataset or for the same task in similar datasets.
Off-the-shelf CNN features have capitalized on this idea to promote their
networks as best transferable and most general and are used in a cavalier
manner in day-to-day computer vision tasks.
It is curious that while the filters learned by these CNNs are related to the
atomic structures of the images from which they are learnt, all datasets learn
similar looking low-level filters. With the understanding that a dataset that
contains many such atomic structures learn general filters and are therefore
useful to initialize other networks with, we propose a way to analyse and
quantify generality among datasets from their accuracies on transferred
filters. We applied this metric on several popular character recognition,
natural image and a medical image dataset, and arrived at some interesting
conclusions. On further experimentation we also discovered that particular
classes in a dataset themselves are more general than others.
| [
{
"version": "v1",
"created": "Sat, 14 May 2016 03:17:15 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Venkatesan",
"Ragav",
""
],
[
"Gattupalli",
"Vijetha",
""
],
[
"Li",
"Baoxin",
""
]
] | TITLE: Neural Dataset Generality
ABSTRACT: Often the filters learned by Convolutional Neural Networks (CNNs) from
different datasets appear similar. This is prominent in the first few layers.
This similarity of filters is being exploited for the purposes of transfer
learning and some studies have been made to analyse such transferability of
features. This is also being used as an initialization technique for different
tasks in the same dataset or for the same task in similar datasets.
Off-the-shelf CNN features have capitalized on this idea to promote their
networks as best transferable and most general and are used in a cavalier
manner in day-to-day computer vision tasks.
It is curious that while the filters learned by these CNNs are related to the
atomic structures of the images from which they are learnt, all datasets learn
similar looking low-level filters. With the understanding that a dataset that
contains many such atomic structures learn general filters and are therefore
useful to initialize other networks with, we propose a way to analyse and
quantify generality among datasets from their accuracies on transferred
filters. We applied this metric on several popular character recognition,
natural image and a medical image dataset, and arrived at some interesting
conclusions. On further experimentation we also discovered that particular
classes in a dataset themselves are more general than others.
| no_new_dataset | 0.948917 |
1606.01959 | Oscar Fontanelli | Oscar Fontanelli, Pedro Miramontes, Yaning Yang, Germinal Cocho,
Wentian Li | Beyond Zipf's Law: The Lavalette Rank Function and its Properties | 15 pages, 4 figures | PLoS ONE 11(9), 2016, e0163241 | 10.1371/journal.pone.0163241 | null | physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although Zipf's law is widespread in natural and social data, one often
encounters situations where one or both ends of the ranked data deviate from
the power-law function. Previously we proposed the Beta rank function to
improve the fitting of data which does not follow a perfect Zipf's law. Here we
show that when the two parameters in the Beta rank function have the same
value, the Lavalette rank function, the probability density function can be
derived analytically. We also show both computationally and analytically that
Lavalette distribution is approximately equal, though not identical, to the
lognormal distribution. We illustrate the utility of Lavalette rank function in
several datasets. We also address three analysis issues on the statistical
testing of Lavalette fitting function, comparison between Zipf's law and
lognormal distribution through Lavalette function, and comparison between
lognormal distribution and Lavalette distribution.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2016 22:10:57 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Fontanelli",
"Oscar",
""
],
[
"Miramontes",
"Pedro",
""
],
[
"Yang",
"Yaning",
""
],
[
"Cocho",
"Germinal",
""
],
[
"Li",
"Wentian",
""
]
] | TITLE: Beyond Zipf's Law: The Lavalette Rank Function and its Properties
ABSTRACT: Although Zipf's law is widespread in natural and social data, one often
encounters situations where one or both ends of the ranked data deviate from
the power-law function. Previously we proposed the Beta rank function to
improve the fitting of data which does not follow a perfect Zipf's law. Here we
show that when the two parameters in the Beta rank function have the same
value, the Lavalette rank function, the probability density function can be
derived analytically. We also show both computationally and analytically that
Lavalette distribution is approximately equal, though not identical, to the
lognormal distribution. We illustrate the utility of Lavalette rank function in
several datasets. We also address three analysis issues on the statistical
testing of Lavalette fitting function, comparison between Zipf's law and
lognormal distribution through Lavalette function, and comparison between
lognormal distribution and Lavalette distribution.
| no_new_dataset | 0.953013 |
1612.00193 | Gr\'egoire Ferr\'e | G. Ferr\'e, T. Haut and K. Barros | Learning molecular energies using localized graph kernels | null | The Journal of Chemical Physics, 146(11), 114107 (2017) | 10.1063/1.4978623 | null | physics.comp-ph cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent machine learning methods make it possible to model potential energy of
atomic configurations with chemical-level accuracy (as calculated from
ab-initio calculations) and at speeds suitable for molecular dynamics
simulation. Best performance is achieved when the known physical constraints
are encoded in the machine learning models. For example, the atomic energy is
invariant under global translations and rotations, it is also invariant to
permutations of same-species atoms. Although simple to state, these symmetries
are complicated to encode into machine learning algorithms. In this paper, we
present a machine learning approach based on graph theory that naturally
incorporates translation, rotation, and permutation symmetries. Specifically,
we use a random walk graph kernel to measure the similarity of two adjacency
matrices, each of which represents a local atomic environment. This Graph
Approximated Energy (GRAPE) approach is flexible and admits many possible
extensions. We benchmark a simple version of GRAPE by predicting atomization
energies on a standard dataset of organic molecules.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 10:23:59 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 10:03:41 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Ferré",
"G.",
""
],
[
"Haut",
"T.",
""
],
[
"Barros",
"K.",
""
]
] | TITLE: Learning molecular energies using localized graph kernels
ABSTRACT: Recent machine learning methods make it possible to model potential energy of
atomic configurations with chemical-level accuracy (as calculated from
ab-initio calculations) and at speeds suitable for molecular dynamics
simulation. Best performance is achieved when the known physical constraints
are encoded in the machine learning models. For example, the atomic energy is
invariant under global translations and rotations, it is also invariant to
permutations of same-species atoms. Although simple to state, these symmetries
are complicated to encode into machine learning algorithms. In this paper, we
present a machine learning approach based on graph theory that naturally
incorporates translation, rotation, and permutation symmetries. Specifically,
we use a random walk graph kernel to measure the similarity of two adjacency
matrices, each of which represents a local atomic environment. This Graph
Approximated Energy (GRAPE) approach is flexible and admits many possible
extensions. We benchmark a simple version of GRAPE by predicting atomization
energies on a standard dataset of organic molecules.
| no_new_dataset | 0.945701 |
1612.04003 | Aditya Devarakonda | Aditya Devarakonda, Kimon Fountoulakis, James Demmel, Michael W.
Mahoney | Avoiding communication in primal and dual block coordinate descent
methods | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Primal and dual block coordinate descent methods are iterative methods for
solving regularized and unregularized optimization problems. Distributed-memory
parallel implementations of these methods have become popular in analyzing
large machine learning datasets. However, existing implementations communicate
at every iteration which, on modern data center and supercomputing
architectures, often dominates the cost of floating-point computation. Recent
results on communication-avoiding Krylov subspace methods suggest that large
speedups are possible by re-organizing iterative algorithms to avoid
communication. We show how applying similar algorithmic transformations can
lead to primal and dual block coordinate descent methods that only communicate
every $s$ iterations--where $s$ is a tuning parameter--instead of every
iteration for the \textit{regularized least-squares problem}. We show that the
communication-avoiding variants reduce the number of synchronizations by a
factor of $s$ on distributed-memory parallel machines without altering the
convergence rate and attains strong scaling speedups of up to $6.1\times$ on a
Cray XC30 supercomputer.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 02:59:33 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2017 01:57:40 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Devarakonda",
"Aditya",
""
],
[
"Fountoulakis",
"Kimon",
""
],
[
"Demmel",
"James",
""
],
[
"Mahoney",
"Michael W.",
""
]
] | TITLE: Avoiding communication in primal and dual block coordinate descent
methods
ABSTRACT: Primal and dual block coordinate descent methods are iterative methods for
solving regularized and unregularized optimization problems. Distributed-memory
parallel implementations of these methods have become popular in analyzing
large machine learning datasets. However, existing implementations communicate
at every iteration which, on modern data center and supercomputing
architectures, often dominates the cost of floating-point computation. Recent
results on communication-avoiding Krylov subspace methods suggest that large
speedups are possible by re-organizing iterative algorithms to avoid
communication. We show how applying similar algorithmic transformations can
lead to primal and dual block coordinate descent methods that only communicate
every $s$ iterations--where $s$ is a tuning parameter--instead of every
iteration for the \textit{regularized least-squares problem}. We show that the
communication-avoiding variants reduce the number of synchronizations by a
factor of $s$ on distributed-memory parallel machines without altering the
convergence rate and attains strong scaling speedups of up to $6.1\times$ on a
Cray XC30 supercomputer.
| no_new_dataset | 0.942082 |
1703.01386 | Pongsate Tangseng | Pongsate Tangseng, Zhipeng Wu, Kota Yamaguchi | Looking at Outfit to Parse Clothing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper extends fully-convolutional neural networks (FCN) for the clothing
parsing problem. Clothing parsing requires higher-level knowledge on clothing
semantics and contextual cues to disambiguate fine-grained categories. We
extend FCN architecture with a side-branch network which we refer to as the outfit
encoder to predict a consistent set of clothing labels to encourage
combinatorial preference, and with conditional random field (CRF) to explicitly
consider coherent label assignment to the given image. The empirical results
using Fashionista and CFPD datasets show that our model achieves
state-of-the-art performance in clothing parsing, without additional
supervision during training. We also study the qualitative influence of
annotation on the current clothing parsing benchmarks, with our Web-based tool
for multi-scale pixel-wise annotation and manual refinement effort to the
Fashionista dataset. Finally, we show that the image representation of the
outfit encoder is useful for dress-up image retrieval application.
| [
{
"version": "v1",
"created": "Sat, 4 Mar 2017 03:09:36 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Tangseng",
"Pongsate",
""
],
[
"Wu",
"Zhipeng",
""
],
[
"Yamaguchi",
"Kota",
""
]
] | TITLE: Looking at Outfit to Parse Clothing
ABSTRACT: This paper extends fully-convolutional neural networks (FCN) for the clothing
parsing problem. Clothing parsing requires higher-level knowledge on clothing
semantics and contextual cues to disambiguate fine-grained categories. We
extend FCN architecture with a side-branch network which we refer to as the outfit
encoder to predict a consistent set of clothing labels to encourage
combinatorial preference, and with conditional random field (CRF) to explicitly
consider coherent label assignment to the given image. The empirical results
using Fashionista and CFPD datasets show that our model achieves
state-of-the-art performance in clothing parsing, without additional
supervision during training. We also study the qualitative influence of
annotation on the current clothing parsing benchmarks, with our Web-based tool
for multi-scale pixel-wise annotation and manual refinement effort to the
Fashionista dataset. Finally, we show that the image representation of the
outfit encoder is useful for dress-up image retrieval application.
| no_new_dataset | 0.951504 |
1704.04684 | Luis Argerich | Luis Argerich, Natalia Golmar | Generic LSH Families for the Angular Distance Based on
Johnson-Lindenstrauss Projections and Feature Hashing LSH | null | null | null | null | cs.DS cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose the creation of generic LSH families for the angular
distance based on Johnson-Lindenstrauss projections. We show that feature
hashing is a valid J-L projection and propose two new LSH families based on
feature hashing. These new LSH families are tested on both synthetic and real
datasets with very good results and a considerable performance improvement over
other LSH families. While the theoretical analysis is done for the angular
distance, these families can also be used in practice for the euclidean
distance with excellent results [2]. Our tests using real datasets show that
the proposed LSH functions work well for the euclidean distance.
| [
{
"version": "v1",
"created": "Sat, 15 Apr 2017 19:32:51 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Argerich",
"Luis",
""
],
[
"Golmar",
"Natalia",
""
]
] | TITLE: Generic LSH Families for the Angular Distance Based on
Johnson-Lindenstrauss Projections and Feature Hashing LSH
ABSTRACT: In this paper we propose the creation of generic LSH families for the angular
distance based on Johnson-Lindenstrauss projections. We show that feature
hashing is a valid J-L projection and propose two new LSH families based on
feature hashing. These new LSH families are tested on both synthetic and real
datasets with very good results and a considerable performance improvement over
other LSH families. While the theoretical analysis is done for the angular
distance, these families can also be used in practice for the euclidean
distance with excellent results [2]. Our tests using real datasets show that
the proposed LSH functions work well for the euclidean distance.
| no_new_dataset | 0.958109 |
1704.07595 | Di Xie | Chao Li and Qiaoyong Zhong and Di Xie and Shiliang Pu | Skeleton-based Action Recognition with Convolutional Neural Networks | ICMEW 2017 | null | 10.1109/LSP.2017.2678539 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current state-of-the-art approaches to skeleton-based action recognition are
mostly based on recurrent neural networks (RNN). In this paper, we propose a
novel convolutional neural networks (CNN) based framework for both action
classification and detection. Raw skeleton coordinates as well as skeleton
motion are fed directly into CNN for label prediction. A novel skeleton
transformer module is designed to rearrange and select important skeleton
joints automatically. With a simple 7-layer network, we obtain 89.3% accuracy
on validation set of the NTU RGB+D dataset. For action detection in untrimmed
videos, we develop a window proposal network to extract temporal segment
proposals, which are further classified within the same network. On the recent
PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large
margin.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 09:09:00 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Zhong",
"Qiaoyong",
""
],
[
"Xie",
"Di",
""
],
[
"Pu",
"Shiliang",
""
]
] | TITLE: Skeleton-based Action Recognition with Convolutional Neural Networks
ABSTRACT: Current state-of-the-art approaches to skeleton-based action recognition are
mostly based on recurrent neural networks (RNN). In this paper, we propose a
novel convolutional neural networks (CNN) based framework for both action
classification and detection. Raw skeleton coordinates as well as skeleton
motion are fed directly into CNN for label prediction. A novel skeleton
transformer module is designed to rearrange and select important skeleton
joints automatically. With a simple 7-layer network, we obtain 89.3% accuracy
on validation set of the NTU RGB+D dataset. For action detection in untrimmed
videos, we develop a window proposal network to extract temporal segment
proposals, which are further classified within the same network. On the recent
PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large
margin.
| no_new_dataset | 0.948775 |
1705.00648 | William Yang Wang | William Yang Wang | "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News
Detection | ACL 2017 | null | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic fake news detection is a challenging problem in deception
detection, and it has tremendous real-world political and social impacts.
However, statistical approaches to combating fake news has been dramatically
limited by the lack of labeled benchmark datasets. In this paper, we present
liar: a new, publicly available dataset for fake news detection. We collected a
decade-long, 12.8K manually labeled short statements in various contexts from
PolitiFact.com, which provides detailed analysis report and links to source
documents for each case. This dataset can be used for fact-checking research as
well. Notably, this new dataset is an order of magnitude larger than previously
largest public fake news datasets of similar type. Empirically, we investigate
automatic fake news detection based on surface-level linguistic patterns. We
have designed a novel, hybrid convolutional neural network to integrate
meta-data with text. We show that this hybrid approach can improve a text-only
deep learning model.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 18:20:47 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Wang",
"William Yang",
""
]
] | TITLE: "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News
Detection
ABSTRACT: Automatic fake news detection is a challenging problem in deception
detection, and it has tremendous real-world political and social impacts.
However, statistical approaches to combating fake news has been dramatically
limited by the lack of labeled benchmark datasets. In this paper, we present
liar: a new, publicly available dataset for fake news detection. We collected a
decade-long, 12.8K manually labeled short statements in various contexts from
PolitiFact.com, which provides detailed analysis report and links to source
documents for each case. This dataset can be used for fact-checking research as
well. Notably, this new dataset is an order of magnitude larger than previously
largest public fake news datasets of similar type. Empirically, we investigate
automatic fake news detection based on surface-level linguistic patterns. We
have designed a novel, hybrid convolutional neural network to integrate
meta-data with text. We show that this hybrid approach can improve a text-only
deep learning model.
| new_dataset | 0.961606 |
1705.00740 | Cheng Li | Bingyu Wang, Cheng Li, Virgil Pavlu, Javed Aslam | Regularizing Model Complexity and Label Structure for Multi-Label Text
Classification | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label text classification is a popular machine learning task where each
document is assigned with multiple relevant labels. This task is challenging
due to high dimensional features and correlated labels. Multi-label text
classifiers need to be carefully regularized to prevent the severe over-fitting
in the high dimensional space, and also need to take into account label
dependencies in order to make accurate predictions under uncertainty. We
demonstrate significant and practical improvement by carefully regularizing the
model complexity during training phase, and also regularizing the label search
space during prediction phase. Specifically, we regularize the classifier
training using Elastic-net (L1+L2) penalty for reducing model complexity/size,
and employ early stopping to prevent overfitting. At prediction time, we apply
support inference to restrict the search space to label sets encountered in the
training set, and F-optimizer GFM to make optimal predictions for the F1
metric. We show that although support inference only provides density
estimations on existing label combinations, when combined with GFM predictor,
the algorithm can output unseen label combinations. Taken collectively, our
experiments show state of the art results on many benchmark datasets. Beyond
performance and practical contributions, we make some interesting observations.
Contrary to the prior belief, which deems support inference as purely an
approximate inference procedure, we show that support inference acts as a
strong regularizer on the label prediction structure. It allows the classifier
to take into account label dependencies during prediction even if the
classifiers had not modeled any label dependencies during training.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 23:30:13 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Wang",
"Bingyu",
""
],
[
"Li",
"Cheng",
""
],
[
"Pavlu",
"Virgil",
""
],
[
"Aslam",
"Javed",
""
]
] | TITLE: Regularizing Model Complexity and Label Structure for Multi-Label Text
Classification
ABSTRACT: Multi-label text classification is a popular machine learning task where each
document is assigned with multiple relevant labels. This task is challenging
due to high dimensional features and correlated labels. Multi-label text
classifiers need to be carefully regularized to prevent the severe over-fitting
in the high dimensional space, and also need to take into account label
dependencies in order to make accurate predictions under uncertainty. We
demonstrate significant and practical improvement by carefully regularizing the
model complexity during training phase, and also regularizing the label search
space during prediction phase. Specifically, we regularize the classifier
training using Elastic-net (L1+L2) penalty for reducing model complexity/size,
and employ early stopping to prevent overfitting. At prediction time, we apply
support inference to restrict the search space to label sets encountered in the
training set, and F-optimizer GFM to make optimal predictions for the F1
metric. We show that although support inference only provides density
estimations on existing label combinations, when combined with GFM predictor,
the algorithm can output unseen label combinations. Taken collectively, our
experiments show state of the art results on many benchmark datasets. Beyond
performance and practical contributions, we make some interesting observations.
Contrary to the prior belief, which deems support inference as purely an
approximate inference procedure, we show that support inference acts as a
strong regularizer on the label prediction structure. It allows the classifier
to take into account label dependencies during prediction even if the
classifiers had not modeled any label dependencies during training.
| no_new_dataset | 0.947575 |
1705.00748 | Katherine Driggs-Campbell | Katherine Driggs-Campbell, Roy Dong, S. Shankar Sastry, Ruzena Bajcsy | Robust, Informative Human-in-the-Loop Predictions via Empirical
Reachable Sets | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to develop provably safe human-in-the-loop systems, accurate and
precise models of human behavior must be developed. In the case of intelligent
vehicles, one can imagine the need for predicting driver behavior to develop
minimally invasive active safety systems or to safely interact with other
vehicles on the road. We present a optimization based method for approximating
the stochastic reachable set for human-in-the-loop systems. This method
identifies the most precise subset of states that a human driven vehicle may
enter, given some dataset of observed trajectories. We phrase this problem as a
mixed integer linear program, which can be solved using branch and bound
methods. The resulting model uncovers the most representative subset that
encapsulates the likely trajectories, up to some probability threshold, by
optimally rejecting outliers in the dataset. This tool provides set predictions
consisting of trajectories observed from the nonlinear dynamics and behaviors
of the human driven car, and can account for modes of behavior, like the driver
state or intent. This allows us to predict driving behavior over long time
horizons with high accuracy. By using this realistic data and flexible
algorithm, a precise and accurate driver model can be developed to capture
likely behaviors. The resulting prediction can be tailored to an individual for
use in semi-autonomous frameworks or generally applied for autonomous planning
in interactive maneuvers.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 00:32:13 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Driggs-Campbell",
"Katherine",
""
],
[
"Dong",
"Roy",
""
],
[
"Sastry",
"S. Shankar",
""
],
[
"Bajcsy",
"Ruzena",
""
]
] | TITLE: Robust, Informative Human-in-the-Loop Predictions via Empirical
Reachable Sets
ABSTRACT: In order to develop provably safe human-in-the-loop systems, accurate and
precise models of human behavior must be developed. In the case of intelligent
vehicles, one can imagine the need for predicting driver behavior to develop
minimally invasive active safety systems or to safely interact with other
vehicles on the road. We present a optimization based method for approximating
the stochastic reachable set for human-in-the-loop systems. This method
identifies the most precise subset of states that a human driven vehicle may
enter, given some dataset of observed trajectories. We phrase this problem as a
mixed integer linear program, which can be solved using branch and bound
methods. The resulting model uncovers the most representative subset that
encapsulates the likely trajectories, up to some probability threshold, by
optimally rejecting outliers in the dataset. This tool provides set predictions
consisting of trajectories observed from the nonlinear dynamics and behaviors
of the human driven car, and can account for modes of behavior, like the driver
state or intent. This allows us to predict driving behavior over long time
horizons with high accuracy. By using this realistic data and flexible
algorithm, a precise and accurate driver model can be developed to capture
likely behaviors. The resulting prediction can be tailored to an individual for
use in semi-autonomous frameworks or generally applied for autonomous planning
in interactive maneuvers.
| no_new_dataset | 0.942295 |
1705.00761 | Samir Abdelrahman | Mahmoud Mahdi, Samir Abdelrahman, Reem Bahgat, and Ismail Ismail | F-tree: an algorithm for clustering transactional data using frequency
tree | Appeared at Al-Azhar University Engineering Journal, JAUES, Vol.5,
No. 8, Dec 2010 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is an important data mining technique that groups similar data
records, recently categorical transaction clustering is received more
attention. In this research, we study the problem of categorical data
clustering for transactional data characterized with high dimensionality and
large volume. We propose a novel algorithm for clustering transactional data
called F-Tree, which is based on the idea of the frequent pattern algorithm
FP-tree; the fastest approaches to the frequent item set mining. And the simple
idea behind the F-Tree is to generate small high pure clusters, and then merge
them. That makes it fast, and dynamic in clustering large transactional
datasets with high dimensions. We also present a new solution to solve the
overlapping problem between clusters, by defining a new criterion function,
which is based on the probability of overlapping between weighted items. Our
experimental evaluation on real datasets shows that: Firstly, F-Tree is
effective in finding interesting clusters. Secondly, the usage of the tree
structure reduces the clustering process time of the large data set with high
attributes. Thirdly, the proposed evaluation metric used efficiently to solve
the overlapping of transaction items generates high-quality clustering results.
Finally, we have concluded that the process of merging pure and small clusters
increases the purity of resulted clusters as well as it reduces the time of
clustering better than the process of generating clusters directly from dataset
then refine clusters.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 01:55:44 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Mahdi",
"Mahmoud",
""
],
[
"Abdelrahman",
"Samir",
""
],
[
"Bahgat",
"Reem",
""
],
[
"Ismail",
"Ismail",
""
]
] | TITLE: F-tree: an algorithm for clustering transactional data using frequency
tree
ABSTRACT: Clustering is an important data mining technique that groups similar data
records, recently categorical transaction clustering is received more
attention. In this research, we study the problem of categorical data
clustering for transactional data characterized with high dimensionality and
large volume. We propose a novel algorithm for clustering transactional data
called F-Tree, which is based on the idea of the frequent pattern algorithm
FP-tree; the fastest approaches to the frequent item set mining. And the simple
idea behind the F-Tree is to generate small high pure clusters, and then merge
them. That makes it fast, and dynamic in clustering large transactional
datasets with high dimensions. We also present a new solution to solve the
overlapping problem between clusters, by defining a new criterion function,
which is based on the probability of overlapping between weighted items. Our
experimental evaluation on real datasets shows that: Firstly, F-Tree is
effective in finding interesting clusters. Secondly, the usage of the tree
structure reduces the clustering process time of the large data set with high
attributes. Thirdly, the proposed evaluation metric used efficiently to solve
the overlapping of transaction items generates high-quality clustering results.
Finally, we have concluded that the process of merging pure and small clusters
increases the purity of resulted clusters as well as it reduces the time of
clustering better than the process of generating clusters directly from dataset
then refine clusters.
| no_new_dataset | 0.955858 |
1705.00771 | Yehui Yang | Yehui Yang, Tao Li, Wensi Li, Haishan Wu, Wei Fan, Wensheng Zhang | Lesion detection and Grading of Diabetic Retinopathy via Two-stages Deep
Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an automatic diabetic retinopathy (DR) analysis algorithm based on
two-stages deep convolutional neural networks (DCNN). Compared to existing
DCNN-based DR detection methods, the proposed algorithm have the following
advantages: (1) Our method can point out the location and type of lesions in
the fundus images, as well as giving the severity grades of DR. Moreover, since
retina lesions and DR severity appear with different scales in fundus images,
the integration of both local and global networks learn more complete and
specific features for DR analysis. (2) By introducing imbalanced weighting map,
more attentions will be given to lesion patches for DR grading, which
significantly improve the performance of the proposed algorithm. In this study,
we label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus
images from Kaggle competition dataset. Under the guidance of clinical
ophthalmologists, the experimental results show that our local lesion detection
net achieve comparable performance with trained human observers, and the
proposed imbalanced weighted scheme also be proved to significantly improve the
capability of our DCNN-based DR grading algorithm.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 02:44:39 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Yang",
"Yehui",
""
],
[
"Li",
"Tao",
""
],
[
"Li",
"Wensi",
""
],
[
"Wu",
"Haishan",
""
],
[
"Fan",
"Wei",
""
],
[
"Zhang",
"Wensheng",
""
]
] | TITLE: Lesion detection and Grading of Diabetic Retinopathy via Two-stages Deep
Convolutional Neural Networks
ABSTRACT: We propose an automatic diabetic retinopathy (DR) analysis algorithm based on
two-stages deep convolutional neural networks (DCNN). Compared to existing
DCNN-based DR detection methods, the proposed algorithm have the following
advantages: (1) Our method can point out the location and type of lesions in
the fundus images, as well as giving the severity grades of DR. Moreover, since
retina lesions and DR severity appear with different scales in fundus images,
the integration of both local and global networks learn more complete and
specific features for DR analysis. (2) By introducing imbalanced weighting map,
more attentions will be given to lesion patches for DR grading, which
significantly improve the performance of the proposed algorithm. In this study,
we label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus
images from Kaggle competition dataset. Under the guidance of clinical
ophthalmologists, the experimental results show that our local lesion detection
net achieve comparable performance with trained human observers, and the
proposed imbalanced weighted scheme also be proved to significantly improve the
capability of our DCNN-based DR grading algorithm.
| no_new_dataset | 0.950869 |
1705.00797 | Konstantin Bauman | Evgeny Bauman and Konstantin Bauman | One-Class Semi-Supervised Learning: Detecting Linearly Separable Class
by its Mean | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we presented a novel semi-supervised one-class classification
algorithm which assumes that class is linearly separable from other elements.
We proved theoretically that class is linearly separable if and only if it is
maximal by probability within the sets with the same mean. Furthermore, we
presented an algorithm for identifying such linearly separable class utilizing
linear programming. We described three application cases including an
assumption of linear separability, Gaussian distribution, and the case of
linear separability in transformed space of kernel functions. Finally, we
demonstrated the work of the proposed algorithm on the USPS dataset and
analyzed the relationship of the performance of the algorithm and the size of
the initially labeled sample.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 05:00:28 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Bauman",
"Evgeny",
""
],
[
"Bauman",
"Konstantin",
""
]
] | TITLE: One-Class Semi-Supervised Learning: Detecting Linearly Separable Class
by its Mean
ABSTRACT: In this paper, we presented a novel semi-supervised one-class classification
algorithm which assumes that class is linearly separable from other elements.
We proved theoretically that class is linearly separable if and only if it is
maximal by probability within the sets with the same mean. Furthermore, we
presented an algorithm for identifying such linearly separable class utilizing
linear programming. We described three application cases including an
assumption of linear separability, Gaussian distribution, and the case of
linear separability in transformed space of kernel functions. Finally, we
demonstrated the work of the proposed algorithm on the USPS dataset and
analyzed the relationship of the performance of the algorithm and the size of
the initially labeled sample.
| no_new_dataset | 0.952838 |
1705.00823 | Yuya Yoshikawa | Yuya Yoshikawa, Yutaro Shigeto, Akikazu Takeuchi | STAIR Captions: Constructing a Large-Scale Japanese Image Caption
Dataset | Accepted as ACL2017 short paper. 5 pages | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, automatic generation of image descriptions (captions), that
is, image captioning, has attracted a great deal of attention. In this paper,
we particularly consider generating Japanese captions for images. Since most
available caption datasets have been constructed for English language, there
are few datasets for Japanese. To tackle this problem, we construct a
large-scale Japanese image caption dataset based on images from MS-COCO, which
is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions
for 164,062 images. In the experiment, we show that a neural network trained
using STAIR Captions can generate more natural and better Japanese captions,
compared to those generated using English-Japanese machine translation after
generating English captions.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 07:07:55 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Yoshikawa",
"Yuya",
""
],
[
"Shigeto",
"Yutaro",
""
],
[
"Takeuchi",
"Akikazu",
""
]
] | TITLE: STAIR Captions: Constructing a Large-Scale Japanese Image Caption
Dataset
ABSTRACT: In recent years, automatic generation of image descriptions (captions), that
is, image captioning, has attracted a great deal of attention. In this paper,
we particularly consider generating Japanese captions for images. Since most
available caption datasets have been constructed for English language, there
are few datasets for Japanese. To tackle this problem, we construct a
large-scale Japanese image caption dataset based on images from MS-COCO, which
is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions
for 164,062 images. In the experiment, we show that a neural network trained
using STAIR Captions can generate more natural and better Japanese captions,
compared to those generated using English-Japanese machine translation after
generating English captions.
| new_dataset | 0.957158 |
1705.00825 | Xiaokai Wei | Xiaokai Wei, Bokai Cao and Philip S. Yu | Multi-view Unsupervised Feature Selection by Cross-diffused Matrix
Alignment | 8 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view high-dimensional data become increasingly popular in the big data
era. Feature selection is a useful technique for alleviating the curse of
dimensionality in multi-view learning. In this paper, we study unsupervised
feature selection for multi-view data, as class labels are usually expensive to
obtain. Traditional feature selection methods are mostly designed for
single-view data and cannot fully exploit the rich information from multi-view
data. Existing multi-view feature selection methods are usually based on noisy
cluster labels which might not preserve sufficient information from multi-view
data. To better utilize multi-view information, we propose a method, CDMA-FS,
to select features for each view by performing alignment on a cross diffused
matrix. We formulate it as a constrained optimization problem and solve it
using Quasi-Newton based method. Experiments results on four real-world
datasets show that the proposed method is more effective than the
state-of-the-art methods in multi-view setting.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 07:12:59 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Wei",
"Xiaokai",
""
],
[
"Cao",
"Bokai",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Multi-view Unsupervised Feature Selection by Cross-diffused Matrix
Alignment
ABSTRACT: Multi-view high-dimensional data become increasingly popular in the big data
era. Feature selection is a useful technique for alleviating the curse of
dimensionality in multi-view learning. In this paper, we study unsupervised
feature selection for multi-view data, as class labels are usually expensive to
obtain. Traditional feature selection methods are mostly designed for
single-view data and cannot fully exploit the rich information from multi-view
data. Existing multi-view feature selection methods are usually based on noisy
cluster labels which might not preserve sufficient information from multi-view
data. To better utilize multi-view information, we propose a method, CDMA-FS,
to select features for each view by performing alignment on a cross diffused
matrix. We formulate it as a constrained optimization problem and solve it
using Quasi-Newton based method. Experiments results on four real-world
datasets show that the proposed method is more effective than the
state-of-the-art methods in multi-view setting.
| no_new_dataset | 0.948106 |
1705.00835 | Pichao Wang | Zewei Ding and Pichao Wang and Philip O. Ogunbona and Wanqing Li | Investigation of Different Skeleton Features for CNN-based 3D Action
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning techniques are being used in skeleton based action recognition
tasks and outstanding performance has been reported. Compared with RNN based
methods which tend to overemphasize temporal information, CNN-based approaches
can jointly capture spatio-temporal information from texture color images
encoded from skeleton sequences. There are several skeleton-based features that
have proven effective in RNN-based and handcrafted-feature-based methods.
However, it remains unknown whether they are suitable for CNN-based approaches.
This paper proposes to encode five spatial skeleton features into images with
different encoding methods. In addition, the performance implication of
different joints used for feature extraction is studied. The proposed method
achieved state-of-the-art performance on NTU RGB+D dataset for 3D human action
analysis. An accuracy of 75.32\% was achieved in Large Scale 3D Human Activity
Analysis Challenge in Depth Videos.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 07:42:35 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Ding",
"Zewei",
""
],
[
"Wang",
"Pichao",
""
],
[
"Ogunbona",
"Philip O.",
""
],
[
"Li",
"Wanqing",
""
]
] | TITLE: Investigation of Different Skeleton Features for CNN-based 3D Action
Recognition
ABSTRACT: Deep learning techniques are being used in skeleton based action recognition
tasks and outstanding performance has been reported. Compared with RNN based
methods which tend to overemphasize temporal information, CNN-based approaches
can jointly capture spatio-temporal information from texture color images
encoded from skeleton sequences. There are several skeleton-based features that
have proven effective in RNN-based and handcrafted-feature-based methods.
However, it remains unknown whether they are suitable for CNN-based approaches.
This paper proposes to encode five spatial skeleton features into images with
different encoding methods. In addition, the performance implication of
different joints used for feature extraction is studied. The proposed method
achieved state-of-the-art performance on NTU RGB+D dataset for 3D human action
analysis. An accuracy of 75.32\% was achieved in Large Scale 3D Human Activity
Analysis Challenge in Depth Videos.
| no_new_dataset | 0.94801 |
1705.00873 | Zhiyuan Shi | Zhiyuan Shi, Parthipan Siva, Tao Xiang | Transfer Learning by Ranking for Weakly Supervised Object Annotation | BMVC 2012 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing approaches to training object detectors rely on fully
supervised learning, which requires the tedious manual annotation of object
location in a training set. Recently there has been an increasing interest in
developing weakly supervised approach to detector training where the object
location is not manually annotated but automatically determined based on binary
(weak) labels indicating if a training image contains the object. This is a
challenging problem because each image can contain many candidate object
locations which partially overlaps the object of interest. Existing approaches
focus on how to best utilise the binary labels for object location annotation.
In this paper we propose to solve this problem from a very different
perspective by casting it as a transfer learning problem. Specifically, we
formulate a novel transfer learning based on learning to rank, which
effectively transfers a model for automatic annotation of object location from
an auxiliary dataset to a target dataset with completely unrelated object
categories. We show that our approach outperforms existing state-of-the-art
weakly supervised approach to annotating objects in the challenging VOC
dataset.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 09:23:27 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Shi",
"Zhiyuan",
""
],
[
"Siva",
"Parthipan",
""
],
[
"Xiang",
"Tao",
""
]
] | TITLE: Transfer Learning by Ranking for Weakly Supervised Object Annotation
ABSTRACT: Most existing approaches to training object detectors rely on fully
supervised learning, which requires the tedious manual annotation of object
location in a training set. Recently there has been an increasing interest in
developing weakly supervised approach to detector training where the object
location is not manually annotated but automatically determined based on binary
(weak) labels indicating if a training image contains the object. This is a
challenging problem because each image can contain many candidate object
locations which partially overlaps the object of interest. Existing approaches
focus on how to best utilise the binary labels for object location annotation.
In this paper we propose to solve this problem from a very different
perspective by casting it as a transfer learning problem. Specifically, we
formulate a novel transfer learning based on learning to rank, which
effectively transfers a model for automatic annotation of object location from
an auxiliary dataset to a target dataset with completely unrelated object
categories. We show that our approach outperforms existing state-of-the-art
weakly supervised approach to annotating objects in the challenging VOC
dataset.
| no_new_dataset | 0.947186 |
1705.00894 | Svitlana Vakulenko | Sebastian Neumaier, Vadim Savenkov and Svitlana Vakulenko | Talking Open Data | Accepted at ESWC2017 demo track | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enticing users into exploring Open Data remains an important challenge for
the whole Open Data paradigm. Standard stock interfaces often used by Open Data
portals are anything but inspiring even for tech-savvy users, let alone those
without an articulated interest in data science. To address a broader range of
citizens, we designed an open data search interface supporting natural language
interactions via popular platforms like Facebook and Skype. Our data-aware
chatbot answers search requests and suggests relevant open datasets, bringing
fun factor and a potential of viral dissemination into Open Data exploration.
The current system prototype is available for Facebook
(https://m.me/OpenDataAssistant) and Skype
(https://join.skype.com/bot/6db830ca-b365-44c4-9f4d-d423f728e741) users.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 10:35:12 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Neumaier",
"Sebastian",
""
],
[
"Savenkov",
"Vadim",
""
],
[
"Vakulenko",
"Svitlana",
""
]
] | TITLE: Talking Open Data
ABSTRACT: Enticing users into exploring Open Data remains an important challenge for
the whole Open Data paradigm. Standard stock interfaces often used by Open Data
portals are anything but inspiring even for tech-savvy users, let alone those
without an articulated interest in data science. To address a broader range of
citizens, we designed an open data search interface supporting natural language
interactions via popular platforms like Facebook and Skype. Our data-aware
chatbot answers search requests and suggests relevant open datasets, bringing
fun factor and a potential of viral dissemination into Open Data exploration.
The current system prototype is available for Facebook
(https://m.me/OpenDataAssistant) and Skype
(https://join.skype.com/bot/6db830ca-b365-44c4-9f4d-d423f728e741) users.
| no_new_dataset | 0.862004 |
1705.00949 | Christian Mostegel | Christian Mostegel and Rudolf Prettenthaler and Friedrich Fraundorfer
and Horst Bischof | Scalable Surface Reconstruction from Point Clouds with Extreme Scale and
Density Diversity | This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017. The copyright was transfered to IEEE
(ieee.org). The official version of the paper will be made available on IEEE
Xplore (R) (ieeexplore.ieee.org). This version of the paper also contains the
supplementary material, which will not appear IEEE Xplore (R) | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a scalable approach for robustly computing a 3D
surface mesh from multi-scale multi-view stereo point clouds that can handle
extreme jumps of point density (in our experiments three orders of magnitude).
The backbone of our approach is a combination of octree data partitioning,
local Delaunay tetrahedralization and graph cut optimization. Graph cut
optimization is used twice, once to extract surface hypotheses from local
Delaunay tetrahedralizations and once to merge overlapping surface hypotheses
even when the local tetrahedralizations do not share the same topology. This
formulation allows us to obtain a constant memory consumption per sub-problem
while at the same time retaining the density independent interpolation
properties of the Delaunay-based optimization. On multiple public datasets, we
demonstrate that our approach is highly competitive with the state-of-the-art
in terms of accuracy, completeness and outlier resilience. Further, we
demonstrate the multi-scale potential of our approach by processing a newly
recorded dataset with 2 billion points and a point density variation of more
than four orders of magnitude - requiring less than 9GB of RAM per process.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 13:13:47 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Mostegel",
"Christian",
""
],
[
"Prettenthaler",
"Rudolf",
""
],
[
"Fraundorfer",
"Friedrich",
""
],
[
"Bischof",
"Horst",
""
]
] | TITLE: Scalable Surface Reconstruction from Point Clouds with Extreme Scale and
Density Diversity
ABSTRACT: In this paper we present a scalable approach for robustly computing a 3D
surface mesh from multi-scale multi-view stereo point clouds that can handle
extreme jumps of point density (in our experiments three orders of magnitude).
The backbone of our approach is a combination of octree data partitioning,
local Delaunay tetrahedralization and graph cut optimization. Graph cut
optimization is used twice, once to extract surface hypotheses from local
Delaunay tetrahedralizations and once to merge overlapping surface hypotheses
even when the local tetrahedralizations do not share the same topology. This
formulation allows us to obtain a constant memory consumption per sub-problem
while at the same time retaining the density independent interpolation
properties of the Delaunay-based optimization. On multiple public datasets, we
demonstrate that our approach is highly competitive with the state-of-the-art
in terms of accuracy, completeness and outlier resilience. Further, we
demonstrate the multi-scale potential of our approach by processing a newly
recorded dataset with 2 billion points and a point density variation of more
than four orders of magnitude - requiring less than 9GB of RAM per process.
| new_dataset | 0.960025 |
1705.01089 | Sandipan Sikdar | Sandipan Sikdar, Matteo Marsili, Niloy Ganguly, Animesh Mukherjee | Influence of Reviewer Interaction Network on Long-term Citations: A Case
Study of the Scientific Peer-Review System of the Journal of High Energy
Physics | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A `peer-review system' in the context of judging research contributions, is
one of the prime steps undertaken to ensure the quality of the submissions
received, a significant portion of the publishing budget is spent towards
successful completion of the peer-review by the publication houses.
Nevertheless, the scientific community is largely reaching a consensus that
peer-review system, although indispensable, is nonetheless flawed. A very
pertinent question therefore is "could this system be improved?". In this
paper, we attempt to present an answer to this question by considering a
massive dataset of around $29k$ papers with roughly $70k$ distinct review
reports together consisting of $12m$ lines of review text from the Journal of
High Energy Physics (JHEP) between 1997 and 2015. In specific, we introduce a
novel \textit{reviewer-reviewer interaction network} (an edge exists between
two reviewers if they were assigned by the same editor) and show that
surprisingly the simple structural properties of this network such as degree,
clustering coefficient, centrality (closeness, betweenness etc.) serve as
strong predictors of the long-term citations (i.e., the overall scientific
impact) of a submitted paper. These features, when plugged in a regression
model, alone achieves a high $R^2$ of 0.79 and a low $RMSE$ of 0.496 in
predicting the long-term citations. In addition, we also design a set of
supporting features built from the basic characteristics of the submitted
papers, the authors and the referees (e.g., the popularity of the submitting
author, the acceptance rate history of a referee, the linguistic properties
laden in the text of the review reports etc.), which further results in overall
improvement with $R^2$ of 0.81 and $RMSE$ of 0.46.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 17:47:45 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Sikdar",
"Sandipan",
""
],
[
"Marsili",
"Matteo",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Mukherjee",
"Animesh",
""
]
] | TITLE: Influence of Reviewer Interaction Network on Long-term Citations: A Case
Study of the Scientific Peer-Review System of the Journal of High Energy
Physics
ABSTRACT: A `peer-review system' in the context of judging research contributions is
one of the prime steps undertaken to ensure the quality of the submissions
received, and a significant portion of the publishing budget is spent towards
successful completion of the peer-review by the publication houses.
Nevertheless, the scientific community is largely reaching a consensus that the
peer-review system, although indispensable, is nonetheless flawed. A very
pertinent question therefore is "could this system be improved?". In this
paper, we attempt to present an answer to this question by considering a
massive dataset of around $29k$ papers with roughly $70k$ distinct review
reports together consisting of $12m$ lines of review text from the Journal of
High Energy Physics (JHEP) between 1997 and 2015. Specifically, we introduce a
novel \textit{reviewer-reviewer interaction network} (an edge exists between
two reviewers if they were assigned by the same editor) and show that
surprisingly the simple structural properties of this network such as degree,
clustering coefficient, centrality (closeness, betweenness etc.) serve as
strong predictors of the long-term citations (i.e., the overall scientific
impact) of a submitted paper. These features, when plugged into a regression
model, alone achieve a high $R^2$ of 0.79 and a low $RMSE$ of 0.496 in
predicting the long-term citations. In addition, we also design a set of
supporting features built from the basic characteristics of the submitted
papers, the authors and the referees (e.g., the popularity of the submitting
author, the acceptance rate history of a referee, the linguistic properties
laden in the text of the review reports etc.), which further results in overall
improvement with $R^2$ of 0.81 and $RMSE$ of 0.46.
| no_new_dataset | 0.939025 |
1408.2803 | Jayadeva | Jayadeva | Learning a hyperplane classifier by minimizing an exact bound on the VC
dimension | Accepted Author Manuscript (Neurocomputing, Elsevier); 10 pages | Neurocomputing, Volume 149, Part B, 3 February 2015, Pages
683-689, ISSN 0925-2312, | 10.1016/j.neucom.2014.07.062 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The VC dimension measures the capacity of a learning machine, and a low VC
dimension leads to good generalization. While SVMs produce state-of-the-art
learning performance, it is well known that the VC dimension of a SVM can be
unbounded; despite good results in practice, there is no guarantee of good
generalization. In this paper, we show how to learn a hyperplane classifier by
minimizing an exact, or \boldmath{$\Theta$} bound on its VC dimension. The
proposed approach, termed as the Minimal Complexity Machine (MCM), involves
solving a simple linear programming problem. Experimental results show that, on
a number of benchmark datasets, the proposed approach learns classifiers with
error rates much less than conventional SVMs, while often using fewer support
vectors. On many benchmark datasets, the number of support vectors is less than
one-tenth the number used by SVMs, indicating that the MCM does indeed learn
simpler representations.
| [
{
"version": "v1",
"created": "Tue, 12 Aug 2014 18:57:48 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Aug 2014 16:32:30 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Jayadeva",
"",
""
]
] | TITLE: Learning a hyperplane classifier by minimizing an exact bound on the VC
dimension
ABSTRACT: The VC dimension measures the capacity of a learning machine, and a low VC
dimension leads to good generalization. While SVMs produce state-of-the-art
learning performance, it is well known that the VC dimension of a SVM can be
unbounded; despite good results in practice, there is no guarantee of good
generalization. In this paper, we show how to learn a hyperplane classifier by
minimizing an exact, or \boldmath{$\Theta$} bound on its VC dimension. The
proposed approach, termed as the Minimal Complexity Machine (MCM), involves
solving a simple linear programming problem. Experimental results show that, on
a number of benchmark datasets, the proposed approach learns classifiers with
error rates much less than conventional SVMs, while often using fewer support
vectors. On many benchmark datasets, the number of support vectors is less than
one-tenth the number used by SVMs, indicating that the MCM does indeed learn
simpler representations.
| no_new_dataset | 0.95297 |
1410.4573 | Jayadeva | Jayadeva, Suresh Chandra, Siddarth Sabharwal, and Sanjit S. Batra | Learning a hyperplane regressor by minimizing an exact bound on the VC
dimension | see
http://www.sciencedirect.com/science/article/pii/S0925231214010194 or
arXiv:1408.2803 for background information | Neurocomputing, Volume 171, 1 January 2016, Pages 1610-1616, ISSN
0925-2312 | 10.1016/j.neucom.2015.06.065 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The capacity of a learning machine is measured by its Vapnik-Chervonenkis
dimension, and learning machines with a low VC dimension generalize better. It
is well known that the VC dimension of SVMs can be very large or unbounded,
even though they generally yield state-of-the-art learning performance. In this
paper, we show how to learn a hyperplane regressor by minimizing an exact, or
\boldmath{$\Theta$} bound on its VC dimension. The proposed approach, termed as
the Minimal Complexity Machine (MCM) Regressor, involves solving a simple
linear programming problem. Experimental results show that, on a number of
benchmark datasets, the proposed approach yields regressors with error rates
much less than those obtained with conventional SVM regressors, while often
using fewer support vectors. On some benchmark datasets, the number of support
vectors is less than one tenth the number used by SVMs, indicating that the MCM
does indeed learn simpler representations.
| [
{
"version": "v1",
"created": "Thu, 16 Oct 2014 20:04:49 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Jayadeva",
"",
""
],
[
"Chandra",
"Suresh",
""
],
[
"Sabharwal",
"Siddarth",
""
],
[
"Batra",
"Sanjit S.",
""
]
] | TITLE: Learning a hyperplane regressor by minimizing an exact bound on the VC
dimension
ABSTRACT: The capacity of a learning machine is measured by its Vapnik-Chervonenkis
dimension, and learning machines with a low VC dimension generalize better. It
is well known that the VC dimension of SVMs can be very large or unbounded,
even though they generally yield state-of-the-art learning performance. In this
paper, we show how to learn a hyperplane regressor by minimizing an exact, or
\boldmath{$\Theta$} bound on its VC dimension. The proposed approach, termed as
the Minimal Complexity Machine (MCM) Regressor, involves solving a simple
linear programming problem. Experimental results show that, on a number of
benchmark datasets, the proposed approach yields regressors with error rates
much less than those obtained with conventional SVM regressors, while often
using fewer support vectors. On some benchmark datasets, the number of support
vectors is less than one tenth the number used by SVMs, indicating that the MCM
does indeed learn simpler representations.
| no_new_dataset | 0.952926 |
1608.03462 | Muhammet Bastan | Muhammet Bastan and Ozgur Yilmaz | Multi-View Product Image Search Using Deep ConvNets Representations | 13 pages, 16 figures | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view product image queries can improve retrieval performance over
single view queries significantly. In this paper, we investigated the
performance of deep convolutional neural networks (ConvNets) on multi-view
product image search. First, we trained a VGG-like network to learn deep
ConvNets representations of product images. Then, we computed the deep ConvNets
representations of database and query images and performed single view queries,
and multi-view queries using several early and late fusion approaches.
We performed extensive experiments on the publicly available Multi-View
Object Image Dataset (MVOD 5K) with both clean background queries from the
Internet and cluttered background queries from a mobile phone. We compared the
performance of ConvNets to the classical bag-of-visual-words (BoWs). We
concluded that (1) multi-view queries with deep ConvNets representations
perform significantly better than single view queries, (2) ConvNets perform
much better than BoWs and have room for further improvement, (3) pre-training
of ConvNets on a different image dataset with background clutter is needed to
obtain good performance on cluttered product image queries obtained with a
mobile phone.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 13:50:07 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2017 08:08:28 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Bastan",
"Muhammet",
""
],
[
"Yilmaz",
"Ozgur",
""
]
] | TITLE: Multi-View Product Image Search Using Deep ConvNets Representations
ABSTRACT: Multi-view product image queries can improve retrieval performance over
single view queries significantly. In this paper, we investigated the
performance of deep convolutional neural networks (ConvNets) on multi-view
product image search. First, we trained a VGG-like network to learn deep
ConvNets representations of product images. Then, we computed the deep ConvNets
representations of database and query images and performed single view queries,
and multi-view queries using several early and late fusion approaches.
We performed extensive experiments on the publicly available Multi-View
Object Image Dataset (MVOD 5K) with both clean background queries from the
Internet and cluttered background queries from a mobile phone. We compared the
performance of ConvNets to the classical bag-of-visual-words (BoWs). We
concluded that (1) multi-view queries with deep ConvNets representations
perform significantly better than single view queries, (2) ConvNets perform
much better than BoWs and have room for further improvement, (3) pre-training
of ConvNets on a different image dataset with background clutter is needed to
obtain good performance on cluttered product image queries obtained with a
mobile phone.
| no_new_dataset | 0.940898 |
1702.02212 | Tarek Sakakini | Tarek Sakakini, Suma Bhat, Pramod Viswanath | MORSE: Semantic-ally Drive-n MORpheme SEgment-er | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in this paper a novel framework for morpheme segmentation which
uses the morpho-syntactic regularities preserved by word representations, in
addition to orthographic features, to segment words into morphemes. This
framework is the first to consider vocabulary-wide syntactico-semantic
information for this task. We also analyze the deficiencies of available
benchmarking datasets and introduce our own dataset that was created on the
basis of compositionality. We validate our algorithm across datasets and
present state-of-the-art results.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2017 21:49:13 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2017 00:13:28 GMT"
},
{
"version": "v3",
"created": "Mon, 1 May 2017 12:36:34 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Sakakini",
"Tarek",
""
],
[
"Bhat",
"Suma",
""
],
[
"Viswanath",
"Pramod",
""
]
] | TITLE: MORSE: Semantic-ally Drive-n MORpheme SEgment-er
ABSTRACT: We present in this paper a novel framework for morpheme segmentation which
uses the morpho-syntactic regularities preserved by word representations, in
addition to orthographic features, to segment words into morphemes. This
framework is the first to consider vocabulary-wide syntactico-semantic
information for this task. We also analyze the deficiencies of available
benchmarking datasets and introduce our own dataset that was created on the
basis of compositionality. We validate our algorithm across datasets and
present state-of-the-art results.
| new_dataset | 0.951504 |
1704.06020 | Xun Yang | Xun Yang, Meng Wang, Richang Hong, Qi Tian, Yong Rui | Enhancing Person Re-identification in a Self-trained Subspace | Accepted by ACM Transactions on Multimedia Computing, Communications,
and Applications (TOMM) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the promising progress made in recent years, person re-identification
(re-ID) remains a challenging task due to the complex variations in human
appearances from different camera views. For this challenging problem, a large
variety of algorithms have been developed in the fully-supervised setting,
requiring access to a large amount of labeled training data. However, the main
bottleneck for fully-supervised re-ID is the limited availability of labeled
training samples. To address this problem, in this paper, we propose a
self-trained subspace learning paradigm for person re-ID which effectively
utilizes both labeled and unlabeled data to learn a discriminative subspace
where person images across disjoint camera views can be easily matched. The
proposed approach first constructs pseudo pairwise relationships among
unlabeled persons using the k-nearest neighbors algorithm. Then, with the
pseudo pairwise relationships, the unlabeled samples can be easily combined
with the labeled samples to learn a discriminative projection by solving an
eigenvalue problem. In addition, we refine the pseudo pairwise relationships
iteratively, which further improves the learning performance. A multi-kernel
embedding strategy is also incorporated into the proposed approach to cope with
the non-linearity in a person's appearance and explore the complementarity of
multiple kernels. In this way, the performance of person re-ID can be greatly
enhanced when training data are insufficient. Experimental results on six
widely-used datasets demonstrate the effectiveness of our approach and its
performance can be comparable to the reported results of most state-of-the-art
fully-supervised methods while using much fewer labeled data.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 05:43:05 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Apr 2017 00:28:52 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Yang",
"Xun",
""
],
[
"Wang",
"Meng",
""
],
[
"Hong",
"Richang",
""
],
[
"Tian",
"Qi",
""
],
[
"Rui",
"Yong",
""
]
] | TITLE: Enhancing Person Re-identification in a Self-trained Subspace
ABSTRACT: Despite the promising progress made in recent years, person re-identification
(re-ID) remains a challenging task due to the complex variations in human
appearances from different camera views. For this challenging problem, a large
variety of algorithms have been developed in the fully-supervised setting,
requiring access to a large amount of labeled training data. However, the main
bottleneck for fully-supervised re-ID is the limited availability of labeled
training samples. To address this problem, in this paper, we propose a
self-trained subspace learning paradigm for person re-ID which effectively
utilizes both labeled and unlabeled data to learn a discriminative subspace
where person images across disjoint camera views can be easily matched. The
proposed approach first constructs pseudo pairwise relationships among
unlabeled persons using the k-nearest neighbors algorithm. Then, with the
pseudo pairwise relationships, the unlabeled samples can be easily combined
with the labeled samples to learn a discriminative projection by solving an
eigenvalue problem. In addition, we refine the pseudo pairwise relationships
iteratively, which further improves the learning performance. A multi-kernel
embedding strategy is also incorporated into the proposed approach to cope with
the non-linearity in a person's appearance and explore the complementarity of
multiple kernels. In this way, the performance of person re-ID can be greatly
enhanced when training data are insufficient. Experimental results on six
widely-used datasets demonstrate the effectiveness of our approach and its
performance can be comparable to the reported results of most state-of-the-art
fully-supervised methods while using much fewer labeled data.
| no_new_dataset | 0.949623 |
1704.07028 | Yuan Li | Jie Xue, Yuan Li, Ravi Janardan | On the expected diameter, width, and complexity of a stochastic
convex-hull | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate several computational problems related to the stochastic
convex hull (SCH). Given a stochastic dataset consisting of $n$ points in
$\mathbb{R}^d$ each of which has an existence probability, a SCH refers to the
convex hull of a realization of the dataset, i.e., a random sample including
each point with its existence probability. We are interested in computing
certain expected statistics of a SCH, including diameter, width, and
combinatorial complexity. For diameter, we establish the first deterministic
1.633-approximation algorithm with a time complexity polynomial in both $n$ and
$d$. For width, two approximation algorithms are provided: a deterministic
$O(1)$-approximation running in $O(n^{d+1} \log n)$ time, and a fully
polynomial-time randomized approximation scheme (FPRAS). For combinatorial
complexity, we propose an exact $O(n^d)$-time algorithm. Our solutions exploit
many geometric insights in Euclidean space, some of which might be of
independent interest.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 03:33:24 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2017 05:36:14 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Xue",
"Jie",
""
],
[
"Li",
"Yuan",
""
],
[
"Janardan",
"Ravi",
""
]
] | TITLE: On the expected diameter, width, and complexity of a stochastic
convex-hull
ABSTRACT: We investigate several computational problems related to the stochastic
convex hull (SCH). Given a stochastic dataset consisting of $n$ points in
$\mathbb{R}^d$ each of which has an existence probability, a SCH refers to the
convex hull of a realization of the dataset, i.e., a random sample including
each point with its existence probability. We are interested in computing
certain expected statistics of a SCH, including diameter, width, and
combinatorial complexity. For diameter, we establish the first deterministic
1.633-approximation algorithm with a time complexity polynomial in both $n$ and
$d$. For width, two approximation algorithms are provided: a deterministic
$O(1)$-approximation running in $O(n^{d+1} \log n)$ time, and a fully
polynomial-time randomized approximation scheme (FPRAS). For combinatorial
complexity, we propose an exact $O(n^d)$-time algorithm. Our solutions exploit
many geometric insights in Euclidean space, some of which might be of
independent interest.
| no_new_dataset | 0.944382 |
1704.07502 | Yongliang Chen | Yongliang Chen | A Labeling-Free Approach to Supervising Deep Neural Networks for Retinal
Blood Vessel Segmentation | 10 pages, 8 figures, 3 tables, forbidden work, correct the citation
typo of [29] | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmenting blood vessels in fundus imaging plays an important role in medical
diagnosis. Many algorithms have been proposed. While deep Neural Networks have
been attracting enormous attention from the computer vision community in recent
years and several novel works have applied them to retinal blood vessel
segmentation, most of these are based on supervised learning, which requires a
large amount of labeled data that is both scarce and expensive to obtain.
We leverage the power of Deep Convolutional Neural Networks (DCNN) in feature
learning, in this work, to achieve this ultimate goal. The highly efficient
feature learning of DCNN inspires our novel approach that trains the networks
with automatically-generated samples to achieve desirable performance on
real-world fundus images. For this, we design a set of rules abstracted from
the domain-specific prior knowledge to generate these samples. We argue that,
with the high efficiency of DCNN in feature learning, one can achieve this goal
by constructing the training dataset with prior knowledge, no manual labeling
is needed. This approach allows us to take advantage of supervised learning
without labeling. We also build a naive DCNN model to test it. The results on
standard benchmarks of fundus imaging show it is competitive with the
state-of-the-art methods, which implies a potential way to leverage the power of
DCNN in feature learning.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 01:04:21 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2017 12:13:47 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Chen",
"Yongliang",
""
]
] | TITLE: A Labeling-Free Approach to Supervising Deep Neural Networks for Retinal
Blood Vessel Segmentation
ABSTRACT: Segmenting blood vessels in fundus imaging plays an important role in medical
diagnosis. Many algorithms have been proposed. While deep Neural Networks have
been attracting enormous attention from the computer vision community in recent
years and several novel works have applied them to retinal blood vessel
segmentation, most of these are based on supervised learning, which requires a
large amount of labeled data that is both scarce and expensive to obtain.
We leverage the power of Deep Convolutional Neural Networks (DCNN) in feature
learning, in this work, to achieve this ultimate goal. The highly efficient
feature learning of DCNN inspires our novel approach that trains the networks
with automatically-generated samples to achieve desirable performance on
real-world fundus images. For this, we design a set of rules abstracted from
the domain-specific prior knowledge to generate these samples. We argue that,
with the high efficiency of DCNN in feature learning, one can achieve this goal
by constructing the training dataset with prior knowledge, no manual labeling
is needed. This approach allows us to take advantage of supervised learning
without labeling. We also build a naive DCNN model to test it. The results on
standard benchmarks of fundus imaging show it is competitive with the
state-of-the-art methods, which implies a potential way to leverage the power of
DCNN in feature learning.
| no_new_dataset | 0.948298 |
1705.00108 | Matthew Peters | Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power | Semi-supervised sequence tagging with bidirectional language models | To appear in ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained word embeddings learned from unlabeled text have become a
standard component of neural network architectures for NLP tasks. However, in
most cases, the recurrent network that operates on word-level representations
to produce context sensitive representations is trained on relatively little
labeled data. In this paper, we demonstrate a general semi-supervised approach
for adding pre- trained context embeddings from bidirectional language models
to NLP systems and apply it to sequence labeling tasks. We evaluate our model
on two standard datasets for named entity recognition (NER) and chunking, and
in both cases achieve state of the art results, surpassing previous systems
that use other forms of transfer or joint learning with additional labeled data
and task specific gazetteers.
| [
{
"version": "v1",
"created": "Sat, 29 Apr 2017 01:13:04 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Peters",
"Matthew E.",
""
],
[
"Ammar",
"Waleed",
""
],
[
"Bhagavatula",
"Chandra",
""
],
[
"Power",
"Russell",
""
]
] | TITLE: Semi-supervised sequence tagging with bidirectional language models
ABSTRACT: Pre-trained word embeddings learned from unlabeled text have become a
standard component of neural network architectures for NLP tasks. However, in
most cases, the recurrent network that operates on word-level representations
to produce context sensitive representations is trained on relatively little
labeled data. In this paper, we demonstrate a general semi-supervised approach
for adding pre- trained context embeddings from bidirectional language models
to NLP systems and apply it to sequence labeling tasks. We evaluate our model
on two standard datasets for named entity recognition (NER) and chunking, and
in both cases achieve state of the art results, surpassing previous systems
that use other forms of transfer or joint learning with additional labeled data
and task specific gazetteers.
| no_new_dataset | 0.951908 |
1705.00301 | Marei Algarni Mr. | Marei Algarni and Ganesh Sundaramoorthi | SurfCut: Surfaces of Minimal Paths From Topological Structures | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SurfCut, an algorithm for extracting a smooth, simple surface with
an unknown 3D curve boundary from a noisy 3D image and a seed point. Our method
is built on the novel observation that certain ridge curves of a function
defined on a front propagated using the Fast Marching algorithm lie on the
surface. Our method extracts and cuts these ridges to form the surface
boundary. Our surface extraction algorithm is built on the novel observation
that the surface lies in a valley of the distance from Fast Marching. We show
that the resulting surface is a collection of minimal paths. Using the
framework of cubical complexes and Morse theory, we design algorithms to
extract these critical structures robustly. Experiments on three 3D datasets
show the robustness of our method, and that it achieves higher accuracy with
lower computational cost than state-of-the-art.
| [
{
"version": "v1",
"created": "Sun, 30 Apr 2017 11:56:51 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Algarni",
"Marei",
""
],
[
"Sundaramoorthi",
"Ganesh",
""
]
] | TITLE: SurfCut: Surfaces of Minimal Paths From Topological Structures
ABSTRACT: We present SurfCut, an algorithm for extracting a smooth, simple surface with
an unknown 3D curve boundary from a noisy 3D image and a seed point. Our method
is built on the novel observation that certain ridge curves of a function
defined on a front propagated using the Fast Marching algorithm lie on the
surface. Our method extracts and cuts these ridges to form the surface
boundary. Our surface extraction algorithm is built on the novel observation
that the surface lies in a valley of the distance from Fast Marching. We show
that the resulting surface is a collection of minimal paths. Using the
framework of cubical complexes and Morse theory, we design algorithms to
extract these critical structures robustly. Experiments on three 3D datasets
show the robustness of our method, and that it achieves higher accuracy with
lower computational cost than state-of-the-art.
| no_new_dataset | 0.948394 |
1705.00346 | Andre Luckow | Andre Luckow and Matthew Cook and Nathan Ashcraft and Edwin Weill and
Emil Djerekarov and Bennie Vorster | Deep Learning in the Automotive Industry: Applications and Tools | 10 pages | null | 10.1109/BigData.2016.7841045 | null | cs.LG cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Learning refers to a set of machine learning techniques that utilize
neural networks with many hidden layers for tasks, such as image
classification, speech recognition, language understanding. Deep learning has
been proven to be very effective in these domains and is pervasively used by
many Internet services. In this paper, we describe different automotive uses
cases for deep learning in particular in the domain of computer vision. We
surveys the current state-of-the-art in libraries, tools and infrastructures
(e.\,g.\ GPUs and clouds) for implementing, training and deploying deep neural
networks. We particularly focus on convolutional neural networks and computer
vision use cases, such as the visual inspection process in manufacturing plants
and the analysis of social media data. To train neural networks, curated and
labeled datasets are essential. In particular, both the availability and scope
of such datasets is typically very limited. A main contribution of this paper
is the creation of an automotive dataset that allows us to learn and
automatically recognize different vehicle properties. We describe an end-to-end
deep learning application utilizing a mobile app for data collection and
process support, and an Amazon-based cloud backend for storage and training.
For training we evaluate the use of cloud and on-premises infrastructures
(including multiple GPUs) in conjunction with different neural network
architectures and frameworks. We assess both the training times as well as the
accuracy of the classifier. Finally, we demonstrate the effectiveness of the
trained classifier in a real world setting during manufacturing process.
| [
{
"version": "v1",
"created": "Sun, 30 Apr 2017 17:17:44 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Luckow",
"Andre",
""
],
[
"Cook",
"Matthew",
""
],
[
"Ashcraft",
"Nathan",
""
],
[
"Weill",
"Edwin",
""
],
[
"Djerekarov",
"Emil",
""
],
[
"Vorster",
"Bennie",
""
]
] | TITLE: Deep Learning in the Automotive Industry: Applications and Tools
ABSTRACT: Deep Learning refers to a set of machine learning techniques that utilize
neural networks with many hidden layers for tasks, such as image
classification, speech recognition, language understanding. Deep learning has
been proven to be very effective in these domains and is pervasively used by
many Internet services. In this paper, we describe different automotive uses
cases for deep learning in particular in the domain of computer vision. We
surveys the current state-of-the-art in libraries, tools and infrastructures
(e.\,g.\ GPUs and clouds) for implementing, training and deploying deep neural
networks. We particularly focus on convolutional neural networks and computer
vision use cases, such as the visual inspection process in manufacturing plants
and the analysis of social media data. To train neural networks, curated and
labeled datasets are essential. In particular, both the availability and scope
of such datasets is typically very limited. A main contribution of this paper
is the creation of an automotive dataset that allows us to learn and
automatically recognize different vehicle properties. We describe an end-to-end
deep learning application utilizing a mobile app for data collection and
process support, and an Amazon-based cloud backend for storage and training.
For training we evaluate the use of cloud and on-premises infrastructures
(including multiple GPUs) in conjunction with different neural network
architectures and frameworks. We assess both the training times as well as the
accuracy of the classifier. Finally, we demonstrate the effectiveness of the
trained classifier in a real world setting during manufacturing process.
| new_dataset | 0.964456 |
1705.00366 | Danna Gurari | Danna Gurari and Kun He and Bo Xiong and Jianming Zhang and Mehrnoosh
Sameki and Suyog Dutt Jain and Stan Sclaroff and Margrit Betke and Kristen
Grauman | Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the
Segmentation(s) | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the ambiguity problem for the foreground object segmentation task
and motivate the importance of estimating and accounting for this ambiguity
when designing vision systems. Specifically, we distinguish between images
which lead multiple annotators to segment different foreground objects
(ambiguous) versus minor inter-annotator differences of the same object. Taking
images from eight widely used datasets, we crowdsource labeling the images as
"ambiguous" or "not ambiguous" to segment in order to construct a new dataset
we call STATIC. Using STATIC, we develop a system that automatically predicts
which images are ambiguous. Experiments demonstrate the advantage of our
prediction system over existing saliency-based methods on images from vision
benchmarks and images taken by blind people who are trying to recognize objects
in their environment. Finally, we introduce a crowdsourcing system to achieve
cost savings for collecting the diversity of all valid "ground truth"
foreground object segmentations by collecting extra segmentations only when
ambiguity is expected. Experiments show our system eliminates up to 47% of
human effort compared to existing crowdsourcing methods with no loss in
capturing the diversity of ground truths.
| [
{
"version": "v1",
"created": "Sun, 30 Apr 2017 19:27:30 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Gurari",
"Danna",
""
],
[
"He",
"Kun",
""
],
[
"Xiong",
"Bo",
""
],
[
"Zhang",
"Jianming",
""
],
[
"Sameki",
"Mehrnoosh",
""
],
[
"Jain",
"Suyog Dutt",
""
],
[
"Sclaroff",
"Stan",
""
],
[
"Betke",
"Margrit",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the
Segmentation(s)
ABSTRACT: We propose the ambiguity problem for the foreground object segmentation task
and motivate the importance of estimating and accounting for this ambiguity
when designing vision systems. Specifically, we distinguish between images
which lead multiple annotators to segment different foreground objects
(ambiguous) versus minor inter-annotator differences of the same object. Taking
images from eight widely used datasets, we crowdsource labeling the images as
"ambiguous" or "not ambiguous" to segment in order to construct a new dataset
we call STATIC. Using STATIC, we develop a system that automatically predicts
which images are ambiguous. Experiments demonstrate the advantage of our
prediction system over existing saliency-based methods on images from vision
benchmarks and images taken by blind people who are trying to recognize objects
in their environment. Finally, we introduce a crowdsourcing system to achieve
cost savings for collecting the diversity of all valid "ground truth"
foreground object segmentations by collecting extra segmentations only when
ambiguity is expected. Experiments show our system eliminates up to 47% of
human effort compared to existing crowdsourcing methods with no loss in
capturing the diversity of ground truths.
| new_dataset | 0.954647 |
1705.00399 | Natali Ruchansky | Natali Ruchansky and Mark Crovella and Evimaria Terzi | Matrix completion with queries | Proceedings of the 21th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining | null | 10.1145/2783258.2783259 | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications, e.g., recommender systems and traffic monitoring, the
data comes in the form of a matrix that is only partially observed and low
rank. A fundamental data-analysis task for these datasets is matrix completion,
where the goal is to accurately infer the entries missing from the matrix. Even
when the data satisfies the low-rank assumption, classical matrix-completion
methods may output completions with significant error -- in that the
reconstructed matrix differs significantly from the true underlying matrix.
Often, this is due to the fact that the information contained in the observed
entries is insufficient. In this work, we address this problem by proposing an
active version of matrix completion, where queries can be made to the true
underlying matrix. Subsequently, we design Order&Extend, which is the first
algorithm to unify a matrix-completion approach and a querying strategy into a
single algorithm. Order&Extend is able identify and alleviate insufficient
information by judiciously querying a small number of additional entries. In an
extensive experimental evaluation on real-world datasets, we demonstrate that
our algorithm is efficient and is able to accurately reconstruct the true
matrix while asking only a small number of queries.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 01:58:45 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Ruchansky",
"Natali",
""
],
[
"Crovella",
"Mark",
""
],
[
"Terzi",
"Evimaria",
""
]
] | TITLE: Matrix completion with queries
ABSTRACT: In many applications, e.g., recommender systems and traffic monitoring, the
data comes in the form of a matrix that is only partially observed and low
rank. A fundamental data-analysis task for these datasets is matrix completion,
where the goal is to accurately infer the entries missing from the matrix. Even
when the data satisfies the low-rank assumption, classical matrix-completion
methods may output completions with significant error -- in that the
reconstructed matrix differs significantly from the true underlying matrix.
Often, this is due to the fact that the information contained in the observed
entries is insufficient. In this work, we address this problem by proposing an
active version of matrix completion, where queries can be made to the true
underlying matrix. Subsequently, we design Order&Extend, which is the first
algorithm to unify a matrix-completion approach and a querying strategy into a
single algorithm. Order&Extend is able to identify and alleviate insufficient
information by judiciously querying a small number of additional entries. In an
extensive experimental evaluation on real-world datasets, we demonstrate that
our algorithm is efficient and is able to accurately reconstruct the true
matrix while asking only a small number of queries.
| no_new_dataset | 0.940298 |
1705.00415 | Travis Gagie | Leo Ferres, Jos\'e Fuentes-Sep\'ulveda, Travis Gagie, Meng He and
Gonzalo Navarro | Parallel Construction of Compact Planar Embeddings | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sheer sizes of modern datasets are forcing data-structure designers to
consider seriously both parallel construction and compactness. To achieve those
goals we need to design a parallel algorithm with good scalability and with low
memory consumption. An algorithm with good scalability improves its performance
when the number of available cores increases, and an algorithm with low memory
consumption uses memory proportional to the space used by the dataset in
uncompact form. In this work, we discuss the engineering of a parallel
algorithm with linear work and logarithmic span for the construction of the
compact representation of planar embeddings. We also provide an experimental
study of our implementation and prove experimentally that it has good
scalability and low memory consumption. Additionally, we describe and test
experimentally queries supported by the compact representation.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 03:50:09 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Ferres",
"Leo",
""
],
[
"Fuentes-Sepúlveda",
"José",
""
],
[
"Gagie",
"Travis",
""
],
[
"He",
"Meng",
""
],
[
"Navarro",
"Gonzalo",
""
]
] | TITLE: Parallel Construction of Compact Planar Embeddings
ABSTRACT: The sheer sizes of modern datasets are forcing data-structure designers to
consider seriously both parallel construction and compactness. To achieve those
goals we need to design a parallel algorithm with good scalability and with low
memory consumption. An algorithm with good scalability improves its performance
when the number of available cores increases, and an algorithm with low memory
consumption uses memory proportional to the space used by the dataset in
uncompact form. In this work, we discuss the engineering of a parallel
algorithm with linear work and logarithmic span for the construction of the
compact representation of planar embeddings. We also provide an experimental
study of our implementation and prove experimentally that it has good
scalability and low memory consumption. Additionally, we describe and test
experimentally queries supported by the compact representation.
| no_new_dataset | 0.947186 |
1705.00462 | Francisco Paisana Francisco Paisana | Ahmed Selim, Francisco Paisana, Jerome A. Arokkiam, Yi Zhang, Linda
Doyle, Luiz A. DaSilva | Spectrum Monitoring for Radar Bands using Deep Convolutional Neural
Networks | 7 pages, 10 figures, conference | null | null | null | cs.NI cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a spectrum monitoring framework for the detection
of radar signals in spectrum sharing scenarios. The core of our framework is a
deep convolutional neural network (CNN) model that enables Measurement Capable
Devices to identify the presence of radar signals in the radio spectrum, even
when these signals are overlapped with other sources of interference, such as
commercial LTE and WLAN. We collected a large dataset of RF measurements, which
include the transmissions of multiple radar pulse waveforms, downlink LTE,
WLAN, and thermal noise. We propose a pre-processing data representation that
leverages the amplitude and phase shifts of the collected samples. This
representation allows our CNN model to achieve a classification accuracy of
99.6% on our testing dataset. The trained CNN model is then tested under
various SNR values, outperforming other models, such as spectrogram-based CNN
models.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 10:37:43 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Selim",
"Ahmed",
""
],
[
"Paisana",
"Francisco",
""
],
[
"Arokkiam",
"Jerome A.",
""
],
[
"Zhang",
"Yi",
""
],
[
"Doyle",
"Linda",
""
],
[
"DaSilva",
"Luiz A.",
""
]
] | TITLE: Spectrum Monitoring for Radar Bands using Deep Convolutional Neural
Networks
ABSTRACT: In this paper, we present a spectrum monitoring framework for the detection
of radar signals in spectrum sharing scenarios. The core of our framework is a
deep convolutional neural network (CNN) model that enables Measurement Capable
Devices to identify the presence of radar signals in the radio spectrum, even
when these signals are overlapped with other sources of interference, such as
commercial LTE and WLAN. We collected a large dataset of RF measurements, which
include the transmissions of multiple radar pulse waveforms, downlink LTE,
WLAN, and thermal noise. We propose a pre-processing data representation that
leverages the amplitude and phase shifts of the collected samples. This
representation allows our CNN model to achieve a classification accuracy of
99.6% on our testing dataset. The trained CNN model is then tested under
various SNR values, outperforming other models, such as spectrogram-based CNN
models.
| new_dataset | 0.962988 |
1705.00534 | Bo Li | Bo Li, Yuchao Dai, Huahui Chen, Mingyi He | Single image depth estimation by dilated deep residual convolutional
neural network and soft-weight-sum inference | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new residual convolutional neural network (CNN)
architecture for single image depth estimation. Compared with existing deep CNN
based methods, our method achieves much better results with fewer training
examples and model parameters. The advantages of our method come from the usage
of dilated convolution, skip connection architecture and soft-weight-sum
inference. Experimental evaluation on the NYU Depth V2 dataset shows that our
method outperforms other state-of-the-art methods by a margin.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 06:07:05 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"Dai",
"Yuchao",
""
],
[
"Chen",
"Huahui",
""
],
[
"He",
"Mingyi",
""
]
] | TITLE: Single image depth estimation by dilated deep residual convolutional
neural network and soft-weight-sum inference
ABSTRACT: This paper proposes a new residual convolutional neural network (CNN)
architecture for single image depth estimation. Compared with existing deep CNN
based methods, our method achieves much better results with fewer training
examples and model parameters. The advantages of our method come from the usage
of dilated convolution, skip connection architecture and soft-weight-sum
inference. Experimental evaluation on the NYU Depth V2 dataset shows that our
method outperforms other state-of-the-art methods by a margin.
| no_new_dataset | 0.952264 |
1705.00548 | Reza Farahbakhsh | Reza Farahbakhsh, Angel Cuevas, Ruben Cuevas, Roberto Gonzalez, Noel
Crespi | Understanding the evolution of multimedia content in the Internet
through BitTorrent glasses | Farahbakhsh, Reza, et al. "Understanding the evolution of multimedia
content in the internet through bittorrent glasses." IEEE Network 27.6
(2013): 80-88 | IEEE Network 27.6 (2013): 80-88 | null | null | cs.NI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's Internet traffic is mostly dominated by multimedia content and the
prediction is that this trend will intensify in the future. Therefore, main
Internet players, such as ISPs, content delivery platforms (e.g. Youtube,
BitTorrent, Netflix, etc.) or CDN operators, need to understand the evolution of
multimedia content availability and popularity in order to adapt their
infrastructures and resources to satisfy clients requirements while they
minimize their costs. This paper presents a thorough analysis on the evolution
of multimedia content available in BitTorrent. Specifically, we analyze the
evolution of four relevant metrics across different content categories: content
availability, content popularity, content size and user's feedback. To this end
we leverage a large-scale dataset formed by 4 snapshots collected from the most
popular BitTorrent portal, namely The Pirate Bay, between Nov. 2009 and Feb.
2012. Overall, our dataset is formed by more than 160k content items that
attracted more than 185M download sessions.
| [
{
"version": "v1",
"created": "Mon, 1 May 2017 14:45:00 GMT"
}
] | 2017-05-02T00:00:00 | [
[
"Farahbakhsh",
"Reza",
""
],
[
"Cuevas",
"Angel",
""
],
[
"Cuevas",
"Ruben",
""
],
[
"Gonzalez",
"Roberto",
""
],
[
"Crespi",
"Noel",
""
]
] | TITLE: Understanding the evolution of multimedia content in the Internet
through BitTorrent glasses
ABSTRACT: Today's Internet traffic is mostly dominated by multimedia content and the
prediction is that this trend will intensify in the future. Therefore, main
Internet players, such as ISPs, content delivery platforms (e.g. Youtube,
BitTorrent, Netflix, etc.) or CDN operators, need to understand the evolution of
multimedia content availability and popularity in order to adapt their
infrastructures and resources to satisfy clients requirements while they
minimize their costs. This paper presents a thorough analysis on the evolution
of multimedia content available in BitTorrent. Specifically, we analyze the
evolution of four relevant metrics across different content categories: content
availability, content popularity, content size and user's feedback. To this end
we leverage a large-scale dataset formed by 4 snapshots collected from the most
popular BitTorrent portal, namely The Pirate Bay, between Nov. 2009 and Feb.
2012. Overall, our dataset is formed by more than 160k content items that
attracted more than 185M download sessions.
| new_dataset | 0.962497 |
1601.05613 | Boyue Wang | Boyue Wang and Yongli Hu and Junbin Gao and Yanfeng Sun and Baocai Yin | Partial Sum Minimization of Singular Values Representation on Grassmann
Manifolds | Submitting to ACM Transactions on Knowledge Discovery from Data with
minor revision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a significant subspace clustering method, low rank representation (LRR)
has attracted great attention in recent years. To further improve the
performance of LRR and extend its applications, there are several issues to be
resolved. The nuclear norm in LRR does not sufficiently use the prior knowledge
of the rank which is known in many practical problems. The LRR is designed for
vectorial data from linear spaces, thus not suitable for high dimensional data
with intrinsic non-linear manifold structure. This paper proposes an extended
LRR model for manifold-valued Grassmann data which incorporates prior knowledge
by minimizing partial sum of singular values instead of the nuclear norm,
namely Partial Sum minimization of Singular Values Representation (GPSSVR). The
new model not only enforces the global structure of data in low rank, but also
retains important information by minimizing only smaller singular values. To
further maintain the local structures among Grassmann points, we also integrate
the Laplacian penalty with GPSSVR. An effective algorithm is proposed to solve
the optimization problem based on the GPSSVR model. The proposed model and
algorithms are assessed on some widely used human action video datasets and a
real scenery dataset. The experimental results show that the proposed methods
obviously outperform other state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 12:47:17 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jan 2016 01:57:28 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Apr 2017 03:19:27 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Wang",
"Boyue",
""
],
[
"Hu",
"Yongli",
""
],
[
"Gao",
"Junbin",
""
],
[
"Sun",
"Yanfeng",
""
],
[
"Yin",
"Baocai",
""
]
] | TITLE: Partial Sum Minimization of Singular Values Representation on Grassmann
Manifolds
ABSTRACT: As a significant subspace clustering method, low rank representation (LRR)
has attracted great attention in recent years. To further improve the
performance of LRR and extend its applications, there are several issues to be
resolved. The nuclear norm in LRR does not sufficiently use the prior knowledge
of the rank which is known in many practical problems. The LRR is designed for
vectorial data from linear spaces, thus not suitable for high dimensional data
with intrinsic non-linear manifold structure. This paper proposes an extended
LRR model for manifold-valued Grassmann data which incorporates prior knowledge
by minimizing partial sum of singular values instead of the nuclear norm,
namely Partial Sum minimization of Singular Values Representation (GPSSVR). The
new model not only enforces the global structure of data in low rank, but also
retains important information by minimizing only smaller singular values. To
further maintain the local structures among Grassmann points, we also integrate
the Laplacian penalty with GPSSVR. An effective algorithm is proposed to solve
the optimization problem based on the GPSSVR model. The proposed model and
algorithms are assessed on some widely used human action video datasets and a
real scenery dataset. The experimental results show that the proposed methods
obviously outperform other state-of-the-art methods.
| no_new_dataset | 0.946101 |
1611.02049 | Michal Nowicki | Micha{\l} Nowicki and Jan Wietrzykowski | Low-effort place recognition with WiFi fingerprints using deep learning | null | null | 10.1007/978-3-319-54042-9_57 | null | cs.RO cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using WiFi signals for indoor localization is the main localization modality
of the existing personal indoor localization systems operating on mobile
devices. WiFi fingerprinting is also used for mobile robots, as WiFi signals
are usually available indoors and can provide rough initial position estimate
or can be used together with other positioning systems. Currently, the best
solutions rely on filtering, manual data analysis, and time-consuming parameter
tuning to achieve reliable and accurate localization. In this work, we propose
to use deep neural networks to significantly lower the work-force burden of the
localization system design, while still achieving satisfactory results.
Assuming the state-of-the-art hierarchical approach, we employ the DNN system
for building/floor classification. We show that stacked autoencoders allow to
efficiently reduce the feature space in order to achieve robust and precise
classification. The proposed architecture is verified on the publicly available
UJIIndoorLoc dataset and the results are compared with other solutions.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 13:47:25 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2017 06:32:25 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Nowicki",
"Michał",
""
],
[
"Wietrzykowski",
"Jan",
""
]
] | TITLE: Low-effort place recognition with WiFi fingerprints using deep learning
ABSTRACT: Using WiFi signals for indoor localization is the main localization modality
of the existing personal indoor localization systems operating on mobile
devices. WiFi fingerprinting is also used for mobile robots, as WiFi signals
are usually available indoors and can provide rough initial position estimate
or can be used together with other positioning systems. Currently, the best
solutions rely on filtering, manual data analysis, and time-consuming parameter
tuning to achieve reliable and accurate localization. In this work, we propose
to use deep neural networks to significantly lower the work-force burden of the
localization system design, while still achieving satisfactory results.
Assuming the state-of-the-art hierarchical approach, we employ the DNN system
for building/floor classification. We show that stacked autoencoders allow to
efficiently reduce the feature space in order to achieve robust and precise
classification. The proposed architecture is verified on the publicly available
UJIIndoorLoc dataset and the results are compared with other solutions.
| no_new_dataset | 0.953057 |
1611.02054 | Michal Nowicki | Jan Wietrzykowski and Micha{\l} Nowicki and Piotr Skrzypczy\'nski | Adopting the FAB-MAP algorithm for indoor localization with WiFi
fingerprints | null | null | 10.1007/978-3-319-54042-9_58 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personal indoor localization is usually accomplished by fusing information
from various sensors. A common choice is to use the WiFi adapter that provides
information about Access Points that can be found in the vicinity.
Unfortunately, state-of-the-art approaches to WiFi-based localization often
employ very dense maps of the WiFi signal distribution, and require a
time-consuming process of parameter selection. On the other hand, camera images
are commonly used for visual place recognition, detecting whenever the user
observes a scene similar to the one already recorded in a database. Visual
place recognition algorithms can work with sparse databases of recorded scenes
and are in general simple to parametrize. Therefore, we propose a WiFi-based
global localization method employing the structure of the well-known FAB-MAP
visual place recognition algorithm. Similarly to FAB-MAP our method uses
Chow-Liu trees to estimate a joint probability distribution of re-observation
of a place given a set of features extracted at places visited so far. However,
we are the first who apply this idea to recorded WiFi scans instead of visual
words. The new method is evaluated on the UJIIndoorLoc dataset used in the
EvAAL competition, allowing fair comparison with other solutions.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 13:55:35 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2017 06:29:48 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Wietrzykowski",
"Jan",
""
],
[
"Nowicki",
"Michał",
""
],
[
"Skrzypczyński",
"Piotr",
""
]
] | TITLE: Adopting the FAB-MAP algorithm for indoor localization with WiFi
fingerprints
ABSTRACT: Personal indoor localization is usually accomplished by fusing information
from various sensors. A common choice is to use the WiFi adapter that provides
information about Access Points that can be found in the vicinity.
Unfortunately, state-of-the-art approaches to WiFi-based localization often
employ very dense maps of the WiFi signal distribution, and require a
time-consuming process of parameter selection. On the other hand, camera images
are commonly used for visual place recognition, detecting whenever the user
observes a scene similar to the one already recorded in a database. Visual
place recognition algorithms can work with sparse databases of recorded scenes
and are in general simple to parametrize. Therefore, we propose a WiFi-based
global localization method employing the structure of the well-known FAB-MAP
visual place recognition algorithm. Similarly to FAB-MAP our method uses
Chow-Liu trees to estimate a joint probability distribution of re-observation
of a place given a set of features extracted at places visited so far. However,
we are the first who apply this idea to recorded WiFi scans instead of visual
words. The new method is evaluated on the UJIIndoorLoc dataset used in the
EvAAL competition, allowing fair comparison with other solutions.
| no_new_dataset | 0.949153 |
1701.04658 | Kevis-Kokitsi Maninis | Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Pablo Arbel\'aez and
Luc Van Gool | Convolutional Oriented Boundaries: From Image Segmentation to High-Level
Tasks | Accepted by T-PAMI. Extended version of "Convolutional Oriented
Boundaries", ECCV 2016 (arXiv:1608.02755). Project page:
http://www.vision.ee.ethz.ch/~cvlsegmentation/cob/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Convolutional Oriented Boundaries (COB), which produces multiscale
oriented contours and region hierarchies starting from generic image
classification Convolutional Neural Networks (CNNs). COB is computationally
efficient, because it requires a single CNN forward pass for multi-scale
contour detection and it uses a novel sparse boundary representation for
hierarchical segmentation; it gives a significant leap in performance over the
state-of-the-art, and it generalizes very well to unseen categories and
datasets. Particularly, we show that learning to estimate not only contour
strength but also orientation provides more accurate results. We perform
extensive experiments for low-level applications on BSDS, PASCAL Context,
PASCAL Segmentation, and NYUD to evaluate boundary detection performance,
showing that COB provides state-of-the-art contours and region hierarchies in
all datasets. We also evaluate COB on high-level tasks when coupled with
multiple pipelines for object proposals, semantic contours, semantic
segmentation, and object detection on MS-COCO, SBD, and PASCAL; showing that
COB also improves the results for all tasks.
| [
{
"version": "v1",
"created": "Tue, 17 Jan 2017 13:04:33 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2017 17:08:42 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Maninis",
"Kevis-Kokitsi",
""
],
[
"Pont-Tuset",
"Jordi",
""
],
[
"Arbeláez",
"Pablo",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Convolutional Oriented Boundaries: From Image Segmentation to High-Level
Tasks
ABSTRACT: We present Convolutional Oriented Boundaries (COB), which produces multiscale
oriented contours and region hierarchies starting from generic image
classification Convolutional Neural Networks (CNNs). COB is computationally
efficient, because it requires a single CNN forward pass for multi-scale
contour detection and it uses a novel sparse boundary representation for
hierarchical segmentation; it gives a significant leap in performance over the
state-of-the-art, and it generalizes very well to unseen categories and
datasets. Particularly, we show that learning to estimate not only contour
strength but also orientation provides more accurate results. We perform
extensive experiments for low-level applications on BSDS, PASCAL Context,
PASCAL Segmentation, and NYUD to evaluate boundary detection performance,
showing that COB provides state-of-the-art contours and region hierarchies in
all datasets. We also evaluate COB on high-level tasks when coupled with
multiple pipelines for object proposals, semantic contours, semantic
segmentation, and object detection on MS-COCO, SBD, and PASCAL; showing that
COB also improves the results for all tasks.
| no_new_dataset | 0.950732 |
1703.01333 | Steve Jan | Steve T.K. Jan and Chun Wang and Qing Zhang and Gang Wang | Towards Monetary Incentives in Social Q&A Services | null | null | null | null | cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community-based question answering (CQA) services are facing key challenges
to motivate domain experts to provide timely answers. Recently, CQA services
are exploring new incentive models to engage experts and celebrities by
allowing them to set a price on their answers. In this paper, we perform a
data-driven analysis on two emerging payment-based CQA systems: Fenda (China)
and Whale (US). By analyzing a large dataset of 220K questions (worth 1 million
USD collectively), we examine how monetary incentives affect different players
in the system. We find that, while monetary incentive enables quick answers
from experts, it also drives certain users to aggressively game the system for
profits. In addition, in this supplier-driven marketplace, users need to
proactively adjust their price to make profits. Famous people are unwilling to
lower their price, which in turn hurts their income and engagement over time.
Finally, we discuss the key implications to future CQA design.
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 20:36:38 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2017 01:48:18 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Jan",
"Steve T. K.",
""
],
[
"Wang",
"Chun",
""
],
[
"Zhang",
"Qing",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Towards Monetary Incentives in Social Q&A Services
ABSTRACT: Community-based question answering (CQA) services are facing key challenges
to motivate domain experts to provide timely answers. Recently, CQA services
are exploring new incentive models to engage experts and celebrities by
allowing them to set a price on their answers. In this paper, we perform a
data-driven analysis on two emerging payment-based CQA systems: Fenda (China)
and Whale (US). By analyzing a large dataset of 220K questions (worth 1 million
USD collectively), we examine how monetary incentives affect different players
in the system. We find that, while monetary incentive enables quick answers
from experts, it also drives certain users to aggressively game the system for
profits. In addition, in this supplier-driven marketplace, users need to
proactively adjust their price to make profits. Famous people are unwilling to
lower their price, which in turn hurts their income and engagement over time.
Finally, we discuss the key implications to future CQA design.
| no_new_dataset | 0.946498 |
1704.00051 | Danqi Chen | Danqi Chen, Adam Fisch, Jason Weston and Antoine Bordes | Reading Wikipedia to Answer Open-Domain Questions | ACL2017, 10 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes to tackle open-domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 20:39:10 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2017 03:53:14 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Chen",
"Danqi",
""
],
[
"Fisch",
"Adam",
""
],
[
"Weston",
"Jason",
""
],
[
"Bordes",
"Antoine",
""
]
] | TITLE: Reading Wikipedia to Answer Open-Domain Questions
ABSTRACT: This paper proposes to tackle open-domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task.
| no_new_dataset | 0.952131 |
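The retrieval component described above combines bigram hashing with TF-IDF matching before machine comprehension is applied. A minimal scikit-learn sketch of that idea follows; the hash size, the toy corpus, and the cosine-style scoring are illustrative assumptions rather than the paper's exact configuration.

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import linear_kernel

# Hypothetical miniature corpus standing in for Wikipedia articles.
docs = ["paris is the capital and most populous city of france",
        "a neural network is a model inspired by biological neurons"]

# Hash unigrams and bigrams into a fixed-size space, then apply TF-IDF weighting.
hasher = HashingVectorizer(ngram_range=(1, 2), n_features=2**20, alternate_sign=False)
tfidf = TfidfTransformer()
doc_matrix = tfidf.fit_transform(hasher.transform(docs))

def retrieve(question, k=1):
    q = tfidf.transform(hasher.transform([question]))
    scores = linear_kernel(q, doc_matrix).ravel()
    return scores.argsort()[::-1][:k]   # indices of the top-k candidate articles

print(retrieve("what is the capital of france"))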
1704.08723 | Chenliang Xu | Chenliang Xu, Caiming Xiong and Jason J. Corso | Action Understanding with Multiple Classes of Actors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the rapid progress, existing works on action understanding focus
strictly on one type of action agent, which we call actor---a human adult,
ignoring the diversity of actions performed by other actors. To overcome this
narrow viewpoint, our paper marks the first effort in the computer vision
community to jointly consider algorithmic understanding of various types of
actors undergoing various actions. To begin with, we collect a large annotated
Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally
untrimmed long videos. We formulate the general actor-action understanding
problem and instantiate it at various granularities: video-level single- and
multiple-label actor-action recognition, and pixel-level actor-action
segmentation. We propose and examine a comprehensive set of graphical models
that consider the various types of interplay among actors and actions. Our
findings have led us to conclusive evidence that the joint modeling of actor
and action improves performance over modeling each of them independently, and
further improvement can be obtained by considering the multi-scale natural in
video understanding. Hence, our paper concludes the argument of the value of
explicit consideration of various actors in comprehensive action understanding
and provides a dataset and a benchmark for later works exploring this new
problem.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 19:20:50 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Xu",
"Chenliang",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Corso",
"Jason J.",
""
]
] | TITLE: Action Understanding with Multiple Classes of Actors
ABSTRACT: Despite the rapid progress, existing works on action understanding focus
strictly on one type of action agent, which we call actor---a human adult,
ignoring the diversity of actions performed by other actors. To overcome this
narrow viewpoint, our paper marks the first effort in the computer vision
community to jointly consider algorithmic understanding of various types of
actors undergoing various actions. To begin with, we collect a large annotated
Actor-Action Dataset (A2D) that consists of 3782 short videos and 31 temporally
untrimmed long videos. We formulate the general actor-action understanding
problem and instantiate it at various granularities: video-level single- and
multiple-label actor-action recognition, and pixel-level actor-action
segmentation. We propose and examine a comprehensive set of graphical models
that consider the various types of interplay among actors and actions. Our
findings have led us to conclusive evidence that the joint modeling of actor
and action improves performance over modeling each of them independently, and
further improvement can be obtained by considering the multi-scale nature of
video understanding. Hence, our paper concludes with an argument for the value of
explicit consideration of various actors in comprehensive action understanding
and provides a dataset and a benchmark for later works exploring this new
problem.
| new_dataset | 0.954265 |
1704.08729 | Dorian Cazau | Dorian Cazau and Yuancheng Wang and Olivier Adam and Qiao Wang and
Gr\'egory Nuel | Calibration of a two-state pitch-wise HMM method for note segmentation
in Automatic Music Transcription systems | null | null | null | null | stat.ME cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many methods for automatic music transcription involve a multi-pitch
estimation method that estimates an activity score for each pitch. A second
processing step, called note segmentation, has to be performed for each pitch
in order to identify the time intervals when the notes are played. In this
study, a pitch-wise two-state on/off first-order Hidden Markov Model (HMM) is
developed for note segmentation. A complete parametrization of the HMM sigmoid
function is proposed, based on its original regression formulation, including a
parameter alpha for slope smoothing and a parameter beta for thresholding contrast. A
comparative evaluation of different note segmentation strategies was performed,
differentiated according to whether they use a fixed threshold, called "Hard
Thresholding" (HT), or a HMM-based thresholding method, called "Soft
Thresholding" (ST). This evaluation was done following MIREX standards and
using the MAPS dataset. Also, different transcription scenarios and recording
natures were tested using three units of the Degradation toolbox. Results show
that note segmentation through a HMM soft thresholding with a data-based
optimization of the {alpha,beta} parameter couple significantly enhances
transcription performance.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 19:48:09 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Cazau",
"Dorian",
""
],
[
"Wang",
"Yuancheng",
""
],
[
"Adam",
"Olivier",
""
],
[
"Wang",
"Qiao",
""
],
[
"Nuel",
"Grégory",
""
]
] | TITLE: Calibration of a two-state pitch-wise HMM method for note segmentation
in Automatic Music Transcription systems
ABSTRACT: Many methods for automatic music transcription involve a multi-pitch
estimation method that estimates an activity score for each pitch. A second
processing step, called note segmentation, has to be performed for each pitch
in order to identify the time intervals when the notes are played. In this
study, a pitch-wise two-state on/off first-order Hidden Markov Model (HMM) is
developed for note segmentation. A complete parametrization of the HMM sigmoid
function is proposed, based on its original regression formulation, including a
parameter alpha for slope smoothing and a parameter beta for thresholding contrast. A
comparative evaluation of different note segmentation strategies was performed,
differentiated according to whether they use a fixed threshold, called "Hard
Thresholding" (HT), or a HMM-based thresholding method, called "Soft
Thresholding" (ST). This evaluation was done following MIREX standards and
using the MAPS dataset. Also, different transcription scenarios and recording
natures were tested using three units of the Degradation toolbox. Results show
that note segmentation through a HMM soft thresholding with a data-based
optimization of the {alpha,beta} parameter couple significantly enhances
transcription performance.
| no_new_dataset | 0.950088 |
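The note-segmentation step above maps a pitch activity curve through a sigmoid parameterized by alpha (slope) and beta (threshold) and decodes an on/off sequence with a two-state HMM. Below is a minimal sketch of that soft thresholding; the default parameter values and the fixed self-transition probability are assumptions for illustration, not the calibrated values from the study.

```python
import numpy as np

def note_segmentation(activity, alpha=20.0, beta=0.5, p_stay=0.9):
    """Soft-threshold a pitch activity curve with a 2-state (off/on) HMM via Viterbi decoding."""
    p_on = 1.0 / (1.0 + np.exp(-alpha * (activity - beta)))   # sigmoid emission probability
    emis = np.stack([1.0 - p_on, p_on])                       # shape (2, T)
    trans = np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]])
    T = len(activity)
    logd, logt = np.log(emis + 1e-12), np.log(trans)
    dp = np.zeros((2, T))
    ptr = np.zeros((2, T), dtype=int)
    dp[:, 0] = np.log(0.5) + logd[:, 0]
    for t in range(1, T):
        for s in (0, 1):
            cand = dp[:, t - 1] + logt[:, s]
            ptr[s, t] = cand.argmax()
            dp[s, t] = cand.max() + logd[s, t]
    states = np.zeros(T, dtype=int)
    states[-1] = dp[:, -1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = ptr[states[t + 1], t + 1]
    return states   # 1 where the note is considered "on"

print(note_segmentation(np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1])))
```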
1704.08740 | Mahdi Kalayeh | Mahdi M. Kalayeh, Boqing Gong, Mubarak Shah | Improving Facial Attribute Prediction using Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attributes are semantically meaningful characteristics whose applicability
widely crosses category boundaries. They are particularly important in
describing and recognizing concepts where no explicit training example is
given, \textit{e.g., zero-shot learning}. Additionally, since attributes are
human describable, they can be used for efficient human-computer interaction.
In this paper, we propose to employ semantic segmentation to improve facial
attribute prediction. The core idea lies in the fact that many facial
attributes describe local properties. In other words, the probability of an
attribute to appear in a face image is far from being uniform in the spatial
domain. We build our facial attribute prediction model jointly with a deep
semantic segmentation network. This harnesses the localization cues learned by
the semantic segmentation to guide the attention of the attribute prediction to
the regions where different attributes naturally show up. As a result of this
approach, in addition to recognition, we are able to localize the attributes,
despite merely having access to image level labels (weak supervision) during
training. We evaluate our proposed method on CelebA and LFWA datasets and
achieve superior results to prior art. Furthermore, we show that in the
reverse problem, semantic face parsing improves when facial attributes are
available. That reaffirms the need to jointly model these two interconnected
tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 20:41:50 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Kalayeh",
"Mahdi M.",
""
],
[
"Gong",
"Boqing",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Improving Facial Attribute Prediction using Semantic Segmentation
ABSTRACT: Attributes are semantically meaningful characteristics whose applicability
widely crosses category boundaries. They are particularly important in
describing and recognizing concepts where no explicit training example is
given, \textit{e.g., zero-shot learning}. Additionally, since attributes are
human describable, they can be used for efficient human-computer interaction.
In this paper, we propose to employ semantic segmentation to improve facial
attribute prediction. The core idea lies in the fact that many facial
attributes describe local properties. In other words, the probability of an
attribute to appear in a face image is far from being uniform in the spatial
domain. We build our facial attribute prediction model jointly with a deep
semantic segmentation network. This harnesses the localization cues learned by
the semantic segmentation to guide the attention of the attribute prediction to
the regions where different attributes naturally show up. As a result of this
approach, in addition to recognition, we are able to localize the attributes,
despite merely having access to image level labels (weak supervision) during
training. We evaluate our proposed method on CelebA and LFWA datasets and
achieve superior results to prior art. Furthermore, we show that in the
reverse problem, semantic face parsing improves when facial attributes are
available. That reaffirms the need to jointly model these two interconnected
tasks.
| no_new_dataset | 0.947962 |
1704.08759 | Shichao Yang | Shichao Yang, Sandeep Konam, Chen Ma, Stephanie Rosenthal, Manuela
Veloso, Sebastian Scherer | Obstacle Avoidance through Deep Networks based Intermediate Perception | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obstacle avoidance from monocular images is a challenging problem for robots.
Though multi-view structure-from-motion could build 3D maps, it is not robust
in textureless environments. Some learning based methods exploit human
demonstration to predict a steering command directly from a single image.
However, this method is usually biased towards certain tasks or demonstration
scenarios and also biased by human understanding. In this paper, we propose a
new method to predict a trajectory from images. We train our system on more
diverse NYUv2 dataset. The ground truth trajectory is computed from the
designed cost functions automatically. The Convolutional Neural Network
perception is divided into two stages: first, predict depth map and surface
normal from RGB images, which are two important geometric properties related to
3D obstacle representation. Second, predict the trajectory from the depth and
normal. Results show that our intermediate perception increases the accuracy by
20% compared to direct prediction. Our model generalizes well to other public
indoor datasets and is also demonstrated for robot flights in simulation and
experiments.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 21:55:07 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Yang",
"Shichao",
""
],
[
"Konam",
"Sandeep",
""
],
[
"Ma",
"Chen",
""
],
[
"Rosenthal",
"Stephanie",
""
],
[
"Veloso",
"Manuela",
""
],
[
"Scherer",
"Sebastian",
""
]
] | TITLE: Obstacle Avoidance through Deep Networks based Intermediate Perception
ABSTRACT: Obstacle avoidance from monocular images is a challenging problem for robots.
Though multi-view structure-from-motion could build 3D maps, it is not robust
in textureless environments. Some learning based methods exploit human
demonstration to predict a steering command directly from a single image.
However, this method is usually biased towards certain tasks or demonstration
scenarios and also biased by human understanding. In this paper, we propose a
new method to predict a trajectory from images. We train our system on the more
diverse NYUv2 dataset. The ground truth trajectory is computed from the
designed cost functions automatically. The Convolutional Neural Network
perception is divided into two stages: first, predict depth map and surface
normal from RGB images, which are two important geometric properties related to
3D obstacle representation. Second, predict the trajectory from the depth and
normal. Results show that our intermediate perception increases the accuracy by
20% compared to direct prediction. Our model generalizes well to other public
indoor datasets and is also demonstrated for robot flights in simulation and
experiments.
| no_new_dataset | 0.952175 |
1704.08763 | Erroll Wood | Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter
Robinson, Andreas Bulling | GazeDirector: Fully Articulated Eye Gaze Redirection in Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present GazeDirector, a new approach for eye gaze redirection that uses
model-fitting. Our method first tracks the eyes by fitting a multi-part eye
region model to video frames using analysis-by-synthesis, thereby recovering
eye region shape, texture, pose, and gaze simultaneously. It then redirects
gaze by 1) warping the eyelids from the original image using a model-derived
flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the
output image in a photorealistic manner. GazeDirector allows us to change where
people are looking without person-specific training data, and with full
articulation, i.e. we can precisely specify new gaze directions in 3D.
Quantitatively, we evaluate both model-fitting and gaze synthesis, with
experiments for gaze estimation and redirection on the Columbia gaze dataset.
Qualitatively, we compare GazeDirector against recent work on gaze redirection,
showing better results especially for large redirection angles. Finally, we
demonstrate gaze redirection on YouTube videos by introducing new 3D gaze
targets and by manipulating visual behavior.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 22:23:53 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Wood",
"Erroll",
""
],
[
"Baltrusaitis",
"Tadas",
""
],
[
"Morency",
"Louis-Philippe",
""
],
[
"Robinson",
"Peter",
""
],
[
"Bulling",
"Andreas",
""
]
] | TITLE: GazeDirector: Fully Articulated Eye Gaze Redirection in Video
ABSTRACT: We present GazeDirector, a new approach for eye gaze redirection that uses
model-fitting. Our method first tracks the eyes by fitting a multi-part eye
region model to video frames using analysis-by-synthesis, thereby recovering
eye region shape, texture, pose, and gaze simultaneously. It then redirects
gaze by 1) warping the eyelids from the original image using a model-derived
flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the
output image in a photorealistic manner. GazeDirector allows us to change where
people are looking without person-specific training data, and with full
articulation, i.e. we can precisely specify new gaze directions in 3D.
Quantitatively, we evaluate both model-fitting and gaze synthesis, with
experiments for gaze estimation and redirection on the Columbia gaze dataset.
Qualitatively, we compare GazeDirector against recent work on gaze redirection,
showing better results especially for large redirection angles. Finally, we
demonstrate gaze redirection on YouTube videos by introducing new 3D gaze
targets and by manipulating visual behavior.
| no_new_dataset | 0.953101 |
1704.08812 | Xiaoyong Shen | Xiaoyong Shen, Ruixing Wang, Hengshuang Zhao, Jiaya Jia | Automatic Real-time Background Cut for Portrait Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we solve the problem of high-quality automatic real-time
background cut for 720p portrait videos. We first handle the background
ambiguity issue in semantic segmentation by proposing a global background
attenuation model. A spatial-temporal refinement network is developed to
further refine the segmentation errors in each frame and ensure temporal
coherence in the segmentation map. We form an end-to-end network for training
and testing. Each module is designed considering efficiency and accuracy. We
build a portrait dataset, which includes 8,000 images with high-quality labeled
map for training and testing. To further improve the performance, we build a
portrait video dataset with 50 sequences to fine-tune video segmentation. Our
framework benefits many video processing applications.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 05:29:34 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Shen",
"Xiaoyong",
""
],
[
"Wang",
"Ruixing",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Jia",
"Jiaya",
""
]
] | TITLE: Automatic Real-time Background Cut for Portrait Videos
ABSTRACT: In this paper we solve the problem of high-quality automatic real-time
background cut for 720p portrait videos. We first handle the background
ambiguity issue in semantic segmentation by proposing a global background
attenuation model. A spatial-temporal refinement network is developed to
further refine the segmentation errors in each frame and ensure temporal
coherence in the segmentation map. We form an end-to-end network for training
and testing. Each module is designed considering efficiency and accuracy. We
build a portrait dataset, which includes 8,000 images with high-quality labeled
map for training and testing. To further improve the performance, we build a
portrait video dataset with 50 sequences to fine-tune video segmentation. Our
framework benefits many video processing applications.
| new_dataset | 0.950824 |
1704.08818 | Yong Xia | Benteng Ma, Yong Xia | A Tribe Competition-Based Genetic Algorithm for Feature Selection in
Pattern Classification | null | null | 10.1016/j.asoc.2017.04.042 | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection has always been a critical step in pattern recognition, in
which evolutionary algorithms, such as the genetic algorithm (GA), are most
commonly used. However, the individual encoding scheme used in various GAs
would either pose a bias on the solution or require a pre-specified number of
features, and hence may lead to less accurate results. In this paper, a tribe
competition-based genetic algorithm (TCbGA) is proposed for feature selection
in pattern classification. The population of individuals is divided into
multiple tribes, and the initialization and evolutionary operations are
modified to ensure that the number of selected features in each tribe follows a
Gaussian distribution. Thus each tribe focuses on exploring a specific part of
the solution space. Meanwhile, tribe competition is introduced to the evolution
process, which allows the winning tribes, which produce better individuals, to
enlarge their sizes, i.e. having more individuals to search their parts of the
solution space. This algorithm, therefore, avoids the bias on solutions and
requirement of a pre-specified number of features. We have evaluated our
algorithm against several state-of-the-art feature selection approaches on 20
benchmark datasets. Our results suggest that the proposed TCbGA algorithm can
identify the optimal feature subset more effectively and produce more accurate
pattern classification.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 06:25:50 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Ma",
"Benteng",
""
],
[
"Xia",
"Yong",
""
]
] | TITLE: A Tribe Competition-Based Genetic Algorithm for Feature Selection in
Pattern Classification
ABSTRACT: Feature selection has always been a critical step in pattern recognition, in
which evolutionary algorithms, such as the genetic algorithm (GA), are most
commonly used. However, the individual encoding scheme used in various GAs
would either pose a bias on the solution or require a pre-specified number of
features, and hence may lead to less accurate results. In this paper, a tribe
competition-based genetic algorithm (TCbGA) is proposed for feature selection
in pattern classification. The population of individuals is divided into
multiple tribes, and the initialization and evolutionary operations are
modified to ensure that the number of selected features in each tribe follows a
Gaussian distribution. Thus each tribe focuses on exploring a specific part of
the solution space. Meanwhile, tribe competition is introduced to the evolution
process, which allows the winning tribes, which produce better individuals, to
enlarge their sizes, i.e. having more individuals to search their parts of the
solution space. This algorithm, therefore, avoids the bias on solutions and
requirement of a pre-specified number of features. We have evaluated our
algorithm against several state-of-the-art feature selection approaches on 20
benchmark datasets. Our results suggest that the proposed TCbGA algorithm can
identify the optimal feature subset more effectively and produce more accurate
pattern classification.
| no_new_dataset | 0.950411 |
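The tribe-structured initialization described above constrains the number of selected features within each tribe to follow a Gaussian around a tribe-specific mean, so that different tribes explore different parts of the solution space. A small sketch of such an initialization is given below; the tribe counts, tribe sizes, standard deviation and spacing of tribe centres are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def init_tribes(n_features, n_tribes=5, tribe_size=20, sigma=2.0, rng=None):
    """Build a tribe-structured GA population of binary feature-selection chromosomes."""
    rng = rng or np.random.default_rng(0)
    # Spread tribe centres over the range of possible feature counts.
    means = np.linspace(1, n_features, n_tribes + 2)[1:-1]
    population = []
    for mean in means:
        tribe = []
        for _ in range(tribe_size):
            # Number of selected features ~ Gaussian around the tribe centre.
            k = int(np.clip(round(rng.normal(mean, sigma)), 1, n_features))
            chrom = np.zeros(n_features, dtype=int)
            chrom[rng.choice(n_features, size=k, replace=False)] = 1
            tribe.append(chrom)
        population.append(np.array(tribe))
    return population   # list of (tribe_size, n_features) binary arrays

tribes = init_tribes(n_features=30)
print([t.sum(axis=1).mean() for t in tribes])   # mean feature count per tribe
```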
1704.08853 | Tieyun Qian | Bei Liu, Tieyun Qian, Bing Liu, Liang Hong, Zhenni You, Yuxiang Li | Learning Spatiotemporal-Aware Representation for POI Recommendation | null | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The wide spread of location-based social networks brings about a huge volume
of user check-in data, which facilitates the recommendation of points of
interest (POIs). Recent advances on distributed representation shed light on
learning low dimensional dense vectors to alleviate the data sparsity problem.
Current studies on representation learning for POI recommendation embed both
users and POIs in a common latent space, and users' preference is inferred
based on the distance/similarity between a user and a POI. Such an approach is
not in accordance with the semantics of users and POIs as they are inherently
different objects. In this paper, we present a novel spatiotemporal aware (STA)
representation, which models the spatial and temporal information as \emph{a
relationship connecting users and POIs}. Our model generalizes the recent
advances in knowledge graph embedding. The basic idea is that the embedding of
a $<$time, location$>$ pair corresponds to a translation from embeddings of
users to POIs. Since the POI embedding should be close to the user embedding
plus the relationship vector, the recommendation can be performed by selecting
the top-\emph{k} POIs similar to the translated POI, which are all of the same
type of objects. We conduct extensive experiments on two real-world datasets.
The results demonstrate that our STA model achieves the state-of-the-art
performance in terms of high recommendation accuracy, robustness to data
sparsity and effectiveness in handling cold start problem.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 09:01:01 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Liu",
"Bei",
""
],
[
"Qian",
"Tieyun",
""
],
[
"Liu",
"Bing",
""
],
[
"Hong",
"Liang",
""
],
[
"You",
"Zhenni",
""
],
[
"Li",
"Yuxiang",
""
]
] | TITLE: Learning Spatiotemporal-Aware Representation for POI Recommendation
ABSTRACT: The wide spread of location-based social networks brings about a huge volume
of user check-in data, which facilitates the recommendation of points of
interest (POIs). Recent advances on distributed representation shed light on
learning low dimensional dense vectors to alleviate the data sparsity problem.
Current studies on representation learning for POI recommendation embed both
users and POIs in a common latent space, and users' preference is inferred
based on the distance/similarity between a user and a POI. Such an approach is
not in accordance with the semantics of users and POIs as they are inherently
different objects. In this paper, we present a novel spatiotemporal aware (STA)
representation, which models the spatial and temporal information as \emph{a
relationship connecting users and POIs}. Our model generalizes the recent
advances in knowledge graph embedding. The basic idea is that the embedding of
a $<$time, location$>$ pair corresponds to a translation from embeddings of
users to POIs. Since the POI embedding should be close to the user embedding
plus the relationship vector, the recommendation can be performed by selecting
the top-\emph{k} POIs similar to the translated POI, which are all of the same
type of objects. We conduct extensive experiments on two real-world datasets.
The results demonstrate that our STA model achieves the state-of-the-art
performance in terms of high recommendation accuracy, robustness to data
sparsity and effectiveness in handling cold start problem.
| no_new_dataset | 0.949153 |
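The STA model above ranks a candidate POI by how well a translation of the user embedding by the <time, location> relation lands on the POI embedding, in the spirit of TransE-style knowledge-graph embeddings. The snippet below sketches only that scoring and ranking step; it assumes pre-trained embeddings, and the dimensionality and L2 distance are illustrative choices.

```python
import numpy as np

def recommend(user_vec, rel_vec, poi_matrix, k=5):
    """Rank POIs by distance to user + relation(<time, location>) in embedding space."""
    target = user_vec + rel_vec                        # translated query point
    dist = np.linalg.norm(poi_matrix - target, axis=1)
    return np.argsort(dist)[:k]                        # indices of the top-k POIs

d, n_pois = 64, 1000
rng = np.random.default_rng(1)
top = recommend(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(n_pois, d)))
print(top)
```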
1704.08881 | Christian Eggert | Christian Eggert, Dan Zecha, Stephan Brehm, Rainer Lienhart | Improving Small Object Proposals for Company Logo Detection | 8 Pages, ICMR 2017 | null | 10.1145/3078971.3078990 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many modern approaches for object detection are two-staged pipelines. The
first stage identifies regions of interest which are then classified in the
second stage. Faster R-CNN is such an approach for object detection which
combines both stages into a single pipeline. In this paper we apply Faster
R-CNN to the task of company logo detection. Motivated by its weak performance
on small object instances, we examine in detail both the proposal and the
classification stage with respect to a wide range of object sizes. We
investigate the influence of feature map resolution on the performance of those
stages.
Based on theoretical considerations, we introduce an improved scheme for
generating anchor proposals and propose a modification to Faster R-CNN which
leverages higher-resolution feature maps for small objects. We evaluate our
approach on the FlickrLogos dataset improving the RPN performance from 0.52 to
0.71 (MABO) and the detection performance from 0.52 to 0.67 (mAP).
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 11:30:10 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Eggert",
"Christian",
""
],
[
"Zecha",
"Dan",
""
],
[
"Brehm",
"Stephan",
""
],
[
"Lienhart",
"Rainer",
""
]
] | TITLE: Improving Small Object Proposals for Company Logo Detection
ABSTRACT: Many modern approaches for object detection are two-staged pipelines. The
first stage identifies regions of interest which are then classified in the
second stage. Faster R-CNN is such an approach for object detection which
combines both stages into a single pipeline. In this paper we apply Faster
R-CNN to the task of company logo detection. Motivated by its weak performance
on small object instances, we examine in detail both the proposal and the
classification stage with respect to a wide range of object sizes. We
investigate the influence of feature map resolution on the performance of those
stages.
Based on theoretical considerations, we introduce an improved scheme for
generating anchor proposals and propose a modification to Faster R-CNN which
leverages higher-resolution feature maps for small objects. We evaluate our
approach on the FlickrLogos dataset improving the RPN performance from 0.52 to
0.71 (MABO) and the detection performance from 0.52 to 0.67 (mAP).
| no_new_dataset | 0.949856 |
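The proposal stage discussed above depends on how densely anchors cover small logo instances, which in turn depends on the feature-map stride and the anchor scales. The sketch below generates a plain anchor grid to illustrate that trade-off; the strides, scales and aspect ratios are generic Faster R-CNN-style defaults, not the improved scheme's exact parameters.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride, scales=(16, 32, 64), ratios=(0.5, 1.0, 2.0)):
    """Dense anchor grid over a feature map: smaller strides/scales give finer coverage."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride   # anchor centre in image coords
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

# A higher-resolution feature map (small stride) with small scales for small objects.
small_obj_anchors = generate_anchors(feat_h=64, feat_w=64, stride=8, scales=(8, 16, 32))
print(small_obj_anchors.shape)
```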
1704.08950 | Amit Kumar | Amit Kumar, Rahul Dutta, Harbhajan Rai | Intelligent Personal Assistant with Knowledge Navigation | Converted O(N3) solution to viable O(N) solution | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An Intelligent Personal Agent (IPA) is an agent that has the purpose of
helping the user to gain information through reliable resources with the help
of knowledge navigation techniques and saving time to search the best content.
The agent is also responsible for responding to the chat-based queries with the
help of Conversation Corpus. We will be testing different methods for optimal
query generation. To facilitate ease of use of the application, the agent
will be able to accept the input through Text (Keyboard), Voice (Speech
Recognition) and Server (Facebook) and output responses using the same method.
Existing chat bots reply by making changes in the input, but we will give
responses based on multiple SRT files. The model will learn using the human
dialogs dataset and will be able to respond in a human-like manner. Responses to queries about
famous things (places, people, and words) can be provided using web scraping
which will enable the bot to have knowledge navigation features. The agent will
even learn from its past experiences supporting semi-supervised learning.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 14:26:12 GMT"
}
] | 2017-05-01T00:00:00 | [
[
"Kumar",
"Amit",
""
],
[
"Dutta",
"Rahul",
""
],
[
"Rai",
"Harbhajan",
""
]
] | TITLE: Intelligent Personal Assistant with Knowledge Navigation
ABSTRACT: An Intelligent Personal Agent (IPA) is an agent that has the purpose of
helping the user to gain information through reliable resources with the help
of knowledge navigation techniques and saving time to search the best content.
The agent is also responsible for responding to the chat-based queries with the
help of Conversation Corpus. We will be testing different methods for optimal
query generation. To facilitate ease of use of the application, the agent
will be able to accept the input through Text (Keyboard), Voice (Speech
Recognition) and Server (Facebook) and output responses using the same method.
Existing chat bots reply by making changes in the input, but we will give
responses based on multiple SRT files. The model will learn using the human
dialogs dataset and will be able to respond in a human-like manner. Responses to queries about
famous things (places, people, and words) can be provided using web scraping
which will enable the bot to have knowledge navigation features. The agent will
even learn from its past experiences supporting semi-supervised learning.
| no_new_dataset | 0.938688 |
1512.09049 | Chenliang Xu | Chenliang Xu and Jason J. Corso | LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing | In Review at International Journal of Computer Vision | Int J Comput Vis (2016) 119: 272 | 10.1007/s11263-016-0906-5 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervoxel segmentation has strong potential to be incorporated into early
video analysis as superpixel segmentation has in image analysis. However, there
are many plausible supervoxel methods and little understanding as to when and
where each is most appropriate. Indeed, we are not aware of a single
comparative study on supervoxel segmentation. To that end, we study seven
supervoxel algorithms, including both off-line and streaming methods, in the
context of what we consider to be a good supervoxel: namely, spatiotemporal
uniformity, object/region boundary detection, region compression and parsimony.
For the evaluation we propose a comprehensive suite of seven quality metrics to
measure these desirable supervoxel characteristics. In addition, we evaluate
the methods in a supervoxel classification task as a proxy for subsequent
high-level uses of the supervoxels in video analysis. We use six existing
benchmark video datasets with a variety of content-types and dense human
annotations. Our findings have led us to conclusive evidence that the
hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and
temporal superpixels (TSP) methods are the top-performers among the seven
methods. They all perform well in terms of segmentation accuracy, but vary in
regard to the other desiderata: GBH captures object boundaries best; SWA has
the best potential for region compression; and TSP achieves the best
undersegmentation error.
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2015 18:25:19 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Xu",
"Chenliang",
""
],
[
"Corso",
"Jason J.",
""
]
] | TITLE: LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing
ABSTRACT: Supervoxel segmentation has strong potential to be incorporated into early
video analysis as superpixel segmentation has in image analysis. However, there
are many plausible supervoxel methods and little understanding as to when and
where each is most appropriate. Indeed, we are not aware of a single
comparative study on supervoxel segmentation. To that end, we study seven
supervoxel algorithms, including both off-line and streaming methods, in the
context of what we consider to be a good supervoxel: namely, spatiotemporal
uniformity, object/region boundary detection, region compression and parsimony.
For the evaluation we propose a comprehensive suite of seven quality metrics to
measure these desirable supervoxel characteristics. In addition, we evaluate
the methods in a supervoxel classification task as a proxy for subsequent
high-level uses of the supervoxels in video analysis. We use six existing
benchmark video datasets with a variety of content-types and dense human
annotations. Our findings have led us to conclusive evidence that the
hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and
temporal superpixels (TSP) methods are the top-performers among the seven
methods. They all perform well in terms of segmentation accuracy, but vary in
regard to the other desiderata: GBH captures object boundaries best; SWA has
the best potential for region compression; and TSP achieves the best
undersegmentation error.
| no_new_dataset | 0.939582 |
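One of the supervoxel quality criteria evaluated above is undersegmentation error. Below is a minimal 2-D sketch of the classic formulation (for each ground-truth region, the total area of overlapping supervoxels minus the region's area, normalised by the region's area); the library itself evaluates this and the other metrics on full 3-D video volumes.

```python
import numpy as np

def undersegmentation_error(sv_labels, gt_labels):
    """Classic undersegmentation error of a supervoxel labelling against ground truth."""
    err = 0.0
    gt_ids = np.unique(gt_labels)
    for g in gt_ids:
        region = gt_labels == g
        overlapping = np.unique(sv_labels[region])
        # Area of all supervoxels touching the region, minus the region itself ("leakage").
        leak = sum(np.count_nonzero(sv_labels == s) for s in overlapping) - region.sum()
        err += leak / region.sum()
    return err / len(gt_ids)

sv = np.random.randint(0, 50, (64, 64))
gt = np.random.randint(0, 4, (64, 64))
print(undersegmentation_error(sv, gt))
```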
1604.00494 | Phi Vu Tran | Phi Vu Tran | A Fully Convolutional Neural Network for Cardiac Segmentation in
Short-Axis MRI | Initial Technical Report; Include link to models and code | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated cardiac segmentation from magnetic resonance imaging datasets is an
essential step in the timely diagnosis and management of cardiac pathologies.
We propose to tackle the problem of automated left and right ventricle
segmentation through the application of a deep fully convolutional neural
network architecture. Our model is efficiently trained end-to-end in a single
learning stage from whole-image inputs and ground truths to make inference at
every pixel. To our knowledge, this is the first application of a fully
convolutional neural network architecture for pixel-wise labeling in cardiac
magnetic resonance imaging. Numerical experiments demonstrate that our model is
robust enough to outperform previous fully automated methods across multiple
evaluation measures on a range of cardiac datasets. Moreover, our model is fast
and can leverage commodity compute resources such as the graphics processing
unit to enable state-of-the-art cardiac segmentation at massive scales. The
models and code are available at
https://github.com/vuptran/cardiac-segmentation
| [
{
"version": "v1",
"created": "Sat, 2 Apr 2016 12:32:55 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2017 10:11:34 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Apr 2017 03:04:26 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Tran",
"Phi Vu",
""
]
] | TITLE: A Fully Convolutional Neural Network for Cardiac Segmentation in
Short-Axis MRI
ABSTRACT: Automated cardiac segmentation from magnetic resonance imaging datasets is an
essential step in the timely diagnosis and management of cardiac pathologies.
We propose to tackle the problem of automated left and right ventricle
segmentation through the application of a deep fully convolutional neural
network architecture. Our model is efficiently trained end-to-end in a single
learning stage from whole-image inputs and ground truths to make inference at
every pixel. To our knowledge, this is the first application of a fully
convolutional neural network architecture for pixel-wise labeling in cardiac
magnetic resonance imaging. Numerical experiments demonstrate that our model is
robust enough to outperform previous fully automated methods across multiple
evaluation measures on a range of cardiac datasets. Moreover, our model is fast
and can leverage commodity compute resources such as the graphics processing
unit to enable state-of-the-art cardiac segmentation at massive scales. The
models and code are available at
https://github.com/vuptran/cardiac-segmentation
| no_new_dataset | 0.951504 |
1610.01239 | Qinglong Wang | Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II,
Xinyu Xing, C. Lee Giles, Xue Liu | Adversary Resistant Deep Neural Networks with an Application to Malware
Detection | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beyond its highly publicized victories in Go, there have been numerous
successful applications of deep learning in information retrieval, computer
vision and speech recognition. In cybersecurity, an increasing number of
companies have become excited about the potential of deep learning, and have
started to use it for various security incidents, the most popular being
malware detection. These companies assert that deep learning (DL) could help
turn the tide in the battle against malware infections. However, deep neural
networks (DNNs) are vulnerable to adversarial samples, a flaw that plagues most
if not all statistical learning models. Recent research has demonstrated that
those with malicious intent can easily circumvent deep learning-powered malware
detection by exploiting this flaw.
In order to address this problem, previous work has developed various defense
mechanisms that either augment training data or enhance the model's complexity.
However, after a thorough analysis of the fundamental flaw in DNNs, we discover
that the effectiveness of current defenses is limited and, more importantly,
cannot provide theoretical guarantees as to their robustness against
adversarial sampled-based attacks. As such, we propose a new adversary
resistant technique that obstructs attackers from constructing impactful
adversarial samples by randomly nullifying features within samples. In this
work, we evaluate our proposed technique against a real world dataset with
14,679 malware variants and 17,399 benign programs. We theoretically validate
the robustness of our technique, and empirically show that our technique
significantly boosts DNN robustness to adversarial samples while maintaining
high accuracy in classification. To demonstrate the general applicability of
our proposed method, we also conduct experiments using the MNIST and CIFAR-10
datasets, generally used in image recognition research.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2016 00:46:03 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2016 16:20:56 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Oct 2016 17:24:13 GMT"
},
{
"version": "v4",
"created": "Thu, 27 Apr 2017 17:25:30 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Wang",
"Qinglong",
""
],
[
"Guo",
"Wenbo",
""
],
[
"Zhang",
"Kaixuan",
""
],
[
"Ororbia",
"Alexander G.",
"II"
],
[
"Xing",
"Xinyu",
""
],
[
"Giles",
"C. Lee",
""
],
[
"Liu",
"Xue",
""
]
] | TITLE: Adversary Resistant Deep Neural Networks with an Application to Malware
Detection
ABSTRACT: Beyond its highly publicized victories in Go, there have been numerous
successful applications of deep learning in information retrieval, computer
vision and speech recognition. In cybersecurity, an increasing number of
companies have become excited about the potential of deep learning, and have
started to use it for various security incidents, the most popular being
malware detection. These companies assert that deep learning (DL) could help
turn the tide in the battle against malware infections. However, deep neural
networks (DNNs) are vulnerable to adversarial samples, a flaw that plagues most
if not all statistical learning models. Recent research has demonstrated that
those with malicious intent can easily circumvent deep learning-powered malware
detection by exploiting this flaw.
In order to address this problem, previous work has developed various defense
mechanisms that either augment training data or enhance the model's complexity.
However, after a thorough analysis of the fundamental flaw in DNNs, we discover
that the effectiveness of current defenses is limited and, more importantly,
cannot provide theoretical guarantees as to their robustness against
adversarial sampled-based attacks. As such, we propose a new adversary
resistant technique that obstructs attackers from constructing impactful
adversarial samples by randomly nullifying features within samples. In this
work, we evaluate our proposed technique against a real world dataset with
14,679 malware variants and 17,399 benign programs. We theoretically validate
the robustness of our technique, and empirically show that our technique
significantly boosts DNN robustness to adversarial samples while maintaining
high accuracy in classification. To demonstrate the general applicability of
our proposed method, we also conduct experiments using the MNIST and CIFAR-10
datasets, generally used in image recognition research.
| no_new_dataset | 0.933127 |
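The defense described above randomly nullifies input features so that an attacker cannot reliably identify which features drive the prediction. The snippet below sketches that masking step in isolation; the nullification rate and the NumPy masking are illustrative, and the full method trains and tests the DNN with such masks applied.

```python
import numpy as np

def nullify_features(x, p=0.3, rng=None):
    """Randomly zero out a fraction p of input features (applied at train and test time)."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p     # keep each feature with probability 1 - p
    return x * mask

batch = np.random.rand(8, 784)          # e.g. a batch of flattened MNIST-like inputs
robust_inputs = nullify_features(batch)
print(robust_inputs.shape, (robust_inputs == 0).mean())
```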
1612.01105 | Hengshuang Zhao | Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia | Pyramid Scene Parsing Network | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene parsing is challenging for unrestricted open vocabulary and diverse
scenes. In this paper, we exploit the capability of global context information
by different-region-based context aggregation through our pyramid pooling
module together with the proposed pyramid scene parsing network (PSPNet). Our
global prior representation is effective to produce good quality results on the
scene parsing task, while PSPNet provides a superior framework for pixel-level
prediction tasks. The proposed approach achieves state-of-the-art performance
on various datasets. It came first in ImageNet scene parsing challenge 2016,
PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new
record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on
Cityscapes.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2016 11:46:22 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Apr 2017 12:15:17 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Zhao",
"Hengshuang",
""
],
[
"Shi",
"Jianping",
""
],
[
"Qi",
"Xiaojuan",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Jia",
"Jiaya",
""
]
] | TITLE: Pyramid Scene Parsing Network
ABSTRACT: Scene parsing is challenging for unrestricted open vocabulary and diverse
scenes. In this paper, we exploit the capability of global context information
by different-region-based context aggregation through our pyramid pooling
module together with the proposed pyramid scene parsing network (PSPNet). Our
global prior representation is effective to produce good quality results on the
scene parsing task, while PSPNet provides a superior framework for pixel-level
prediction tasks. The proposed approach achieves state-of-the-art performance
on various datasets. It came first in ImageNet scene parsing challenge 2016,
PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new
record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on
Cityscapes.
| no_new_dataset | 0.953188 |
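The pyramid pooling module above aggregates global context by pooling the feature map at several bin sizes, reducing channels, upsampling and concatenating with the original features. A minimal PyTorch sketch of that structure follows; it omits the batch normalization, ReLU and dropout of the published module, and the bin sizes (1, 2, 3, 6) follow the common choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Minimal pyramid pooling: multi-bin average pooling, 1x1 convs, upsample, concat."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, out_ch, 1, bias=False))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
                  for stage in self.stages]
        # Global context branches concatenated with the local feature map.
        return torch.cat([x] + pooled, dim=1)

feats = torch.randn(1, 512, 60, 60)
print(PyramidPooling(512)(feats).shape)   # torch.Size([1, 1024, 60, 60])
```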
1704.04008 | Afshin Rahimi | Afshin Rahimi, Trevor Cohn, Timothy Baldwin | A Neural Model for User Geolocation and Lexical Dialectology | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple yet effective text-based user geolocation model based on
a neural network with one hidden layer, which achieves state of the art
performance over three Twitter benchmark geolocation datasets, in addition to
producing word and phrase embeddings in the hidden layer that we show to be
useful for detecting dialectal terms. As part of our analysis of dialectal
terms, we release DAREDS, a dataset for evaluating dialect term detection
methods.
| [
{
"version": "v1",
"created": "Thu, 13 Apr 2017 06:35:55 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 00:38:27 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Apr 2017 01:18:58 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Rahimi",
"Afshin",
""
],
[
"Cohn",
"Trevor",
""
],
[
"Baldwin",
"Timothy",
""
]
] | TITLE: A Neural Model for User Geolocation and Lexical Dialectology
ABSTRACT: We propose a simple yet effective text-based user geolocation model based on
a neural network with one hidden layer, which achieves state of the art
performance over three Twitter benchmark geolocation datasets, in addition to
producing word and phrase embeddings in the hidden layer that we show to be
useful for detecting dialectal terms. As part of our analysis of dialectal
terms, we release DAREDS, a dataset for evaluating dialect term detection
methods.
| new_dataset | 0.957991 |
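The geolocation model above is a bag-of-words text classifier with a single hidden layer. A toy scikit-learn sketch of that setup follows; the example tweets, region labels and hidden-layer width are purely illustrative and bear no relation to the Twitter benchmarks or architecture details used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: tweets paired with coarse region labels.
tweets = ["yinz going to the game tonight",
          "wicked good chowder downtown",
          "y'all fixin to eat supper"]
regions = ["pittsburgh", "boston", "texas"]

# Bag-of-words features feeding a network with one hidden layer.
model = make_pipeline(TfidfVectorizer(),
                      MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0))
model.fit(tweets, regions)
print(model.predict(["yinz want coffee"]))
```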
1704.05588 | Dhiraj Gandhi | Dhiraj Gandhi, Lerrel Pinto, Abhinav Gupta | Learning to Fly by Crashing | null | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid
obstacles? One approach is to use a small dataset collected by human experts:
however, high capacity learning algorithms tend to overfit when trained with
little data. An alternative is to use simulation. But the gap between
simulation and real world remains large especially for perception problems. The
reason most research avoids using large-scale real data is the fear of crashes!
In this paper, we propose to bite the bullet and collect a dataset of crashes
itself! We build a drone whose sole purpose is to crash into objects: it
samples naive trajectories and crashes into random objects. We crash our drone
11,500 times to create one of the biggest UAV crash datasets. This dataset
captures the different ways in which a UAV can crash. We use all this negative
flying data in conjunction with positive data sampled from the same
trajectories to learn a simple yet powerful policy for UAV navigation. We show
that this simple self-supervised model is quite effective in navigating the UAV
even in extremely cluttered environments with dynamic obstacles including
humans. For supplementary video see: https://youtu.be/u151hJaGKUo
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 02:20:20 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Apr 2017 00:13:19 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Gandhi",
"Dhiraj",
""
],
[
"Pinto",
"Lerrel",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Learning to Fly by Crashing
ABSTRACT: How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid
obstacles? One approach is to use a small dataset collected by human experts:
however, high capacity learning algorithms tend to overfit when trained with
little data. An alternative is to use simulation. But the gap between
simulation and real world remains large especially for perception problems. The
reason most research avoids using large-scale real data is the fear of crashes!
In this paper, we propose to bite the bullet and collect a dataset of crashes
itself! We build a drone whose sole purpose is to crash into objects: it
samples naive trajectories and crashes into random objects. We crash our drone
11,500 times to create one of the biggest UAV crash datasets. This dataset
captures the different ways in which a UAV can crash. We use all this negative
flying data in conjunction with positive data sampled from the same
trajectories to learn a simple yet powerful policy for UAV navigation. We show
that this simple self-supervised model is quite effective in navigating the UAV
even in extremely cluttered environments with dynamic obstacles including
humans. For supplementary video see: https://youtu.be/u151hJaGKUo
| new_dataset | 0.957358 |
1704.08292 | Chenliang Xu | Lele Chen, Sudhanshu Srivastava, Zhiyao Duan and Chenliang Xu | Deep Cross-Modal Audio-Visual Generation | null | null | null | null | cs.CV cs.MM cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-modal audio-visual perception has been a long-lasting topic in
psychology and neurology, and various studies have discovered strong
correlations in human perception of auditory and visual stimuli. Despite works
in computational multimodal modeling, the problem of cross-modal audio-visual
generation has not been systematically studied in the literature. In this
paper, we make the first attempt to solve this cross-modal generation problem
leveraging the power of deep generative adversarial training. Specifically, we
use conditional generative adversarial networks to achieve cross-modal
audio-visual generation of musical performances. We explore different encoding
methods for audio and visual signals, and work on two scenarios:
instrument-oriented generation and pose-oriented generation. Being the first to
explore this new problem, we compose two new datasets with pairs of images and
sounds of musical performances of different instruments. Our experiments using
both classification and human evaluations demonstrate that our model has the
ability to generate one modality, i.e., audio/visual, from the other modality,
i.e., visual/audio, to a good extent. Our experiments on various design choices
along with the datasets will facilitate future research in this new problem
space.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 18:46:10 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Chen",
"Lele",
""
],
[
"Srivastava",
"Sudhanshu",
""
],
[
"Duan",
"Zhiyao",
""
],
[
"Xu",
"Chenliang",
""
]
] | TITLE: Deep Cross-Modal Audio-Visual Generation
ABSTRACT: Cross-modal audio-visual perception has been a long-lasting topic in
psychology and neurology, and various studies have discovered strong
correlations in human perception of auditory and visual stimuli. Despite works
in computational multimodal modeling, the problem of cross-modal audio-visual
generation has not been systematically studied in the literature. In this
paper, we make the first attempt to solve this cross-modal generation problem
leveraging the power of deep generative adversarial training. Specifically, we
use conditional generative adversarial networks to achieve cross-modal
audio-visual generation of musical performances. We explore different encoding
methods for audio and visual signals, and work on two scenarios:
instrument-oriented generation and pose-oriented generation. Being the first to
explore this new problem, we compose two new datasets with pairs of images and
sounds of musical performances of different instruments. Our experiments using
both classification and human evaluations demonstrate that our model has the
ability to generate one modality, i.e., audio/visual, from the other modality,
i.e., visual/audio, to a good extent. Our experiments on various design choices
along with the datasets will facilitate future research in this new problem
space.
| new_dataset | 0.95594 |
1704.08331 | Narapureddy Dinesh Reddy | Nazrul Haque, N Dinesh Reddy and K. Madhava Krishna | Joint Semantic and Motion Segmentation for dynamic scenes using Deep
Convolutional Networks | In Proceedings of the 12th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications - Volume 5:
Visapp, (Visigrapp 2017) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic scene understanding is a challenging problem and motion segmentation
plays a crucial role in solving it. Incorporating semantics and motion enhances
the overall perception of the dynamic scene. For applications of outdoor
robotic navigation, joint learning methods have not been extensively used for
extracting spatio-temporal features or adding different priors into the
formulation. The task becomes even more challenging without stereo information
being incorporated. This paper proposes an approach to fuse semantic features
and motion clues using CNNs, to address the problem of monocular semantic
motion segmentation. We deduce semantic and motion labels by integrating
optical flow as a constraint with semantic features into a dilated convolution
network. The pipeline consists of three main stages, i.e., Feature extraction,
Feature amplification and Multi Scale Context Aggregation to fuse the semantics
and flow features. Our joint formulation shows significant improvements in
monocular motion segmentation over state-of-the-art methods on the challenging
KITTI tracking dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 03:06:03 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Haque",
"Nazrul",
""
],
[
"Reddy",
"N Dinesh",
""
],
[
"Krishna",
"K. Madhava",
""
]
] | TITLE: Joint Semantic and Motion Segmentation for dynamic scenes using Deep
Convolutional Networks
ABSTRACT: Dynamic scene understanding is a challenging problem and motion segmentation
plays a crucial role in solving it. Incorporating semantics and motion enhances
the overall perception of the dynamic scene. For applications of outdoor
robotic navigation, joint learning methods have not been extensively used for
extracting spatio-temporal features or adding different priors into the
formulation. The task becomes even more challenging without stereo information
being incorporated. This paper proposes an approach to fuse semantic features
and motion clues using CNNs, to address the problem of monocular semantic
motion segmentation. We deduce semantic and motion labels by integrating
optical flow as a constraint with semantic features into a dilated convolution
network. The pipeline consists of three main stages, i.e., Feature extraction,
Feature amplification and Multi Scale Context Aggregation to fuse the semantics
and flow features. Our joint formulation shows significant improvements in
monocular motion segmentation over state-of-the-art methods on the challenging
KITTI tracking dataset.
| no_new_dataset | 0.94699 |
1704.08345 | Elyor Kodirov | Elyor Kodirov, Tao Xiang, Shaogang Gong | Semantic Autoencoder for Zero-Shot Learning | accepted to CVPR2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing zero-shot learning (ZSL) models typically learn a projection
function from a feature space to a semantic embedding space (e.g.~attribute
space). However, such a projection function is only concerned with predicting
the training seen class semantic representation (e.g.~attribute prediction) or
classification. When applied to test data, which in the context of ZSL contains
different (unseen) classes without training data, a ZSL model typically suffers
from the projection domain shift problem. In this work, we present a novel
solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the
encoder-decoder paradigm, an encoder aims to project a visual feature vector
into the semantic space as in the existing ZSL models. However, the decoder
exerts an additional constraint, that is, the projection/code must be able to
reconstruct the original visual feature. We show that with this additional
reconstruction constraint, the learned projection function from the seen
classes is able to generalise better to the new unseen classes. Importantly,
the encoder and decoder are linear and symmetric which enable us to develop an
extremely efficient learning algorithm. Extensive experiments on six benchmark
datasets demonstrate that the proposed SAE outperforms significantly the
existing ZSL models with the additional benefit of lower computational cost.
Furthermore, when the SAE is applied to the supervised clustering problem, it also
beats the state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 20:45:53 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Kodirov",
"Elyor",
""
],
[
"Xiang",
"Tao",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Semantic Autoencoder for Zero-Shot Learning
ABSTRACT: Existing zero-shot learning (ZSL) models typically learn a projection
function from a feature space to a semantic embedding space (e.g.~attribute
space). However, such a projection function is only concerned with predicting
the training seen class semantic representation (e.g.~attribute prediction) or
classification. When applied to test data, which in the context of ZSL contains
different (unseen) classes without training data, a ZSL model typically suffers
from the projection domain shift problem. In this work, we present a novel
solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the
encoder-decoder paradigm, an encoder aims to project a visual feature vector
into the semantic space as in the existing ZSL models. However, the decoder
exerts an additional constraint, that is, the projection/code must be able to
reconstruct the original visual feature. We show that with this additional
reconstruction constraint, the learned projection function from the seen
classes is able to generalise better to the new unseen classes. Importantly,
the encoder and decoder are linear and symmetric which enable us to develop an
extremely efficient learning algorithm. Extensive experiments on six benchmark
datasets demonstrate that the proposed SAE outperforms significantly the
existing ZSL models with the additional benefit of lower computational cost.
Furthermore, when the SAE is applied to the supervised clustering problem, it also
beats the state-of-the-art.
| no_new_dataset | 0.944638 |
1704.08347 | Jiachun Liao | Jiachun Liao, Lalitha Sankar, Vincent Y. F. Tan and Flavio P. Calmon | Hypothesis Testing under Mutual Information Privacy Constraints in the
High Privacy Regime | 13 pages, 7 figures. The paper is submitted to "Transactions on
Information Forensics & Security". Comparing to the paper arXiv:1607.00533
"Hypothesis Testing in the High Privacy Limit", the overlapping content is
results for binary hypothesis testing with a zero error exponent, and the
extended contents are the results for both m-ary hypothesis testing and
binary hypothesis testing with nonzero error exponents | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hypothesis testing is a statistical inference framework for determining the
true distribution among a set of possible distributions for a given dataset.
Privacy restrictions may require the curator of the data or the respondents
themselves to share data with the test only after applying a randomizing
privacy mechanism. This work considers mutual information (MI) as the privacy
metric for measuring leakage. In addition, motivated by the Chernoff-Stein
lemma, the relative entropy between pairs of distributions of the output
(generated by the privacy mechanism) is chosen as the utility metric. For these
metrics, the goal is to find the optimal privacy-utility trade-off (PUT) and
the corresponding optimal privacy mechanism for both binary and m-ary
hypothesis testing. Focusing on the high privacy regime, Euclidean
information-theoretic approximations of the binary and m-ary PUT problems are
developed. The solutions for the approximation problems clarify that an
MI-based privacy metric preserves the privacy of the source symbols in inverse
proportion to their likelihoods.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 20:48:58 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Liao",
"Jiachun",
""
],
[
"Sankar",
"Lalitha",
""
],
[
"Tan",
"Vincent Y. F.",
""
],
[
"Calmon",
"Flavio P.",
""
]
] | TITLE: Hypothesis Testing under Mutual Information Privacy Constraints in the
High Privacy Regime
ABSTRACT: Hypothesis testing is a statistical inference framework for determining the
true distribution among a set of possible distributions for a given dataset.
Privacy restrictions may require the curator of the data or the respondents
themselves to share data with the test only after applying a randomizing
privacy mechanism. This work considers mutual information (MI) as the privacy
metric for measuring leakage. In addition, motivated by the Chernoff-Stein
lemma, the relative entropy between pairs of distributions of the output
(generated by the privacy mechanism) is chosen as the utility metric. For these
metrics, the goal is to find the optimal privacy-utility trade-off (PUT) and
the corresponding optimal privacy mechanism for both binary and m-ary
hypothesis testing. Focusing on the high privacy regime, Euclidean
information-theoretic approximations of the binary and m-ary PUT problems are
developed. The solutions for the approximation problems clarify that an
MI-based privacy metric preserves the privacy of the source symbols in inverse
proportion to their likelihoods.
| no_new_dataset | 0.950549 |
1704.08364 | Eduardo Miqueles Dr. | Gilberto Martinez Jr., Janito V. Ferreira Filho, Eduardo X. Miqueles | Low-complexity Distributed Tomographic Backprojection for large datasets | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this manuscript we present a fast GPU implementation for tomographic
reconstruction of large datasets using data obtained at the Brazilian
synchrotron light source. The algorithm is distributed in a cluster with 4 GPUs
through a fast pipeline implemented in C programming language. Our algorithm is
theoretically based on a recently discovered low complexity formula, computing
the total volume within O(N^3 log N) floating point operations; much less than
traditional algorithms that operate with O(N^4) flops over input data of
size O(N^3). The results obtained with real data indicate that a reconstruction
can be achieved within 1 second provided the data is transferred completely to
the memory.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 22:18:34 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Martinez",
"Gilberto",
"Jr."
],
[
"Filho",
"Janito V. Ferreira",
""
],
[
"Miqueles",
"Eduardo X.",
""
]
] | TITLE: Low-complexity Distributed Tomographic Backprojection for large datasets
ABSTRACT: In this manuscript we present a fast GPU implementation for tomographic
reconstruction of large datasets using data obtained at the Brazilian
synchrotron light source. The algorithm is distributed in a cluster with 4 GPUs
through a fast pipeline implemented in C programming language. Our algorithm is
theoretically based on a recently discovered low complexity formula, computing
the total volume within O(N^3 log N) floating point operations; much less than
traditional algorithms that operate with O(N^4) flops over input data of
size O(N^3). The results obtained with real data indicate that a reconstruction
can be achieved within 1 second provided the data is transferred completely to
the memory.
| no_new_dataset | 0.950641 |
1704.08378 | Guanshuo Xu | Guanshuo Xu | Deep Convolutional Neural Network to Detect J-UNIWARD | Accepted by IH&MMSec 2017. This is a personal copy | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an empirical study on applying convolutional neural
networks (CNNs) to detecting J-UNIWARD, one of the most secure JPEG
steganographic methods. Experiments guiding the architectural design of the CNNs
have been conducted on the JPEG compressed BOSSBase containing 10,000 covers of
size 512x512. Results have verified that both the pooling method and the depth
of the CNNs are critical for performance. Results have also proved that a
20-layer CNN, in general, outperforms the most sophisticated feature-based
methods, but its advantage gradually diminishes on hard-to-detect cases. To
show that the performance generalizes to large-scale databases and to different
cover sizes, one experiment has been conducted on the CLS-LOC dataset of
ImageNet containing more than one million covers cropped to unified size of
256x256. The proposed 20-layer CNN has cut the error achieved by a CNN recently
proposed for large-scale JPEG steganalysis by 35%. Source code is available via
GitHub: https://github.com/GuanshuoXu/deep_cnn_jpeg_steganalysis
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 23:15:52 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Xu",
"Guanshuo",
""
]
] | TITLE: Deep Convolutional Neural Network to Detect J-UNIWARD
ABSTRACT: This paper presents an empirical study on applying convolutional neural
networks (CNNs) to detecting J-UNIWARD, one of the most secure JPEG
steganographic methods. Experiments guiding the architectural design of the CNNs
have been conducted on the JPEG compressed BOSSBase containing 10,000 covers of
size 512x512. Results have verified that both the pooling method and the depth
of the CNNs are critical for performance. Results have also proved that a
20-layer CNN, in general, outperforms the most sophisticated feature-based
methods, but its advantage gradually diminishes on hard-to-detect cases. To
show that the performance generalizes to large-scale databases and to different
cover sizes, one experiment has been conducted on the CLS-LOC dataset of
ImageNet containing more than one million covers cropped to unified size of
256x256. The proposed 20-layer CNN has cut the error achieved by a CNN recently
proposed for large-scale JPEG steganalysis by 35%. Source code is available via
GitHub: https://github.com/GuanshuoXu/deep_cnn_jpeg_steganalysis
| no_new_dataset | 0.9463 |
1704.08384 | Rajarshi Das | Rajarshi Das, Manzil Zaheer, Siva Reddy, Andrew McCallum | Question Answering on Knowledge Bases and Text using Universal Schema
and Memory Networks | ACL 2017 (short) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing question answering methods infer answers either from a knowledge
base or from raw text. While knowledge base (KB) methods are good at answering
compositional questions, their performance is often affected by the
incompleteness of the KB. Au contraire, web text contains millions of facts
that are absent in the KB, however in an unstructured form. {\it Universal
schema} can support reasoning on the union of both structured KBs and
unstructured text by aligning them in a common embedded space. In this paper we
extend universal schema to natural language question answering, employing
\emph{memory networks} to attend to the large body of facts in the combination
of text and KB. Our models can be trained in an end-to-end fashion on
question-answer pairs. Evaluation results on \spades fill-in-the-blank question
answering dataset show that exploiting universal schema for question answering
is better than using either a KB or text alone. This model also outperforms the
current state-of-the-art by 8.5 $F_1$ points.\footnote{Code and data available
in \url{https://rajarshd.github.io/TextKBQA}}
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 00:03:02 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Das",
"Rajarshi",
""
],
[
"Zaheer",
"Manzil",
""
],
[
"Reddy",
"Siva",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Question Answering on Knowledge Bases and Text using Universal Schema
and Memory Networks
ABSTRACT: Existing question answering methods infer answers either from a knowledge
base or from raw text. While knowledge base (KB) methods are good at answering
compositional questions, their performance is often affected by the
incompleteness of the KB. Au contraire, web text contains millions of facts
that are absent in the KB, however in an unstructured form. {\it Universal
schema} can support reasoning on the union of both structured KBs and
unstructured text by aligning them in a common embedded space. In this paper we
extend universal schema to natural language question answering, employing
\emph{memory networks} to attend to the large body of facts in the combination
of text and KB. Our models can be trained in an end-to-end fashion on
question-answer pairs. Evaluation results on \spades fill-in-the-blank question
answering dataset show that exploiting universal schema for question answering
is better than using either a KB or text alone. This model also outperforms the
current state-of-the-art by 8.5 $F_1$ points.\footnote{Code and data available
in \url{https://rajarshd.github.io/TextKBQA}}
| no_new_dataset | 0.945399 |
1704.08509 | Bo-Cheng Tsai | Yi-Hsin Chen, Wei-Yu Chen, Yu-Ting Chen, Bo-Cheng Tsai, Yu-Chiang
Frank Wang, Min Sun | No More Discrimination: Cross City Adaptation of Road Scene Segmenters | 13 pages, 10 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent success of deep-learning based semantic segmentation,
deploying a pre-trained road scene segmenter to a city whose images are not
presented in the training set would not achieve satisfactory performance due to
dataset biases. Instead of collecting a large number of annotated images of
each city of interest to train or refine the segmenter, we propose an
unsupervised learning approach to adapt road scene segmenters across different
cities. By utilizing Google Street View and its time-machine feature, we can
collect unannotated images for each road scene at different times, so that the
associated static-object priors can be extracted accordingly. By advancing a
joint global and class-specific domain adversarial learning framework,
adaptation of pre-trained segmenters to that city can be achieved without the
need of any user annotation or interaction. We show that our method improves
the performance of semantic segmentation in multiple cities across continents,
while it performs favorably against state-of-the-art approaches requiring
annotated training data.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 11:14:21 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Chen",
"Yi-Hsin",
""
],
[
"Chen",
"Wei-Yu",
""
],
[
"Chen",
"Yu-Ting",
""
],
[
"Tsai",
"Bo-Cheng",
""
],
[
"Wang",
"Yu-Chiang Frank",
""
],
[
"Sun",
"Min",
""
]
] | TITLE: No More Discrimination: Cross City Adaptation of Road Scene Segmenters
ABSTRACT: Despite the recent success of deep-learning based semantic segmentation,
deploying a pre-trained road scene segmenter to a city whose images are not
presented in the training set would not achieve satisfactory performance due to
dataset biases. Instead of collecting a large number of annotated images of
each city of interest to train or refine the segmenter, we propose an
unsupervised learning approach to adapt road scene segmenters across different
cities. By utilizing Google Street View and its time-machine feature, we can
collect unannotated images for each road scene at different times, so that the
associated static-object priors can be extracted accordingly. By advancing a
joint global and class-specific domain adversarial learning framework,
adaptation of pre-trained segmenters to that city can be achieved without the
need of any user annotation or interaction. We show that our method improves
the performance of semantic segmentation in multiple cities across continents,
while it performs favorably against state-of-the-art approaches requiring
annotated training data.
| no_new_dataset | 0.946547 |
1704.08547 | Benjamin Rubinstein | Chris Culnane, Benjamin I. P. Rubinstein, Vanessa Teague | Privacy Assessment of De-identified Opal Data: A report for Transport
for NSW | 14 pages, 3 figures, 4 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the privacy implications of public release of a de-identified
dataset of Opal card transactions. The data was recently published at
https://opendata.transport.nsw.gov.au/dataset/opal-tap-on-and-tap-off. It
consists of tap-on and tap-off counts for NSW's four modes of public transport,
collected over two separate week-long periods. The data has been further
treated to improve privacy by removing small counts, aggregating some stops and
routes, and perturbing the counts. This is a summary of our findings.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 13:12:29 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Culnane",
"Chris",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
],
[
"Teague",
"Vanessa",
""
]
] | TITLE: Privacy Assessment of De-identified Opal Data: A report for Transport
for NSW
ABSTRACT: We consider the privacy implications of public release of a de-identified
dataset of Opal card transactions. The data was recently published at
https://opendata.transport.nsw.gov.au/dataset/opal-tap-on-and-tap-off. It
consists of tap-on and tap-off counts for NSW's four modes of public transport,
collected over two separate week-long periods. The data has been further
treated to improve privacy by removing small counts, aggregating some stops and
routes, and perturbing the counts. This is a summary of our findings.
| new_dataset | 0.838548 |
1704.08558 | Nicola Prezza | Philip Bille, Inge Li G{\o}rtz, Nicola Prezza | Practical and Effective Re-Pair Compression | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Re-Pair is an efficient grammar compressor that operates by recursively
replacing high-frequency character pairs with new grammar symbols. The most
space-efficient linear-time algorithm computing Re-Pair uses
$(1+\epsilon)n+\sqrt n$ words on top of the re-writable text (of length $n$ and
stored in $n$ words), for any constant $\epsilon>0$; in practice however, this
solution uses complex sub-procedures preventing it from being practical. In
this paper, we present an implementation of the above-mentioned result making
use of more practical solutions; our tool further improves the working space to
$(1.5+\epsilon)n$ words (text included), for some small constant $\epsilon$. As
a second contribution, we focus on compact representations of the output
grammar. The lower bound for storing a grammar with $d$ rules is
$\log(d!)+2d\approx d\log d+0.557 d$ bits, and the most efficient encoding
algorithm in the literature uses at most $d\log d + 2d$ bits and runs in
$\mathcal O(d^{1.5})$ time. We describe a linear-time heuristic maximizing the
compressibility of the output Re-Pair grammar. On real datasets, our grammar
encoding uses---on average---only $2.8\%$ more bits than the
information-theoretic minimum. In half of the tested cases, our compressor
improves the output size of 7-Zip with maximum compression rate turned on.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 13:28:45 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Bille",
"Philip",
""
],
[
"Gørtz",
"Inge Li",
""
],
[
"Prezza",
"Nicola",
""
]
] | TITLE: Practical and Effective Re-Pair Compression
ABSTRACT: Re-Pair is an efficient grammar compressor that operates by recursively
replacing high-frequency character pairs with new grammar symbols. The most
space-efficient linear-time algorithm computing Re-Pair uses
$(1+\epsilon)n+\sqrt n$ words on top of the re-writable text (of length $n$ and
stored in $n$ words), for any constant $\epsilon>0$; in practice however, this
solution uses complex sub-procedures preventing it from being practical. In
this paper, we present an implementation of the above-mentioned result making
use of more practical solutions; our tool further improves the working space to
$(1.5+\epsilon)n$ words (text included), for some small constant $\epsilon$. As
a second contribution, we focus on compact representations of the output
grammar. The lower bound for storing a grammar with $d$ rules is
$\log(d!)+2d\approx d\log d+0.557 d$ bits, and the most efficient encoding
algorithm in the literature uses at most $d\log d + 2d$ bits and runs in
$\mathcal O(d^{1.5})$ time. We describe a linear-time heuristic maximizing the
compressibility of the output Re-Pair grammar. On real datasets, our grammar
encoding uses---on average---only $2.8\%$ more bits than the
information-theoretic minimum. In half of the tested cases, our compressor
improves the output size of 7-Zip with maximum compression rate turned on.
| no_new_dataset | 0.944125 |
1704.08628 | Bastien Moysset | Bastien Moysset, Christopher Kermorvant, Christian Wolf | Full-Page Text Recognition: Learning Where to Start and When to Stop | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text line detection and localization is a crucial step for full page document
analysis, but still suffers from heterogeneity of real life documents. In this
paper, we present a new approach for full page text recognition. Localization
of the text lines is based on regressions with Fully Convolutional Neural
Networks and Multidimensional Long Short-Term Memory as contextual layers. In
order to increase the efficiency of this localization method, only the position
of the left side of the text lines is predicted. The text recognizer is then
in charge of predicting the end of the text to recognize. This method has shown
good results for full page text recognition on the highly heterogeneous Maurdor
dataset.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 15:50:37 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Moysset",
"Bastien",
""
],
[
"Kermorvant",
"Christopher",
""
],
[
"Wolf",
"Christian",
""
]
] | TITLE: Full-Page Text Recognition: Learning Where to Start and When to Stop
ABSTRACT: Text line detection and localization is a crucial step for full page document
analysis, but still suffers from heterogeneity of real life documents. In this
paper, we present a new approach for full page text recognition. Localization
of the text lines is based on regressions with Fully Convolutional Neural
Networks and Multidimensional Long Short-Term Memory as contextual layers. In
order to increase the efficiency of this localization method, only the position
of the left side of the text lines is predicted. The text recognizer is then
in charge of predicting the end of the text to recognize. This method has shown
good results for full page text recognition on the highly heterogeneous Maurdor
dataset.
| no_new_dataset | 0.951142 |
1704.08631 | Nicolas Honnorat | Nicolas Honnorat, Christos Davatzikos | Sparse Hierarchical Extrapolated Parametric Methods for Cortical Data
Analysis | Technical report (ongoing work) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many neuroimaging studies focus on the cortex, in order to benefit from
better signal to noise ratios and reduced computational burden. Cortical data
are usually projected onto a reference mesh, where subsequent analyses are
carried out. Several multiscale approaches have been proposed for analyzing
these surface data, such as spherical harmonics and graph wavelets. As far as
we know, however, the hierarchical structure of the template icosahedral meshes
used by most neuroimaging software has never been exploited for cortical data
factorization. In this paper, we demonstrate how the structure of the
ubiquitous icosahedral meshes can be exploited by data factorization methods
such as sparse dictionary learning, and we assess the optimization speed-up
offered by extrapolation methods in this context. By testing different
sparsity-inducing norms, extrapolation methods, and factorization schemes, we
compare the performances of eleven methods for analyzing four datasets: two
structural and two functional MRI datasets obtained by processing the data
publicly available for the hundred unrelated subjects of the Human Connectome
Project. Our results demonstrate that, depending on the level of detail
requested, a speedup of several orders of magnitude can be obtained.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 15:52:23 GMT"
}
] | 2017-04-28T00:00:00 | [
[
"Honnorat",
"Nicolas",
""
],
[
"Davatzikos",
"Christos",
""
]
] | TITLE: Sparse Hierarchical Extrapolated Parametric Methods for Cortical Data
Analysis
ABSTRACT: Many neuroimaging studies focus on the cortex, in order to benefit from
better signal to noise ratios and reduced computational burden. Cortical data
are usually projected onto a reference mesh, where subsequent analyses are
carried out. Several multiscale approaches have been proposed for analyzing
these surface data, such as spherical harmonics and graph wavelets. As far as
we know, however, the hierarchical structure of the template icosahedral meshes
used by most neuroimaging software has never been exploited for cortical data
factorization. In this paper, we demonstrate how the structure of the
ubiquitous icosahedral meshes can be exploited by data factorization methods
such as sparse dictionary learning, and we assess the optimization speed-up
offered by extrapolation methods in this context. By testing different
sparsity-inducing norms, extrapolation methods, and factorization schemes, we
compare the performances of eleven methods for analyzing four datasets: two
structural and two functional MRI datasets obtained by processing the data
publicly available for the hundred unrelated subjects of the Human Connectome
Project. Our results demonstrate that, depending on the level of detail
requested, a speedup of several orders of magnitude can be obtained.
| no_new_dataset | 0.949576 |
1511.08769 | Andrea Montanari | Adel Javanmard and Andrea Montanari and Federico Ricci-Tersenghi | Phase Transitions in Semidefinite Relaxations | 71 pages, 24 pdf figures | Proceedings of the National Academy of Sciences 113, E2218-E2223
(2016) | 10.1073/pnas.1523097113 | null | cond-mat.stat-mech cs.DM cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical inference problems arising within signal processing, data mining,
and machine learning naturally give rise to hard combinatorial optimization
problems. These problems become intractable when the dimensionality of the data
is large, as is often the case for modern datasets. A popular idea is to
construct convex relaxations of these combinatorial problems, which can be
solved efficiently for large scale datasets.
Semidefinite programming (SDP) relaxations are among the most powerful
methods in this family, and are surprisingly well-suited for a broad range of
problems where data take the form of matrices or graphs. It has been observed
several times that, when the `statistical noise' is small enough, SDP
relaxations correctly detect the underlying combinatorial structures.
In this paper we develop asymptotic predictions for several `detection
thresholds,' as well as for the estimation error above these thresholds. We
study some classical SDP relaxations for statistical problems motivated by
graph synchronization and community detection in networks. We map these
optimization problems to statistical mechanics models with vector spins, and
use non-rigorous techniques from statistical mechanics to characterize the
corresponding phase transitions. Our results clarify the effectiveness of SDP
relaxations in solving high-dimensional statistical problems.
| [
{
"version": "v1",
"created": "Fri, 27 Nov 2015 19:16:24 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jan 2016 21:37:50 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Javanmard",
"Adel",
""
],
[
"Montanari",
"Andrea",
""
],
[
"Ricci-Tersenghi",
"Federico",
""
]
] | TITLE: Phase Transitions in Semidefinite Relaxations
ABSTRACT: Statistical inference problems arising within signal processing, data mining,
and machine learning naturally give rise to hard combinatorial optimization
problems. These problems become intractable when the dimensionality of the data
is large, as is often the case for modern datasets. A popular idea is to
construct convex relaxations of these combinatorial problems, which can be
solved efficiently for large scale datasets.
Semidefinite programming (SDP) relaxations are among the most powerful
methods in this family, and are surprisingly well-suited for a broad range of
problems where data take the form of matrices or graphs. It has been observed
several times that, when the `statistical noise' is small enough, SDP
relaxations correctly detect the underlying combinatorial structures.
In this paper we develop asymptotic predictions for several `detection
thresholds,' as well as for the estimation error above these thresholds. We
study some classical SDP relaxations for statistical problems motivated by
graph synchronization and community detection in networks. We map these
optimization problems to statistical mechanics models with vector spins, and
use non-rigorous techniques from statistical mechanics to characterize the
corresponding phase transitions. Our results clarify the effectiveness of SDP
relaxations in solving high-dimensional statistical problems.
| no_new_dataset | 0.941547 |
1604.02388 | Yang He | Yang He, Wei-Chen Chiu, Margret Keuper, Mario Fritz | STD2P: RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven
Pooling | To appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel superpixel-based multi-view convolutional neural network
for semantic image segmentation. The proposed network produces a high quality
segmentation of a single image by leveraging information from additional views
of the same scene. Particularly in indoor videos such as captured by robotic
platforms or handheld and bodyworn RGBD cameras, nearby video frames provide
diverse viewpoints and additional context of objects and scenes. To leverage
such information, we first compute region correspondences by optical flow and
image boundary-based superpixels. Given these region correspondences, we
propose a novel spatio-temporal pooling layer to aggregate information over
space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D
datasets and compare it to various state-of-the-art single-view and multi-view
approaches. Besides a general improvement over the state-of-the-art, we also
show the benefits of making use of unlabeled frames during training for
multi-view as well as single-view prediction.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 16:01:34 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2016 19:52:02 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Apr 2017 13:13:02 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"He",
"Yang",
""
],
[
"Chiu",
"Wei-Chen",
""
],
[
"Keuper",
"Margret",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: STD2P: RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven
Pooling
ABSTRACT: We propose a novel superpixel-based multi-view convolutional neural network
for semantic image segmentation. The proposed network produces a high quality
segmentation of a single image by leveraging information from additional views
of the same scene. Particularly in indoor videos such as captured by robotic
platforms or handheld and bodyworn RGBD cameras, nearby video frames provide
diverse viewpoints and additional context of objects and scenes. To leverage
such information, we first compute region correspondences by optical flow and
image boundary-based superpixels. Given these region correspondences, we
propose a novel spatio-temporal pooling layer to aggregate information over
space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D
datasets and compare it to various state-of-the-art single-view and multi-view
approaches. Besides a general improvement over the state-of-the-art, we also
show the benefits of making use of unlabeled frames during training for
multi-view as well as single-view prediction.
| no_new_dataset | 0.946843 |
1704.00514 | Isabelle Augenstein | Isabelle Augenstein, Anders S{\o}gaard | Multi-Task Learning of Keyphrase Boundary Classification | ACL 2017 | null | null | null | cs.CL cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keyphrase boundary classification (KBC) is the task of detecting keyphrases
in scientific articles and labelling them with respect to predefined types.
Although important in practice, this task is so far underexplored, partly due
to the lack of labelled data. To overcome this, we explore several auxiliary
tasks, including semantic super-sense tagging and identification of multi-word
expressions, and cast the task as a multi-task learning problem with deep
recurrent neural networks. Our multi-task models perform significantly better
than previous state of the art approaches on two scientific KBC datasets,
particularly for long keyphrases.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 10:25:22 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2017 16:48:49 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Augenstein",
"Isabelle",
""
],
[
"Søgaard",
"Anders",
""
]
] | TITLE: Multi-Task Learning of Keyphrase Boundary Classification
ABSTRACT: Keyphrase boundary classification (KBC) is the task of detecting keyphrases
in scientific articles and labelling them with respect to predefined types.
Although important in practice, this task is so far underexplored, partly due
to the lack of labelled data. To overcome this, we explore several auxiliary
tasks, including semantic super-sense tagging and identification of multi-word
expressions, and cast the task as a multi-task learning problem with deep
recurrent neural networks. Our multi-task models perform significantly better
than previous state of the art approaches on two scientific KBC datasets,
particularly for long keyphrases.
| no_new_dataset | 0.951051 |
1704.06485 | Byeongchang Kim | Cesc Chunseong Park, Byeongchang Kim, Gunhee Kim | Attend to You: Personalized Image Captioning with Context Sequence
Memory Networks | Accepted paper at CVPR 2017 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address personalization issues of image captioning, which have not been
discussed yet in previous research. For a query image, we aim to generate a
descriptive sentence, accounting for prior knowledge such as the user's active
vocabularies in previous documents. As applications of personalized image
captioning, we tackle two post automation tasks: hashtag prediction and post
generation, on our newly collected Instagram dataset, consisting of 1.1M posts
from 6.3K users. We propose a novel captioning model named Context Sequence
Memory Network (CSMN). Its unique updates over previous memory network models
include (i) exploiting memory as a repository for multiple types of context
information, (ii) appending previously generated words into memory to capture
long-term information without suffering from the vanishing gradient problem,
and (iii) adopting CNN memory structure to jointly represent nearby ordered
memory slots for better context understanding. With quantitative evaluation and
user studies via Amazon Mechanical Turk, we show the effectiveness of the three
novel features of CSMN and its performance enhancement for personalized image
captioning over state-of-the-art captioning models.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 11:29:07 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 23:30:43 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Park",
"Cesc Chunseong",
""
],
[
"Kim",
"Byeongchang",
""
],
[
"Kim",
"Gunhee",
""
]
] | TITLE: Attend to You: Personalized Image Captioning with Context Sequence
Memory Networks
ABSTRACT: We address personalization issues of image captioning, which have not been
discussed yet in previous research. For a query image, we aim to generate a
descriptive sentence, accounting for prior knowledge such as the user's active
vocabularies in previous documents. As applications of personalized image
captioning, we tackle two post automation tasks: hashtag prediction and post
generation, on our newly collected Instagram dataset, consisting of 1.1M posts
from 6.3K users. We propose a novel captioning model named Context Sequence
Memory Network (CSMN). Its unique updates over previous memory network models
include (i) exploiting memory as a repository for multiple types of context
information, (ii) appending previously generated words into memory to capture
long-term information without suffering from the vanishing gradient problem,
and (iii) adopting CNN memory structure to jointly represent nearby ordered
memory slots for better context understanding. With quantitative evaluation and
user studies via Amazon Mechanical Turk, we show the effectiveness of the three
novel features of CSMN and its performance enhancement for personalized image
captioning over state-of-the-art captioning models.
| new_dataset | 0.959459 |
1704.07938 | Tien Thanh Nguyen | Tien Thanh Nguyen, Thi Thu Thuy Nguyen, Xuan Cuong Pham, Alan
Wee-Chung Liew, James C. Bezdek | An ensemble-based online learning algorithm for streaming data | 19 pages, 3 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we introduce an ensemble-based approach for online machine
learning. The ensemble of base classifiers in our approach is obtained by
learning Naive Bayes classifiers on different training sets which are generated
by projecting the original training set to a lower dimensional space. We propose
a mechanism to learn sequences of data using the data chunks paradigm. The
experiments conducted on a number of UCI datasets and one synthetic dataset
demonstrate that the proposed approach performs significantly better than some
well-known online learning algorithms.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 00:33:36 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Nguyen",
"Tien Thanh",
""
],
[
"Nguyen",
"Thi Thu Thuy",
""
],
[
"Pham",
"Xuan Cuong",
""
],
[
"Liew",
"Alan Wee-Chung",
""
],
[
"Bezdek",
"James C.",
""
]
] | TITLE: An ensemble-based online learning algorithm for streaming data
ABSTRACT: In this study, we introduce an ensemble-based approach for online machine
learning. The ensemble of base classifiers in our approach is obtained by
learning Naive Bayes classifiers on different training sets which are generated
by projecting the original training set to a lower dimensional space. We propose
a mechanism to learn sequences of data using the data chunks paradigm. The
experiments conducted on a number of UCI datasets and one synthetic dataset
demonstrate that the proposed approach performs significantly better than some
well-known online learning algorithms.
| no_new_dataset | 0.94743 |
1704.07969 | Tejal Bhamre | Tejal Bhamre, Teng Zhang, Amit Singer | Anisotropic twicing for single particle reconstruction using
autocorrelation analysis | null | null | null | null | cs.CV q-bio.BM stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The missing phase problem in X-ray crystallography is commonly solved using
the technique of molecular replacement, which borrows phases from a previously
solved homologous structure, and appends them to the measured Fourier
magnitudes of the diffraction patterns of the unknown structure. More recently,
molecular replacement has been proposed for solving the missing orthogonal
matrices problem arising in Kam's autocorrelation analysis for single particle
reconstruction using X-ray free electron lasers and cryo-EM. In classical
molecular replacement, it is common to estimate the magnitudes of the unknown
structure as twice the measured magnitudes minus the magnitudes of the
homologous structure, a procedure known as `twicing'. Mathematically, this is
equivalent to finding an unbiased estimator for a complex-valued scalar. We
generalize this scheme for the case of estimating real or complex valued
matrices arising in single particle autocorrelation analysis. We name this
approach "Anisotropic Twicing" because unlike the scalar case, the unbiased
estimator is not obtained by a simple magnitude isotropic correction. We
compare the performance of the least squares, twicing and anisotropic twicing
estimators on synthetic and experimental datasets. We demonstrate 3D homology
modeling in cryo-EM directly from experimental data without iterative
refinement or class averaging, for the first time.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 04:47:01 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Bhamre",
"Tejal",
""
],
[
"Zhang",
"Teng",
""
],
[
"Singer",
"Amit",
""
]
] | TITLE: Anisotropic twicing for single particle reconstruction using
autocorrelation analysis
ABSTRACT: The missing phase problem in X-ray crystallography is commonly solved using
the technique of molecular replacement, which borrows phases from a previously
solved homologous structure, and appends them to the measured Fourier
magnitudes of the diffraction patterns of the unknown structure. More recently,
molecular replacement has been proposed for solving the missing orthogonal
matrices problem arising in Kam's autocorrelation analysis for single particle
reconstruction using X-ray free electron lasers and cryo-EM. In classical
molecular replacement, it is common to estimate the magnitudes of the unknown
structure as twice the measured magnitudes minus the magnitudes of the
homologous structure, a procedure known as `twicing'. Mathematically, this is
equivalent to finding an unbiased estimator for a complex-valued scalar. We
generalize this scheme for the case of estimating real or complex valued
matrices arising in single particle autocorrelation analysis. We name this
approach "Anisotropic Twicing" because unlike the scalar case, the unbiased
estimator is not obtained by a simple magnitude isotropic correction. We
compare the performance of the least squares, twicing and anisotropic twicing
estimators on synthetic and experimental datasets. We demonstrate 3D homology
modeling in cryo-EM directly from experimental data without iterative
refinement or class averaging, for the first time.
| no_new_dataset | 0.956145 |
1704.08088 | Leandro Dos Santos | Leandro B. dos Santos, Edilson A. Corr\^ea Jr, Osvaldo N. Oliveira Jr,
Diego R. Amancio, Let\'icia L. Mansur and Sandra M. Alu\'isio | Enriching Complex Networks with Word Embeddings for Detecting Mild
Cognitive Impairment from Speech Transcripts | Published in Annual Meeting of the Association for Computational
Linguist 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose.
Linguistic features, mainly from parsers, have been used to detect MCI, but
this is not suitable for large-scale assessments. MCI disfluencies produce
non-grammatical speech that requires manual or high precision automatic
correction of transcripts. In this paper, we modeled transcripts into complex
networks and enriched them with word embedding (CNE) to better represent short
texts produced in neuropsychological assessments. The network measurements were
applied with well-known classifiers to automatically identify MCI in
transcripts, in a binary classification task. A comparison was made with the
performance of traditional approaches using Bag of Words (BoW) and linguistic
features for three datasets: DementiaBank in English, and Cinderella and
Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using
only complex networks, while Support Vector Machine was superior to other
classifiers. CNE provided the highest accuracies for DementiaBank and
Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably
owing to its short narratives. The approach using linguistic features yielded
higher accuracy if the transcriptions of the Cinderella dataset were manually
revised. Taken together, the results indicate that complex networks enriched
with embedding are promising for detecting MCI in large-scale assessments.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 13:06:25 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Santos",
"Leandro B. dos",
""
],
[
"Corrêa",
"Edilson A.",
"Jr"
],
[
"Oliveira",
"Osvaldo N.",
"Jr"
],
[
"Amancio",
"Diego R.",
""
],
[
"Mansur",
"Letícia L.",
""
],
[
"Aluísio",
"Sandra M.",
""
]
] | TITLE: Enriching Complex Networks with Word Embeddings for Detecting Mild
Cognitive Impairment from Speech Transcripts
ABSTRACT: Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose.
Linguistic features, mainly from parsers, have been used to detect MCI, but
this is not suitable for large-scale assessments. MCI disfluencies produce
non-grammatical speech that requires manual or high precision automatic
correction of transcripts. In this paper, we modeled transcripts into complex
networks and enriched them with word embedding (CNE) to better represent short
texts produced in neuropsychological assessments. The network measurements were
applied with well-known classifiers to automatically identify MCI in
transcripts, in a binary classification task. A comparison was made with the
performance of traditional approaches using Bag of Words (BoW) and linguistic
features for three datasets: DementiaBank in English, and Cinderella and
Arizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using
only complex networks, while Support Vector Machine was superior to other
classifiers. CNE provided the highest accuracies for DementiaBank and
Cinderella, but BoW was more efficient for the Arizona-Battery dataset probably
owing to its short narratives. The approach using linguistic features yielded
higher accuracy if the transcriptions of the Cinderella dataset were manually
revised. Taken together, the results indicate that complex networks enriched
with embedding are promising for detecting MCI in large-scale assessments.
| no_new_dataset | 0.952574 |
1704.08134 | Mohammadreza Soltaninejad | Mohammadreza Soltaninejad, Lei Zhang, Tryphon Lambrou, Nigel Allinson,
Xujiong Ye | Multimodal MRI brain tumor segmentation using random forests with
features learned from fully convolutional neural network | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel learning based method for automated
segmentation of brain tumor in multimodal MRI images. The machine learned
features from fully convolutional neural network (FCN) and hand-designed texton
features are used to classify the MRI image voxels. The score map with
pixel-wise predictions is used as a feature map which is learned from
multimodal MRI training dataset using the FCN. The learned features are then
applied to random forests to classify each MRI image voxel into normal brain
tissues and different parts of tumor. The method was evaluated on BRATS 2013
challenge dataset. The results show that the application of the random forest
classifier to multimodal MRI images using machine-learned features based on FCN
and hand-designed features based on textons provides promising segmentations.
The Dice overlap measure for automatic brain tumor segmentation against ground
truth is 0.88, 0.80 and 0.73 for complete tumor, core and enhancing tumor,
respectively.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 14:22:02 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Soltaninejad",
"Mohammadreza",
""
],
[
"Zhang",
"Lei",
""
],
[
"Lambrou",
"Tryphon",
""
],
[
"Allinson",
"Nigel",
""
],
[
"Ye",
"Xujiong",
""
]
] | TITLE: Multimodal MRI brain tumor segmentation using random forests with
features learned from fully convolutional neural network
ABSTRACT: In this paper, we propose a novel learning based method for automated
segmentation of brain tumor in multimodal MRI images. The machine learned
features from fully convolutional neural network (FCN) and hand-designed texton
features are used to classify the MRI image voxels. The score map with
pixel-wise predictions is used as a feature map which is learned from
multimodal MRI training dataset using the FCN. The learned features are then
applied to random forests to classify each MRI image voxel into normal brain
tissues and different parts of tumor. The method was evaluated on BRATS 2013
challenge dataset. The results show that the application of the random forest
classifier to multimodal MRI images using machine-learned features based on FCN
and hand-designed features based on textons provides promising segmentations.
The Dice overlap measure for automatic brain tumor segmentation against ground
truth is 0.88, 0.80 and 0.73 for complete tumor, core and enhancing tumor,
respectively.
| no_new_dataset | 0.954605 |
1704.08243 | Aishwarya Agrawal | Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, Devi Parikh | C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0
Dataset | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question Answering (VQA) has received a lot of attention over the past
couple of years. A number of deep learning models have been proposed for this
task. However, it has been shown that these models are heavily driven by
superficial correlations in the training data and lack compositionality -- the
ability to answer questions about unseen compositions of seen concepts. This
compositionality is desirable and central to intelligence. In this paper, we
propose a new setting for Visual Question Answering where the test
question-answer pairs are compositionally novel compared to training
question-answer pairs. To facilitate developing models under this setting, we
present a new compositional split of the VQA v1.0 dataset, which we call
Compositional VQA (C-VQA). We analyze the distribution of questions and answers
in the C-VQA splits. Finally, we evaluate several existing VQA models under
this new setting and show that the performances of these models degrade by a
significant amount compared to the original VQA setting.
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2017 17:57:59 GMT"
}
] | 2017-04-27T00:00:00 | [
[
"Agrawal",
"Aishwarya",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
]
] | TITLE: C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0
Dataset
ABSTRACT: Visual Question Answering (VQA) has received a lot of attention over the past
couple of years. A number of deep learning models have been proposed for this
task. However, it has been shown that these models are heavily driven by
superficial correlations in the training data and lack compositionality -- the
ability to answer questions about unseen compositions of seen concepts. This
compositionality is desirable and central to intelligence. In this paper, we
propose a new setting for Visual Question Answering where the test
question-answer pairs are compositionally novel compared to training
question-answer pairs. To facilitate developing models under this setting, we
present a new compositional split of the VQA v1.0 dataset, which we call
Compositional VQA (C-VQA). We analyze the distribution of questions and answers
in the C-VQA splits. Finally, we evaluate several existing VQA models under
this new setting and show that the performances of these models degrade by a
significant amount compared to the original VQA setting.
| new_dataset | 0.936749 |
1409.5185 | Zhuowen Tu | Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen
Tu | Deeply-Supervised Nets | Patent disclosure, UCSD Docket No. SD2014-313, filed on May 22, 2014 | null | null | null | stat.ML cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Our proposed deeply-supervised nets (DSN) method simultaneously minimizes
classification error while making the learning process of hidden layers direct
and transparent. We make an attempt to boost the classification performance by
studying a new formulation in deep networks. Three aspects in convolutional
neural networks (CNN) style architectures are being looked at: (1) transparency
of the intermediate layers to the overall classification; (2)
discriminativeness and robustness of learned features, especially in the early
layers; (3) effectiveness in training due to the presence of the exploding and
vanishing gradients. We introduce "companion objective" to the individual
hidden layers, in addition to the overall objective at the output layer (a
different strategy to layer-wise pre-training). We extend techniques from
stochastic gradient methods to analyze our algorithm. The advantage of our
method is evident and our experimental result on benchmark datasets shows
significant performance gain over existing methods (e.g. all state-of-the-art
results on MNIST, CIFAR-10, CIFAR-100, and SVHN).
| [
{
"version": "v1",
"created": "Thu, 18 Sep 2014 04:08:25 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Sep 2014 05:03:06 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Lee",
"Chen-Yu",
""
],
[
"Xie",
"Saining",
""
],
[
"Gallagher",
"Patrick",
""
],
[
"Zhang",
"Zhengyou",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Deeply-Supervised Nets
ABSTRACT: Our proposed deeply-supervised nets (DSN) method simultaneously minimizes
classification error while making the learning process of hidden layers direct
and transparent. We make an attempt to boost the classification performance by
studying a new formulation in deep networks. Three aspects in convolutional
neural networks (CNN) style architectures are being looked at: (1) transparency
of the intermediate layers to the overall classification; (2)
discriminativeness and robustness of learned features, especially in the early
layers; (3) effectiveness in training due to the presence of the exploding and
vanishing gradients. We introduce "companion objective" to the individual
hidden layers, in addition to the overall objective at the output layer (a
different strategy to layer-wise pre-training). We extend techniques from
stochastic gradient methods to analyze our algorithm. The advantage of our
method is evident and our experimental result on benchmark datasets shows
significant performance gain over existing methods (e.g. all state-of-the-art
results on MNIST, CIFAR-10, CIFAR-100, and SVHN).
| no_new_dataset | 0.946892 |
1604.07236 | Arkaitz Zubiaga | Arkaitz Zubiaga, Alex Voss, Rob Procter, Maria Liakata, Bo Wang, Adam
Tsakalidis | Towards Real-Time, Country-Level Location Classification of Worldwide
Tweets | Accepted for publication in IEEE Transactions on Knowledge and Data
Engineering (IEEE TKDE) | null | null | null | cs.IR cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In contrast to much previous work that has focused on location classification
of tweets restricted to a specific country, here we undertake the task in a
broader context by classifying global tweets at the country level, which is so
far unexplored in a real-time scenario. We analyse the extent to which a
tweet's country of origin can be determined by making use of eight
tweet-inherent features for classification. Furthermore, we use two datasets,
collected a year apart from each other, to analyse the extent to which a model
trained from historical tweets can still be leveraged for classification of new
tweets. With classification experiments on all 217 countries in our datasets,
as well as on the top 25 countries, we offer some insights into the best use of
tweet-inherent features for an accurate country-level classification of tweets.
We find that the use of a single feature, such as the use of tweet content
alone -- the most widely used feature in previous work -- leaves much to be
desired. Choosing an appropriate combination of both tweet content and metadata
can actually lead to substantial improvements of between 20\% and 50\%. We
observe that tweet content, the user's self-reported location and the user's
real name, all of which are inherent in a tweet and available in a real-time
scenario, are particularly useful to determine the country of origin. We also
experiment on the applicability of a model trained on historical tweets to
classify new tweets, finding that the choice of a particular combination of
features whose utility does not fade over time can actually lead to comparable
performance, avoiding the need to retrain. However, the difficulty of achieving
accurate classification increases slightly for countries with multiple
commonalities, especially for English and Spanish speaking countries.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 12:50:50 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 18:35:36 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Apr 2017 11:03:05 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Zubiaga",
"Arkaitz",
""
],
[
"Voss",
"Alex",
""
],
[
"Procter",
"Rob",
""
],
[
"Liakata",
"Maria",
""
],
[
"Wang",
"Bo",
""
],
[
"Tsakalidis",
"Adam",
""
]
] | TITLE: Towards Real-Time, Country-Level Location Classification of Worldwide
Tweets
ABSTRACT: In contrast to much previous work that has focused on location classification
of tweets restricted to a specific country, here we undertake the task in a
broader context by classifying global tweets at the country level, which is so
far unexplored in a real-time scenario. We analyse the extent to which a
tweet's country of origin can be determined by making use of eight
tweet-inherent features for classification. Furthermore, we use two datasets,
collected a year apart from each other, to analyse the extent to which a model
trained from historical tweets can still be leveraged for classification of new
tweets. With classification experiments on all 217 countries in our datasets,
as well as on the top 25 countries, we offer some insights into the best use of
tweet-inherent features for an accurate country-level classification of tweets.
We find that the use of a single feature, such as the use of tweet content
alone -- the most widely used feature in previous work -- leaves much to be
desired. Choosing an appropriate combination of both tweet content and metadata
can actually lead to substantial improvements of between 20\% and 50\%. We
observe that tweet content, the user's self-reported location and the user's
real name, all of which are inherent in a tweet and available in a real-time
scenario, are particularly useful to determine the country of origin. We also
experiment on the applicability of a model trained on historical tweets to
classify new tweets, finding that the choice of a particular combination of
features whose utility does not fade over time can actually lead to comparable
performance, avoiding the need to retrain. However, the difficulty of achieving
accurate classification increases slightly for countries with multiple
commonalities, especially for English and Spanish speaking countries.
| no_new_dataset | 0.936692 |
1606.00368 | Ryan Levy | Ryan Levy, J.P.F. LeBlanc, Emanuel Gull | Implementation of the Maximum Entropy Method for Analytic Continuation | Code can be found at https://github.com/CQMP/Maxent | null | 10.1016/j.cpc.2017.01.018 | null | physics.comp-ph cond-mat.str-el | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present $\texttt{Maxent}$, a tool for performing analytic continuation of
spectral functions using the maximum entropy method. The code operates on
discrete imaginary axis datasets (values with uncertainties) and transforms
this input to the real axis. The code works for imaginary time and Matsubara
frequency data and implements the 'Legendre' representation of finite
temperature Green's functions. It implements a variety of kernels, default
models, and grids for continuing bosonic, fermionic, anomalous, and other data.
Our implementation is licensed under GPLv2 and extensively documented. This
paper shows the use of the programs in detail.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 17:47:56 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Levy",
"Ryan",
""
],
[
"LeBlanc",
"J. P. F.",
""
],
[
"Gull",
"Emanuel",
""
]
] | TITLE: Implementation of the Maximum Entropy Method for Analytic Continuation
ABSTRACT: We present $\texttt{Maxent}$, a tool for performing analytic continuation of
spectral functions using the maximum entropy method. The code operates on
discrete imaginary axis datasets (values with uncertainties) and transforms
this input to the real axis. The code works for imaginary time and Matsubara
frequency data and implements the 'Legendre' representation of finite
temperature Green's functions. It implements a variety of kernels, default
models, and grids for continuing bosonic, fermionic, anomalous, and other data.
Our implementation is licensed under GPLv2 and extensively documented. This
paper shows the use of the programs in detail.
| no_new_dataset | 0.949295 |
1607.03333 | Liangqiong Qu | Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong
Tang, and Qingxiong Yang | RGBD Salient Object Detection via Deep Fusion | This paper has been submitted to IEEE Transactions on Image
Processing | null | 10.1109/TIP.2017.2682981 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous efforts have been made to design different low level saliency cues
for the RGBD saliency detection, such as color or depth contrast features,
background and color compactness priors. However, how these saliency cues
interact with each other and how to incorporate these low level saliency cues
effectively to generate a master saliency map remain a challenging problem. In
this paper, we design a new convolutional neural network (CNN) to fuse
different low level saliency cues into hierarchical features for automatically
detecting salient objects in RGBD images. In contrast to the existing works
that directly feed raw image pixels to the CNN, the proposed method takes
advantage of the knowledge in traditional saliency detection by adopting
various meaningful and well-designed saliency feature vectors as input. This
can guide the training of CNN towards detecting salient object more effectively
due to the reduced learning ambiguity. We then integrate a Laplacian
propagation framework with the learned CNN to extract a spatially consistent
saliency map by exploiting the intrinsic structure of the input image.
Extensive quantitative and qualitative experimental evaluations on three
datasets demonstrate that the proposed method consistently outperforms
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2016 12:32:56 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Qu",
"Liangqiong",
""
],
[
"He",
"Shengfeng",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"Tian",
"Jiandong",
""
],
[
"Tang",
"Yandong",
""
],
[
"Yang",
"Qingxiong",
""
]
] | TITLE: RGBD Salient Object Detection via Deep Fusion
ABSTRACT: Numerous efforts have been made to design different low level saliency cues
for the RGBD saliency detection, such as color or depth contrast features,
background and color compactness priors. However, how these saliency cues
interact with each other and how to incorporate these low level saliency cues
effectively to generate a master saliency map remain a challenging problem. In
this paper, we design a new convolutional neural network (CNN) to fuse
different low level saliency cues into hierarchical features for automatically
detecting salient objects in RGBD images. In contrast to the existing works
that directly feed raw image pixels to the CNN, the proposed method takes
advantage of the knowledge in traditional saliency detection by adopting
various meaningful and well-designed saliency feature vectors as input. This
can guide the training of CNN towards detecting salient object more effectively
due to the reduced learning ambiguity. We then integrate a Laplacian
propagation framework with the learned CNN to extract a spatially consistent
saliency map by exploiting the intrinsic structure of the input image.
Extensive quantitative and qualitative experimental evaluations on three
datasets demonstrate that the proposed method consistently outperforms
state-of-the-art methods.
| no_new_dataset | 0.95018 |
1610.01119 | Limin Wang | Limin Wang, Sheng Guo, Weilin Huang, Yuanjun Xiong, Yu Qiao | Knowledge Guided Disambiguation for Large-Scale Scene Classification
with Multi-Resolution CNNs | To appear in IEEE Transactions on Image Processing. Code and models
are available at https://github.com/wanglimin/MRCNN-Scene-Recognition | null | 10.1109/TIP.2017.2675339 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNNs) have made remarkable progress on scene
recognition, partially due to these recent large-scale scene datasets, such as
the Places and Places2. Scene categories are often defined by multi-level
information, including local objects, global layout, and background
environment, thus leading to large intra-class variations. In addition, with
the increasing number of scene categories, label ambiguity has become another
crucial issue in large-scale classification. This paper focuses on large-scale
scene recognition and makes two major contributions to tackle these issues.
First, we propose a multi-resolution CNN architecture that captures visual
content and structure at multiple levels. The multi-resolution CNNs are
composed of coarse resolution CNNs and fine resolution CNNs, which are
complementary to each other. Second, we design two knowledge guided
disambiguation techniques to deal with the problem of label ambiguity. (i) We
exploit the knowledge from the confusion matrix computed on validation data to
merge ambiguous classes into a super category. (ii) We utilize the knowledge of
extra networks to produce a soft label for each image. Then the super
categories or soft labels are employed to guide CNN training on the Places2. We
conduct extensive experiments on three large-scale image datasets (ImageNet,
Places, and Places2), demonstrating the effectiveness of our approach.
Furthermore, our method takes part in two major scene recognition challenges,
and achieves the second place at the Places2 challenge in ILSVRC 2015, and the
first place at the LSUN challenge in CVPR 2016. Finally, we directly test the
learned representations on other scene benchmarks, and obtain the new
state-of-the-art results on the MIT Indoor67 (86.7\%) and SUN397 (72.0\%). We
release the code and models
at~\url{https://github.com/wanglimin/MRCNN-Scene-Recognition}.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2016 18:33:32 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2017 21:00:55 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Wang",
"Limin",
""
],
[
"Guo",
"Sheng",
""
],
[
"Huang",
"Weilin",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: Knowledge Guided Disambiguation for Large-Scale Scene Classification
with Multi-Resolution CNNs
ABSTRACT: Convolutional Neural Networks (CNNs) have made remarkable progress on scene
recognition, partially due to these recent large-scale scene datasets, such as
the Places and Places2. Scene categories are often defined by multi-level
information, including local objects, global layout, and background
environment, thus leading to large intra-class variations. In addition, with
the increasing number of scene categories, label ambiguity has become another
crucial issue in large-scale classification. This paper focuses on large-scale
scene recognition and makes two major contributions to tackle these issues.
First, we propose a multi-resolution CNN architecture that captures visual
content and structure at multiple levels. The multi-resolution CNNs are
composed of coarse resolution CNNs and fine resolution CNNs, which are
complementary to each other. Second, we design two knowledge guided
disambiguation techniques to deal with the problem of label ambiguity. (i) We
exploit the knowledge from the confusion matrix computed on validation data to
merge ambiguous classes into a super category. (ii) We utilize the knowledge of
extra networks to produce a soft label for each image. Then the super
categories or soft labels are employed to guide CNN training on the Places2. We
conduct extensive experiments on three large-scale image datasets (ImageNet,
Places, and Places2), demonstrating the effectiveness of our approach.
Furthermore, our method takes part in two major scene recognition challenges,
and achieves the second place at the Places2 challenge in ILSVRC 2015, and the
first place at the LSUN challenge in CVPR 2016. Finally, we directly test the
learned representations on other scene benchmarks, and obtain the new
state-of-the-art results on the MIT Indoor67 (86.7\%) and SUN397 (72.0\%). We
release the code and models
at~\url{https://github.com/wanglimin/MRCNN-Scene-Recognition}.
| no_new_dataset | 0.958731 |
1611.02730 | Angelica I. Aviles | Angelica I. Aviles, Thomas Widlak, Alicia Casals, Maartje M. Nillesen
and Habib Ammari | Robust Cardiac Motion Estimation using Ultrafast Ultrasound Data: A
Low-Rank-Topology-Preserving Approach | 15 pages, 10 figures, Physics in Medicine and Biology, 2017 | null | 10.1088/1361-6560/aa6914 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cardiac motion estimation is an important diagnostic tool to detect heart
diseases and it has been explored with modalities such as MRI and conventional
ultrasound (US) sequences. US cardiac motion estimation still presents
challenges because of the complex motion patterns and the presence of noise. In
this work, we propose a novel approach to estimate the cardiac motion using
ultrafast ultrasound data. -- Our solution is based on a variational
formulation characterized by the L2-regularized class. The displacement is
represented by a lattice of b-splines and we ensure robustness by applying a
maximum likelihood type estimator. While this is an important part of our
solution, the main highlight of this paper is to combine a low-rank data
representation with topology preservation. Low-rank data representation
(achieved by finding the k-dominant singular values of a Casorati Matrix
arranged from the data sequence) speeds up the global solution and achieves
noise reduction. On the other hand, topology preservation (achieved by
monitoring the Jacobian determinant) allows us to radically rule out distortions
while carefully controlling the size of allowed expansions and contractions.
Our variational approach is carried out on a realistic dataset as well as on a
simulated one. We demonstrate how our proposed variational solution deals with
complex deformations through careful numerical experiments. While maintaining
the accuracy of the solution, the low-rank preprocessing is shown to speed up
the convergence of the variational problem. Beyond cardiac motion estimation,
our approach is promising for the analysis of other organs that experience
motion.
| [
{
"version": "v1",
"created": "Wed, 26 Oct 2016 22:38:48 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 13:24:48 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Aviles",
"Angelica I.",
""
],
[
"Widlak",
"Thomas",
""
],
[
"Casals",
"Alicia",
""
],
[
"Nillesen",
"Maartje M.",
""
],
[
"Ammari",
"Habib",
""
]
] | TITLE: Robust Cardiac Motion Estimation using Ultrafast Ultrasound Data: A
Low-Rank-Topology-Preserving Approach
ABSTRACT: Cardiac motion estimation is an important diagnostic tool to detect heart
diseases and it has been explored with modalities such as MRI and conventional
ultrasound (US) sequences. US cardiac motion estimation still presents
challenges because of the complex motion patterns and the presence of noise. In
this work, we propose a novel approach to estimate the cardiac motion using
ultrafast ultrasound data. -- Our solution is based on a variational
formulation characterized by the L2-regularized class. The displacement is
represented by a lattice of b-splines and we ensure robustness by applying a
maximum likelihood type estimator. While this is an important part of our
solution, the main highlight of this paper is to combine a low-rank data
representation with topology preservation. Low-rank data representation
(achieved by finding the k-dominant singular values of a Casorati Matrix
arranged from the data sequence) speeds up the global solution and achieves
noise reduction. On the other hand, topology preservation (achieved by
monitoring the Jacobian determinant) allows us to radically rule out distortions
while carefully controlling the size of allowed expansions and contractions.
Our variational approach is carried out on a realistic dataset as well as on a
simulated one. We demonstrate how our proposed variational solution deals with
complex deformations through careful numerical experiments. While maintaining
the accuracy of the solution, the low-rank preprocessing is shown to speed up
the convergence of the variational problem. Beyond cardiac motion estimation,
our approach is promising for the analysis of other organs that experience
motion.
| no_new_dataset | 0.949856 |
1612.04054 | Zeng Zhi | Qingdong Hu, Hao Ma, Zhi Zeng, Jianping Cheng, Yunhua Chen, Shenming
He, Junli Li, Manbin Shen, Shiyong Wu, Qian Yue, Jianfeng Yue, Hui Zhang | Neutron background measurements at China Jinping underground laboratory
with a Bonner Multi-sphere Spectrometer | 10 pages,6 Figures, 3 Tables | null | 10.1016/j.nima.2017.03.048 | null | physics.ins-det hep-ex nucl-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The neutron background spectrum from thermal neutron to 20 MeV fast neutron
was measured at the first experimental hall of China Jinping underground
laboratory with a Bonner multi-sphere spectrometer. The measurement system was
validated by a Cf252 source and inconformity was corrected. Due to micro charge
discharge, the dataset was screened and background from the steel of the
detectors was estimated by MC simulation. Based on a genetic algorithm, we
obtained the neutron energy distribution; the total neutron flux was
(2.69 +/- 1.02) x 10^-5 cm^-2 s^-1.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 08:09:42 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Hu",
"Qingdong",
""
],
[
"Ma",
"Hao",
""
],
[
"Zeng",
"Zhi",
""
],
[
"Cheng",
"Jianping",
""
],
[
"Chen",
"Yunhua",
""
],
[
"He",
"Shenming",
""
],
[
"Li",
"Junli",
""
],
[
"Shen",
"Manbin",
""
],
[
"Wu",
"Shiyong",
""
],
[
"Yue",
"Qian",
""
],
[
"Yue",
"Jianfeng",
""
],
[
"Zhang",
"Hui",
""
]
] | TITLE: Neutron background measurements at China Jinping underground laboratory
with a Bonner Multi-sphere Spectrometer
ABSTRACT: The neutron background spectrum from thermal neutron to 20 MeV fast neutron
was measured at the first experimental hall of China Jinping underground
laboratory with a Bonner multi-sphere spectrometer. The measurement system was
validated by a Cf252 source and inconformity was corrected. Due to micro charge
discharge, the dataset was screened and background from the steel of the
detectors was estimated by MC simulation. Based on a genetic algorithm, we
obtained the neutron energy distribution; the total neutron flux was
(2.69 +/- 1.02) x 10^-5 cm^-2 s^-1.
| no_new_dataset | 0.925769 |
1703.05871 | Thomas Hartlep | Thomas Hartlep and Jeffrey N. Cuzzi and Brian Weston | Scale Dependence of Multiplier Distributions for Particle Concentration,
Enstrophy and Dissipation in the Inertial Range of Homogeneous Turbulence | 21 pages, 14 figures, accepted for publication in Physical Review E | null | 10.1103/PhysRevE.95.033115 | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Turbulent flows preferentially concentrate inertial particles depending on
their stopping time or Stokes number, which can lead to significant spatial
variations in the particle concentration. Cascade models are one way to
describe this process in statistical terms. Here, we use a direct numerical
simulation (DNS) dataset of homogeneous, isotropic turbulence to determine
probability distribution functions (PDFs) for cascade multipliers, which
determine the ratio by which a property is partitioned into sub-volumes as an
eddy is envisioned to decay into smaller eddies. We present a technique for
correcting effects of small particle numbers in the statistics. We determine
multiplier PDFs for particle number, flow dissipation, and enstrophy, all of
which are shown to be scale dependent. However, the particle multiplier PDFs
collapse when scaled with an appropriately defined local Stokes number. As
anticipated from earlier works, dissipation and enstrophy multiplier PDFs reach
an asymptote for sufficiently small spatial scales. From the DNS measurements,
we derive a cascade model that is used to make predictions for the radial
distribution function (RDF) for arbitrarily high Reynolds numbers, $Re$,
finding good agreement with the asymptotic, infinite $Re$ inertial range theory
of Zaichik and Alipchenkov [New Journal of Physics 11, 103018 (2009)]. We
discuss implications of these results for the statistical modeling of the
turbulent clustering process in the inertial range for high Reynolds numbers
inaccessible to numerical simulations.
| [
{
"version": "v1",
"created": "Fri, 17 Mar 2017 02:48:24 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Hartlep",
"Thomas",
""
],
[
"Cuzzi",
"Jeffrey N.",
""
],
[
"Weston",
"Brian",
""
]
] | TITLE: Scale Dependence of Multiplier Distributions for Particle Concentration,
Enstrophy and Dissipation in the Inertial Range of Homogeneous Turbulence
ABSTRACT: Turbulent flows preferentially concentrate inertial particles depending on
their stopping time or Stokes number, which can lead to significant spatial
variations in the particle concentration. Cascade models are one way to
describe this process in statistical terms. Here, we use a direct numerical
simulation (DNS) dataset of homogeneous, isotropic turbulence to determine
probability distribution functions (PDFs) for cascade multipliers, which
determine the ratio by which a property is partitioned into sub-volumes as an
eddy is envisioned to decay into smaller eddies. We present a technique for
correcting effects of small particle numbers in the statistics. We determine
multiplier PDFs for particle number, flow dissipation, and enstrophy, all of
which are shown to be scale dependent. However, the particle multiplier PDFs
collapse when scaled with an appropriately defined local Stokes number. As
anticipated from earlier works, dissipation and enstrophy multiplier PDFs reach
an asymptote for sufficiently small spatial scales. From the DNS measurements,
we derive a cascade model that is used to make predictions for the radial
distribution function (RDF) for arbitrarily high Reynolds numbers, $Re$,
finding good agreement with the asymptotic, infinite $Re$ inertial range theory
of Zaichik and Alipchenkov [New Journal of Physics 11, 103018 (2009)]. We
discuss implications of these results for the statistical modeling of the
turbulent clustering process in the inertial range for high Reynolds numbers
inaccessible to numerical simulations.
| no_new_dataset | 0.95561 |
1703.08383 | Joseph Lemley | Joseph Lemley, Shabab Bazrafkan, Peter Corcoran | Smart Augmentation - Learning an Optimal Data Augmentation Strategy | null | null | 10.1109/ACCESS.2017.2696121 | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recurring problem faced when training neural networks is that there is
typically not enough data to maximize the generalization capability of deep
neural networks (DNN). There are many techniques to address this, including data
augmentation, dropout, and transfer learning. In this paper, we introduce an
additional method which we call Smart Augmentation and we show how to use it to
increase the accuracy and reduce overfitting on a target network. Smart
Augmentation works by creating a network that learns how to generate augmented
data during the training process of a target network in a way that reduces that
network's loss. This allows us to learn augmentations that minimize the error of
that network.
Smart Augmentation has shown the potential to increase accuracy by
demonstrably significant measures on all datasets tested. In addition, it has
shown potential to achieve similar or improved performance levels with
significantly smaller network sizes in a number of tested cases.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 12:07:34 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Lemley",
"Joseph",
""
],
[
"Bazrafkan",
"Shabab",
""
],
[
"Corcoran",
"Peter",
""
]
] | TITLE: Smart Augmentation - Learning an Optimal Data Augmentation Strategy
ABSTRACT: A recurring problem faced when training neural networks is that there is
typically not enough data to maximize the generalization capability of deep
neural networks (DNN). There are many techniques to address this, including data
augmentation, dropout, and transfer learning. In this paper, we introduce an
additional method which we call Smart Augmentation and we show how to use it to
increase the accuracy and reduce overfitting on a target network. Smart
Augmentation works by creating a network that learns how to generate augmented
data during the training process of a target network in a way that reduces that
network's loss. This allows us to learn augmentations that minimize the error of
that network.
Smart Augmentation has shown the potential to increase accuracy by
demonstrably significant measures on all datasets tested. In addition, it has
shown potential to achieve similar or improved performance levels with
significantly smaller network sizes in a number of tested cases.
| no_new_dataset | 0.949106 |
1704.06693 | Sandipan Banerjee | Sandipan Banerjee, John S. Bernhard, Walter J. Scheirer, Kevin W.
Bowyer, Patrick J. Flynn | SREFI: Synthesis of Realistic Example Face Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel face synthesis approach that can generate
an arbitrarily large number of synthetic images of both real and synthetic
identities. Thus a face image dataset can be expanded in terms of the number of
identities represented and the number of images per identity using this
approach, without the identity-labeling and privacy complications that come
from downloading images from the web. To measure the visual fidelity and
uniqueness of the synthetic face images and identities, we conducted face
matching experiments with both human participants and a CNN pre-trained on a
dataset of 2.6M real face images. To evaluate the stability of these synthetic
faces, we trained a CNN model with an augmented dataset containing close to
200,000 synthetic faces. We used a snapshot of this trained CNN to recognize
extremely challenging frontal (real) face images. Experiments showed training
with the augmented faces boosted the face recognition performance of the CNN.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 19:59:47 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 03:54:34 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Banerjee",
"Sandipan",
""
],
[
"Bernhard",
"John S.",
""
],
[
"Scheirer",
"Walter J.",
""
],
[
"Bowyer",
"Kevin W.",
""
],
[
"Flynn",
"Patrick J.",
""
]
] | TITLE: SREFI: Synthesis of Realistic Example Face Images
ABSTRACT: In this paper, we propose a novel face synthesis approach that can generate
an arbitrarily large number of synthetic images of both real and synthetic
identities. Thus a face image dataset can be expanded in terms of the number of
identities represented and the number of images per identity using this
approach, without the identity-labeling and privacy complications that come
from downloading images from the web. To measure the visual fidelity and
uniqueness of the synthetic face images and identities, we conducted face
matching experiments with both human participants and a CNN pre-trained on a
dataset of 2.6M real face images. To evaluate the stability of these synthetic
faces, we trained a CNN model with an augmented dataset containing close to
200,000 synthetic faces. We used a snapshot of this trained CNN to recognize
extremely challenging frontal (real) face images. Experiments showed training
with the augmented faces boosted the face recognition performance of the CNN.
| no_new_dataset | 0.910147 |
1704.07160 | Congqi Cao | Congqi Cao, Yifan Zhang, Chunjie Zhang and Hanqing Lu | Body Joint guided 3D Deep Convolutional Descriptors for Action
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three dimensional convolutional neural networks (3D CNNs) have been
established as a powerful tool to simultaneously learn features from both
spatial and temporal dimensions, which is suitable to be applied to video-based
action recognition. In this work, we propose not to directly use the
activations of fully-connected layers of a 3D CNN as the video feature, but to
use selective convolutional layer activations to form a discriminative
descriptor for video. It pools the feature on the convolutional layers under
the guidance of body joint positions. Two schemes of mapping body joints into
convolutional feature maps for pooling are discussed. The body joint positions
can be obtained from any off-the-shelf skeleton estimation algorithm. The
helpfulness of the body joint guided feature pooling with inaccurate skeleton
estimation is systematically evaluated. To make it end-to-end and avoid relying
on any sophisticated body joint detection algorithm, we further propose a
two-stream bilinear model which can learn the guidance from the body joints and
capture the spatio-temporal features simultaneously. In this model, the body
joint guided feature pooling is conveniently formulated as a bilinear product
operation. Experimental results on three real-world datasets demonstrate the
effectiveness of body joint guided pooling which achieves promising
performance.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 11:58:24 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 15:08:05 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Cao",
"Congqi",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Zhang",
"Chunjie",
""
],
[
"Lu",
"Hanqing",
""
]
] | TITLE: Body Joint guided 3D Deep Convolutional Descriptors for Action
Recognition
ABSTRACT: Three dimensional convolutional neural networks (3D CNNs) have been
established as a powerful tool to simultaneously learn features from both
spatial and temporal dimensions, which is suitable to be applied to video-based
action recognition. In this work, we propose not to directly use the
activations of fully-connected layers of a 3D CNN as the video feature, but to
use selective convolutional layer activations to form a discriminative
descriptor for video. It pools the feature on the convolutional layers under
the guidance of body joint positions. Two schemes of mapping body joints into
convolutional feature maps for pooling are discussed. The body joint positions
can be obtained from any off-the-shelf skeleton estimation algorithm. The
helpfulness of the body joint guided feature pooling with inaccurate skeleton
estimation is systematically evaluated. To make it end-to-end and avoid relying
on any sophisticated body joint detection algorithm, we further propose a
two-stream bilinear model which can learn the guidance from the body joints and
capture the spatio-temporal features simultaneously. In this model, the body
joint guided feature pooling is conveniently formulated as a bilinear product
operation. Experimental results on three real-world datasets demonstrate the
effectiveness of body joint guided pooling which achieves promising
performance.
| no_new_dataset | 0.951097 |
1704.07405 | Sabbir Ahmad | Sabbir Ahmad, Rafi Kamal, Mohammed Eunus Ali, Jianzhong Qi, Peter
Scheuermann and Egemen Tanin | The Flexible Group Spatial Keyword Query | 12 pages | null | null | null | cs.SI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new class of service for location based social networks, called
the Flexible Group Spatial Keyword Query, which enables a group of users to
collectively find a point of interest (POI) that optimizes an aggregate cost
function combining both spatial distances and keyword similarities. In
addition, our query service allows users to consider the tradeoffs between
obtaining a sub-optimal solution for the entire group and obtaining an
optimized solution but only for a subgroup.
We propose algorithms to process three variants of the query: (i) the group
nearest neighbor with keywords query, which finds a POI that optimizes the
aggregate cost function for the whole group of size n, (ii) the subgroup
nearest neighbor with keywords query, which finds the optimal subgroup and a
POI that optimizes the aggregate cost function for a given subgroup size m (m
<= n), and (iii) the multiple subgroup nearest neighbor with keywords query,
which finds optimal subgroups and corresponding POIs for each of the subgroup
sizes in the range [m, n]. We design query processing algorithms based on
branch-and-bound and best-first paradigms. Finally, we provide theoretical
bounds and conduct extensive experiments with two real datasets which verify
the effectiveness and efficiency of the proposed algorithms.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 18:33:50 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Ahmad",
"Sabbir",
""
],
[
"Kamal",
"Rafi",
""
],
[
"Ali",
"Mohammed Eunus",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Scheuermann",
"Peter",
""
],
[
"Tanin",
"Egemen",
""
]
] | TITLE: The Flexible Group Spatial Keyword Query
ABSTRACT: We present a new class of service for location based social networks, called
the Flexible Group Spatial Keyword Query, which enables a group of users to
collectively find a point of interest (POI) that optimizes an aggregate cost
function combining both spatial distances and keyword similarities. In
addition, our query service allows users to consider the tradeoffs between
obtaining a sub-optimal solution for the entire group and obtaining an
optimized solution but only for a subgroup.
We propose algorithms to process three variants of the query: (i) the group
nearest neighbor with keywords query, which finds a POI that optimizes the
aggregate cost function for the whole group of size n, (ii) the subgroup
nearest neighbor with keywords query, which finds the optimal subgroup and a
POI that optimizes the aggregate cost function for a given subgroup size m (m
<= n), and (iii) the multiple subgroup nearest neighbor with keywords query,
which finds optimal subgroups and corresponding POIs for each of the subgroup
sizes in the range [m, n]. We design query processing algorithms based on
branch-and-bound and best-first paradigms. Finally, we provide theoretical
bounds and conduct extensive experiments with two real datasets which verify
the effectiveness and efficiency of the proposed algorithms.
| no_new_dataset | 0.9463 |
1704.07461 | Ashwin Pananjady | Ashwin Pananjady, Martin J. Wainwright, Thomas A. Courtade | Denoising Linear Models with Permuted Data | To appear in part at ISIT 2017, Aachen | null | null | null | stat.ML cs.IT math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multivariate linear regression model with shuffled data and additive
Gaussian noise arises in various correspondence estimation and matching
problems. Focusing on the denoising aspect of this problem, we provide a
characterization of the minimax error rate that is sharp up to logarithmic
factors. We also analyze the performance of two versions of a computationally
efficient estimator, and establish their consistency for a large range of input
parameters. Finally, we provide an exact algorithm for the noiseless problem
and demonstrate its performance on an image point-cloud matching task. Our
analysis also extends to datasets with outliers.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 20:46:48 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Pananjady",
"Ashwin",
""
],
[
"Wainwright",
"Martin J.",
""
],
[
"Courtade",
"Thomas A.",
""
]
] | TITLE: Denoising Linear Models with Permuted Data
ABSTRACT: The multivariate linear regression model with shuffled data and additive
Gaussian noise arises in various correspondence estimation and matching
problems. Focusing on the denoising aspect of this problem, we provide a
characterization of the minimax error rate that is sharp up to logarithmic
factors. We also analyze the performance of two versions of a computationally
efficient estimator, and establish their consistency for a large range of input
parameters. Finally, we provide an exact algorithm for the noiseless problem
and demonstrate its performance on an image point-cloud matching task. Our
analysis also extends to datasets with outliers.
| no_new_dataset | 0.947186 |
1704.07505 | Feng Nan | Feng Nan and Venkatesh Saligrama | Dynamic Model Selection for Prediction Under a Budget | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dynamic model selection approach for resource-constrained
prediction. Given an input instance at test-time, a gating function identifies
a prediction model for the input among a collection of models. Our objective is
to minimize overall average cost without sacrificing accuracy. We learn gating
and prediction models on fully labeled training data by means of a bottom-up
strategy. Our novel bottom-up method is a recursive scheme whereby a
high-accuracy complex model is first trained. Then a low-complexity gating and
prediction model are subsequently learnt to adaptively approximate the
high-accuracy model in regions where low-cost models are capable of making
highly accurate predictions. We pose an empirical loss minimization problem
with cost constraints to jointly train gating and prediction models. On a
number of benchmark datasets our method outperforms state-of-the-art achieving
higher accuracy for the same cost.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 01:17:22 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Nan",
"Feng",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Dynamic Model Selection for Prediction Under a Budget
ABSTRACT: We present a dynamic model selection approach for resource-constrained
prediction. Given an input instance at test-time, a gating function identifies
a prediction model for the input among a collection of models. Our objective is
to minimize overall average cost without sacrificing accuracy. We learn gating
and prediction models on fully labeled training data by means of a bottom-up
strategy. Our novel bottom-up method is a recursive scheme whereby a
high-accuracy complex model is first trained. Then a low-complexity gating and
prediction model is subsequently learnt to adaptively approximate the
high-accuracy model in regions where low-cost models are capable of making
highly accurate predictions. We pose an empirical loss minimization problem
with cost constraints to jointly train gating and prediction models. On a
number of benchmark datasets our method outperforms state-of-the-art achieving
higher accuracy for the same cost.
| no_new_dataset | 0.947284 |
1704.07535 | Maxim Rabinovich | Maxim Rabinovich, Mitchell Stern, Dan Klein | Abstract Syntax Networks for Code Generation and Semantic Parsing | ACL 2017. MR and MS contributed equally | null | null | null | cs.CL cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tasks like code generation and semantic parsing require mapping unstructured
(or partially structured) inputs to well-formed, executable outputs. We
introduce abstract syntax networks, a modeling framework for these problems.
The outputs are represented as abstract syntax trees (ASTs) and constructed by
a decoder with a dynamically-determined modular structure paralleling the
structure of the output tree. On the benchmark Hearthstone dataset for code
generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy,
compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we
perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with
no task-specific engineering.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 04:37:35 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Rabinovich",
"Maxim",
""
],
[
"Stern",
"Mitchell",
""
],
[
"Klein",
"Dan",
""
]
] | TITLE: Abstract Syntax Networks for Code Generation and Semantic Parsing
ABSTRACT: Tasks like code generation and semantic parsing require mapping unstructured
(or partially structured) inputs to well-formed, executable outputs. We
introduce abstract syntax networks, a modeling framework for these problems.
The outputs are represented as abstract syntax trees (ASTs) and constructed by
a decoder with a dynamically-determined modular structure paralleling the
structure of the output tree. On the benchmark Hearthstone dataset for code
generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy,
compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we
perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with
no task-specific engineering.
| no_new_dataset | 0.953449 |
1704.07548 | Huiguang He | Changde Du, Changying Du, Jinpeng Li, Wei-long Zheng, Bao-liang Lu,
Huiguang He | Semi-supervised Bayesian Deep Multi-modal Emotion Recognition | null | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In emotion recognition, it is difficult to recognize human's emotional states
using just a single modality. Besides, the annotation of physiological
emotional data is particularly expensive. These two aspects make the building
of effective emotion recognition model challenging. In this paper, we first
build a multi-view deep generative model to simulate the generative process of
multi-modality emotional data. By imposing a mixture of Gaussians assumption on
the posterior approximation of the latent variables, our model can learn the
shared deep representation from multiple modalities. To solve the
labeled-data-scarcity problem, we further extend our multi-view model to
semi-supervised learning scenario by casting the semi-supervised classification
problem as a specialized missing data imputation task. Our semi-supervised
multi-view deep generative framework can leverage both labeled and unlabeled
data from multiple modalities, where the weight factor for each modality can be
learned automatically. Compared with previous emotion recognition methods, our
method is more robust and flexible. The experiments conducted on two real
multi-modal emotion datasets have demonstrated the superiority of our framework
over a number of competitors.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 06:29:59 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Du",
"Changde",
""
],
[
"Du",
"Changying",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Zheng",
"Wei-long",
""
],
[
"Lu",
"Bao-liang",
""
],
[
"He",
"Huiguang",
""
]
] | TITLE: Semi-supervised Bayesian Deep Multi-modal Emotion Recognition
ABSTRACT: In emotion recognition, it is difficult to recognize human's emotional states
using just a single modality. Besides, the annotation of physiological
emotional data is particularly expensive. These two aspects make the building
of effective emotion recognition model challenging. In this paper, we first
build a multi-view deep generative model to simulate the generative process of
multi-modality emotional data. By imposing a mixture of Gaussians assumption on
the posterior approximation of the latent variables, our model can learn the
shared deep representation from multiple modalities. To solve the
labeled-data-scarcity problem, we further extend our multi-view model to
semi-supervised learning scenario by casting the semi-supervised classification
problem as a specialized missing data imputation task. Our semi-supervised
multi-view deep generative framework can leverage both labeled and unlabeled
data from multiple modalities, where the weight factor for each modality can be
learned automatically. Compared with previous emotion recognition methods, our
method is more robust and flexible. The experiments conducted on two real
multi-modal emotion datasets have demonstrated the superiority of our framework
over a number of competitors.
| no_new_dataset | 0.945701 |
1704.07709 | Md Zahangir Alom | Md Zahangir Alom, Mahmudul Hasan, Chris Yakopcic, Tarek M. Taha | Inception Recurrent Convolutional Neural Network for Object Recognition | 11 pages, 10 figures, 2 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep convolutional neural networks (DCNNs) are an influential tool for
solving various problems in the machine learning and computer vision fields. In
this paper, we introduce a new deep learning model called an Inception-
Recurrent Convolutional Neural Network (IRCNN), which utilizes the power of an
inception network combined with recurrent layers in DCNN architecture. We have
empirically evaluated the recognition performance of the proposed IRCNN model
using different benchmark datasets such as MNIST, CIFAR-10, CIFAR-100, and
SVHN. Experimental results show similar or higher recognition accuracy when
compared to most of the popular DCNNs including the RCNN. Furthermore, we have
investigated IRCNN performance against equivalent Inception Networks and
Inception-Residual Networks using the CIFAR-100 dataset. We report about 3.5%,
3.47% and 2.54% improvement in classification accuracy when compared to the
RCNN, equivalent Inception Networks, and Inception-Residual Networks on the
augmented CIFAR-100 dataset, respectively.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 14:19:26 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Alom",
"Md Zahangir",
""
],
[
"Hasan",
"Mahmudul",
""
],
[
"Yakopcic",
"Chris",
""
],
[
"Taha",
"Tarek M.",
""
]
] | TITLE: Inception Recurrent Convolutional Neural Network for Object Recognition
ABSTRACT: Deep convolutional neural networks (DCNNs) are an influential tool for
solving various problems in the machine learning and computer vision fields. In
this paper, we introduce a new deep learning model called an Inception-
Recurrent Convolutional Neural Network (IRCNN), which utilizes the power of an
inception network combined with recurrent layers in DCNN architecture. We have
empirically evaluated the recognition performance of the proposed IRCNN model
using different benchmark datasets such as MNIST, CIFAR-10, CIFAR-100, and
SVHN. Experimental results show similar or higher recognition accuracy when
compared to most of the popular DCNNs including the RCNN. Furthermore, we have
investigated IRCNN performance against equivalent Inception Networks and
Inception-Residual Networks using the CIFAR-100 dataset. We report about 3.5%,
3.47% and 2.54% improvement in classification accuracy when compared to the
RCNN, equivalent Inception Networks, and Inception-Residual Networks on the
augmented CIFAR-100 dataset, respectively.
| no_new_dataset | 0.95418 |
1704.07751 | Maxim Rabinovich | Maxim Rabinovich, Dan Klein | Fine-Grained Entity Typing with High-Multiplicity Assignments | ACL 2017 | null | null | null | cs.CL cs.AI cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As entity type systems become richer and more fine-grained, we expect the
number of types assigned to a given entity to increase. However, most
fine-grained typing work has focused on datasets that exhibit a low degree of
type multiplicity. In this paper, we consider the high-multiplicity regime
inherent in data sources such as Wikipedia that have semi-open type systems. We
introduce a set-prediction approach to this problem and show that our model
outperforms unstructured baselines on a new Wikipedia-based fine-grained typing
corpus.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 15:52:52 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Rabinovich",
"Maxim",
""
],
[
"Klein",
"Dan",
""
]
] | TITLE: Fine-Grained Entity Typing with High-Multiplicity Assignments
ABSTRACT: As entity type systems become richer and more fine-grained, we expect the
number of types assigned to a given entity to increase. However, most
fine-grained typing work has focused on datasets that exhibit a low degree of
type multiplicity. In this paper, we consider the high-multiplicity regime
inherent in data sources such as Wikipedia that have semi-open type systems. We
introduce a set-prediction approach to this problem and show that our model
outperforms unstructured baselines on a new Wikipedia-based fine-grained typing
corpus.
| no_new_dataset | 0.946448 |
1704.07790 | Haoyi Xiong | Haoyi Xiong, Wei Cheng, Wenqing Hu, Jiang Bian, and Zhishan Guo | FWDA: a Fast Wishart Discriminant Analysis with its Application to
Electronic Health Records Data Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear Discriminant Analysis (LDA) on Electronic Health Records (EHR) data is
widely-used for early detection of diseases. Classical LDA for EHR data
classification, however, suffers from two handicaps: the ill-posed estimation
of LDA parameters (e.g., covariance matrix), and the "linear inseparability" of
EHR data. To handle these two issues, in this paper, we propose a novel
classifier FWDA -- Fast Wishart Discriminant Analysis, that makes predictions
in an ensemble way. Specifically, FWDA first surrogates the distribution of
inverse covariance matrices using a Wishart distribution estimated from the
training data, then "weighted-averages" the classification results of multiple
LDA classifiers parameterized by the sampled inverse covariance matrices via a
Bayesian Voting scheme. The weights for voting are optimally updated to adapt
each new input data, so as to enable the nonlinear classification. Theoretical
analysis indicates that FWDA possesses a fast convergence rate and a robust
performance on high dimensional data. Extensive experiments on large-scale EHR
dataset show that our approach outperforms state-of-the-art algorithms by a
large margin.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2017 17:11:57 GMT"
}
] | 2017-04-26T00:00:00 | [
[
"Xiong",
"Haoyi",
""
],
[
"Cheng",
"Wei",
""
],
[
"Hu",
"Wenqing",
""
],
[
"Bian",
"Jiang",
""
],
[
"Guo",
"Zhishan",
""
]
] | TITLE: FWDA: a Fast Wishart Discriminant Analysis with its Application to
Electronic Health Records Data Classification
ABSTRACT: Linear Discriminant Analysis (LDA) on Electronic Health Records (EHR) data is
widely-used for early detection of diseases. Classical LDA for EHR data
classification, however, suffers from two handicaps: the ill-posed estimation
of LDA parameters (e.g., covariance matrix), and the "linear inseparability" of
EHR data. To handle these two issues, in this paper, we propose a novel
classifier FWDA -- Fast Wishart Discriminant Analysis, that makes predictions
in an ensemble way. Specifically, FWDA first surrogates the distribution of
inverse covariance matrices using a Wishart distribution estimated from the
training data, then "weighted-averages" the classification results of multiple
LDA classifiers parameterized by the sampled inverse covariance matrices via a
Bayesian Voting scheme. The weights for voting are optimally updated to adapt
each new input data, so as to enable the nonlinear classification. Theoretical
analysis indicates that FWDA possesses a fast convergence rate and a robust
performance on high dimensional data. Extensive experiments on large-scale EHR
dataset show that our approach outperforms state-of-the-art algorithms by a
large margin.
| no_new_dataset | 0.946448 |
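As an aside for readers of this listing: the FWDA record above describes a fairly concrete procedure (sample inverse covariance matrices from a fitted Wishart distribution, then combine the resulting LDA classifiers by voting). Below is a minimal, illustrative sketch of that idea, not the authors' implementation; the Wishart fit, the plain unweighted vote, the regularisation constant, and the function name wishart_lda_ensemble are assumptions made here, whereas the paper uses adaptively updated Bayesian voting weights.

```python
import numpy as np
from scipy.stats import wishart

def wishart_lda_ensemble(X_train, y_train, X_test, n_draws=50, seed=0):
    """Ensemble of LDA classifiers whose precision (inverse covariance) matrices
    are drawn from a Wishart distribution centred on the pooled sample estimate."""
    classes = np.unique(y_train)
    n, d = X_train.shape
    means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    priors = np.array([np.mean(y_train == c) for c in classes])
    # Pooled within-class covariance, lightly regularised for numerical stability.
    pooled = sum(np.cov(X_train[y_train == c].T, bias=True) * np.sum(y_train == c)
                 for c in classes) / n + 1e-3 * np.eye(d)
    df = max(n, d + 1)                  # Wishart degrees of freedom (a modelling choice)
    scale = np.linalg.inv(pooled) / df  # so the Wishart mean equals inv(pooled)
    votes = np.zeros((len(X_test), len(classes)))
    for k in range(n_draws):
        P = wishart.rvs(df=df, scale=scale, random_state=seed + k)  # sampled precision matrix
        # LDA discriminant: x' P mu_c - 0.5 mu_c' P mu_c + log pi_c
        quad = np.einsum('cd,de,ce->c', means, P, means)
        scores = X_test @ P @ means.T - 0.5 * quad + np.log(priors)
        votes[np.arange(len(X_test)), scores.argmax(axis=1)] += 1.0  # unweighted vote
    return classes[votes.argmax(axis=1)]
```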
1111.4470 | Aryeh Kontorovich | Lee-Ad Gottlieb and Aryeh Kontorovich and Robert Krauthgamer | Efficient Regression in Metric Spaces via Approximate Lipschitz
Extension | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for performing efficient regression in general metric
spaces. Roughly speaking, our regressor predicts the value at a new point by
computing a Lipschitz extension --- the smoothest function consistent with the
observed data --- after performing structural risk minimization to avoid
overfitting. We obtain finite-sample risk bounds with minimal structural and
noise assumptions, and a natural speed-precision tradeoff. The offline
(learning) and online (prediction) stages can be solved by convex programming,
but this naive approach has runtime complexity $O(n^3)$, which is prohibitive
for large datasets. We design instead a regression algorithm whose speed and
generalization performance depend on the intrinsic dimension of the data, to
which the algorithm adapts. While our main innovation is algorithmic, the
statistical results may also be of independent interest.
| [
{
"version": "v1",
"created": "Fri, 18 Nov 2011 20:32:33 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jul 2015 17:06:29 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2017 07:56:53 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Gottlieb",
"Lee-Ad",
""
],
[
"Kontorovich",
"Aryeh",
""
],
[
"Krauthgamer",
"Robert",
""
]
] | TITLE: Efficient Regression in Metric Spaces via Approximate Lipschitz
Extension
ABSTRACT: We present a framework for performing efficient regression in general metric
spaces. Roughly speaking, our regressor predicts the value at a new point by
computing a Lipschitz extension --- the smoothest function consistent with the
observed data --- after performing structural risk minimization to avoid
overfitting. We obtain finite-sample risk bounds with minimal structural and
noise assumptions, and a natural speed-precision tradeoff. The offline
(learning) and online (prediction) stages can be solved by convex programming,
but this naive approach has runtime complexity $O(n^3)$, which is prohibitive
for large datasets. We design instead a regression algorithm whose speed and
generalization performance depend on the intrinsic dimension of the data, to
which the algorithm adapts. While our main innovation is algorithmic, the
statistical results may also be of independent interest.
| no_new_dataset | 0.947672 |
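The "Lipschitz extension" predictor described in the abstract above has a classical closed form at a single query point, the McShane-Whitney extension, which the following sketch implements. This is only the naive per-query baseline: the Lipschitz constant L is treated as a given hyperparameter rather than chosen by structural risk minimization, and the function name and default Euclidean metric are assumptions.

```python
import numpy as np

def lipschitz_extension_predict(X_train, y_train, x_new, L, metric=None):
    """Naive McShane-Whitney prediction at one query point.

    upper(x) = min_i (y_i + L * d(x, x_i)) is the largest L-Lipschitz function
    consistent with the data, lower(x) = max_i (y_i - L * d(x, x_i)) the
    smallest; returning their midpoint is one canonical "smoothest" choice.
    """
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)  # assumed default metric
    dists = np.array([metric(x_new, x) for x in X_train])
    upper = np.min(np.asarray(y_train) + L * dists)
    lower = np.max(np.asarray(y_train) - L * dists)
    return 0.5 * (upper + lower)
```

Evaluating this exactly for every query is what the abstract calls prohibitive on large datasets; the paper replaces the exact computation with an approximate one whose cost depends on the intrinsic dimension of the data.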
1509.00104 | Wenqiang Liu | Wenqiang Liu | Truth Discovery to Resolve Object Conflicts in Linked Data | Have many crucial faults in this version | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the community of Linked Data, anyone can publish their data as Linked Data
on the web because of the openness of the Semantic Web. As such, RDF (Resource
Description Framework) triples describing the same real-world entity can be
obtained from multiple sources; this inevitably results in conflicting objects
for a certain predicate of a real-world entity. The objective of this study is
to identify one truth from multiple conflicting objects for a certain predicate
of a real-world entity. An intuitive principle based on common sense is that an
object from a reliable source is trustworthy; thus, a source that provides
trustworthy objects is reliable. Many truth discovery methods based on this
principle have been proposed to estimate source reliability and identify the
truth. However, the effectiveness of existing truth discovery methods is
significantly affected by the number of objects provided by each source.
Therefore, these methods cannot be trivially extended to resolve conflicts in
Linked Data with a scale-free property, i.e., most of the sources provide few
conflicting objects, whereas only a few sources have many conflicting objects.
To address this challenge, we propose a novel approach called TruthDiscover to
identify the truth in Linked Data with a scale-free property. Two strategies
are adopted in TruthDiscover to reduce the effect of the scale-free property on
truth discovery. First, this approach leverages the topological properties of
the Source Belief Graph to estimate the prior beliefs of sources, which are
utilized to smooth the trustworthiness of sources. Second, this approach
utilizes the Hidden Markov Random Field to model the interdependencies between
objects to estimate the trust values of objects accurately. Experiments are
conducted on six datasets to evaluate TruthDiscover.
| [
{
"version": "v1",
"created": "Tue, 1 Sep 2015 00:58:16 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Sep 2015 00:40:30 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Nov 2015 00:56:14 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Nov 2015 12:00:26 GMT"
},
{
"version": "v5",
"created": "Sat, 28 Nov 2015 09:38:52 GMT"
},
{
"version": "v6",
"created": "Tue, 8 Mar 2016 02:10:02 GMT"
},
{
"version": "v7",
"created": "Wed, 22 Feb 2017 21:34:06 GMT"
},
{
"version": "v8",
"created": "Fri, 21 Apr 2017 22:46:34 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Liu",
"Wenqiang",
""
]
] | TITLE: Truth Discovery to Resolve Object Conflicts in Linked Data
ABSTRACT: In the community of Linked Data, anyone can publish their data as Linked Data
on the web because of the openness of the Semantic Web. As such, RDF (Resource
Description Framework) triples describing the same real-world entity can be
obtained from multiple sources; this inevitably results in conflicting objects
for a certain predicate of a real-world entity. The objective of this study is
to identify one truth from multiple conflicting objects for a certain predicate
of a real-world entity. An intuitive principle based on common sense is that an
object from a reliable source is trustworthy; thus, a source that provides
trustworthy objects is reliable. Many truth discovery methods based on this
principle have been proposed to estimate source reliability and identify the
truth. However, the effectiveness of existing truth discovery methods is
significantly affected by the number of objects provided by each source.
Therefore, these methods cannot be trivially extended to resolve conflicts in
Linked Data with a scale-free property, i.e., most of the sources provide few
conflicting objects, whereas only a few sources have many conflicting objects.
To address this challenge, we propose a novel approach called TruthDiscover to
identify the truth in Linked Data with a scale-free property. Two strategies
are adopted in TruthDiscover to reduce the effect of the scale-free property on
truth discovery. First, this approach leverages the topological properties of
the Source Belief Graph to estimate the prior beliefs of sources, which are
utilized to smooth the trustworthiness of sources. Second, this approach
utilizes the Hidden Markov Random Field to model the interdependencies between
objects to estimate the trust values of objects accurately. Experiments are
conducted on six datasets to evaluate TruthDiscover.
| no_new_dataset | 0.950869 |
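The reliable-source/trustworthy-object principle that the TruthDiscover abstract above builds on is usually realized as an alternating fixed-point computation. The sketch below shows only that generic loop; it does not include TruthDiscover's Source Belief Graph priors or the Hidden Markov Random Field over objects, and the data layout, initial trust value, and function name are illustrative assumptions.

```python
from collections import defaultdict

def iterative_truth_discovery(claims, n_iters=20):
    """Generic truth-discovery loop over (source, (entity, predicate), value) claims.

    Alternates between scoring candidate objects by the trust of their sources
    and re-estimating each source's trust from the scores of the objects it
    provides. TruthDiscover's graph priors and HMRF smoothing are omitted.
    """
    sources = {s for s, _, _ in claims}
    trust = {s: 0.8 for s in sources}  # assumed uniform initial trustworthiness

    conf = {}
    for _ in range(n_iters):
        # Object confidence: trust-weighted votes, normalized per (entity, predicate).
        support = defaultdict(float)
        for s, key, val in claims:
            support[(key, val)] += trust[s]
        totals = defaultdict(float)
        for (key, val), w in support.items():
            totals[key] += w
        conf = {kv: w / totals[kv[0]] for kv, w in support.items()}

        # Source trust: average confidence of the objects the source asserts.
        sums, counts = defaultdict(float), defaultdict(int)
        for s, key, val in claims:
            sums[s] += conf[(key, val)]
            counts[s] += 1
        trust = {s: sums[s] / counts[s] for s in sources}

    # Keep the highest-confidence object for each (entity, predicate).
    best = {}
    for (key, val), c in conf.items():
        if key not in best or c > best[key][1]:
            best[key] = (val, c)
    return {key: val for key, (val, _) in best.items()}, trust
```

A claim list such as [("src1", ("Berlin", "population"), "3.6M"), ("src2", ("Berlin", "population"), "3.4M")] would yield one resolved value per (entity, predicate) pair plus a trust score per source.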