id (string, lengths 9–16) | submitter (string, lengths 3–64, ⌀) | authors (string, lengths 5–6.63k) | title (string, lengths 7–245) | comments (string, lengths 1–482, ⌀) | journal-ref (string, lengths 4–382, ⌀) | doi (string, lengths 9–151, ⌀) | report-no (string, 984 classes) | categories (string, lengths 5–108) | license (string, 9 classes) | abstract (string, lengths 83–3.41k) | versions (list, lengths 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, lengths 1–427) | prompt (string, lengths 166–3.49k) | label (string, 2 classes) | prob (float64, 0.5–0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1604.03336 | Raef Bassily | Raef Bassily and Yoav Freund | Typical Stability | New sections, extended discussions, and complete proofs | null | null | null | cs.LG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a notion of algorithmic stability called typical
stability. When our goal is to release real-valued queries (statistics)
computed over a dataset, this notion does not require the queries to be of
bounded sensitivity -- a condition that is generally assumed under differential
privacy [DMNS06, Dwork06] when used as a notion of algorithmic stability
[DFHPRR15a, DFHPRR15b, BNSSSU16] -- nor does it require the samples in the
dataset to be independent -- a condition that is usually assumed when
generalization-error guarantees are sought. Instead, typical stability requires
the output of the query, when computed on a dataset drawn from the underlying
distribution, to be concentrated around its expected value with respect to that
distribution.
We discuss the implications of typical stability on the generalization error
(i.e., the difference between the value of the query computed on the dataset
and the expected value of the query with respect to the true data
distribution). We show that typical stability can control generalization error
in adaptive data analysis even when the samples in the dataset are not
necessarily independent and when queries to be computed are not necessarily of
bounded-sensitivity as long as the results of the queries over the dataset
(i.e., the computed statistics) follow a distribution with a "light" tail.
Examples of such queries include, but are not limited to, subgaussian and
subexponential queries.
We also discuss the composition guarantees of typical stability and prove
composition theorems that characterize the degradation of the parameters of
typical stability under $k$-fold adaptive composition. We also give simple
noise-addition algorithms that achieve this notion. These algorithms are
similar to their differentially private counterparts; however, the added noise
is calibrated differently.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2016 10:52:06 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2016 00:06:06 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Bassily",
"Raef",
""
],
[
"Freund",
"Yoav",
""
]
] | TITLE: Typical Stability
ABSTRACT: In this paper, we introduce a notion of algorithmic stability called typical
stability. When our goal is to release real-valued queries (statistics)
computed over a dataset, this notion does not require the queries to be of
bounded sensitivity -- a condition that is generally assumed under differential
privacy [DMNS06, Dwork06] when used as a notion of algorithmic stability
[DFHPRR15a, DFHPRR15b, BNSSSU16] -- nor does it require the samples in the
dataset to be independent -- a condition that is usually assumed when
generalization-error guarantees are sought. Instead, typical stability requires
the output of the query, when computed on a dataset drawn from the underlying
distribution, to be concentrated around its expected value with respect to that
distribution.
We discuss the implications of typical stability on the generalization error
(i.e., the difference between the value of the query computed on the dataset
and the expected value of the query with respect to the true data
distribution). We show that typical stability can control generalization error
in adaptive data analysis even when the samples in the dataset are not
necessarily independent and when queries to be computed are not necessarily of
bounded-sensitivity as long as the results of the queries over the dataset
(i.e., the computed statistics) follow a distribution with a "light" tail.
Examples of such queries include, but are not limited to, subgaussian and
subexponential queries.
We also discuss the composition guarantees of typical stability and prove
composition theorems that characterize the degradation of the parameters of
typical stability under $k$-fold adaptive composition. We also give simple
noise-addition algorithms that achieve this notion. These algorithms are
similar to their differentially private counterparts; however, the added noise
is calibrated differently.
| no_new_dataset | 0.934694 |
1605.05411 | Ethan Rudd | Andras Rozsa, Manuel G\"unther, Ethan M. Rudd, and Terrance E. Boult | Are Facial Attributes Adversarially Robust? | Pre-print of article accepted to the International Conference on
Pattern Recognition (ICPR) 2016. 7 pages total | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial attributes are emerging soft biometrics that have the potential to
reject non-matches, for example, based on mismatching gender. To be usable in
stand-alone systems, facial attributes must be extracted from images
automatically and reliably. In this paper, we propose a simple yet effective
solution for automatic facial attribute extraction by training a deep
convolutional neural network (DCNN) for each facial attribute separately,
without using any pre-training or dataset augmentation, and we obtain new
state-of-the-art facial attribute classification results on the CelebA
benchmark. To test the stability of the networks, we generated adversarial
images -- formed by adding imperceptible non-random perturbations to original
inputs which result in classification errors -- via a novel fast flipping
attribute (FFA) technique. We show that FFA generates more adversarial examples
than other related algorithms, and that DCNNs for certain attributes are
generally robust to adversarial inputs, while DCNNs for other attributes are
not. This result is surprising because no DCNNs tested to date have exhibited
robustness to adversarial images without explicit augmentation in the training
procedure to account for adversarial examples. Finally, we introduce the
concept of natural adversarial samples, i.e., images that are misclassified but
can be easily turned into correctly classified images by applying small
perturbations. We demonstrate that natural adversarial samples commonly occur,
even within the training set, and show that many of these images remain
misclassified even with additional training epochs. This phenomenon is
surprising because correcting the misclassification, particularly when guided
by training data, should require only a small adjustment to the DCNN
parameters.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 01:13:09 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2016 18:44:50 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Sep 2016 21:49:14 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Rozsa",
"Andras",
""
],
[
"Günther",
"Manuel",
""
],
[
"Rudd",
"Ethan M.",
""
],
[
"Boult",
"Terrance E.",
""
]
] | TITLE: Are Facial Attributes Adversarially Robust?
ABSTRACT: Facial attributes are emerging soft biometrics that have the potential to
reject non-matches, for example, based on mismatching gender. To be usable in
stand-alone systems, facial attributes must be extracted from images
automatically and reliably. In this paper, we propose a simple yet effective
solution for automatic facial attribute extraction by training a deep
convolutional neural network (DCNN) for each facial attribute separately,
without using any pre-training or dataset augmentation, and we obtain new
state-of-the-art facial attribute classification results on the CelebA
benchmark. To test the stability of the networks, we generated adversarial
images -- formed by adding imperceptible non-random perturbations to original
inputs which result in classification errors -- via a novel fast flipping
attribute (FFA) technique. We show that FFA generates more adversarial examples
than other related algorithms, and that DCNNs for certain attributes are
generally robust to adversarial inputs, while DCNNs for other attributes are
not. This result is surprising because no DCNNs tested to date have exhibited
robustness to adversarial images without explicit augmentation in the training
procedure to account for adversarial examples. Finally, we introduce the
concept of natural adversarial samples, i.e., images that are misclassified but
can be easily turned into correctly classified images by applying small
perturbations. We demonstrate that natural adversarial samples commonly occur,
even within the training set, and show that many of these images remain
misclassified even with additional training epochs. This phenomenon is
surprising because correcting the misclassification, particularly when guided
by training data, should require only a small adjustment to the DCNN
parameters.
| no_new_dataset | 0.94625 |
1608.02158 | Adler Perotte | Rajesh Ranganath and Adler Perotte and No\'emie Elhadad and David Blei | Deep Survival Analysis | Presented at 2016 Machine Learning and Healthcare Conference (MLHC
2016), Los Angeles, CA | null | null | null | stat.ML cs.AI stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The electronic health record (EHR) provides an unprecedented opportunity to
build actionable tools to support physicians at the point of care. In this
paper, we investigate survival analysis in the context of EHR data. We
introduce deep survival analysis, a hierarchical generative approach to
survival analysis. It departs from previous approaches in two primary ways: (1)
all observations, including covariates, are modeled jointly conditioned on a
rich latent structure; and (2) the observations are aligned by their failure
time, rather than by an arbitrary time zero as in traditional survival
analysis. Further, it (3) scalably handles heterogeneous (continuous and
discrete) data types that occur in the EHR. We validate the deep survival analysis
model by stratifying patients according to risk of developing coronary heart
disease (CHD). Specifically, we study a dataset of 313,000 patients
corresponding to 5.5 million months of observations. When compared to the
clinically validated Framingham CHD risk score, deep survival analysis is
significantly superior in stratifying patients according to their risk.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2016 22:18:18 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Sep 2016 14:08:02 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Ranganath",
"Rajesh",
""
],
[
"Perotte",
"Adler",
""
],
[
"Elhadad",
"Noémie",
""
],
[
"Blei",
"David",
""
]
] | TITLE: Deep Survival Analysis
ABSTRACT: The electronic health record (EHR) provides an unprecedented opportunity to
build actionable tools to support physicians at the point of care. In this
paper, we investigate survival analysis in the context of EHR data. We
introduce deep survival analysis, a hierarchical generative approach to
survival analysis. It departs from previous approaches in two primary ways: (1)
all observations, including covariates, are modeled jointly conditioned on a
rich latent structure; and (2) the observations are aligned by their failure
time, rather than by an arbitrary time zero as in traditional survival
analysis. Further, it (3) scalably handles heterogeneous (continuous and
discrete) data types that occur in the EHR. We validate the deep survival analysis
model by stratifying patients according to risk of developing coronary heart
disease (CHD). Specifically, we study a dataset of 313,000 patients
corresponding to 5.5 million months of observations. When compared to the
clinically validated Framingham CHD risk score, deep survival analysis is
significantly superior in stratifying patients according to their risk.
| no_new_dataset | 0.851953 |
1609.03461 | Hossein Ziaei Nafchi | Hossein Ziaei Nafchi, Atena Shahkolaei, Rachid Hedjam, Mohamed Cheriet | MUG: A Parameterless No-Reference JPEG Quality Evaluator Robust to Block
Size and Misalignment | 5 pages, 4 figures, 3 tables | null | 10.1109/LSP.2016.2608865 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this letter, a very simple no-reference image quality assessment (NR-IQA)
model for JPEG compressed images is proposed. The proposed metric called median
of unique gradients (MUG) is based on the very simple facts of unique gradient
magnitudes of JPEG compressed images. MUG is a parameterless metric and does
not need training. Unlike other NR-IQAs, MUG is independent of block size and
cropping. A more stable index called MUG+ is also introduced. The experimental
results on six benchmark datasets of natural images and a benchmark dataset of
synthetic images show that MUG is comparable to the state-of-the-art indices in
the literature. In addition, its performance remains unchanged for the case of the
cropped images in which block boundaries are not known. The MATLAB source code
of the proposed metrics is available at
https://dl.dropboxusercontent.com/u/74505502/MUG.m and
https://dl.dropboxusercontent.com/u/74505502/MUGplus.m.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 16:11:26 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2016 16:33:48 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Nafchi",
"Hossein Ziaei",
""
],
[
"Shahkolaei",
"Atena",
""
],
[
"Hedjam",
"Rachid",
""
],
[
"Cheriet",
"Mohamed",
""
]
] | TITLE: MUG: A Parameterless No-Reference JPEG Quality Evaluator Robust to Block
Size and Misalignment
ABSTRACT: In this letter, a very simple no-reference image quality assessment (NR-IQA)
model for JPEG compressed images is proposed. The proposed metric called median
of unique gradients (MUG) is based on the very simple facts of unique gradient
magnitudes of JPEG compressed images. MUG is a parameterless metric and does
not need training. Unlike other NR-IQAs, MUG is independent of block size and
cropping. A more stable index called MUG+ is also introduced. The experimental
results on six benchmark datasets of natural images and a benchmark dataset of
synthetic images show that MUG is comparable to the state-of-the-art indices in
the literature. In addition, its performance remains unchanged for the case of the
cropped images in which block boundaries are not known. The MATLAB source code
of the proposed metrics is available at
https://dl.dropboxusercontent.com/u/74505502/MUG.m and
https://dl.dropboxusercontent.com/u/74505502/MUGplus.m.
| no_new_dataset | 0.949669 |
1609.05281 | Ankit Gandhi | Ankit Gandhi, Arjun Sharma, Arijit Biswas, Om Deshmukh | GeThR-Net: A Generalized Temporally Hybrid Recurrent Neural Network for
Multimodal Information Fusion | To appear in ECCV workshop on Computer Vision for Audio-Visual Media,
2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data generated from real world events are usually temporal and contain
multimodal information such as audio, visual, depth, sensor etc. which are
required to be intelligently combined for classification tasks. In this paper,
we propose a novel generalized deep neural network architecture where temporal
streams from multiple modalities are combined. There are a total of M+1 (M is the
number of modalities) components in the proposed network. The first component
is a novel temporally hybrid Recurrent Neural Network (RNN) that exploits the
complementary nature of the multimodal temporal information by allowing the
network to learn both modality specific temporal dynamics as well as the
dynamics in a multimodal feature space. M additional components are added to
the network which extract discriminative but non-temporal cues from each
modality. Finally, the predictions from all of these components are linearly
combined using a set of automatically learned weights. We perform exhaustive
experiments on three different datasets spanning four modalities. The proposed
network is relatively 3.5%, 5.7% and 2% better than the best performing
temporal multimodal baseline for UCF-101, CCV and Multimodal Gesture datasets
respectively.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 04:18:02 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Gandhi",
"Ankit",
""
],
[
"Sharma",
"Arjun",
""
],
[
"Biswas",
"Arijit",
""
],
[
"Deshmukh",
"Om",
""
]
] | TITLE: GeThR-Net: A Generalized Temporally Hybrid Recurrent Neural Network for
Multimodal Information Fusion
ABSTRACT: Data generated from real world events are usually temporal and contain
multimodal information such as audio, visual, depth, sensor etc. which are
required to be intelligently combined for classification tasks. In this paper,
we propose a novel generalized deep neural network architecture where temporal
streams from multiple modalities are combined. There are a total of M+1 (M is the
number of modalities) components in the proposed network. The first component
is a novel temporally hybrid Recurrent Neural Network (RNN) that exploits the
complementary nature of the multimodal temporal information by allowing the
network to learn both modality specific temporal dynamics as well as the
dynamics in a multimodal feature space. M additional components are added to
the network which extract discriminative but non-temporal cues from each
modality. Finally, the predictions from all of these components are linearly
combined using a set of automatically learned weights. We perform exhaustive
experiments on three different datasets spanning four modalities. The proposed
network is relatively 3.5%, 5.7% and 2% better than the best performing
temporal multimodal baseline for UCF-101, CCV and Multimodal Gesture datasets
respectively.
| no_new_dataset | 0.952618 |
1609.05317 | Xingyi Zhou | Xingyi Zhou, Xiao Sun, Wei Zhang, Shuang Liang, Yichen Wei | Deep Kinematic Pose Regression | ECCV Workshop on Geometry Meets Deep Learning, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning articulated object pose is inherently difficult because the pose is
high dimensional but has many structural constraints. Most existing work does not
model such constraints and does not guarantee the geometric validity of their
pose estimation, therefore requiring a post-processing to recover the correct
geometry if desired, which is cumbersome and sub-optimal. In this work, we
propose to directly embed a kinematic object model into the deep neural
network learning for general articulated object pose estimation. The kinematic
function is defined on the appropriately parameterized object motion variables.
It is differentiable and can be used in the gradient descent based optimization
in network training. The prior knowledge on the object geometric model is fully
exploited and the structure is guaranteed to be valid. We show convincing
experiment results on a toy example and the 3D human pose estimation problem.
For the latter we achieve state-of-the-art result on Human3.6M dataset.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 11:22:11 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Zhou",
"Xingyi",
""
],
[
"Sun",
"Xiao",
""
],
[
"Zhang",
"Wei",
""
],
[
"Liang",
"Shuang",
""
],
[
"Wei",
"Yichen",
""
]
] | TITLE: Deep Kinematic Pose Regression
ABSTRACT: Learning articulated object pose is inherently difficult because the pose is
high dimensional but has many structural constraints. Most existing work does not
model such constraints and does not guarantee the geometric validity of their
pose estimation, therefore requiring a post-processing to recover the correct
geometry if desired, which is cumbersome and sub-optimal. In this work, we
propose to directly embed a kinematic object model into the deep neural
network learning for general articulated object pose estimation. The kinematic
function is defined on the appropriately parameterized object motion variables.
It is differentiable and can be used in the gradient descent based optimization
in network training. The prior knowledge on the object geometric model is fully
exploited and the structure is guaranteed to be valid. We show convincing
experiment results on a toy example and the 3D human pose estimation problem.
For the latter we achieve state-of-the-art result on Human3.6M dataset.
| no_new_dataset | 0.948155 |
1609.05345 | Shicong Liu | Shicong Liu, Junru Shao, Hongtao Lu | Generalized residual vector quantization for large scale data | published on International Conference on Multimedia and Expo 2016 | null | null | null | cs.MM cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector quantization is an essential tool for tasks involving large scale
data, for example, large scale similarity search, which is crucial for
content-based information retrieval and analysis. In this paper, we propose a
novel vector quantization framework that iteratively minimizes quantization
error. First, we provide a detailed review on a relevant vector quantization
method named \textit{residual vector quantization} (RVQ). Next, we propose
\textit{generalized residual vector quantization} (GRVQ) to further improve
over RVQ. Many vector quantization methods can be viewed as special cases
of our proposed framework. We evaluate GRVQ on several large scale benchmark
datasets for large scale search, classification and object retrieval. We
compared GRVQ with existing methods in detail. Extensive experiments
demonstrate our GRVQ framework substantially outperforms existing methods in
terms of quantization accuracy and computational efficiency.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 14:50:06 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Liu",
"Shicong",
""
],
[
"Shao",
"Junru",
""
],
[
"Lu",
"Hongtao",
""
]
] | TITLE: Generalized residual vector quantization for large scale data
ABSTRACT: Vector quantization is an essential tool for tasks involving large scale
data, for example, large scale similarity search, which is crucial for
content-based information retrieval and analysis. In this paper, we propose a
novel vector quantization framework that iteratively minimizes quantization
error. First, we provide a detailed review on a relevant vector quantization
method named \textit{residual vector quantization} (RVQ). Next, we propose
\textit{generalized residual vector quantization} (GRVQ) to further improve
over RVQ. Many vector quantization methods can be viewed as special cases
of our proposed framework. We evaluate GRVQ on several large scale benchmark
datasets for large scale search, classification and object retrieval. We
compared GRVQ with existing methods in detail. Extensive experiments
demonstrate our GRVQ framework substantially outperforms existing methods in
terms of quantization accuracy and computational efficiency.
| no_new_dataset | 0.948822 |
1609.05359 | Praveen Rao | Praveen Rao, Anas Katib, Daniel E. Lopez Barron | A Knowledge Ecosystem for the Food, Energy, and Water System | KDD 2016 Workshop on Data Science for Food, Energy and Water, Aug
13-17, 2016, San Francisco, CA | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Food, energy, and water (FEW) are key resources to sustain human life and
economic growth. There is an increasing stress on these interconnected
resources due to population growth, natural disasters, and human activities.
New research is necessary to foster more efficient, more secure, and safer use
of FEW resources in the U.S. and globally. In this position paper, we present
the idea of a knowledge ecosystem for enabling the semantic data integration of
heterogeneous datasets in the FEW system to promote knowledge discovery and
superior decision making through semantic reasoning. Rich, diverse datasets
published by U.S. federal agencies will be utilized. Our knowledge ecosystem
will build on Semantic Web technologies and advances in statistical relational
learning to (a) represent, integrate, and harmonize diverse data sources and
(b) perform ontology-based reasoning to discover actionable insights from FEW
datasets.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 16:27:38 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Rao",
"Praveen",
""
],
[
"Katib",
"Anas",
""
],
[
"Barron",
"Daniel E. Lopez",
""
]
] | TITLE: A Knowledge Ecosystem for the Food, Energy, and Water System
ABSTRACT: Food, energy, and water (FEW) are key resources to sustain human life and
economic growth. There is an increasing stress on these interconnected
resources due to population growth, natural disasters, and human activities.
New research is necessary to foster more efficient, more secure, and safer use
of FEW resources in the U.S. and globally. In this position paper, we present
the idea of a knowledge ecosystem for enabling the semantic data integration of
heterogeneous datasets in the FEW system to promote knowledge discovery and
superior decision making through semantic reasoning. Rich, diverse datasets
published by U.S. federal agencies will be utilized. Our knowledge ecosystem
will build on Semantic Web technologies and advances in statistical relational
learning to (a) represent, integrate, and harmonize diverse data sources and
(b) perform ontology-based reasoning to discover actionable insights from FEW
datasets.
| no_new_dataset | 0.952309 |
1609.05388 | Charalampos Tsourakakis | Jaros{\l}aw B{\l}asiok, Charalampos E. Tsourakakis | ADAGIO: Fast Data-aware Near-Isometric Linear Embeddings | ICDM 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many important applications, including signal reconstruction, parameter
estimation, and signal processing in a compressed domain, rely on a
low-dimensional representation of the dataset that preserves {\em all} pairwise
distances between the data points and leverages the inherent geometric
structure that is typically present. Recently Hedge, Sankaranarayanan, Yin and
Baraniuk \cite{hedge2015} proposed the first data-aware near-isometric linear
embedding which achieves the best of both worlds. However, their method NuMax
does not scale to large-scale datasets.
Our main contribution is a simple, data-aware, near-isometric linear
dimensionality reduction method which significantly outperforms a
state-of-the-art method \cite{hedge2015} with respect to scalability while
achieving high quality near-isometries. Furthermore, our method comes with
strong worst-case theoretical guarantees that allow us to guarantee the quality
of the obtained near-isometry. We verify experimentally the efficiency of our
method on numerous real-world datasets, where we find that our method ($<$10
secs) is more than 3\,000$\times$ faster than the state-of-the-art method
\cite{hedge2015} ($>$9 hours) on medium scale datasets with 60\,000 data points
in 784 dimensions. Finally, we use our method as a preprocessing step to
increase the computational efficiency of a classification application and for
speeding up approximate nearest neighbor queries.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 21:01:19 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Błasiok",
"Jarosław",
""
],
[
"Tsourakakis",
"Charalampos E.",
""
]
] | TITLE: ADAGIO: Fast Data-aware Near-Isometric Linear Embeddings
ABSTRACT: Many important applications, including signal reconstruction, parameter
estimation, and signal processing in a compressed domain, rely on a
low-dimensional representation of the dataset that preserves {\em all} pairwise
distances between the data points and leverages the inherent geometric
structure that is typically present. Recently Hedge, Sankaranarayanan, Yin and
Baraniuk \cite{hedge2015} proposed the first data-aware near-isometric linear
embedding which achieves the best of both worlds. However, their method NuMax
does not scale to large-scale datasets.
Our main contribution is a simple, data-aware, near-isometric linear
dimensionality reduction method which significantly outperforms a
state-of-the-art method \cite{hedge2015} with respect to scalability while
achieving high quality near-isometries. Furthermore, our method comes with
strong worst-case theoretical guarantees that allow us to guarantee the quality
of the obtained near-isometry. We verify experimentally the efficiency of our
method on numerous real-world datasets, where we find that our method ($<$10
secs) is more than 3\,000$\times$ faster than the state-of-the-art method
\cite{hedge2015} ($>$9 hours) on medium scale datasets with 60\,000 data points
in 784 dimensions. Finally, we use our method as a preprocessing step to
increase the computational efficiency of a classification application and for
speeding up approximate nearest neighbor queries.
| no_new_dataset | 0.950088 |
1609.05396 | Martin Simonovsky | Martin Simonovsky, Benjam\'in Guti\'errez-Becker, Diana Mateus, Nassir
Navab, Nikos Komodakis | A Deep Metric for Multimodal Registration | Accepted to MICCAI 2016; extended version | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal registration is a challenging problem in medical imaging due to the
high variability of tissue appearance under different imaging modalities. The
crucial component here is the choice of the right similarity measure. We make a
step towards a general learning-based solution that can be adapted to specific
situations and present a metric based on a convolutional neural network. Our
network can be trained from scratch even from a few aligned image pairs. The
metric is validated on intersubject deformable registration on a dataset
different from the one used for training, demonstrating good generalization. In
this task, we outperform mutual information by a significant margin.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 21:46:21 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Simonovsky",
"Martin",
""
],
[
"Gutiérrez-Becker",
"Benjamín",
""
],
[
"Mateus",
"Diana",
""
],
[
"Navab",
"Nassir",
""
],
[
"Komodakis",
"Nikos",
""
]
] | TITLE: A Deep Metric for Multimodal Registration
ABSTRACT: Multimodal registration is a challenging problem in medical imaging due to the
high variability of tissue appearance under different imaging modalities. The
crucial component here is the choice of the right similarity measure. We make a
step towards a general learning-based solution that can be adapted to specific
situations and present a metric based on a convolutional neural network. Our
network can be trained from scratch even from a few aligned image pairs. The
metric is validated on intersubject deformable registration on a dataset
different from the one used for training, demonstrating good generalization. In
this task, we outperform mutual information by a significant margin.
| no_new_dataset | 0.949763 |
1609.05401 | Jose Alberto Garc\'ia Guti\'errez Sr. | Jose A. Garc\'ia Guti\'errez | Applications of Data Mining (DM) in Science and Engineering: State of
the art and perspectives | in Spanish | null | null | null | cs.AI cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The continuous increase in the availability of data of all kinds, coupled with
the development of high-speed communication networks, the popularization of
cloud computing, the growth of data centers and the emergence of
high-performance computing, makes it essential to develop techniques that allow
more efficient processing and analysis of large datasets and the extraction of
valuable information. In the following pages we discuss the development of this
field in recent decades and its potential and applicability in the various
branches of scientific research. We also briefly review the different families
of algorithms that make up the data mining research area, their scalability
with increasing dimensionality of the input data and how it can be addressed,
and how the different methods behave in scenarios where the information is
distributed or processed in a decentralized manner, so as to improve
performance in heterogeneous environments.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 22:22:17 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Gutiérrez",
"Jose A. García",
""
]
] | TITLE: Applications of Data Mining (DM) in Science and Engineering: State of
the art and perspectives
ABSTRACT: The continuous increase in the availability of data of all kinds, coupled with
the development of high-speed communication networks, the popularization of
cloud computing, the growth of data centers and the emergence of
high-performance computing, makes it essential to develop techniques that allow
more efficient processing and analysis of large datasets and the extraction of
valuable information. In the following pages we discuss the development of this
field in recent decades and its potential and applicability in the various
branches of scientific research. We also briefly review the different families
of algorithms that make up the data mining research area, their scalability
with increasing dimensionality of the input data and how it can be addressed,
and how the different methods behave in scenarios where the information is
distributed or processed in a decentralized manner, so as to improve
performance in heterogeneous environments.
| no_new_dataset | 0.948202 |
1609.05420 | Senthil Purushwalkam | Senthil Purushwalkam, Abhinav Gupta | Pose from Action: Unsupervised Learning of Pose Features based on Motion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human actions are comprised of a sequence of poses. This makes videos of
humans a rich and dense source of human poses. We propose an unsupervised
method to learn pose features from videos that exploits a signal which is
complementary to appearance and can be used as supervision: motion. The key
idea is that humans go through poses in a predictable manner while performing
actions. Hence, given two poses, it should be possible to model the motion that
caused the change between them. We represent each of the poses as a feature in
a CNN (Appearance ConvNet) and generate a motion encoding from optical flow
maps using a separate CNN (Motion ConvNet). The data for this task is
automatically generated allowing us to train without human supervision. We
demonstrate the strength of the learned representation by finetuning the
trained model for Pose Estimation on the FLIC dataset, for static image action
recognition on PASCAL and for action recognition in videos on UCF101 and
HMDB51.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2016 04:18:42 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Purushwalkam",
"Senthil",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Pose from Action: Unsupervised Learning of Pose Features based on Motion
ABSTRACT: Human actions are comprised of a sequence of poses. This makes videos of
humans a rich and dense source of human poses. We propose an unsupervised
method to learn pose features from videos that exploits a signal which is
complementary to appearance and can be used as supervision: motion. The key
idea is that humans go through poses in a predictable manner while performing
actions. Hence, given two poses, it should be possible to model the motion that
caused the change between them. We represent each of the poses as a feature in
a CNN (Appearance ConvNet) and generate a motion encoding from optical flow
maps using a separate CNN (Motion ConvNet). The data for this task is
automatically generated allowing us to train without human supervision. We
demonstrate the strength of the learned representation by finetuning the
trained model for Pose Estimation on the FLIC dataset, for static image action
recognition on PASCAL and for action recognition in videos on UCF101 and
HMDB51.
| no_new_dataset | 0.947672 |
1609.05528 | Shebuti Rayana | Shebuti Rayana, Wen Zhong and Leman Akoglu | Sequential Ensemble Learning for Outlier Detection: A Bias-Variance
Perspective | 11 pages, 8 figures, conference | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble methods for classification and clustering have been effectively used
for decades, while ensemble learning for outlier detection has only been
studied recently. In this work, we design a new ensemble approach for outlier
detection in multi-dimensional point data, which provides improved accuracy by
reducing error through both bias and variance. Although classification and
outlier detection appear as different problems, their theoretical underpinnings
are quite similar in terms of the bias-variance trade-off [1], where outlier
detection is considered as a binary classification task with unobserved labels
but a similar bias-variance decomposition of error.
In this paper, we propose a sequential ensemble approach called CARE that
employs a two-phase aggregation of the intermediate results in each iteration
to reach the final outcome. Unlike existing outlier ensembles which solely
incorporate a parallel framework by aggregating the outcomes of independent
base detectors to reduce variance, our ensemble incorporates both the parallel
and sequential building blocks to reduce bias as well as variance by ($i$)
successively eliminating outliers from the original dataset to build a better
data model on which outlierness is estimated (sequentially), and ($ii$)
combining the results from individual base detectors and across iterations
(parallelly). Through extensive experiments on sixteen real-world datasets
mainly from the UCI machine learning repository [2], we show that CARE performs
significantly better than or at least similar to the individual baselines. We
also compare CARE with the state-of-the-art outlier ensembles where it also
provides significant improvement when it is the winner and remains close
otherwise.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2016 18:59:42 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Rayana",
"Shebuti",
""
],
[
"Zhong",
"Wen",
""
],
[
"Akoglu",
"Leman",
""
]
] | TITLE: Sequential Ensemble Learning for Outlier Detection: A Bias-Variance
Perspective
ABSTRACT: Ensemble methods for classification and clustering have been effectively used
for decades, while ensemble learning for outlier detection has only been
studied recently. In this work, we design a new ensemble approach for outlier
detection in multi-dimensional point data, which provides improved accuracy by
reducing error through both bias and variance. Although classification and
outlier detection appear as different problems, their theoretical underpinnings
are quite similar in terms of the bias-variance trade-off [1], where outlier
detection is considered as a binary classification task with unobserved labels
but a similar bias-variance decomposition of error.
In this paper, we propose a sequential ensemble approach called CARE that
employs a two-phase aggregation of the intermediate results in each iteration
to reach the final outcome. Unlike existing outlier ensembles which solely
incorporate a parallel framework by aggregating the outcomes of independent
base detectors to reduce variance, our ensemble incorporates both the parallel
and sequential building blocks to reduce bias as well as variance by ($i$)
successively eliminating outliers from the original dataset to build a better
data model on which outlierness is estimated (sequentially), and ($ii$)
combining the results from individual base detectors and across iterations
(parallelly). Through extensive experiments on sixteen real-world datasets
mainly from the UCI machine learning repository [2], we show that CARE performs
significantly better than or at least similar to the individual baselines. We
also compare CARE with the state-of-the-art outlier ensembles where it also
provides significant improvement when it is the winner and remains close
otherwise.
| no_new_dataset | 0.946892 |
1609.05561 | Ricardo Fabbri | Anil Usumezbas and Ricardo Fabbri and Benjamin B. Kimia | From Multiview Image Curves to 3D Drawings | Expanded ECCV 2016 version with tweaked figures and including an
overview of the supplementary material available at
multiview-3d-drawing.sourceforge.net | Lecture Notes in Computer Science, 9908, pp 70-87, september 2016 | 10.1007/978-3-319-46493-0_5 | null | cs.CV cs.CG cs.GR cs.RO | http://creativecommons.org/licenses/by/4.0/ | Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
truth on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2016 22:20:35 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Usumezbas",
"Anil",
""
],
[
"Fabbri",
"Ricardo",
""
],
[
"Kimia",
"Benjamin B.",
""
]
] | TITLE: From Multiview Image Curves to 3D Drawings
ABSTRACT: Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
truth on synthetic and real datasets.
| no_new_dataset | 0.944434 |
1609.05583 | Seyed Ali Amirshahi Seyed Ali Amirshahi | Seyed Ali Amirshahi, Gregor Uwe Hayn-Leichsenring, Joachim Denzler,
Christoph Redies | Color: A Crucial Factor for Aesthetic Quality Assessment in a Subjective
Dataset of Paintings | This paper was presented at the AIC 2013 Congress | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational aesthetics is an emerging field of research which has attracted
different research groups in the last few years. In this field, one of the main
approaches to evaluate the aesthetic quality of paintings and photographs is a
feature-based approach. Among the different features proposed to reach this
goal, color plays an important role. In this paper, we introduce a novel dataset
that consists of paintings of Western provenance from 36 well-known painters
from the 15th to the 20th century. As a first step and to assess this dataset,
using a classifier, we investigate the correlation between the subjective
scores and two widely used features that are related to color perception and used in
different aesthetic quality assessment approaches. Results show a
classification rate of up to 73% between the color features and the subjective
scores.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 02:17:34 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Amirshahi",
"Seyed Ali",
""
],
[
"Hayn-Leichsenring",
"Gregor Uwe",
""
],
[
"Denzler",
"Joachim",
""
],
[
"Redies",
"Christoph",
""
]
] | TITLE: Color: A Crucial Factor for Aesthetic Quality Assessment in a Subjective
Dataset of Paintings
ABSTRACT: Computational aesthetics is an emerging field of research which has attracted
different research groups in the last few years. In this field, one of the main
approaches to evaluate the aesthetic quality of paintings and photographs is a
feature-based approach. Among the different features proposed to reach this
goal, color plays an important role. In this paper, we introduce a novel dataset
that consists of paintings of Western provenance from 36 well-known painters
from the 15th to the 20th century. As a first step and to assess this dataset,
using a classifier, we investigate the correlation between the subjective
scores and two widely used features that are related to color perception and used in
different aesthetic quality assessment approaches. Results show a
classification rate of up to 73% between the color features and the subjective
scores.
| new_dataset | 0.9601 |
1609.05619 | Hassan Alhajj Hassan ALHAJJ | Hassan Al Hajj, Gwenol\'e Quellec, Mathieu Lamard, Guy Cazuguel,
B\'eatrice Cochener | Coarse-to-fine Surgical Instrument Detection for Cataract Surgery
Monitoring | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The amount of surgical data, recorded during video-monitored surgeries, has
increased enormously. This paper aims at improving existing solutions for the
automated analysis of cataract surgeries in real time. Through the analysis of
a video recording the operating table, it is possible to know which instruments
exit or enter the operating table, and therefore which ones are likely being
used by the surgeon. Combining these observations with observations from the
microscope video should enhance the overall performance of the systems. To this
end, the proposed solution is divided into two main parts: one to detect the
instruments at the beginning of the surgery and one to update the list of
instruments every time a change is detected in the scene. In the first part,
the goal is to separate the instruments from the background and from irrelevant
objects. For the second, we are interested in detecting the instruments that
appear and disappear whenever the surgeon interacts with the table. Experiments
on a dataset of 36 cataract surgeries validate the good performance of the
proposed solution.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 07:40:41 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Hajj",
"Hassan Al",
""
],
[
"Quellec",
"Gwenolé",
""
],
[
"Lamard",
"Mathieu",
""
],
[
"Cazuguel",
"Guy",
""
],
[
"Cochener",
"Béatrice",
""
]
] | TITLE: Coarse-to-fine Surgical Instrument Detection for Cataract Surgery
Monitoring
ABSTRACT: The amount of surgical data, recorded during video-monitored surgeries, has
increased enormously. This paper aims at improving existing solutions for the
automated analysis of cataract surgeries in real time. Through the analysis of
a video recording the operating table, it is possible to know which instruments
exit or enter the operating table, and therefore which ones are likely being
used by the surgeon. Combining these observations with observations from the
microscope video should enhance the overall performance of the systems. To this
end, the proposed solution is divided into two main parts: one to detect the
instruments at the beginning of the surgery and one to update the list of
instruments every time a change is detected in the scene. In the first part,
the goal is to separate the instruments from the background and from irrelevant
objects. For the second, we are interested in detecting the instruments that
appear and disappear whenever the surgeon interacts with the table. Experiments
on a dataset of 36 cataract surgeries validate the good performance of the
proposed solution.
| new_dataset | 0.502594 |
1609.05787 | Qiang Liu | Qiang Liu, Shu Wu, Diyi Wang, Zhaokang Li, Liang Wang | Context-aware Sequential Recommendation | IEEE International Conference on Data Mining (ICDM) 2016, to appear | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since sequential information plays an important role in modeling user
behaviors, various sequential recommendation methods have been proposed.
Methods based on Markov assumption are widely-used, but independently combine
several most recent components. Recently, Recurrent Neural Networks (RNN) based
methods have been successfully applied in several sequential modeling tasks.
However, for real-world applications, these methods have difficulty in modeling
the contextual information, which has been proved to be very important for
behavior modeling. In this paper, we propose a novel model, named Context-Aware
Recurrent Neural Networks (CA-RNN). Instead of using the constant input matrix
and transition matrix in conventional RNN models, CA-RNN employs adaptive
context-specific input matrices and adaptive context-specific transition
matrices. The adaptive context-specific input matrices capture external
situations where user behaviors happen, such as time, location, weather and so
on. And the adaptive context-specific transition matrices capture how lengths
of time intervals between adjacent behaviors in historical sequences affect the
transition of global sequential features. Experimental results show that the
proposed CA-RNN model yields significant improvements over state-of-the-art
sequential recommendation methods and context-aware recommendation methods on
two public datasets, i.e., the Taobao dataset and the Movielens-1M dataset.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 15:33:46 GMT"
}
] | 2016-09-20T00:00:00 | [
[
"Liu",
"Qiang",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Diyi",
""
],
[
"Li",
"Zhaokang",
""
],
[
"Wang",
"Liang",
""
]
] | TITLE: Context-aware Sequential Recommendation
ABSTRACT: Since sequential information plays an important role in modeling user
behaviors, various sequential recommendation methods have been proposed.
Methods based on Markov assumption are widely-used, but independently combine
several most recent components. Recently, Recurrent Neural Networks (RNN) based
methods have been successfully applied in several sequential modeling tasks.
However, for real-world applications, these methods have difficulty in modeling
the contextual information, which has been proved to be very important for
behavior modeling. In this paper, we propose a novel model, named Context-Aware
Recurrent Neural Networks (CA-RNN). Instead of using the constant input matrix
and transition matrix in conventional RNN models, CA-RNN employs adaptive
context-specific input matrices and adaptive context-specific transition
matrices. The adaptive context-specific input matrices capture external
situations where user behaviors happen, such as time, location, weather and so
on. And the adaptive context-specific transition matrices capture how lengths
of time intervals between adjacent behaviors in historical sequences affect the
transition of global sequential features. Experimental results show that the
proposed CA-RNN model yields significant improvements over state-of-the-art
sequential recommendation methods and context-aware recommendation methods on
two public datasets, i.e., the Taobao dataset and the Movielens-1M dataset.
| no_new_dataset | 0.949248 |
1606.08117 | Xinxing Xu | Yong Kiam Tan, Xinxing Xu and Yong Liu | Improved Recurrent Neural Networks for Session-based Recommendations | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks (RNNs) were recently proposed for the session-based
recommendation task. The models showed promising improvements over traditional
recommendation approaches. In this work, we further study RNN-based models for
session-based recommendations. We propose the application of two techniques to
improve model performance, namely, data augmentation, and a method to account
for shifts in the input data distribution. We also empirically study the use of
generalised distillation, and a novel alternative model that directly predicts
item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate
relative improvements of 12.8% and 14.8% over previously reported results on
the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 03:06:44 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2016 09:41:10 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Tan",
"Yong Kiam",
""
],
[
"Xu",
"Xinxing",
""
],
[
"Liu",
"Yong",
""
]
] | TITLE: Improved Recurrent Neural Networks for Session-based Recommendations
ABSTRACT: Recurrent neural networks (RNNs) were recently proposed for the session-based
recommendation task. The models showed promising improvements over traditional
recommendation approaches. In this work, we further study RNN-based models for
session-based recommendations. We propose the application of two techniques to
improve model performance, namely, data augmentation, and a method to account
for shifts in the input data distribution. We also empirically study the use of
generalised distillation, and a novel alternative model that directly predicts
item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate
relative improvements of 12.8% and 14.8% over previously reported results on
the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.
| no_new_dataset | 0.953101 |
1608.05180 | Wenzheng Chen | Huayong Xu, Yangyan Li, Wenzheng Chen, Dani Lischinski, Daniel
Cohen-Or, Baoquan Chen | A Holistic Approach for Data-Driven Object Cutout | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object cutout is a fundamental operation for image editing and manipulation,
yet it is extremely challenging to automate it in real-world images, which
typically contain considerable background clutter. In contrast to existing
cutout methods, which are based mainly on low-level image analysis, we propose
a more holistic approach, which considers the entire shape of the object of
interest by leveraging higher-level image analysis and learnt global shape
priors. Specifically, we leverage a deep neural network (DNN) trained for
objects of a particular class (chairs) for realizing this mechanism. Given a
rectangular image region, the DNN outputs a probability map (P-map) that
indicates for each pixel inside the rectangle how likely it is to be contained
inside an object from the class of interest. We show that the resulting P-maps
may be used to evaluate how likely a rectangle proposal is to contain an
instance of the class, and further process good proposals to produce an
accurate object cutout mask. This amounts to an automatic end-to-end pipeline
for category-specific object cutout. We evaluate our approach on segmentation
benchmark datasets, and show that it significantly outperforms the
state-of-the-art on them.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 05:19:26 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2016 13:00:21 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Xu",
"Huayong",
""
],
[
"Li",
"Yangyan",
""
],
[
"Chen",
"Wenzheng",
""
],
[
"Lischinski",
"Dani",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Chen",
"Baoquan",
""
]
] | TITLE: A Holistic Approach for Data-Driven Object Cutout
ABSTRACT: Object cutout is a fundamental operation for image editing and manipulation,
yet it is extremely challenging to automate it in real-world images, which
typically contain considerable background clutter. In contrast to existing
cutout methods, which are based mainly on low-level image analysis, we propose
a more holistic approach, which considers the entire shape of the object of
interest by leveraging higher-level image analysis and learnt global shape
priors. Specifically, we leverage a deep neural network (DNN) trained for
objects of a particular class (chairs) for realizing this mechanism. Given a
rectangular image region, the DNN outputs a probability map (P-map) that
indicates for each pixel inside the rectangle how likely it is to be contained
inside an object from the class of interest. We show that the resulting P-maps
may be used to evaluate how likely a rectangle proposal is to contain an
instance of the class, and further process good proposals to produce an
accurate object cutout mask. This amounts to an automatic end-to-end pipeline
for category-specific object cutout. We evaluate our approach on segmentation
benchmark datasets, and show that it significantly outperforms the
state-of-the-art on them.
| no_new_dataset | 0.949669 |
1609.00626 | Shinichi Nakajima | Shinichi Nakajima, Sebastian Krause, Dirk Weissenborn, Sven Schmeier,
Nico Goernitz, Feiyu Xu | SynsetRank: Degree-adjusted Random Walk for Relation Identification | null | null | null | null | cs.CL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In relation extraction, a key process is to obtain good detectors that find
relevant sentences describing the target relation. To minimize the necessity of
labeled data for refining detectors, previous work successfully made use of
BabelNet, a semantic graph structure expressing relationships between synsets,
as side information or prior knowledge. The goal of this paper is to enhance
the use of graph structure in the framework of random walk with a few
adjustable parameters. Actually, a straightforward application of random walk
degrades the performance even after parameter optimization. With the insight
from this unsuccessful trial, we propose SynsetRank, which adjusts the initial
probability so that high degree nodes influence the neighbors as strongly as low
degree nodes. In our experiment on 13 relations in the FB15K-237 dataset,
SynsetRank significantly outperforms baselines and the plain random walk
approach.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 14:42:18 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2016 22:46:29 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Nakajima",
"Shinichi",
""
],
[
"Krause",
"Sebastian",
""
],
[
"Weissenborn",
"Dirk",
""
],
[
"Schmeier",
"Sven",
""
],
[
"Goernitz",
"Nico",
""
],
[
"Xu",
"Feiyu",
""
]
] | TITLE: SynsetRank: Degree-adjusted Random Walk for Relation Identification
ABSTRACT: In relation extraction, a key process is to obtain good detectors that find
relevant sentences describing the target relation. To minimize the necessity of
labeled data for refining detectors, previous work successfully made use of
BabelNet, a semantic graph structure expressing relationships between synsets,
as side information or prior knowledge. The goal of this paper is to enhance
the use of graph structure in the framework of random walk with a few
adjustable parameters. Actually, a straightforward application of random walk
degrades the performance even after parameter optimization. With the insight
from this unsuccessful trial, we propose SynsetRank, which adjusts the initial
probability so that high degree nodes influence the neighbors as strongly as low
degree nodes. In our experiment on 13 relations in the FB15K-237 dataset,
SynsetRank significantly outperforms baselines and the plain random walk
approach.
| no_new_dataset | 0.947137 |
1609.04859 | Rajmonda Caceres S | Rajmonda S. Caceres, Leah Weiner, Matthew C. Schmidt, Benjamin A.
Miller, William M. Campbell | Model Selection Framework for Graph-based data | 7 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are powerful abstractions for capturing complex relationships in
diverse application settings. An active area of research focuses on theoretical
models that define the generative mechanism of a graph. Yet given the
complexity and inherent noise in real datasets, it is still very challenging to
identify the best model for a given observed graph. We discuss a framework for
graph model selection that leverages a long list of graph topological
properties and a random forest classifier to learn and classify different graph
instances. We fully characterize the discriminative power of our approach as we
sweep through the parameter space of two generative models, the Erdos-Renyi and
the stochastic block model. We show that our approach gets very close to known
theoretical bounds and we provide insight on which topological features play a
critical discriminating role.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 21:00:56 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Caceres",
"Rajmonda S.",
""
],
[
"Weiner",
"Leah",
""
],
[
"Schmidt",
"Matthew C.",
""
],
[
"Miller",
"Benjamin A.",
""
],
[
"Campbell",
"William M.",
""
]
] | TITLE: Model Selection Framework for Graph-based data
ABSTRACT: Graphs are powerful abstractions for capturing complex relationships in
diverse application settings. An active area of research focuses on theoretical
models that define the generative mechanism of a graph. Yet given the
complexity and inherent noise in real datasets, it is still very challenging to
identify the best model for a given observed graph. We discuss a framework for
graph model selection that leverages a long list of graph topological
properties and a random forest classifier to learn and classify different graph
instances. We fully characterize the discriminative power of our approach as we
sweep through the parameter space of two generative models, the Erdos-Renyi and
the stochastic block model. We show that our approach gets very close to known
theoretical bounds and we provide insight on which topological features play a
critical discriminating role.
| no_new_dataset | 0.947284 |
1609.05096 | Yongchao Tian | Yongchao Tian, Ioannis Alagiannis, Erietta Liarou, Anastasia Ailamaki,
Pietro Michiardi, Marko Vukolic | DiNoDB: an Interactive-speed Query Engine for Ad-hoc Queries on
Temporary Data | null | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As data sets grow in size, analytics applications struggle to get instant
insight into large datasets. Modern applications involve heavy batch processing
jobs over large volumes of data and at the same time require efficient ad-hoc
interactive analytics on temporary data. Existing solutions, however, typically
focus on one of these two aspects, largely ignoring the need for synergy
between the two. Consequently, interactive queries need to re-iterate costly
passes through the entire dataset (e.g., data loading) that may provide
meaningful return on investment only when data is queried over a long period of
time. In this paper, we propose DiNoDB, an interactive-speed query engine for
ad-hoc queries on temporary data. DiNoDB avoids the expensive loading and
transformation phase that characterizes both traditional RDBMSs and current
interactive analytics solutions. It is tailored to modern workflows found in
machine learning and data exploration use cases, which often involve iterations
of cycles of batch and interactive analytics on data that is typically useful
for a narrow processing window. The key innovation of DiNoDB is to piggyback on
the batch processing phase the creation of metadata that DiNoDB exploits to
expedite the interactive queries. Our experimental analysis demonstrates that
DiNoDB achieves very good performance for a wide range of ad-hoc queries
compared to alternatives %such as Hive, Stado, SparkSQL and Impala.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 14:56:31 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Tian",
"Yongchao",
""
],
[
"Alagiannis",
"Ioannis",
""
],
[
"Liarou",
"Erietta",
""
],
[
"Ailamaki",
"Anastasia",
""
],
[
"Michiardi",
"Pietro",
""
],
[
"Vukolic",
"Marko",
""
]
] | TITLE: DiNoDB: an Interactive-speed Query Engine for Ad-hoc Queries on
Temporary Data
ABSTRACT: As data sets grow in size, analytics applications struggle to get instant
insight into large datasets. Modern applications involve heavy batch processing
jobs over large volumes of data and at the same time require efficient ad-hoc
interactive analytics on temporary data. Existing solutions, however, typically
focus on one of these two aspects, largely ignoring the need for synergy
between the two. Consequently, interactive queries need to re-iterate costly
passes through the entire dataset (e.g., data loading) that may provide
meaningful return on investment only when data is queried over a long period of
time. In this paper, we propose DiNoDB, an interactive-speed query engine for
ad-hoc queries on temporary data. DiNoDB avoids the expensive loading and
transformation phase that characterizes both traditional RDBMSs and current
interactive analytics solutions. It is tailored to modern workflows found in
machine learning and data exploration use cases, which often involve iterations
of cycles of batch and interactive analytics on data that is typically useful
for a narrow processing window. The key innovation of DiNoDB is to piggyback on
the batch processing phase the creation of metadata that DiNoDB exploits to
expedite the interactive queries. Our experimental analysis demonstrates that
DiNoDB achieves very good performance for a wide range of ad-hoc queries
compared to alternatives %such as Hive, Stado, SparkSQL and Impala.
| no_new_dataset | 0.944022 |
1609.05112 | Hamid Tizhoosh | Hamid R. Tizhoosh, Christopher Mitcheltree, Shujin Zhu, Shamak Dutta | Barcodes for Medical Image Retrieval Using Autoencoded Radon Transform | To appear in proceedings of the 23rd International Conference on
Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using content-based binary codes to tag digital images has emerged as a
promising retrieval technology. Recently, Radon barcodes (RBCs) have been
introduced as a new binary descriptor for image search. RBCs are generated by
binarization of Radon projections and by assembling them into a vector, namely
the barcode. A simple local thresholding has been suggested for binarization.
In this paper, we put forward the idea of "autoencoded Radon barcodes". Using
images in a training dataset, we autoencode Radon projections to perform
binarization on outputs of hidden layers. We employed the mini-batch stochastic
gradient descent approach for the training. Each hidden layer of the
autoencoder can produce a barcode using a threshold determined based on the
range of the logistic function used. The compressing capability of autoencoders
apparently reduces the redundancies inherent in Radon projections leading to
more accurate retrieval results. The IRMA dataset with 14,410 x-ray images is
used to validate the performance of the proposed method. The experimental
results, containing comparison with RBCs, SURF and BRISK, show that autoencoded
Radon barcode (ARBC) has the capacity to capture important information and to
learn richer representations resulting in lower retrieval errors for image
retrieval measured with the accuracy of the first hit only.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 15:51:24 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Mitcheltree",
"Christopher",
""
],
[
"Zhu",
"Shujin",
""
],
[
"Dutta",
"Shamak",
""
]
] | TITLE: Barcodes for Medical Image Retrieval Using Autoencoded Radon Transform
ABSTRACT: Using content-based binary codes to tag digital images has emerged as a
promising retrieval technology. Recently, Radon barcodes (RBCs) have been
introduced as a new binary descriptor for image search. RBCs are generated by
binarization of Radon projections and by assembling them into a vector, namely
the barcode. A simple local thresholding has been suggested for binarization.
In this paper, we put forward the idea of "autoencoded Radon barcodes". Using
images in a training dataset, we autoencode Radon projections to perform
binarization on outputs of hidden layers. We employed the mini-batch stochastic
gradient descent approach for the training. Each hidden layer of the
autoencoder can produce a barcode using a threshold determined based on the
range of the logistic function used. The compressing capability of autoencoders
apparently reduces the redundancies inherent in Radon projections leading to
more accurate retrieval results. The IRMA dataset with 14,410 x-ray images is
used to validate the performance of the proposed method. The experimental
results, containing comparison with RBCs, SURF and BRISK, show that autoencoded
Radon barcode (ARBC) has the capacity to capture important information and to
learn richer representations resulting in lower retrieval errors for image
retrieval measured with the accuracy of the first hit only.
| no_new_dataset | 0.881513 |
1609.05115 | Christian Richardt | Christian Richardt, Hyeongwoo Kim, Levi Valgaerts, Christian Theobalt | Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras | 11 pages, supplemental document included as appendix, 3DV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new technique for computing dense scene flow from two handheld
videos with wide camera baselines and different photometric properties due to
different sensors or camera settings like exposure and white balance. Our
technique innovates in two ways over existing methods: (1) it supports
independently moving cameras, and (2) it computes dense scene flow for
wide-baseline scenarios.We achieve this by combining state-of-the-art
wide-baseline correspondence finding with a variational scene flow formulation.
First, we compute dense, wide-baseline correspondences using DAISY descriptors
for matching between cameras and over time. We then detect and replace occluded
pixels in the correspondence fields using a novel edge-preserving Laplacian
correspondence completion technique. We finally refine the computed
correspondence fields in a variational scene flow formulation. We show dense
scene flow results computed from challenging datasets with independently
moving, handheld cameras of varying camera settings.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 15:54:46 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Richardt",
"Christian",
""
],
[
"Kim",
"Hyeongwoo",
""
],
[
"Valgaerts",
"Levi",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras
ABSTRACT: We propose a new technique for computing dense scene flow from two handheld
videos with wide camera baselines and different photometric properties due to
different sensors or camera settings like exposure and white balance. Our
technique innovates in two ways over existing methods: (1) it supports
independently moving cameras, and (2) it computes dense scene flow for
wide-baseline scenarios. We achieve this by combining state-of-the-art
wide-baseline correspondence finding with a variational scene flow formulation.
First, we compute dense, wide-baseline correspondences using DAISY descriptors
for matching between cameras and over time. We then detect and replace occluded
pixels in the correspondence fields using a novel edge-preserving Laplacian
correspondence completion technique. We finally refine the computed
correspondence fields in a variational scene flow formulation. We show dense
scene flow results computed from challenging datasets with independently
moving, handheld cameras of varying camera settings.
| no_new_dataset | 0.954942 |
1609.05118 | Hamid Tizhoosh | Mina Nouredanesh, H.R. Tizhoosh, Ershad Banijamali, James Tung | Radon-Gabor Barcodes for Medical Image Retrieval | To appear in proceedings of the 23rd International Conference on
Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, with the explosion of digital images on the Web,
content-based retrieval has emerged as a significant research area. Shapes,
textures, edges and segments may play a key role in describing the content of
an image. Radon and Gabor transforms are both powerful techniques that have
been widely studied to extract shape-texture-based information. The combined
Radon-Gabor features may be more robust against scale/rotation variations,
presence of noise, and illumination changes. The objective of this paper is to
harness the potentials of both Gabor and Radon transforms in order to introduce
expressive binary features, called barcodes, for image annotation/tagging
tasks. We propose two different techniques: Gabor-of-Radon-Image Barcodes
(GRIBCs), and Guided-Radon-of-Gabor Barcodes (GRGBCs). For validation, we
employ the IRMA x-ray dataset with 193 classes, containing 12,677 training
images and 1,733 test images. Total error scores as low as 322 and 330 were
achieved for GRGBCs and GRIBCs, respectively. This corresponds to $\approx
81\%$ retrieval accuracy for the first hit.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 16:01:43 GMT"
}
] | 2016-09-19T00:00:00 | [
[
"Nouredanesh",
"Mina",
""
],
[
"Tizhoosh",
"H. R.",
""
],
[
"Banijamali",
"Ershad",
""
],
[
"Tung",
"James",
""
]
] | TITLE: Radon-Gabor Barcodes for Medical Image Retrieval
ABSTRACT: In recent years, with the explosion of digital images on the Web,
content-based retrieval has emerged as a significant research area. Shapes,
textures, edges and segments may play a key role in describing the content of
an image. Radon and Gabor transforms are both powerful techniques that have
been widely studied to extract shape-texture-based information. The combined
Radon-Gabor features may be more robust against scale/rotation variations,
presence of noise, and illumination changes. The objective of this paper is to
harness the potentials of both Gabor and Radon transforms in order to introduce
expressive binary features, called barcodes, for image annotation/tagging
tasks. We propose two different techniques: Gabor-of-Radon-Image Barcodes
(GRIBCs), and Guided-Radon-of-Gabor Barcodes (GRGBCs). For validation, we
employ the IRMA x-ray dataset with 193 classes, containing 12,677 training
images and 1,733 test images. Total error scores as low as 322 and 330 were
achieved for GRGBCs and GRIBCs, respectively. This corresponds to $\approx
81\%$ retrieval accuracy for the first hit.
| no_new_dataset | 0.949482 |
1408.1292 | Ilja Kuzborskij | Ilja Kuzborskij, Francesco Orabona, Barbara Caputo | Scalable Greedy Algorithms for Transfer Learning | null | null | 10.1016/j.cviu.2016.09.003 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to real world, we do not assume the
direct access to the source data, but rather we employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with the computational cost independent from the number of source
hypotheses and feature dimensions. Also, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
| [
{
"version": "v1",
"created": "Wed, 6 Aug 2014 14:27:57 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Dec 2014 15:56:53 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Oct 2015 10:27:39 GMT"
},
{
"version": "v4",
"created": "Sat, 18 Jun 2016 00:17:50 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Kuzborskij",
"Ilja",
""
],
[
"Orabona",
"Francesco",
""
],
[
"Caputo",
"Barbara",
""
]
] | TITLE: Scalable Greedy Algorithms for Transfer Learning
ABSTRACT: In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to the real world, we do not assume
direct access to the source data, but rather we employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with the computational cost independent from the number of source
hypotheses and feature dimensions. Also, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
| no_new_dataset | 0.950686 |
1608.08905 | Rajasekar Venkatesan | Rajasekar Venkatesan, Meng Joo Er, Shiqian Wu, Mahardhika Pratama | A Novel Online Real-time Classifier for Multi-label Data Streams | 8 pages, 7 tables, 3 figures. arXiv admin note: text overlap with
arXiv:1609.00086 | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel extreme learning machine based online multi-label
classifier for real-time data streams is proposed. Multi-label classification
is one of the actively researched machine learning paradigms that has gained
much attention in recent years due to its rapidly increasing real-world
applications. In contrast to traditional binary and multi-class classification,
multi-label classification involves association of each of the input samples
with a set of target labels simultaneously. There are no real-time online
neural network based multi-label classifiers available in the literature. In
this paper, we exploit the inherent nature of high speed exhibited by the
extreme learning machines to develop a novel online real-time classifier for
multi-label data streams. The developed classifier is evaluated on
datasets from different application domains for consistency, performance and
speed. The experimental studies show that the proposed method outperforms the
existing state-of-the-art techniques in terms of speed and accuracy and can
classify multi-label data streams in real-time.
| [
{
"version": "v1",
"created": "Wed, 31 Aug 2016 15:14:06 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Venkatesan",
"Rajasekar",
""
],
[
"Er",
"Meng Joo",
""
],
[
"Wu",
"Shiqian",
""
],
[
"Pratama",
"Mahardhika",
""
]
] | TITLE: A Novel Online Real-time Classifier for Multi-label Data Streams
ABSTRACT: In this paper, a novel extreme learning machine based online multi-label
classifier for real-time data streams is proposed. Multi-label classification
is one of the actively researched machine learning paradigms that has gained
much attention in recent years due to its rapidly increasing real-world
applications. In contrast to traditional binary and multi-class classification,
multi-label classification involves association of each of the input samples
with a set of target labels simultaneously. There are no real-time online
neural network based multi-label classifiers available in the literature. In
this paper, we exploit the inherent nature of high speed exhibited by the
extreme learning machines to develop a novel online real-time classifier for
multi-label data streams. The developed classifier is experimented with
datasets from different application domains for consistency, performance and
speed. The experimental studies show that the proposed method outperforms the
existing state-of-the-art techniques in terms of speed and accuracy and can
classify multi-label data streams in real-time.
| no_new_dataset | 0.952086 |
1609.04453 | Terrell Mundhenk | T. Nathan Mundhenk, Goran Konjevod, Wesam A. Sakla, Kofi Boakye | A Large Contextual Dataset for Classification, Detection and Counting of
Cars with Deep Learning | ECCV 2016 Pre-press revision | null | null | null | cs.CV cs.DC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have created a large diverse set of cars from overhead images, which are
useful for training a deep learner to binary classify, detect and count them.
The dataset and all related material will be made publicly available. The set
contains contextual matter to aid in identification of difficult targets. We
demonstrate classification and detection on this dataset using a neural network
we call ResCeption. This network combines residual learning with
Inception-style layers and is used to count cars in one look. This is a new way
to count objects rather than by localization or density estimation. It is
fairly accurate, fast and easy to implement. Additionally, the counting method
is not car or scene specific. It would be easy to train this method to count
other kinds of objects and counting over new scenes requires no extra set up or
assumptions about object locations.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 21:44:58 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Mundhenk",
"T. Nathan",
""
],
[
"Konjevod",
"Goran",
""
],
[
"Sakla",
"Wesam A.",
""
],
[
"Boakye",
"Kofi",
""
]
] | TITLE: A Large Contextual Dataset for Classification, Detection and Counting of
Cars with Deep Learning
ABSTRACT: We have created a large diverse set of cars from overhead images, which are
useful for training a deep learner to binary classify, detect and count them.
The dataset and all related material will be made publicly available. The set
contains contextual matter to aid in identification of difficult targets. We
demonstrate classification and detection on this dataset using a neural network
we call ResCeption. This network combines residual learning with
Inception-style layers and is used to count cars in one look. This is a new way
to count objects rather than by localization or density estimation. It is
fairly accurate, fast and easy to implement. Additionally, the counting method
is not car or scene specific. It would be easy to train this method to count
other kinds of objects and counting over new scenes requires no extra set up or
assumptions about object locations.
| new_dataset | 0.962285 |
1609.04504 | Brett Naul | Brett Naul, St\'efan van der Walt, Arien Crellin-Quick, Joshua S.
Bloom, Fernando P\'erez | cesium: Open-Source Platform for Time-Series Inference | Proceedings of the 15th Python in Science Conference (SciPy 2016) | null | null | null | cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference on time series data is a common requirement in many scientific
disciplines and internet of things (IoT) applications, yet there are few
resources available to domain scientists to easily, robustly, and repeatably
build such complex inference workflows: traditional statistical models of time
series are often too rigid to explain complex time domain behavior, while
popular machine learning packages require already-featurized dataset inputs.
Moreover, the software engineering tasks required to instantiate the
computational platform are daunting. cesium is an end-to-end time series
analysis framework, consisting of a Python library as well as a web front-end
interface, that allows researchers to featurize raw data and apply modern
machine learning techniques in a simple, reproducible, and extensible way.
Users can apply out-of-the-box feature engineering workflows as well as save
and replay their own analyses. Any steps taken in the front end can also be
exported to a Jupyter notebook, so users can iterate between possible models
within the front end and then fine-tune their analysis using the additional
capabilities of the back-end library. The open-source packages make use of many
modern Python toolkits, including xarray, dask, Celery, Flask, and
scikit-learn.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 04:09:48 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Naul",
"Brett",
""
],
[
"van der Walt",
"Stéfan",
""
],
[
"Crellin-Quick",
"Arien",
""
],
[
"Bloom",
"Joshua S.",
""
],
[
"Pérez",
"Fernando",
""
]
] | TITLE: cesium: Open-Source Platform for Time-Series Inference
ABSTRACT: Inference on time series data is a common requirement in many scientific
disciplines and internet of things (IoT) applications, yet there are few
resources available to domain scientists to easily, robustly, and repeatably
build such complex inference workflows: traditional statistical models of time
series are often too rigid to explain complex time domain behavior, while
popular machine learning packages require already-featurized dataset inputs.
Moreover, the software engineering tasks required to instantiate the
computational platform are daunting. cesium is an end-to-end time series
analysis framework, consisting of a Python library as well as a web front-end
interface, that allows researchers to featurize raw data and apply modern
machine learning techniques in a simple, reproducible, and extensible way.
Users can apply out-of-the-box feature engineering workflows as well as save
and replay their own analyses. Any steps taken in the front end can also be
exported to a Jupyter notebook, so users can iterate between possible models
within the front end and then fine-tune their analysis using the additional
capabilities of the back-end library. The open-source packages make use of many
modern Python toolkits, including xarray, dask, Celery, Flask, and
scikit-learn.
| no_new_dataset | 0.935582 |
1609.04556 | Djoerd Hiemstra | Dong Nguyen, Thomas Demeester, Dolf Trieschnigg, Djoerd Hiemstra | Resource Selection for Federated Search on the Web | CTIT Technical Report, University of Twente | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A publicly available dataset for federated search reflecting a real web
environment has long been absent, making it difficult for researchers to test
the validity of their federated search algorithms for the web setting. We
present several experiments and analyses on resource selection on the web using
a recently released test collection containing the results from more than a
hundred real search engines, ranging from large general web search engines such
as Google, Bing and Yahoo to small domain-specific engines. First, we
experiment with estimating the size of uncooperative search engines on the web
using query based sampling and propose a new method using the ClueWeb09
dataset. We find the size estimates to be highly effective in resource
selection. Second, we show that an optimized federated search system based on
smaller web search engines can be an alternative to a system using large web
search engines. Third, we provide an empirical comparison of several popular
resource selection methods and find that these methods are not readily suitable
for resource selection on the web. Challenges include the sparse resource
descriptions and extremely skewed sizes of collections.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 09:49:27 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Nguyen",
"Dong",
""
],
[
"Demeester",
"Thomas",
""
],
[
"Trieschnigg",
"Dolf",
""
],
[
"Hiemstra",
"Djoerd",
""
]
] | TITLE: Resource Selection for Federated Search on the Web
ABSTRACT: A publicly available dataset for federated search reflecting a real web
environment has long been absent, making it difficult for researchers to test
the validity of their federated search algorithms for the web setting. We
present several experiments and analyses on resource selection on the web using
a recently released test collection containing the results from more than a
hundred real search engines, ranging from large general web search engines such
as Google, Bing and Yahoo to small domain-specific engines. First, we
experiment with estimating the size of uncooperative search engines on the web
using query based sampling and propose a new method using the ClueWeb09
dataset. We find the size estimates to be highly effective in resource
selection. Second, we show that an optimized federated search system based on
smaller web search engines can be an alternative to a system using large web
search engines. Third, we provide an empirical comparison of several popular
resource selection methods and find that these methods are not readily suitable
for resource selection on the web. Challenges include the sparse resource
descriptions and extremely skewed sizes of collections.
| new_dataset | 0.926037 |
1609.04653 | Peter Pinggera | Peter Pinggera, Sebastian Ramos, Stefan Gehrig, Uwe Franke, Carsten
Rother, Rudolf Mester | Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles | To be presented at IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) 2016 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting small obstacles on the road ahead is a critical part of the driving
task which has to be mastered by fully autonomous cars. In this paper, we
present a method based on stereo vision to reliably detect such obstacles from
a moving vehicle. The proposed algorithm performs statistical hypothesis tests
in disparity space directly on stereo image data, assessing freespace and
obstacle hypotheses on independent local patches. This detection approach does
not depend on a global road model and handles both static and moving obstacles.
For evaluation, we employ a novel lost-cargo image sequence dataset comprising
more than two thousand frames with pixelwise annotations of obstacle and
free-space and provide a thorough comparison to several stereo-based baseline
methods. The dataset will be made available to the community to foster further
research on this important topic. The proposed approach outperforms all
considered baselines in our evaluations on both pixel and object level and runs
at frame rates of up to 20 Hz on 2 mega-pixel stereo imagery. Small obstacles
down to the height of 5 cm can successfully be detected at 20 m distance at low
false positive rates.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 14:01:03 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Pinggera",
"Peter",
""
],
[
"Ramos",
"Sebastian",
""
],
[
"Gehrig",
"Stefan",
""
],
[
"Franke",
"Uwe",
""
],
[
"Rother",
"Carsten",
""
],
[
"Mester",
"Rudolf",
""
]
] | TITLE: Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles
ABSTRACT: Detecting small obstacles on the road ahead is a critical part of the driving
task which has to be mastered by fully autonomous cars. In this paper, we
present a method based on stereo vision to reliably detect such obstacles from
a moving vehicle. The proposed algorithm performs statistical hypothesis tests
in disparity space directly on stereo image data, assessing freespace and
obstacle hypotheses on independent local patches. This detection approach does
not depend on a global road model and handles both static and moving obstacles.
For evaluation, we employ a novel lost-cargo image sequence dataset comprising
more than two thousand frames with pixelwise annotations of obstacle and
free-space and provide a thorough comparison to several stereo-based baseline
methods. The dataset will be made available to the community to foster further
research on this important topic. The proposed approach outperforms all
considered baselines in our evaluations on both pixel and object level and runs
at frame rates of up to 20 Hz on 2 mega-pixel stereo imagery. Small obstacles
down to the height of 5 cm can successfully be detected at 20 m distance at low
false positive rates.
| new_dataset | 0.9601 |
1609.04718 | Paul Irolla | Paul Irolla and Eric Filiol | Glassbox: Dynamic Analysis Platform for Malware Android Applications on
Real Devices | 11 pages, 4 figures. This paper have been submitted to the ICISSP
workshop FORmal methods for Security Engineering (ForSE 2017) | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Android is the most widely used smartphone OS with 82.8% market share in
2015. It is therefore the most widely targeted system by malware authors.
Researchers rely on dynamic analysis to extract malware behaviors and often use
emulators to do so. However, using emulators lead to new issues. Malware may
detect emulation and as a result it does not execute the payload to prevent the
analysis. Dealing with virtual device evasion is a never-ending war and comes
with a non-negligible computation cost. To overcome this state of affairs, we
propose a system that does not use virtual devices for analysing malware
behavior. Glassbox is a functional prototype for the dynamic analysis of
malware applications. It executes applications on real devices in a monitored
and controlled environment. It is a fully automated system that installs, tests
and extracts features from the application for further analysis. We present the
architecture of the platform and we compare it with existing Android dynamic
analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger
application behaviors by measuring the average coverage of basic blocks on the
AndroCoverage dataset. We show that it executes on average 13.52% more basic
blocks than the Monkey program.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 16:16:56 GMT"
}
] | 2016-09-16T00:00:00 | [
[
"Irolla",
"Paul",
""
],
[
"Filiol",
"Eric",
""
]
] | TITLE: Glassbox: Dynamic Analysis Platform for Malware Android Applications on
Real Devices
ABSTRACT: Android is the most widely used smartphone OS with 82.8% market share in
2015. It is therefore the most widely targeted system by malware authors.
Researchers rely on dynamic analysis to extract malware behaviors and often use
emulators to do so. However, using emulators leads to new issues. Malware may
detect emulation and as a result it does not execute the payload to prevent the
analysis. Dealing with virtual device evasion is a never-ending war and comes
with a non-negligible computation cost. To overcome this state of affairs, we
propose a system that does not use virtual devices for analysing malware
behavior. Glassbox is a functional prototype for the dynamic analysis of
malware applications. It executes applications on real devices in a monitored
and controlled environment. It is a fully automated system that installs, tests
and extracts features from the application for further analysis. We present the
architecture of the platform and we compare it with existing Android dynamic
analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger
application behaviors by measuring the average coverage of basic blocks on the
AndroCoverage dataset. We show that it executes on average 13.52% more basic
blocks than the Monkey program.
| no_new_dataset | 0.934932 |
1511.04590 | Li Yao | Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, Yoshua Bengio | Oracle performance for visual captioning | BMVC2016 (Oral paper) | null | null | null | cs.CV cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of associating images and videos with a natural language description
has attracted a great amount of attention recently. Rapid progress has been
made in terms of both developing novel algorithms and releasing new datasets.
Indeed, the state-of-the-art results on some of the standard datasets have been
pushed into the regime where it has become more and more difficult to make
significant improvements. Instead of proposing new models, this work
investigates the possibility of empirically establishing performance upper
bounds on various visual captioning datasets without extra data labelling
effort or human evaluation. In particular, it is assumed that visual captioning
is decomposed into two steps: from visual inputs to visual concepts, and from
visual concepts to natural language descriptions. One would be able to obtain
an upper bound when assuming the first step is perfect and only requiring
training a conditional language model for the second step. We demonstrate the
construction of such bounds on MS-COCO, YouTube2Text and LSMDC (a combination
of M-VAD and MPII-MD). Surprisingly, despite the imperfect process we used
for visual concept extraction in the first step and the simplicity of the
language model for the second step, we show that current state-of-the-art
models fall short when being compared with the learned upper bounds.
Furthermore, with such a bound, we quantify several important factors
concerning image and video captioning: the number of visual concepts captured
by different models, the trade-off between the amount of visual elements
captured and their accuracy, and the intrinsic difficulty and blessing of
different datasets.
| [
{
"version": "v1",
"created": "Sat, 14 Nov 2015 18:02:39 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2015 04:20:08 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Jan 2016 04:55:57 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Jan 2016 23:38:25 GMT"
},
{
"version": "v5",
"created": "Wed, 14 Sep 2016 16:55:29 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Yao",
"Li",
""
],
[
"Ballas",
"Nicolas",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Smith",
"John R.",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Oracle performance for visual captioning
ABSTRACT: The task of associating images and videos with a natural language description
has attracted a great amount of attention recently. Rapid progress has been
made in terms of both developing novel algorithms and releasing new datasets.
Indeed, the state-of-the-art results on some of the standard datasets have been
pushed into the regime where it has become more and more difficult to make
significant improvements. Instead of proposing new models, this work
investigates the possibility of empirically establishing performance upper
bounds on various visual captioning datasets without extra data labelling
effort or human evaluation. In particular, it is assumed that visual captioning
is decomposed into two steps: from visual inputs to visual concepts, and from
visual concepts to natural language descriptions. One would be able to obtain
an upper bound when assuming the first step is perfect and only requiring
training a conditional language model for the second step. We demonstrate the
construction of such bounds on MS-COCO, YouTube2Text and LSMDC (a combination
of M-VAD and MPII-MD). Surprisingly, despite the imperfect process we used
for visual concept extraction in the first step and the simplicity of the
language model for the second step, we show that current state-of-the-art
models fall short when being compared with the learned upper bounds.
Furthermore, with such a bound, we quantify several important factors
concerning image and video captioning: the number of visual concepts captured
by different models, the trade-off between the amount of visual elements
captured and their accuracy, and the intrinsic difficulty and blessing of
different datasets.
| no_new_dataset | 0.9455 |
1608.07094 | Mahamad Suhil | D S Guru, Mahamad Suhil | A Novel Term_Class Relevance Measure for Text Categorization | 12 pages, 6 figures, 2 tables | Procedia Computer Science, vol.45, pp.13-22, 2015 | 10.1016/j.procs.2015.03.074 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a new measure called Term_Class relevance to
compute the relevancy of a term in classifying a document into a particular
class. The proposed measure estimates the degree of relevance of a given term,
in placing an unlabeled document to be a member of a known class, as a product
of Class_Term weight and Class_Term density; where the Class_Term weight is the
ratio of the number of documents of the class containing the term to the total
number of documents containing the term and the Class_Term density is the
relative density of occurrence of the term in the class to the total occurrence
of the term in the entire population. Unlike the other existing term weighting
schemes such as TF-IDF and its variants, the proposed relevance measure takes
into account the degree of relative participation of the term across all
documents of the class to the entire population. To demonstrate the
significance of the proposed measure experimentation has been conducted on the
20 Newsgroups dataset. Further, the superiority of the novel measure is brought
out through a comparative analysis.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2016 11:46:06 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2016 12:51:50 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Guru",
"D S",
""
],
[
"Suhil",
"Mahamad",
""
]
] | TITLE: A Novel Term_Class Relevance Measure for Text Categorization
ABSTRACT: In this paper, we introduce a new measure called Term_Class relevance to
compute the relevancy of a term in classifying a document into a particular
class. The proposed measure estimates the degree of relevance of a given term,
in placing an unlabeled document to be a member of a known class, as a product
of Class_Term weight and Class_Term density; where the Class_Term weight is the
ratio of the number of documents of the class containing the term to the total
number of documents containing the term and the Class_Term density is the
relative density of occurrence of the term in the class to the total occurrence
of the term in the entire population. Unlike the other existing term weighting
schemes such as TF-IDF and its variants, the proposed relevance measure takes
into account the degree of relative participation of the term across all
documents of the class to the entire population. To demonstrate the
significance of the proposed measure experimentation has been conducted on the
20 Newsgroups dataset. Further, the superiority of the novel measure is brought
out through a comparative analysis.
| no_new_dataset | 0.950273 |
1609.04104 | Morteza Mardani | Morteza Mardani, Georgios B. Giannakis, and Kamil Ugurbil | Tracking Tensor Subspaces with Informative Random Sampling for Real-Time
MR Imaging | null | null | null | null | cs.LG cs.CV cs.IT math.IT stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic resonance imaging (MRI) nowadays serves as an important modality for
diagnostic and therapeutic guidance in clinics. However, the {\it slow
acquisition} process, the dynamic deformation of organs, as well as the need
for {\it real-time} reconstruction, pose major challenges toward obtaining
artifact-free images. To cope with these challenges, the present paper
advocates a novel subspace learning framework that permeates benefits from
parallel factor (PARAFAC) decomposition of tensors (multiway data) to low-rank
modeling of temporal sequence of images. Treating images as multiway data
arrays, the novel method preserves spatial structures and unravels the latent
correlations across various dimensions by means of the tensor subspace.
Leveraging the spatio-temporal correlation of images, Tykhonov regularization
is adopted as a rank surrogate for a least-squares optimization program.
Alternating majorization minimization is adopted to develop online algorithms
that recursively procure the reconstruction upon arrival of a new undersampled
$k$-space frame. The developed algorithms are {\it provably convergent} and
highly {\it parallelizable} with lightweight FFT tasks per iteration. To
further accelerate the acquisition process, randomized subsampling policies are
devised that leverage intermediate estimates of the tensor subspace, offered by
the online scheme, to {\it randomly} acquire {\it informative} $k$-space
samples. In a nutshell, the novel approach enables tracking motion dynamics
under low acquisition rates `on the fly.' GPU-based tests with real {\it in
vivo} MRI datasets of cardiac cine images corroborate the merits of the novel
approach relative to state-of-the-art alternatives.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 01:23:05 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Mardani",
"Morteza",
""
],
[
"Giannakis",
"Georgios B.",
""
],
[
"Ugurbil",
"Kamil",
""
]
] | TITLE: Tracking Tensor Subspaces with Informative Random Sampling for Real-Time
MR Imaging
ABSTRACT: Magnetic resonance imaging (MRI) nowadays serves as an important modality for
diagnostic and therapeutic guidance in clinics. However, the {\it slow
acquisition} process, the dynamic deformation of organs, as well as the need
for {\it real-time} reconstruction, pose major challenges toward obtaining
artifact-free images. To cope with these challenges, the present paper
advocates a novel subspace learning framework that permeates benefits from
parallel factor (PARAFAC) decomposition of tensors (multiway data) to low-rank
modeling of temporal sequence of images. Treating images as multiway data
arrays, the novel method preserves spatial structures and unravels the latent
correlations across various dimensions by means of the tensor subspace.
Leveraging the spatio-temporal correlation of images, Tykhonov regularization
is adopted as a rank surrogate for a least-squares optimization program.
Alternating majorization minimization is adopted to develop online algorithms
that recursively procure the reconstruction upon arrival of a new undersampled
$k$-space frame. The developed algorithms are {\it provably convergent} and
highly {\it parallelizable} with lightweight FFT tasks per iteration. To
further accelerate the acquisition process, randomized subsampling policies are
devised that leverage intermediate estimates of the tensor subspace, offered by
the online scheme, to {\it randomly} acquire {\it informative} $k$-space
samples. In a nutshell, the novel approach enables tracking motion dynamics
under low acquisition rates `on the fly.' GPU-based tests with real {\it in
vivo} MRI datasets of cardiac cine images corroborate the merits of the novel
approach relative to state-of-the-art alternatives.
| no_new_dataset | 0.948965 |
1609.04116 | Songcan Chen | Qing Tian, Songcan Chen | Joint Gender Classification and Age Estimation by Nearly Orthogonalizing
Their Semantic Spaces | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In human face-based biometrics, gender classification and age estimation are
two typical learning tasks. Although a variety of approaches have been proposed
to handle them, just a few of them are solved jointly, even so, these joint
methods do not yet specifically concern the semantic difference between human
gender and age, which is intuitively helpful for joint learning, consequently
leaving us a room of further improving the performance. To this end, in this
work we firstly propose a general learning framework for jointly estimating
human gender and age by specially attempting to formulate such semantic
relationships as a form of near-orthogonality regularization and then
incorporate it into the objective of the joint learning framework. In order to
evaluate the effectiveness of the proposed framework, we exemplify it by
respectively taking the widely used binary-class SVM for gender classification,
and two threshold-based ordinal regression methods (i.e., the discriminant
learning for ordinal regression and support vector ordinal regression) for age
estimation, and crucially coupling both through the proposed semantic
formulation. Moreover, we develop its kernelized nonlinear counterpart by
deriving a representer theorem for the joint learning strategy. Finally,
through extensive experiments on three aging datasets FG-NET, Morph Album I and
Morph Album II, we demonstrate the effectiveness and superiority of the
proposed joint learning strategy.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 02:45:37 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Tian",
"Qing",
""
],
[
"Chen",
"Songcan",
""
]
] | TITLE: Joint Gender Classification and Age Estimation by Nearly Orthogonalizing
Their Semantic Spaces
ABSTRACT: In human face-based biometrics, gender classification and age estimation are
two typical learning tasks. Although a variety of approaches have been proposed
to handle them, just a few of them are solved jointly, even so, these joint
methods do not yet specifically concern the semantic difference between human
gender and age, which is intuitively helpful for joint learning, consequently
leaving us room for further improving the performance. To this end, in this
work we firstly propose a general learning framework for jointly estimating
human gender and age by specially attempting to formulate such semantic
relationships as a form of near-orthogonality regularization and then
incorporate it into the objective of the joint learning framework. In order to
evaluate the effectiveness of the proposed framework, we exemplify it by
respectively taking the widely used binary-class SVM for gender classification,
and two threshold-based ordinal regression methods (i.e., the discriminant
learning for ordinal regression and support vector ordinal regression) for age
estimation, and crucially coupling both through the proposed semantic
formulation. Moreover, we develop its kernelized nonlinear counterpart by
deriving a representer theorem for the joint learning strategy. Finally,
through extensive experiments on three aging datasets FG-NET, Morph Album I and
Morph Album II, we demonstrate the effectiveness and superiority of the
proposed joint learning strategy.
| no_new_dataset | 0.946151 |
1609.04253 | Amir H. Jadidinejad | Amir H. Jadidinejad | Neural Machine Transliteration: Preliminary Results | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine transliteration is the process of automatically transforming the
script of a word from a source language to a target language, while preserving
pronunciation. Sequence to sequence learning has recently emerged as a new
paradigm in supervised learning. In this paper a character-based
encoder-decoder model has been proposed that consists of two Recurrent Neural
Networks. The encoder is a Bidirectional recurrent neural network that encodes
a sequence of symbols into a fixed-length vector representation, and the
decoder generates the target sequence using an attention-based recurrent neural
network. The encoder, the decoder and the attention mechanism are jointly
trained to maximize the conditional probability of a target sequence given a
source sequence. Our experiments on different datasets show that the proposed
encoder-decoder model is able to achieve significantly higher transliteration
quality over traditional statistical models.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 13:12:12 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Jadidinejad",
"Amir H.",
""
]
] | TITLE: Neural Machine Transliteration: Preliminary Results
ABSTRACT: Machine transliteration is the process of automatically transforming the
script of a word from a source language to a target language, while preserving
pronunciation. Sequence to sequence learning has recently emerged as a new
paradigm in supervised learning. In this paper a character-based
encoder-decoder model has been proposed that consists of two Recurrent Neural
Networks. The encoder is a Bidirectional recurrent neural network that encodes
a sequence of symbols into a fixed-length vector representation, and the
decoder generates the target sequence using an attention-based recurrent neural
network. The encoder, the decoder and the attention mechanism are jointly
trained to maximize the conditional probability of a target sequence given a
source sequence. Our experiments on different datasets show that the proposed
encoder-decoder model is able to achieve significantly higher transliteration
quality over traditional statistical models.
| no_new_dataset | 0.947962 |
1609.04281 | Ridho Reinanda | Ridho Reinanda, Edgar Meij, Maarten de Rijke | Document Filtering for Long-tail Entities | CIKM2016, Proceedings of the 25th ACM International Conference on
Information and Knowledge Management. 2016 | null | 10.1145/2983323.2983728 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Filtering relevant documents with respect to entities is an essential task in
the context of knowledge base construction and maintenance. It entails
processing a time-ordered stream of documents that might be relevant to an
entity in order to select only those that contain vital information.
State-of-the-art approaches to document filtering for popular entities are
entity-dependent: they rely on and are also trained on the specifics of
differentiating features for each specific entity. Moreover, these approaches
tend to use so-called extrinsic information such as Wikipedia page views and
related entities which is typically available only for popular head
entities. Entity-dependent approaches based on such signals are therefore
ill-suited as filtering methods for long-tail entities. In this paper we
propose a document filtering method for long-tail entities that is
entity-independent and thus also generalizes to unseen or rarely seen entities.
It is based on intrinsic features, i.e., features that are derived from the
documents in which the entities are mentioned. We propose a set of features
that capture informativeness, entity-saliency, and timeliness. In particular,
we introduce features based on entity aspect similarities, relation patterns,
and temporal expressions and combine these with standard features for document
filtering. Experiments following the TREC KBA 2014 setup on a publicly
available dataset show that our model is able to improve the filtering
performance for long-tail entities over several baselines. Results of applying
the model to unseen entities are promising, indicating that the model is able
to learn the general characteristics of a vital document. The overall
performance across all entities---i.e., not just long-tail entities---improves
upon the state-of-the-art without depending on any entity-specific training
data.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 14:09:20 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Reinanda",
"Ridho",
""
],
[
"Meij",
"Edgar",
""
],
[
"de Rijke",
"Maarten",
""
]
] | TITLE: Document Filtering for Long-tail Entities
ABSTRACT: Filtering relevant documents with respect to entities is an essential task in
the context of knowledge base construction and maintenance. It entails
processing a time-ordered stream of documents that might be relevant to an
entity in order to select only those that contain vital information.
State-of-the-art approaches to document filtering for popular entities are
entity-dependent: they rely on and are also trained on the specifics of
differentiating features for each specific entity. Moreover, these approaches
tend to use so-called extrinsic information such as Wikipedia page views and
related entities, which is typically available only for popular head
entities. Entity-dependent approaches based on such signals are therefore
ill-suited as filtering methods for long-tail entities. In this paper we
propose a document filtering method for long-tail entities that is
entity-independent and thus also generalizes to unseen or rarely seen entities.
It is based on intrinsic features, i.e., features that are derived from the
documents in which the entities are mentioned. We propose a set of features
that capture informativeness, entity-saliency, and timeliness. In particular,
we introduce features based on entity aspect similarities, relation patterns,
and temporal expressions and combine these with standard features for document
filtering. Experiments following the TREC KBA 2014 setup on a publicly
available dataset show that our model is able to improve the filtering
performance for long-tail entities over several baselines. Results of applying
the model to unseen entities are promising, indicating that the model is able
to learn the general characteristics of a vital document. The overall
performance across all entities---i.e., not just long-tail entities---improves
upon the state-of-the-art without depending on any entity-specific training
data.
| no_new_dataset | 0.953492 |
1609.04321 | Luca Masera | Luca Masera, Enrico Blanzieri | Very Simple Classifier: a Concept Binary Classifier to Investigate
Features Based on Subsampling and Locality | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Very Simple Classifier (VSC), a novel method designed to
incorporate the concepts of subsampling and locality in the definition of
features to be used as the input of a perceptron. The rationale is that
locality theoretically guarantees a bound on the generalization error. Each
feature in VSC is a max-margin classifier built on randomly-selected pairs of
samples. The locality in VSC is achieved by multiplying the value of the
feature by a confidence measure that can be characterized in terms of the
Chebyshev inequality. The output of the layer is then fed into an output layer of
neurons. The weights of the output layer are then determined by a regularized
pseudoinverse. Extensive comparison of VSC against 9 competitors in the task of
binary classification is carried out. Results on 22 benchmark datasets with
fixed parameters show that VSC is competitive with the Multi Layer Perceptron
(MLP) and outperforms the other competitors. An exploration of the parameter
space shows VSC can outperform MLP.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 15:51:46 GMT"
}
] | 2016-09-15T00:00:00 | [
[
"Masera",
"Luca",
""
],
[
"Blanzieri",
"Enrico",
""
]
] | TITLE: Very Simple Classifier: a Concept Binary Classifier to Investigate
Features Based on Subsampling and Locality
ABSTRACT: We propose Very Simple Classifier (VSC), a novel method designed to
incorporate the concepts of subsampling and locality in the definition of
features to be used as the input of a perceptron. The rationale is that
locality theoretically guarantees a bound on the generalization error. Each
feature in VSC is a max-margin classifier built on randomly-selected pairs of
samples. The locality in VSC is achieved by multiplying the value of the
feature by a confidence measure that can be characterized in terms of the
Chebyshev inequality. The output of the layer is then fed into an output layer of
neurons. The weights of the output layer are then determined by a regularized
pseudoinverse. Extensive comparison of VSC against 9 competitors in the task of
binary classification is carried out. Results on 22 benchmark datasets with
fixed parameters show that VSC is competitive with the Multi Layer Perceptron
(MLP) and outperforms the other competitors. An exploration of the parameter
space shows VSC can outperform MLP.
| no_new_dataset | 0.944842 |
1512.04103 | Yaser Souri | Yaser Souri, Erfan Noury, Ehsan Adeli | Deep Relative Attributes | ACCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual attributes are great means of describing images or scenes, in a way
both humans and computers understand. In order to establish a correspondence
between images and to be able to compare the strength of each property between
images, relative attributes were introduced. However, since their introduction,
hand-crafted and engineered features were used to learn increasingly complex
models for the problem of relative attributes. This limits the applicability of
those methods for more realistic cases. We introduce a deep neural network
architecture for the task of relative attribute prediction. A convolutional
neural network (ConvNet) is adopted to learn the features by including an
additional layer (ranking layer) that learns to rank the images based on these
features. We adopt an appropriate ranking loss to train the whole network in an
end-to-end fashion. Our proposed method outperforms the baseline and
state-of-the-art methods in relative attribute prediction on various coarse and
fine-grained datasets. Our qualitative results along with the visualization of
the saliency maps show that the network is able to learn effective features for
each specific attribute. Source code of the proposed method is available at
https://github.com/yassersouri/ghiaseddin.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 19:10:16 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 08:21:43 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Souri",
"Yaser",
""
],
[
"Noury",
"Erfan",
""
],
[
"Adeli",
"Ehsan",
""
]
] | TITLE: Deep Relative Attributes
ABSTRACT: Visual attributes are great means of describing images or scenes, in a way
both humans and computers understand. In order to establish a correspondence
between images and to be able to compare the strength of each property between
images, relative attributes were introduced. However, since their introduction,
hand-crafted and engineered features were used to learn increasingly complex
models for the problem of relative attributes. This limits the applicability of
those methods for more realistic cases. We introduce a deep neural network
architecture for the task of relative attribute prediction. A convolutional
neural network (ConvNet) is adopted to learn the features by including an
additional layer (ranking layer) that learns to rank the images based on these
features. We adopt an appropriate ranking loss to train the whole network in an
end-to-end fashion. Our proposed method outperforms the baseline and
state-of-the-art methods in relative attribute prediction on various coarse and
fine-grained datasets. Our qualitative results along with the visualization of
the saliency maps show that the network is able to learn effective features for
each specific attribute. Source code of the proposed method is available at
https://github.com/yassersouri/ghiaseddin.
| no_new_dataset | 0.947186 |
1602.00828 | Hossein Rahmani | Hossein Rahmani and Ajmal Mian and Mubarak Shah | Learning a Deep Model for Human Action Recognition from Novel Viewpoints | null | Phys. Rev. D 94, 065007 (2016) | 10.1103/PhysRevD.94.065007 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing human actions from unknown and unseen (novel) views is a
challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model
(R-NKTM) for human action recognition from novel views. The proposed R-NKTM is
a deep fully-connected neural network that transfers knowledge of human actions
from any unknown view to a shared high-level virtual view by finding a
non-linear virtual path that connects the views. The R-NKTM is learned from
dense trajectories of synthetic 3D human models fitted to real motion capture
data and generalizes to real videos of human actions. The strength of our
technique is that we learn a single R-NKTM for all actions and all viewpoints
for knowledge transfer of any real human action video without the need for
re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to
incorporate new action classes. R-NKTM is learned with dummy labels and does
not require knowledge of the camera viewpoint at any stage. Experiments on
three benchmark cross-view human action datasets show that our method
outperforms existing state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 08:42:44 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Rahmani",
"Hossein",
""
],
[
"Mian",
"Ajmal",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Learning a Deep Model for Human Action Recognition from Novel Viewpoints
ABSTRACT: Recognizing human actions from unknown and unseen (novel) views is a
challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model
(R-NKTM) for human action recognition from novel views. The proposed R-NKTM is
a deep fully-connected neural network that transfers knowledge of human actions
from any unknown view to a shared high-level virtual view by finding a
non-linear virtual path that connects the views. The R-NKTM is learned from
dense trajectories of synthetic 3D human models fitted to real motion capture
data and generalizes to real videos of human actions. The strength of our
technique is that we learn a single R-NKTM for all actions and all viewpoints
for knowledge transfer of any real human action video without the need for
re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to
incorporate new action classes. R-NKTM is learned with dummy labels and does
not require knowledge of the camera viewpoint at any stage. Experiments on
three benchmark cross-view human action datasets show that our method
outperforms existing state-of-the-art.
| no_new_dataset | 0.94474 |
1606.07674 | Yin Zheng | Yin Zheng, Cailiang Liu, Bangsheng Tang, Hanning Zhou | Neural Autoregressive Collaborative Filtering for Implicit Feedback | 5 pages, 2 figures, accepted by DLRS2016 http://dlrs-workshop.org/ | null | 10.1145/2988450.2988453 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes implicit CF-NADE, a neural autoregressive model for
collaborative filtering tasks using implicit feedback ( e.g. click, watch,
browse behaviors). We first convert a user's implicit feedback into a like
vector and a confidence vector, and then model the probability of the like
vector, weighted by the confidence vector. The training objective of implicit
CF-NADE is to maximize a weighted negative log-likelihood. We test the
performance of implicit CF-NADE on a dataset collected from a popular digital
TV streaming service. More specifically, in the experiments, we describe how to
convert watch counts into implicit relative rating, and feed into implicit
CF-NADE. Then we compare the performance of implicit CF-NADE model with the
popular implicit matrix factorization approach. Experimental results show that
implicit CF-NADE significantly outperforms the baseline.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 13:10:50 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 03:11:12 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Zheng",
"Yin",
""
],
[
"Liu",
"Cailiang",
""
],
[
"Tang",
"Bangsheng",
""
],
[
"Zhou",
"Hanning",
""
]
] | TITLE: Neural Autoregressive Collaborative Filtering for Implicit Feedback
ABSTRACT: This paper proposes implicit CF-NADE, a neural autoregressive model for
collaborative filtering tasks using implicit feedback ( e.g. click, watch,
browse behaviors). We first convert a user's implicit feedback into a like
vector and a confidence vector, and then model the probability of the like
vector, weighted by the confidence vector. The training objective of implicit
CF-NADE is to maximize a weighted negative log-likelihood. We test the
performance of implicit CF-NADE on a dataset collected from a popular digital
TV streaming service. More specifically, in the experiments, we describe how to
convert watch counts into implicit relative rating, and feed into implicit
CF-NADE. Then we compare the performance of implicit CF-NADE model with the
popular implicit matrix factorization approach. Experimental results show that
implicit CF-NADE significantly outperforms the baseline.
| no_new_dataset | 0.950686 |
1607.08206 | Cheng Zhang | Cheng Zhang, Hedvig Kjellstrom, Carl Henrik Ek, Bo C. Bertilson | Diagnostic Prediction Using Discomfort Drawings with IBTM | Presented at 2016 Machine Learning and Healthcare Conference (MLHC
2016), Los Angeles, CA | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the possibility of applying machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain-related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we use a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. The number of output diagnostic labels is determined by
using mean-shift clustering on the discomfort drawing. Experimental results
show reasonable predictions of diagnostic labels given an unseen discomfort
drawing. Additionally, we generate synthetic discomfort drawings with IBTM
given a diagnostic label, which results in typical cases of symptoms. The
positive result indicates a significant potential of machine learning to be
used for parts of the pain diagnostic process and to be a decision support
system for physicians and other health care personnel.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 18:20:01 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 16:26:41 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Zhang",
"Cheng",
""
],
[
"Kjellstrom",
"Hedvig",
""
],
[
"Ek",
"Carl Henrik",
""
],
[
"Bertilson",
"Bo C.",
""
]
] | TITLE: Diagnostic Prediction Using Discomfort Drawings with IBTM
ABSTRACT: In this paper, we explore the possibility of applying machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain-related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we use a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. The number of output diagnostic labels is determined by
using mean-shift clustering on the discomfort drawing. Experimental results
show reasonable predictions of diagnostic labels given an unseen discomfort
drawing. Additionally, we generate synthetic discomfort drawings with IBTM
given a diagnostic label, which results in typical cases of symptoms. The
positive result indicates a significant potential of machine learning to be
used for parts of the pain diagnostic process and to be a decision support
system for physicians and other health care personnel.
| no_new_dataset | 0.943348 |
1609.03540 | Babak Salimi | Babak Salimi, Dan Suciu | ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data | null | null | null | null | cs.DB cs.AI cs.LG cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causal inference from observational data is a subject of active research and
development in statistics and computer science. Many toolkits have been
developed for this purpose that depend on statistical software. However, these
toolkits do not scale to large datasets. In this paper, we describe a suite of
techniques for expressing causal inference tasks from observational data in
SQL. This suite supports the state-of-the-art methods for causal inference and
runs at scale within a database engine. In addition, we introduce several
optimization techniques that significantly speed up causal inference, both in
the online and offline settings. We evaluate the quality and performance of our
techniques through experiments on real datasets.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 19:24:14 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 01:59:05 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Salimi",
"Babak",
""
],
[
"Suciu",
"Dan",
""
]
] | TITLE: ZaliQL: A SQL-Based Framework for Drawing Causal Inference from Big Data
ABSTRACT: Causal inference from observational data is a subject of active research and
development in statistics and computer science. Many toolkits have been
developed for this purpose that depend on statistical software. However, these
toolkits do not scale to large datasets. In this paper, we describe a suite of
techniques for expressing causal inference tasks from observational data in
SQL. This suite supports the state-of-the-art methods for causal inference and
runs at scale within a database engine. In addition, we introduce several
optimization techniques that significantly speed up causal inference, both in
the online and offline settings. We evaluate the quality and performance of our
techniques through experiments on real datasets.
| no_new_dataset | 0.945601 |
1609.03666 | S\'ebastien Arnold | S\'ebastien Arnold | A Greedy Algorithm to Cluster Specialists | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several recent deep neural networks experiments leverage the
generalist-specialist paradigm for classification. However, no formal study
compared the performance of different clustering algorithms for class
assignment. In this paper we perform such a study, suggest slight modifications
to the clustering procedures, and propose a novel algorithm designed to
optimize the performance of the specialist-generalist classification system.
Our experiments on the CIFAR-10 and CIFAR-100 datasets allow us to investigate
situations for varying number of classes on similar data. We find that our
\emph{greedy pairs} clustering algorithm consistently outperforms other
alternatives, while the choice of the confusion matrix has little impact on the
final performance.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 03:26:42 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Arnold",
"Sébastien",
""
]
] | TITLE: A Greedy Algorithm to Cluster Specialists
ABSTRACT: Several recent deep neural networks experiments leverage the
generalist-specialist paradigm for classification. However, no formal study
compared the performance of different clustering algorithms for class
assignment. In this paper we perform such a study, suggest slight modifications
to the clustering procedures, and propose a novel algorithm designed to
optimize the performance of the specialist-generalist classification system.
Our experiments on the CIFAR-10 and CIFAR-100 datasets allow us to investigate
situations for varying number of classes on similar data. We find that our
\emph{greedy pairs} clustering algorithm consistently outperforms other
alternatives, while the choice of the confusion matrix has little impact on the
final performance.
| no_new_dataset | 0.95253 |
1609.03795 | Domen Tabernik | Domen Tabernik, Matej Kristan, Jeremy L. Wyatt, and Ale\v{s} Leonardis | Towards Deep Compositional Networks | Published in proceedings of 23th International Conference on Pattern
Recognition (ICPR 2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical feature learning based on convolutional neural networks (CNN)
has recently shown significant potential in various computer vision tasks.
While allowing high-quality discriminative feature learning, the downside of
CNNs is the lack of explicit structure in features, which often leads to
overfitting, absence of reconstruction from partial observations and limited
generative abilities. Explicit structure is inherent in hierarchical
compositional models, however, these lack the ability to optimize a
well-defined cost function. We propose a novel analytic model of a basic unit
in a layered hierarchical model with both explicit compositional structure and
a well-defined discriminative cost function. Our experiments on two datasets
show that the proposed compositional model performs on a par with standard CNNs
on discriminative tasks, while, due to explicit modeling of the structure in
the feature units, affording a straightforward visualization of parts and
faster inference due to separability of the units.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 12:31:29 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Tabernik",
"Domen",
""
],
[
"Kristan",
"Matej",
""
],
[
"Wyatt",
"Jeremy L.",
""
],
[
"Leonardis",
"Aleš",
""
]
] | TITLE: Towards Deep Compositional Networks
ABSTRACT: Hierarchical feature learning based on convolutional neural networks (CNN)
has recently shown significant potential in various computer vision tasks.
While allowing high-quality discriminative feature learning, the downside of
CNNs is the lack of explicit structure in features, which often leads to
overfitting, absence of reconstruction from partial observations and limited
generative abilities. Explicit structure is inherent in hierarchical
compositional models; however, these lack the ability to optimize a
well-defined cost function. We propose a novel analytic model of a basic unit
in a layered hierarchical model with both explicit compositional structure and
a well-defined discriminative cost function. Our experiments on two datasets
show that the proposed compositional model performs on a par with standard CNNs
on discriminative tasks, while, due to explicit modeling of the structure in
the feature units, affording a straightforward visualization of parts and
faster inference due to separability of the units.
| no_new_dataset | 0.949342 |
1609.03894 | Francisco Massa | Francisco Massa, Renaud Marlet, Mathieu Aubry | Crafting a multi-task CNN for viewpoint estimation | To appear in BMVC 2016 | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNNs) were recently shown to provide
state-of-the-art results for object category viewpoint estimation. However
different ways of formulating this problem have been proposed and the competing
approaches have been explored with very different design choices. This paper
presents a comparison of these approaches in a unified setting as well as a
detailed analysis of the key factors that impact performance. Following this, we
present a new joint training method with the detection task and demonstrate its
benefit. We also highlight the superiority of classification approaches over
regression approaches, quantify the benefits of deeper architectures and
extended training data, and demonstrate that synthetic data is beneficial even
when using ImageNet training data. By combining all these elements, we
demonstrate an improvement of approximately 5% mAVP over previous
state-of-the-art results on the Pascal3D+ dataset. In particular for their most
challenging 24 view classification task we improve the results from 31.1% to
36.1% mAVP.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 15:19:38 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Massa",
"Francisco",
""
],
[
"Marlet",
"Renaud",
""
],
[
"Aubry",
"Mathieu",
""
]
] | TITLE: Crafting a multi-task CNN for viewpoint estimation
ABSTRACT: Convolutional Neural Networks (CNNs) were recently shown to provide
state-of-the-art results for object category viewpoint estimation. However
different ways of formulating this problem have been proposed and the competing
approaches have been explored with very different design choices. This paper
presents a comparison of these approaches in a unified setting as well as a
detailed analysis of the key factors that impact performance. Following this, we
present a new joint training method with the detection task and demonstrate its
benefit. We also highlight the superiority of classification approaches over
regression approaches, quantify the benefits of deeper architectures and
extended training data, and demonstrate that synthetic data is beneficial even
when using ImageNet training data. By combining all these elements, we
demonstrate an improvement of approximately 5% mAVP over previous
state-of-the-art results on the Pascal3D+ dataset. In particular for their most
challenging 24 view classification task we improve the results from 31.1% to
36.1% mAVP.
| no_new_dataset | 0.947039 |
1609.03976 | Ozan \c{C}a\u{g}layan | Ozan Caglayan, Lo\"ic Barrault, Fethi Bougares | Multimodal Attention for Neural Machine Translation | 10 pages, under review COLING 2016 | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The attention mechanism is an important part of the neural machine
translation (NMT) where it was reported to produce richer source representation
compared to fixed-length encoding sequence-to-sequence models. Recently, the
effectiveness of attention has also been explored in the context of image
captioning. In this work, we assess the feasibility of a multimodal attention
mechanism that simultaneously focuses on an image and its natural language
description for generating a description in another language. We train several
variants of our proposed attention mechanism on the Multi30k multilingual image
captioning dataset. We show that a dedicated attention for each modality
achieves up to 1.6 points in BLEU and METEOR compared to a textual NMT
baseline.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 18:46:03 GMT"
}
] | 2016-09-14T00:00:00 | [
[
"Caglayan",
"Ozan",
""
],
[
"Barrault",
"Loïc",
""
],
[
"Bougares",
"Fethi",
""
]
] | TITLE: Multimodal Attention for Neural Machine Translation
ABSTRACT: The attention mechanism is an important part of the neural machine
translation (NMT) where it was reported to produce richer source representation
compared to fixed-length encoding sequence-to-sequence models. Recently, the
effectiveness of attention has also been explored in the context of image
captioning. In this work, we assess the feasibility of a multimodal attention
mechanism that simultaneously focuses on an image and its natural language
description for generating a description in another language. We train several
variants of our proposed attention mechanism on the Multi30k multilingual image
captioning dataset. We show that a dedicated attention for each modality
achieves up to 1.6 points in BLEU and METEOR compared to a textual NMT
baseline.
| no_new_dataset | 0.950319 |
1504.04785 | Mahdi Boloursaz Mashhadi | Mahdi Boloursaz Mashhadi, Ehsan Asadi, Mohsen Eskandari, Shahrzad
Kiani, and Farrokh Marvasti | Heart Rate Tracking using Wrist-Type Photoplethysmographic (PPG) Signals
during Physical Exercise with Simultaneous Accelerometry | Accepted for publication in IEEE Signal Processing Letters | null | 10.1109/LSP.2015.2509868 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of casual heart rate tracking during
intensive physical exercise using simultaneous 2 channel photoplethysmographic
(PPG) and 3 dimensional (3D) acceleration signals recorded from wrist. This is
a challenging problem because the PPG signals recorded from wrist during
exercise are contaminated by strong Motion Artifacts (MAs). In this work, a
novel algorithm is proposed which consists of two main steps of MA Cancellation
and Spectral Analysis. The MA cancellation step cleanses the MA-contaminated
PPG signals utilizing the acceleration data and the spectral analysis step
estimates a higher resolution spectrum of the signal and selects the spectral
peaks corresponding to HR. Experimental results on datasets recorded from 12
subjects during fast running at the peak speed of 15 km/hour showed that the
proposed algorithm achieves an average absolute error of 1.25 beat per minute
(BPM). These experimental results also confirm that the proposed algorithm
keeps high estimation accuracies even in strong MA conditions.
| [
{
"version": "v1",
"created": "Sun, 19 Apr 2015 03:29:15 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2016 19:28:32 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Mashhadi",
"Mahdi Boloursaz",
""
],
[
"Asadi",
"Ehsan",
""
],
[
"Eskandari",
"Mohsen",
""
],
[
"Kiani",
"Shahrzad",
""
],
[
"Marvasti",
"Farrokh",
""
]
] | TITLE: Heart Rate Tracking using Wrist-Type Photoplethysmographic (PPG) Signals
during Physical Exercise with Simultaneous Accelerometry
ABSTRACT: This paper considers the problem of casual heart rate tracking during
intensive physical exercise using simultaneous 2-channel photoplethysmographic
(PPG) and 3-dimensional (3D) acceleration signals recorded from the wrist. This is
a challenging problem because the PPG signals recorded from the wrist during
exercise are contaminated by strong Motion Artifacts (MAs). In this work, a
novel algorithm is proposed which consists of two main steps of MA Cancellation
and Spectral Analysis. The MA cancellation step cleanses the MA-contaminated
PPG signals utilizing the acceleration data and the spectral analysis step
estimates a higher resolution spectrum of the signal and selects the spectral
peaks corresponding to HR. Experimental results on datasets recorded from 12
subjects during fast running at the peak speed of 15 km/hour showed that the
proposed algorithm achieves an average absolute error of 1.25 beats per minute
(BPM). These experimental results also confirm that the proposed algorithm
keeps high estimation accuracies even in strong MA conditions.
| no_new_dataset | 0.939692 |
1604.06486 | Saeed Reza Kheradpisheh | Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh,
Timoth\'ee Masquelier | Humans and deep networks largely agree on which kinds of variation make
object recognition harder | null | Frontiers in Computational Neuroscience (2016) 10:92 | 10.3389/fncom.2016.00092 | null | cs.CV q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | View-invariant object recognition is a challenging problem, which has
attracted much attention among the psychology, neuroscience, and computer
vision communities. Humans are notoriously good at it, even if some variations
are presumably more difficult to handle than others (e.g. 3D rotations). Humans
are thought to solve the problem through hierarchical processing along the
ventral stream, which progressively extracts more and more invariant visual
features. This feed-forward architecture has inspired a new generation of
bio-inspired computer vision systems called deep convolutional neural networks
(DCNN), which are currently the best algorithms for object recognition in
natural images. Here, for the first time, we systematically compared human
feed-forward vision and DCNNs at view-invariant object recognition using the
same images and controlling for both the kinds of transformation as well as
their magnitude. We used four object categories and images were rendered from
3D computer models. In total, 89 human subjects participated in 10 experiments
in which they had to discriminate between two or four categories after rapid
presentation with backward masking. We also tested two recent DCNNs on the same
tasks. We found that humans and DCNNs largely agreed on the relative
difficulties of each kind of variation: rotation in depth is by far the hardest
transformation to handle, followed by scale, then rotation in plane, and
finally position. This suggests that humans recognize objects mainly through 2D
template matching, rather than by constructing 3D object models, and that DCNNs
are not too unreasonable models of human feed-forward vision. Also, our results
show that the variation levels in rotation in depth and scale strongly modulate
both humans' and DCNNs' recognition performances. We thus argue that these
variations should be controlled in the image datasets used in vision research.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 20:53:00 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Kheradpisheh",
"Saeed Reza",
""
],
[
"Ghodrati",
"Masoud",
""
],
[
"Ganjtabesh",
"Mohammad",
""
],
[
"Masquelier",
"Timothée",
""
]
] | TITLE: Humans and deep networks largely agree on which kinds of variation make
object recognition harder
ABSTRACT: View-invariant object recognition is a challenging problem, which has
attracted much attention among the psychology, neuroscience, and computer
vision communities. Humans are notoriously good at it, even if some variations
are presumably more difficult to handle than others (e.g. 3D rotations). Humans
are thought to solve the problem through hierarchical processing along the
ventral stream, which progressively extracts more and more invariant visual
features. This feed-forward architecture has inspired a new generation of
bio-inspired computer vision systems called deep convolutional neural networks
(DCNN), which are currently the best algorithms for object recognition in
natural images. Here, for the first time, we systematically compared human
feed-forward vision and DCNNs at view-invariant object recognition using the
same images and controlling for both the kinds of transformation as well as
their magnitude. We used four object categories and images were rendered from
3D computer models. In total, 89 human subjects participated in 10 experiments
in which they had to discriminate between two or four categories after rapid
presentation with backward masking. We also tested two recent DCNNs on the same
tasks. We found that humans and DCNNs largely agreed on the relative
difficulties of each kind of variation: rotation in depth is by far the hardest
transformation to handle, followed by scale, then rotation in plane, and
finally position. This suggests that humans recognize objects mainly through 2D
template matching, rather than by constructing 3D object models, and that DCNNs
are not too unreasonable models of human feed-forward vision. Also, our results
show that the variation levels in rotation in depth and scale strongly modulate
both humans' and DCNNs' recognition performances. We thus argue that these
variations should be controlled in the image datasets used in vision research.
| no_new_dataset | 0.9462 |
1606.03152 | Mehdi Fatemi | Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, Kaheer Suleman | Policy Networks with Two-Stage Training for Dialogue Systems | SIGDial 2016 (Submitted: May 2016; Accepted: Jun 30, 2016) | Proceedings of the SIGDIAL 2016 Conference, pages 101--110, Los
Angeles, USA, 13-15 September 2016. Association for Computational Linguistics | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 01:02:19 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2016 16:20:18 GMT"
},
{
"version": "v3",
"created": "Sat, 20 Aug 2016 21:20:21 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Sep 2016 16:23:42 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Fatemi",
"Mehdi",
""
],
[
"Asri",
"Layla El",
""
],
[
"Schulz",
"Hannes",
""
],
[
"He",
"Jing",
""
],
[
"Suleman",
"Kaheer",
""
]
] | TITLE: Policy Networks with Two-Stage Training for Dialogue Systems
ABSTRACT: In this paper, we propose to use deep policy networks which are trained with
an advantage actor-critic method for statistically optimised dialogue systems.
First, we show that, on summary state and action spaces, deep Reinforcement
Learning (RL) outperforms Gaussian Processes methods. Summary state and action
spaces lead to good performance but require pre-engineering effort, RL
knowledge, and domain expertise. In order to remove the need to define such
summary spaces, we show that deep RL can also be trained efficiently on the
original state and action spaces. Dialogue systems based on partially
observable Markov decision processes are known to require many dialogues to
train, which makes them unappealing for practical deployment. We show that a
deep RL method based on an actor-critic architecture can exploit a small amount
of data very efficiently. Indeed, with only a few hundred dialogues collected
with a handcrafted policy, the actor-critic deep learner is considerably
bootstrapped from a combination of supervised and batch RL. In addition,
convergence to an optimal policy is significantly sped up compared to other
deep RL methods initialized on the data with batch RL. All experiments are
performed on a restaurant domain derived from the Dialogue State Tracking
Challenge 2 (DSTC2) dataset.
| no_new_dataset | 0.943348 |
1607.01845 | Agustin Indaco | Agustin Indaco and Lev Manovich | Urban Social Media Inequality: Definition, Measurements, and Application | 53 pages, 11 figures, 3 tables | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media content shared today in cities, such as Instagram images, their
tags and descriptions, is the key form of contemporary city life. It tells
people where activities and locations that interest them are and it allows them
to share their urban experiences and self-representations. Therefore, any
analysis of urban structures and cultures needs to consider social media
activity. In our paper, we introduce the novel concept of social media
inequality. This concept allows us to quantitatively compare patterns in social
media activities between parts of a city, a number of cities, or any other
spatial areas. We define this concept using an analogy with the concept of
economic inequality. Economic inequality indicates how some economic
characteristics or material resources, such as income, wealth or consumption
are distributed in a city, country or between countries. Accordingly, we can
define social media inequality as the measure of the distribution of
characteristics from social media content shared in a particular geographic
area or between areas. An example of such characteristics is the number of
photos shared by all users of a social network such as Instagram in a given
city or city area, or the content of these photos. We propose that the standard
inequality measures used in other disciplines, such as the Gini coefficient,
can also be used to characterize social media inequality. To test our ideas, we
use a dataset of 7,442,454 public geo-coded Instagram images shared in
Manhattan during five months (March-July) in 2014, and also selected data for
287 Census tracts in Manhattan. We compare patterns in Instagram sharing for
locals and for visitors for all tracts, and also for hours in a 24-hour cycle.
We also look at relations between social media inequality and socio-economic
inequality using selected indicators for Census tracts.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2016 00:43:25 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Sep 2016 20:45:35 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Indaco",
"Agustin",
""
],
[
"Manovich",
"Lev",
""
]
] | TITLE: Urban Social Media Inequality: Definition, Measurements, and Application
ABSTRACT: Social media content shared today in cities, such as Instagram images, their
tags and descriptions, is the key form of contemporary city life. It tells
people where activities and locations that interest them are and it allows them
to share their urban experiences and self-representations. Therefore, any
analysis of urban structures and cultures needs to consider social media
activity. In our paper, we introduce the novel concept of social media
inequality. This concept allows us to quantitatively compare patterns in social
media activities between parts of a city, a number of cities, or any other
spatial areas. We define this concept using an analogy with the concept of
economic inequality. Economic inequality indicates how some economic
characteristics or material resources, such as income, wealth or consumption
are distributed in a city, country or between countries. Accordingly, we can
define social media inequality as the measure of the distribution of
characteristics from social media content shared in a particular geographic
area or between areas. An example of such characteristics is the number of
photos shared by all users of a social network such as Instagram in a given
city or city area, or the content of these photos. We propose that the standard
inequality measures used in other disciplines, such as the Gini coefficient,
can also be used to characterize social media inequality. To test our ideas, we
use a dataset of 7,442,454 public geo-coded Instagram images shared in
Manhattan during five months (March-July) in 2014, and also selected data for
287 Census tracts in Manhattan. We compare patterns in Instagram sharing for
locals and for visitors for all tracts, and also for hours in a 24-hour cycle.
We also look at relations between social media inequality and socio-economic
inequality using selected indicators for Census tracts.
| no_new_dataset | 0.730819 |
1608.01769 | Abhimanyu Dubey | Abhimanyu Dubey, Nikhil Naik, Devi Parikh, Ramesh Raskar and C\'esar
A. Hidalgo | Deep Learning the City : Quantifying Urban Perception At A Global Scale | 23 pages, 8 figures. Accepted to the European Conference on Computer
Vision (ECCV), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer vision methods that quantify the perception of urban environment are
increasingly being used to study the relationship between a city's physical
appearance and the behavior and health of its residents. Yet, the throughput of
current methods is too limited to quantify the perception of cities across the
world. To tackle this challenge, we introduce a new crowdsourced dataset
containing 110,988 images from 56 cities, and 1,170,000 pairwise comparisons
provided by 81,630 online volunteers along six perceptual attributes: safe,
lively, boring, wealthy, depressing, and beautiful. Using this data, we train a
Siamese-like convolutional neural architecture, which learns from a joint
classification and ranking loss, to predict human judgments of pairwise image
comparisons. Our results show that crowdsourcing combined with neural networks
can produce urban perception data at the global scale.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 05:58:35 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2016 18:48:37 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Dubey",
"Abhimanyu",
""
],
[
"Naik",
"Nikhil",
""
],
[
"Parikh",
"Devi",
""
],
[
"Raskar",
"Ramesh",
""
],
[
"Hidalgo",
"César A.",
""
]
] | TITLE: Deep Learning the City : Quantifying Urban Perception At A Global Scale
ABSTRACT: Computer vision methods that quantify the perception of urban environment are
increasingly being used to study the relationship between a city's physical
appearance and the behavior and health of its residents. Yet, the throughput of
current methods is too limited to quantify the perception of cities across the
world. To tackle this challenge, we introduce a new crowdsourced dataset
containing 110,988 images from 56 cities, and 1,170,000 pairwise comparisons
provided by 81,630 online volunteers along six perceptual attributes: safe,
lively, boring, wealthy, depressing, and beautiful. Using this data, we train a
Siamese-like convolutional neural architecture, which learns from a joint
classification and ranking loss, to predict human judgments of pairwise image
comparisons. Our results show that crowdsourcing combined with neural networks
can produce urban perception data at the global scale.
| new_dataset | 0.963231 |
1609.02947 | Nhien-An Le-Khac | Andree Linke, Nhien-An Le-Khac | Control Flow Change in Assembly as a Classifier in Malware Analysis | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As currently classical malware detection methods based on signatures fail to
detect new malware, they are not always efficient with new obfuscation
techniques. Besides, new malware is easily created and old malware can be
recoded to produce new ones. Therefore, classical Antivirus becomes consistently
less effective in dealing with these new threats. Also, malware gets
hand-tailored to bypass network security and Antivirus. But as analysts do not
have enough time to dissect suspected malware by hand, automated approaches have
been developed. To cope with the mass of new malware, statistical and machine
learning methods have proved to be a good approach to classifying programs,
especially when using multiple approaches together to provide a likelihood of
software being malicious. In the usual approach, some steps have been taken,
mostly by analyzing the opcodes or mnemonics of the disassembly and their
distribution. In this paper, we focus on the control flow change (CFC) itself
and on finding out whether it is significant for detecting malware. In the scope
of this work, only relative control flow changes are considered, as these are
easier to extract from the first chosen disassembler library and are within a
range of 256 addresses. These features are analyzed as a raw feature, as n-grams
of length 2, 4 and 6, and as the even more abstract feature of n-gram
occurrences. Statistical methods as well as the Naive-Bayes algorithm were used
to find out whether there is significant information in CFCs. We also test our
approach with real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 21:21:14 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Linke",
"Andree",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] | TITLE: Control Flow Change in Assembly as a Classifier in Malware Analysis
ABSTRACT: As currently classical malware detection methods based on signatures fail to
detect new malware, they are not always efficient with new obfuscation
techniques. Besides, new malware is easily created and old malware can be
recoded to produce new ones. Therefore, classical Antivirus becomes consistently
less effective in dealing with these new threats. Also, malware gets
hand-tailored to bypass network security and Antivirus. But as analysts do not
have enough time to dissect suspected malware by hand, automated approaches have
been developed. To cope with the mass of new malware, statistical and machine
learning methods have proved to be a good approach to classifying programs,
especially when using multiple approaches together to provide a likelihood of
software being malicious. In the usual approach, some steps have been taken,
mostly by analyzing the opcodes or mnemonics of the disassembly and their
distribution. In this paper, we focus on the control flow change (CFC) itself
and on finding out whether it is significant for detecting malware. In the scope
of this work, only relative control flow changes are considered, as these are
easier to extract from the first chosen disassembler library and are within a
range of 256 addresses. These features are analyzed as a raw feature, as n-grams
of length 2, 4 and 6, and as the even more abstract feature of n-gram
occurrences. Statistical methods as well as the Naive-Bayes algorithm were used
to find out whether there is significant information in CFCs. We also test our
approach with real-world datasets.
| no_new_dataset | 0.943086 |
1609.02948 | Ruichi Yu | Ruichi Yu, Xi Chen, Vlad I. Morariu, Larry S. Davis | The Role of Context Selection in Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the reasons why context in object detection has limited
utility by isolating and evaluating the predictive power of different context
cues under ideal conditions in which context is provided by an oracle. Based on
this study, we propose a region-based context re-scoring method with dynamic
context selection to remove noise and emphasize informative context. We
introduce latent indicator variables to select (or ignore) potential contextual
regions, and learn the selection strategy with latent-SVM. We conduct
experiments to evaluate the performance of the proposed context selection
method on the SUN RGB-D dataset. The method achieves a significant improvement
in terms of mean average precision (mAP), compared with both appearance based
detectors and a conventional context model without the selection scheme.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 21:30:14 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Yu",
"Ruichi",
""
],
[
"Chen",
"Xi",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: The Role of Context Selection in Object Detection
ABSTRACT: We investigate the reasons why context in object detection has limited
utility by isolating and evaluating the predictive power of different context
cues under ideal conditions in which context is provided by an oracle. Based on
this study, we propose a region-based context re-scoring method with dynamic
context selection to remove noise and emphasize informative context. We
introduce latent indicator variables to select (or ignore) potential contextual
regions, and learn the selection strategy with latent-SVM. We conduct
experiments to evaluate the performance of the proposed context selection
method on the SUN RGB-D dataset. The method achieves a significant improvement
in terms of mean average precision (mAP), compared with both appearance based
detectors and a conventional context model without the selection scheme.
| no_new_dataset | 0.954393 |
1609.03020 | Daniele Sgandurra | Daniele Sgandurra, Luis Mu\~noz-Gonz\'alez, Rabih Mohsen, Emil C. Lupu | Automated Dynamic Analysis of Ransomware: Benefits, Limitations and use
for Detection | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent statistics show that in 2015 more than 140 million new malware
samples have been found. Among these, a large portion is due to ransomware, the
class of malware whose specific goal is to render the victim's system unusable,
in particular by encrypting important files, and then ask the user to pay a
ransom to revert the damage. Several ransomware include sophisticated packing
techniques, and are hence difficult to statically analyse. We present EldeRan,
a machine learning approach for dynamically analysing and classifying
ransomware. EldeRan monitors a set of actions performed by applications in
their first phases of installation, checking for characteristic signs of
ransomware. Our tests over a dataset of 582 ransomware samples belonging to 11
families, and with 942 goodware applications, show that EldeRan achieves an
area under the ROC curve of 0.995. Furthermore, EldeRan works without requiring
that an entire ransomware family is available beforehand. These results suggest
that dynamic analysis can support ransomware detection, since ransomware
samples exhibit a set of characteristic features at run-time that are common
across families, and that helps the early detection of new variants. We also
outline some limitations of dynamic analysis for ransomware and propose
possible solutions.
| [
{
"version": "v1",
"created": "Sat, 10 Sep 2016 09:49:36 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Sgandurra",
"Daniele",
""
],
[
"Muñoz-González",
"Luis",
""
],
[
"Mohsen",
"Rabih",
""
],
[
"Lupu",
"Emil C.",
""
]
] | TITLE: Automated Dynamic Analysis of Ransomware: Benefits, Limitations and use
for Detection
ABSTRACT: Recent statistics show that in 2015 more than 140 million new malware
samples have been found. Among these, a large portion is due to ransomware, the
class of malware whose specific goal is to render the victim's system unusable,
in particular by encrypting important files, and then ask the user to pay a
ransom to revert the damage. Several ransomware include sophisticated packing
techniques, and are hence difficult to statically analyse. We present EldeRan,
a machine learning approach for dynamically analysing and classifying
ransomware. EldeRan monitors a set of actions performed by applications in
their first phases of installation, checking for characteristic signs of
ransomware. Our tests over a dataset of 582 ransomware samples belonging to 11
families, and with 942 goodware applications, show that EldeRan achieves an
area under the ROC curve of 0.995. Furthermore, EldeRan works without requiring
that an entire ransomware family is available beforehand. These results suggest
that dynamic analysis can support ransomware detection, since ransomware
samples exhibit a set of characteristic features at run-time that are common
across families, and that helps the early detection of new variants. We also
outline some limitations of dynamic analysis for ransomware and propose
possible solutions.
| no_new_dataset | 0.928539 |
1609.03146 | Mathieu Guillame-Bert | Mathieu Guillame-Bert | Honey: A dataflow programming language for the processing, featurization
and analysis of multivariate, asynchronous and non-uniformly sampled scalar
symbolic time sequences | The source code of four presented tasks are available on the Honey
website | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce HONEY; a new specialized programming language designed to
facilitate the processing of multivariate, asynchronous and non-uniformly
sampled symbolic and scalar time sequences. When compiled, a Honey program is
transformed into a static process flow diagram, which is then executed by a
virtual machine. Honey's most notable features are: (1) Honey introduces a new,
efficient and less error-prone paradigm for defining recursive process flow
diagrams from text input with the mindset of imperative programming. Honey's
specialized, high level and concise syntax allows fast and easy writing,
reading and maintenance of complex processing of large scalar symbolic time
sequence datasets. (2) Honey guarantees programs will be executed similarly on
static or real-time streaming datasets. (3) Honey's IDE includes an interactive
visualization tool which allows for an interactive exploration of the
intermediate and final outputs. This combination enables fast incremental
prototyping, debugging, monitoring and maintenance of complex programs. (4) In
case of large datasets (larger than the available memory), Honey programs can
be executed to process input greedily. (5) The graphical structure of a
compiled program provides several desirable properties, including distributed
and/or paralleled execution, memory optimization, and program structure
visualization. (6) Honey contains a large library of both common and novel
operators developed through various research projects. An open source C++
implementation of Honey as well as the Honey IDE and the interactive data
visualizer are publicly available.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2016 10:18:29 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Guillame-Bert",
"Mathieu",
""
]
] | TITLE: Honey: A dataflow programming language for the processing, featurization
and analysis of multivariate, asynchronous and non-uniformly sampled scalar
symbolic time sequences
ABSTRACT: We introduce HONEY; a new specialized programming language designed to
facilitate the processing of multivariate, asynchronous and non-uniformly
sampled symbolic and scalar time sequences. When compiled, a Honey program is
transformed into a static process flow diagram, which is then executed by a
virtual machine. Honey's most notable features are: (1) Honey introduces a new,
efficient and non-prone to error paradigm for defining recursive process flow
diagrams from text input with the mindset of imperative programming. Honey's
specialized, high level and concise syntax allows fast and easy writing,
reading and maintenance of complex processing of large scalar symbolic time
sequence datasets. (2) Honey guarantees programs will be executed similarly on
static or real-time streaming datasets. (3) Honey's IDE includes an interactive
visualization tool which allows for an interactive exploration of the
intermediate and final outputs. This combination enables fast incremental
prototyping, debugging, monitoring and maintenance of complex programs. (4) In
case of large datasets (larger than the available memory), Honey programs can
be executed to process input greedily. (5) The graphical structure of a
compiled program provides several desirable properties, including distributed
and/or parallelized execution, memory optimization, and program structure
visualization. (6) Honey contains a large library of both common and novel
operators developed through various research projects. An open source C++
implementation of Honey as well as the Honey IDE and the interactive data
visualizer are publicly available.
| no_new_dataset | 0.940898 |
1609.03205 | Ella Rabinovich | Ella Rabinovich and Shuly Wintner | Unsupervised Identification of Translationese | TACL2015, 14 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Translated texts are distinctively different from original ones, to the
extent that supervised text classification methods can distinguish between them
with high accuracy. These differences were proven useful for statistical
machine translation. However, it has been suggested that the accuracy of
translation detection deteriorates when the classifier is evaluated outside the
domain it was trained on. We show that this is indeed the case, in a variety of
evaluation scenarios. We then show that unsupervised classification is highly
accurate on this task. We suggest a method for determining the correct labels
of the clustering outcomes, and then use the labels for voting, improving the
accuracy even further. Moreover, we suggest a simple method for clustering in
the challenging case of mixed-domain datasets, in spite of the dominance of
domain-related features over translation-related ones. The result is an
effective, fully-unsupervised method for distinguishing between original and
translated texts that can be applied to new domains with reasonable accuracy.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2016 19:52:28 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Rabinovich",
"Ella",
""
],
[
"Wintner",
"Shuly",
""
]
] | TITLE: Unsupervised Identification of Translationese
ABSTRACT: Translated texts are distinctively different from original ones, to the
extent that supervised text classification methods can distinguish between them
with high accuracy. These differences were proven useful for statistical
machine translation. However, it has been suggested that the accuracy of
translation detection deteriorates when the classifier is evaluated outside the
domain it was trained on. We show that this is indeed the case, in a variety of
evaluation scenarios. We then show that unsupervised classification is highly
accurate on this task. We suggest a method for determining the correct labels
of the clustering outcomes, and then use the labels for voting, improving the
accuracy even further. Moreover, we suggest a simple method for clustering in
the challenging case of mixed-domain datasets, in spite of the dominance of
domain-related features over translation-related ones. The result is an
effective, fully-unsupervised method for distinguishing between original and
translated texts that can be applied to new domains with reasonable accuracy.
| no_new_dataset | 0.949342 |
1609.03224 | Ali Fatih Demir | A. Fatih Demir, Huseyin Arslan, Ismail Uysal | Bio-Inspired Filter Banks for SSVEP-based Brain-Computer Interfaces | 2016 IEEE International Conference on Biomedical and Health
Informatics (BHI) | 2016 IEEE-EMBS International Conference on Biomedical and Health
Informatics (BHI), Feb. 2016, pp. 144-147 | 10.1109/BHI.2016.7455855 | null | cs.HC q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain-computer interfaces (BCI) have the potential to play a vital role in
future healthcare technologies by providing an alternative way of communication
and control. More specifically, steady-state visual evoked potential (SSVEP)
based BCIs have the advantage of higher accuracy and higher information
transfer rate (ITR). In order to fully exploit the capabilities of such
devices, it is necessary to understand the features of SSVEP and design the
system considering its biological characteristics. This paper introduces
bio-inspired filter banks (BIFB) for a novel SSVEP frequency detection method.
It is known that SSVEP response to a flickering visual stimulus is frequency
selective and gets weaker as the frequency of the stimuli increases. In the
proposed approach, the gain and bandwidth of the filters are designed and tuned
based on these characteristics while also incorporating harmonic SSVEP
responses. This method not only improves the accuracy but also increases the
available number of commands by allowing the use of stimuli frequencies that elicit
weak SSVEP responses. The BIFB method achieved reliable performance when tested
on datasets available online and compared with two well-known SSVEP frequency
detection methods, power spectral density analysis (PSDA) and canonical
correlation analysis (CCA). The results show the potential of bio-inspired
design which will be extended to include further SSVEP characteristics (e.g.
time-domain waveform) for future SSVEP based BCIs.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2016 22:15:12 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Demir",
"A. Fatih",
""
],
[
"Arslan",
"Huseyin",
""
],
[
"Uysal",
"Ismail",
""
]
] | TITLE: Bio-Inspired Filter Banks for SSVEP-based Brain-Computer Interfaces
ABSTRACT: Brain-computer interfaces (BCI) have the potential to play a vital role in
future healthcare technologies by providing an alternative way of communication
and control. More specifically, steady-state visual evoked potential (SSVEP)
based BCIs have the advantage of higher accuracy and higher information
transfer rate (ITR). In order to fully exploit the capabilities of such
devices, it is necessary to understand the features of SSVEP and design the
system considering its biological characteristics. This paper introduces
bio-inspired filter banks (BIFB) for a novel SSVEP frequency detection method.
It is known that SSVEP response to a flickering visual stimulus is frequency
selective and gets weaker as the frequency of the stimuli increases. In the
proposed approach, the gain and bandwidth of the filters are designed and tuned
based on these characteristics while also incorporating harmonic SSVEP
responses. This method not only improves the accuracy but also increases the
available number of commands by allowing the use of stimuli frequencies that elicit
weak SSVEP responses. The BIFB method achieved reliable performance when tested
on datasets available online and compared with two well-known SSVEP frequency
detection methods, power spectral density analysis (PSDA) and canonical
correlation analysis (CCA). The results show the potential of bio-inspired
design which will be extended to include further SSVEP characteristics (e.g.
time-domain waveform) for future SSVEP based BCIs.
| no_new_dataset | 0.949902 |
1609.03277 | Mahamad Suhil | Sumithra R, Mahamad Suhil, D.S. Guru | Segmentation and Classification of Skin Lesions for Disease Diagnosis | 10 pages, 6 figures, 2 Tables in Elsevier, Proceedia Computer
Science, International Conference on Advanced Computing Technologies and
Applications (ICACTA-2015) | null | 10.1016/j.procs.2015.03.090 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel approach for automatic segmentation and classification
of skin lesions is proposed. Initially, skin images are filtered to remove
unwanted hairs and noise and then the segmentation process is carried out to
extract lesion areas. For segmentation, a region growing method is applied by
automatic initialization of seed points. The segmentation performance is
measured with different well known measures and the results are appreciable.
Subsequently, the extracted lesion areas are represented by color and texture
features. SVM and k-NN classifiers are used along with their fusion for the
classification using the extracted features. The performance of the system is
tested on our own dataset of 726 samples from 141 images consisting of 5
different classes of diseases. The results are very promising with 46.71% and
34% of F-measure using SVM and k-NN classifier respectively and with 61% of
F-measure for fusion of SVM and k-NN.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 06:05:55 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"R",
"Sumithra",
""
],
[
"Suhil",
"Mahamad",
""
],
[
"Guru",
"D. S.",
""
]
] | TITLE: Segmentation and Classification of Skin Lesions for Disease Diagnosis
ABSTRACT: In this paper, a novel approach for automatic segmentation and classification
of skin lesions is proposed. Initially, skin images are filtered to remove
unwanted hairs and noise and then the segmentation process is carried out to
extract lesion areas. For segmentation, a region growing method is applied by
automatic initialization of seed points. The segmentation performance is
measured with different well known measures and the results are appreciable.
Subsequently, the extracted lesion areas are represented by color and texture
features. SVM and k-NN classifiers are used along with their fusion for the
classification using the extracted features. The performance of the system is
tested on our own dataset of 726 samples from 141 images consisting of 5
different classes of diseases. The results are very promising with 46.71% and
34% of F-measure using SVM and k-NN classifier respectively and with 61% of
F-measure for fusion of SVM and k-NN.
| new_dataset | 0.960805 |
1609.03536 | Zhenheng Yang | Zhenheng Yang and Ram Nevatia | A Multi-Scale Cascade Fully Convolutional Network Face Detector | Accepted to ICPR 16' | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face detection is challenging as faces in images could be present at
arbitrary locations and in different scales. We propose a three-stage cascade
structure based on fully convolutional neural networks (FCNs). It first
proposes the approximate locations where the faces may be, then aims to find
the accurate location by zooming on to the faces. Each level of the FCN cascade
is a multi-scale fully-convolutional network, which generates scores at
different locations and in different scales. A score map is generated after
each FCN stage. Probable regions of face are selected and fed to the next
stage. The number of proposals is decreased after each level, and the areas of
regions are decreased to more precisely fit the face. Compared to passing
proposals directly between stages, passing probable regions can decrease the
number of proposals and reduce the cases where the first stage doesn't propose good
bounding boxes. We show that by using FCN and score map, the FCN cascade face
detector can achieve strong performance on public datasets.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 19:13:46 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Yang",
"Zhenheng",
""
],
[
"Nevatia",
"Ram",
""
]
] | TITLE: A Multi-Scale Cascade Fully Convolutional Network Face Detector
ABSTRACT: Face detection is challenging as faces in images could be present at
arbitrary locations and in different scales. We propose a three-stage cascade
structure based on fully convolutional neural networks (FCNs). It first
proposes the approximate locations where the faces may be, then aims to find
the accurate location by zooming on to the faces. Each level of the FCN cascade
is a multi-scale fully-convolutional network, which generates scores at
different locations and in different scales. A score map is generated after
each FCN stage. Probable regions of face are selected and fed to the next
stage. The number of proposals is decreased after each level, and the areas of
regions are decreased to more precisely fit the face. Compared to passing
proposals directly between stages, passing probable regions can decrease the
number of proposals and reduce the cases where the first stage doesn't propose good
bounding boxes. We show that by using FCN and score map, the FCN cascade face
detector can achieve strong performance on public datasets.
| no_new_dataset | 0.957794 |
1609.03544 | Xin Jiang | Xin Jiang, Rebecca Willett | Online Data Thinning via Multi-Subspace Tracking | 32 pages, 10 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an era of ubiquitous large-scale streaming data, the availability of data
far exceeds the capacity of expert human analysts. In many settings, such data
is either discarded or stored unprocessed in datacenters. This paper proposes a
method of online data thinning, in which large-scale streaming datasets are
winnowed to preserve unique, anomalous, or salient elements for timely expert
analysis. At the heart of this proposed approach is an online anomaly detection
method based on dynamic, low-rank Gaussian mixture models. Specifically, the
high-dimensional covariance matrices associated with the Gaussian components
are associated with low-rank models. According to this model, most observations
lie near a union of subspaces. The low-rank modeling mitigates the curse of
dimensionality associated with anomaly detection for high-dimensional data, and
recent advances in subspace clustering and subspace tracking allow the proposed
method to adapt to dynamic environments. Furthermore, the proposed method
allows subsampling, is robust to missing data, and uses a mini-batch online
optimization approach. The resulting algorithms are scalable, efficient, and
are capable of operating in real time. Experiments on wide-area motion imagery
and e-mail databases illustrate the efficacy of the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 19:34:02 GMT"
}
] | 2016-09-13T00:00:00 | [
[
"Jiang",
"Xin",
""
],
[
"Willett",
"Rebecca",
""
]
] | TITLE: Online Data Thinning via Multi-Subspace Tracking
ABSTRACT: In an era of ubiquitous large-scale streaming data, the availability of data
far exceeds the capacity of expert human analysts. In many settings, such data
is either discarded or stored unprocessed in datacenters. This paper proposes a
method of online data thinning, in which large-scale streaming datasets are
winnowed to preserve unique, anomalous, or salient elements for timely expert
analysis. At the heart of this proposed approach is an online anomaly detection
method based on dynamic, low-rank Gaussian mixture models. Specifically, the
high-dimensional covariance matrices associated with the Gaussian components
are associated with low-rank models. According to this model, most observations
lie near a union of subspaces. The low-rank modeling mitigates the curse of
dimensionality associated with anomaly detection for high-dimensional data, and
recent advances in subspace clustering and subspace tracking allow the proposed
method to adapt to dynamic environments. Furthermore, the proposed method
allows subsampling, is robust to missing data, and uses a mini-batch online
optimization approach. The resulting algorithms are scalable, efficient, and
are capable of operating in real time. Experiments on wide-area motion imagery
and e-mail databases illustrate the efficacy of the proposed approach.
| no_new_dataset | 0.949856 |
1604.08504 | Yao Lu | Linqing Liu, Yao Lu, Ye Luo, Renxian Zhang, Laurent Itti and Jianwei
Lu | Detecting "Smart" Spammers On Social Network: A Topic Model Approach | NAACL-HLT 2016, Student Research Workshop | null | 10.18653/v1/N16-2007 | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spammer detection on social network is a challenging problem. The rigid
anti-spam rules have resulted in emergence of "smart" spammers. They resemble
legitimate users who are difficult to identify. In this paper, we present a
novel spammer classification approach based on Latent Dirichlet
Allocation(LDA), a topic model. Our approach extracts both the local and the
global information of topic distribution patterns, which capture the essence of
spamming. Tested on one benchmark dataset and one self-collected dataset, our
proposed method outperforms other state-of-the-art methods in terms of averaged
F1-score.
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 16:36:35 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2016 06:50:36 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Liu",
"Linqing",
""
],
[
"Lu",
"Yao",
""
],
[
"Luo",
"Ye",
""
],
[
"Zhang",
"Renxian",
""
],
[
"Itti",
"Laurent",
""
],
[
"Lu",
"Jianwei",
""
]
] | TITLE: Detecting "Smart" Spammers On Social Network: A Topic Model Approach
ABSTRACT: Spammer detection on social network is a challenging problem. The rigid
anti-spam rules have resulted in emergence of "smart" spammers. They resemble
legitimate users who are difficult to identify. In this paper, we present a
novel spammer classification approach based on Latent Dirichlet
Allocation(LDA), a topic model. Our approach extracts both the local and the
global information of topic distribution patterns, which capture the essence of
spamming. Tested on one benchmark dataset and one self-collected dataset, our
proposed method outperforms other state-of-the-art methods in terms of averaged
F1-score.
| new_dataset | 0.957873 |
1609.02631 | Varvara Kollia | Varvara Kollia, Oguz H. Elibol | Distributed Processing of Biosignal-Database for Emotion Recognition
with Mahout | 4 pages, 5 png figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the use of distributed processing on the problem of
emotion recognition from physiological sensors using a popular machine learning
library on distributed mode. Specifically, we run a random forests classifier
on the biosignal-data, which have been pre-processed to form exclusive groups
in an unsupervised fashion, on a Cloudera cluster using Mahout. The use of
distributed processing significantly reduces the time required for the offline
training of the classifier, enabling processing of large physiological datasets
through many iterations.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 01:13:20 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Kollia",
"Varvara",
""
],
[
"Elibol",
"Oguz H.",
""
]
] | TITLE: Distributed Processing of Biosignal-Database for Emotion Recognition
with Mahout
ABSTRACT: This paper investigates the use of distributed processing on the problem of
emotion recognition from physiological sensors using a popular machine learning
library on distributed mode. Specifically, we run a random forests classifier
on the biosignal-data, which have been pre-processed to form exclusive groups
in an unsupervised fashion, on a Cloudera cluster using Mahout. The use of
distributed processing significantly reduces the time required for the offline
training of the classifier, enabling processing of large physiological datasets
through many iterations.
| no_new_dataset | 0.949902 |
1609.02715 | Amin Fehri | Amin Fehri (CMM), Santiago Velasco-Forero (CMM), Fernand Meyer (CMM) | Automatic Selection of Stochastic Watershed Hierarchies | in European Conference of Signal Processing (EUSIPCO), 2016,
Budapest, Hungary | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The segmentation, seen as the association of a partition with an image, is a
difficult task. It can be decomposed in two steps: at first, a family of
contours associated with a series of nested partitions (or hierarchy) is
created and organized, then pertinent contours are extracted. A coarser
partition is obtained by merging adjacent regions of a finer partition. The
strength of a contour is then measured by the level of the hierarchy for which
its two adjacent regions merge. We present an automatic segmentation strategy
using a wide range of stochastic watershed hierarchies. For a given set of
homogeneous images, our approach selects automatically the best hierarchy and
cut level to perform image simplification given an evaluation score.
Experimental results illustrate the advantages of our approach on several
real-life image datasets.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 09:26:22 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Fehri",
"Amin",
"",
"CMM"
],
[
"Velasco-Forero",
"Santiago",
"",
"CMM"
],
[
"Meyer",
"Fernand",
"",
"CMM"
]
] | TITLE: Automatic Selection of Stochastic Watershed Hierarchies
ABSTRACT: The segmentation, seen as the association of a partition with an image, is a
difficult task. It can be decomposed in two steps: at first, a family of
contours associated with a series of nested partitions (or hierarchy) is
created and organized, then pertinent contours are extracted. A coarser
partition is obtained by merging adjacent regions of a finer partition. The
strength of a contour is then measured by the level of the hierarchy for which
its two adjacent regions merge. We present an automatic segmentation strategy
using a wide range of stochastic watershed hierarchies. For a given set of
homogeneous images, our approach selects automatically the best hierarchy and
cut level to perform image simplification given an evaluation score.
Experimental results illustrate the advantages of our approach on several
real-life image datasets.
| no_new_dataset | 0.954942 |
1609.02727 | Vlad Sandulescu | Vlad Sandulescu, Martin Ester | Detecting Singleton Review Spammers Using Semantic Similarity | 6 pages, WWW 2015 | WWW '15 Companion Proceedings of the 24th International Conference
on World Wide Web, 2015, p.971-976 | 10.1145/2740908.2742570 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reviews have increasingly become a very important resource for
consumers when making purchases. Though it is becoming more and more difficult
for people to make well-informed buying decisions without being deceived by
fake reviews. Prior works on the opinion spam problem mostly considered
classifying fake reviews using behavioral user patterns. They focused on
prolific users who write more than a couple of reviews, discarding one-time
reviewers. The number of singleton reviewers however is expected to be high for
many review websites. While behavioral patterns are effective when dealing with
elite users, for one-time reviewers, the review text needs to be exploited. In
this paper we tackle the problem of detecting fake reviews written by the same
person using multiple names, posting each review under a different name. We
propose two methods to detect similar reviews and show the results generally
outperform the vectorial similarity measures used in prior works. The first
method extends the semantic similarity between words to the reviews level. The
second method is based on topic modeling and exploits the similarity of the
reviews topic distributions using two models: bag-of-words and
bag-of-opinion-phrases. The experiments were conducted on reviews from three
different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset
(800 reviews).
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 09:58:45 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Sandulescu",
"Vlad",
""
],
[
"Ester",
"Martin",
""
]
] | TITLE: Detecting Singleton Review Spammers Using Semantic Similarity
ABSTRACT: Online reviews have increasingly become a very important resource for
consumers when making purchases. Though it is becoming more and more difficult
for people to make well-informed buying decisions without being deceived by
fake reviews. Prior works on the opinion spam problem mostly considered
classifying fake reviews using behavioral user patterns. They focused on
prolific users who write more than a couple of reviews, discarding one-time
reviewers. The number of singleton reviewers however is expected to be high for
many review websites. While behavioral patterns are effective when dealing with
elite users, for one-time reviewers, the review text needs to be exploited. In
this paper we tackle the problem of detecting fake reviews written by the same
person using multiple names, posting each review under a different name. We
propose two methods to detect similar reviews and show the results generally
outperform the vectorial similarity measures used in prior works. The first
method extends the semantic similarity between words to the reviews level. The
second method is based on topic modeling and exploits the similarity of the
reviews topic distributions using two models: bag-of-words and
bag-of-opinion-phrases. The experiments were conducted on reviews from three
different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset
(800 reviews).
| no_new_dataset | 0.949669 |
1609.02745 | Sebastian Ruder | Sebastian Ruder, Parsa Ghaffari, and John G. Breslin | A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis | To be published at EMNLP 2016, 7 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Opinion mining from customer reviews has become pervasive in recent years.
Sentences in reviews, however, are usually classified independently, even
though they form part of a review's argumentative structure. Intuitively,
sentences in a review build and elaborate upon each other; knowledge of the
review structure and sentential context should thus inform the classification
of each sentence. We demonstrate this hypothesis for the task of aspect-based
sentiment analysis by modeling the interdependencies of sentences in a review
with a hierarchical bidirectional LSTM. We show that the hierarchical model
outperforms two non-hierarchical baselines, obtains results competitive with
the state-of-the-art, and outperforms the state-of-the-art on five
multilingual, multi-domain datasets without any hand-engineered features or
external resources.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 11:16:15 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Ruder",
"Sebastian",
""
],
[
"Ghaffari",
"Parsa",
""
],
[
"Breslin",
"John G.",
""
]
] | TITLE: A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis
ABSTRACT: Opinion mining from customer reviews has become pervasive in recent years.
Sentences in reviews, however, are usually classified independently, even
though they form part of a review's argumentative structure. Intuitively,
sentences in a review build and elaborate upon each other; knowledge of the
review structure and sentential context should thus inform the classification
of each sentence. We demonstrate this hypothesis for the task of aspect-based
sentiment analysis by modeling the interdependencies of sentences in a review
with a hierarchical bidirectional LSTM. We show that the hierarchical model
outperforms two non-hierarchical baselines, obtains results competitive with
the state-of-the-art, and outperforms the state-of-the-art on five
multilingual, multi-domain datasets without any hand-engineered features or
external resources.
| no_new_dataset | 0.945951 |
1609.02781 | Gabriel De Barros Paranhos Da Costa | Gabriel B. Paranhos da Costa, Welinton A. Contato, Tiago S. Nazare,
Jo\~ao E. S. Batista Neto, Moacir Ponti | An empirical study on the effects of different types of noise in image
classification tasks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image classification is one of the main research problems in computer vision
and machine learning. Since in most real-world image classification
applications there is no control over how the images are captured, it is
necessary to consider the possibility that these images might be affected by
noise (e.g. sensor noise in a low-quality surveillance camera). In this paper
we analyse the impact of three different types of noise on descriptors
extracted by two widely used feature extraction methods (LBP and HOG) and how
denoising the images can help to mitigate this problem. We carry out
experiments on two different datasets and consider several types of noise,
noise levels, and denoising methods. Our results show that noise can hinder
classification performance considerably and make classes harder to separate.
Although denoising methods were not able to reach the same performance of the
noise-free scenario, they improved classification results for noisy data.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 13:19:41 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"da Costa",
"Gabriel B. Paranhos",
""
],
[
"Contato",
"Welinton A.",
""
],
[
"Nazare",
"Tiago S.",
""
],
[
"Neto",
"João E. S. Batista",
""
],
[
"Ponti",
"Moacir",
""
]
] | TITLE: An empirical study on the effects of different types of noise in image
classification tasks
ABSTRACT: Image classification is one of the main research problems in computer vision
and machine learning. Since in most real-world image classification
applications there is no control over how the images are captured, it is
necessary to consider the possibility that these images might be affected by
noise (e.g. sensor noise in a low-quality surveillance camera). In this paper
we analyse the impact of three different types of noise on descriptors
extracted by two widely used feature extraction methods (LBP and HOG) and how
denoising the images can help to mitigate this problem. We carry out
experiments on two different datasets and consider several types of noise,
noise levels, and denoising methods. Our results show that noise can hinder
classification performance considerably and make classes harder to separate.
Although denoising methods were not able to reach the same performance of the
noise-free scenario, they improved classification results for noisy data.
| no_new_dataset | 0.948346 |
1609.02805 | Nicolas Jaccard | Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin | Automated detection of smuggled high-risk security threats using Deep
Learning | Submission for Crime Detection and Prevention conference 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The security infrastructure is ill-equipped to detect and deter the smuggling
of non-explosive devices that enable terror attacks such as those recently
perpetrated in western Europe. The detection of so-called "small metallic
threats" (SMTs) in cargo containers currently relies on statistical risk
analysis, intelligence reports, and visual inspection of X-ray images by
security officers. The latter is very slow and unreliable due to the difficulty
of the task: objects potentially spanning less than 50 pixels have to be
detected in images containing more than 2 million pixels against very complex
and cluttered backgrounds. In this contribution, we demonstrate for the first
time the use of Convolutional Neural Networks (CNNs), a type of Deep Learning,
to automate the detection of SMTs in fullsize X-ray images of cargo containers.
Novel approaches for dataset augmentation allowed us to train CNNs from scratch
despite the scarcity of data available. We report fewer than 6% false alarms
when detecting 90% SMTs synthetically concealed in stream-of-commerce images,
which corresponds to an improvement of over an order of magnitude over
conventional approaches such as Bag-of-Words (BoWs). The proposed scheme offers
potentially super-human performance for a fraction of the time it would take
for a security officer to carry out visual inspection (processing time is
approximately 3.5s per container image).
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 14:14:52 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Jaccard",
"Nicolas",
""
],
[
"Rogers",
"Thomas W.",
""
],
[
"Morton",
"Edward J.",
""
],
[
"Griffin",
"Lewis D.",
""
]
] | TITLE: Automated detection of smuggled high-risk security threats using Deep
Learning
ABSTRACT: The security infrastructure is ill-equipped to detect and deter the smuggling
of non-explosive devices that enable terror attacks such as those recently
perpetrated in western Europe. The detection of so-called "small metallic
threats" (SMTs) in cargo containers currently relies on statistical risk
analysis, intelligence reports, and visual inspection of X-ray images by
security officers. The latter is very slow and unreliable due to the difficulty
of the task: objects potentially spanning less than 50 pixels have to be
detected in images containing more than 2 million pixels against very complex
and cluttered backgrounds. In this contribution, we demonstrate for the first
time the use of Convolutional Neural Networks (CNNs), a type of Deep Learning,
to automate the detection of SMTs in fullsize X-ray images of cargo containers.
Novel approaches for dataset augmentation allowed us to train CNNs from scratch
despite the scarcity of data available. We report fewer than 6% false alarms
when detecting 90% SMTs synthetically concealed in stream-of-commerce images,
which corresponds to an improvement of over an order of magnitude over
conventional approaches such as Bag-of-Words (BoWs). The proposed scheme offers
potentially super-human performance for a fraction of the time it would take
for a security officer to carry out visual inspection (processing time is
approximately 3.5s per container image).
| no_new_dataset | 0.948346 |
1609.02809 | Edward Dixon Mr | Alexei Bastidas, Edward Dixon, Chris Loo, John Ryan | Harassment detection: a benchmark on the #HackHarassment dataset | Accepted to the Collaborative European Research Conference 2016 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Online harassment has been a problem to a greater or lesser extent since the
early days of the internet. Previous work has applied anti-spam techniques like
machine-learning based text classification (Reynolds, 2011) to detecting
harassing messages. However, existing public datasets are limited in size, with
labels of varying quality. The #HackHarassment initiative (an alliance of 1
tech companies and NGOs devoted to fighting bullying on the internet) has begun
to address this issue by creating a new dataset superior to its predecssors in
terms of both size and quality. As we (#HackHarassment) complete further rounds
of labelling, later iterations of this dataset will increase the available
samples by at least an order of magnitude, enabling corresponding improvements
in the quality of machine learning models for harassment detection. In this
paper, we introduce the first models built on the #HackHarassment dataset v1.0
(a new open dataset, which we are delighted to share with any interested
researchers) as a benchmark for future research.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 14:23:02 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Bastidas",
"Alexei",
""
],
[
"Dixon",
"Edward",
""
],
[
"Loo",
"Chris",
""
],
[
"Ryan",
"John",
""
]
] | TITLE: Harassment detection: a benchmark on the #HackHarassment dataset
ABSTRACT: Online harassment has been a problem to a greater or lesser extent since the
early days of the internet. Previous work has applied anti-spam techniques like
machine-learning based text classification (Reynolds, 2011) to detecting
harassing messages. However, existing public datasets are limited in size, with
labels of varying quality. The #HackHarassment initiative (an alliance of 1
tech companies and NGOs devoted to fighting bullying on the internet) has begun
to address this issue by creating a new dataset superior to its predecessors in
terms of both size and quality. As we (#HackHarassment) complete further rounds
of labelling, later iterations of this dataset will increase the available
samples by at least an order of magnitude, enabling corresponding improvements
in the quality of machine learning models for harassment detection. In this
paper, we introduce the first models built on the #HackHarassment dataset v1.0
(a new open dataset, which we are delighted to share with any interested
researchers) as a benchmark for future research.
| new_dataset | 0.957991 |
1609.02825 | Xi Peng | Xi Peng, Qiong Hu, Junzhou Huang, Dimitris N. Metaxas | Track Facial Points in Unconstrained Videos | British Machine Vision Conference (BMVC), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tracking Facial Points in unconstrained videos is challenging due to the
non-rigid deformation that changes over time. In this paper, we propose to
exploit incremental learning for person-specific alignment in wild conditions.
Our approach takes advantage of part-based representation and cascade
regression for robust and efficient alignment on each frame. Unlike existing
methods that usually rely on models trained offline, we incrementally update
the representation subspace and the cascade of regressors in a unified
framework to achieve personalized modeling on the fly. To alleviate the
drifting issue, the fitting results are evaluated using a deep neural network,
where well-aligned faces are picked out to incrementally update the
representation and fitting models. Both image and video datasets are employed
to validate the proposed method. The results demonstrate the superior performance
of our approach compared with existing approaches in terms of fitting accuracy
and efficiency.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 15:02:08 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Peng",
"Xi",
""
],
[
"Hu",
"Qiong",
""
],
[
"Huang",
"Junzhou",
""
],
[
"Metaxas",
"Dimitris N.",
""
]
] | TITLE: Track Facial Points in Unconstrained Videos
ABSTRACT: Tracking Facial Points in unconstrained videos is challenging due to the
non-rigid deformation that changes over time. In this paper, we propose to
exploit incremental learning for person-specific alignment in wild conditions.
Our approach takes advantage of part-based representation and cascade
regression for robust and efficient alignment on each frame. Unlike existing
methods that usually rely on models trained offline, we incrementally update
the representation subspace and the cascade of regressors in a unified
framework to achieve personalized modeling on the fly. To alleviate the
drifting issue, the fitting results are evaluated using a deep neural network,
where well-aligned faces are picked out to incrementally update the
representation and fitting models. Both image and video datasets are employed
to validate the proposed method. The results demonstrate the superior performance
of our approach compared with existing approaches in terms of fitting accuracy
and efficiency.
| no_new_dataset | 0.953232 |
1609.02839 | Richard Oentaryo | Jovian Lin, Richard Oentaryo, Ee-Peng Lim, Casey Vu, Adrian Vu, Agus
Kwee | Where is the Goldmine? Finding Promising Business Locations through
Facebook Data Analytics | null | Proceedings of the ACM Conference on Hypertext and Social Media,
Halifax, Canada, 2016, pp. 93-102 | 10.1145/2914586.2914588 | null | cs.SI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If you were to open your own cafe, would you not want to effortlessly
identify the most suitable location to set up your shop? Choosing an optimal
physical location is a critical decision for numerous businesses, as many
factors contribute to the final choice of the location. In this paper, we seek
to address the issue by investigating the use of publicly available Facebook
Pages data---which include user check-ins, types of business, and business
locations---to evaluate a user-selected physical location with respect to a
type of business. Using a dataset of 20,877 food businesses in Singapore, we
conduct analysis of several key factors including business categories,
locations, and neighboring businesses. From these factors, we extract a set of
relevant features and develop a robust predictive model to estimate the
popularity of a business location. Our experiments have shown that the
popularity of neighboring business contributes the key features to perform
accurate prediction. We finally illustrate the practical usage of our proposed
approach via an interactive web application system.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 15:48:50 GMT"
}
] | 2016-09-12T00:00:00 | [
[
"Lin",
"Jovian",
""
],
[
"Oentaryo",
"Richard",
""
],
[
"Lim",
"Ee-Peng",
""
],
[
"Vu",
"Casey",
""
],
[
"Vu",
"Adrian",
""
],
[
"Kwee",
"Agus",
""
]
] | TITLE: Where is the Goldmine? Finding Promising Business Locations through
Facebook Data Analytics
ABSTRACT: If you were to open your own cafe, would you not want to effortlessly
identify the most suitable location to set up your shop? Choosing an optimal
physical location is a critical decision for numerous businesses, as many
factors contribute to the final choice of the location. In this paper, we seek
to address the issue by investigating the use of publicly available Facebook
Pages data---which include user check-ins, types of business, and business
locations---to evaluate a user-selected physical location with respect to a
type of business. Using a dataset of 20,877 food businesses in Singapore, we
conduct analysis of several key factors including business categories,
locations, and neighboring businesses. From these factors, we extract a set of
relevant features and develop a robust predictive model to estimate the
popularity of a business location. Our experiments have shown that the
popularity of neighboring business contributes the key features to perform
accurate prediction. We finally illustrate the practical usage of our proposed
approach via an interactive web application system.
| no_new_dataset | 0.938688 |
1603.08152 | Yair Movshovitz-Attias | Yair Movshovitz-Attias, Takeo Kanade, Yaser Sheikh | How useful is photo-realistic rendering for visual learning? | Published in GMDL 2016 In conjunction with ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data seems cheap to get, and in many ways it is, but the process of creating
a high quality labeled dataset from a mass of data is time-consuming and
expensive.
With the advent of rich 3D repositories, photo-realistic rendering systems
offer the opportunity to provide nearly limitless data. Yet, their primary
value for visual learning may be the quality of the data they can provide
rather than the quantity. Rendering engines offer the promise of perfect labels
in addition to the data: what the precise camera pose is; what the precise
lighting location, temperature, and distribution is; what the geometry of the
object is.
In this work we focus on semi-automating dataset creation through use of
synthetic data and apply this method to an important task -- object viewpoint
estimation. Using state-of-the-art rendering software we generate a large
labeled dataset of cars rendered densely in viewpoint space. We investigate the
effect of rendering parameters on estimation performance and show realism is
important. We show that generalizing from synthetic data is not harder than the
domain adaptation required between two real-image datasets and that combining
synthetic images with a small amount of real data improves estimation accuracy.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2016 22:56:53 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 03:43:58 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Movshovitz-Attias",
"Yair",
""
],
[
"Kanade",
"Takeo",
""
],
[
"Sheikh",
"Yaser",
""
]
] | TITLE: How useful is photo-realistic rendering for visual learning?
ABSTRACT: Data seems cheap to get, and in many ways it is, but the process of creating
a high quality labeled dataset from a mass of data is time-consuming and
expensive.
With the advent of rich 3D repositories, photo-realistic rendering systems
offer the opportunity to provide nearly limitless data. Yet, their primary
value for visual learning may be the quality of the data they can provide
rather than the quantity. Rendering engines offer the promise of perfect labels
in addition to the data: what the precise camera pose is; what the precise
lighting location, temperature, and distribution is; what the geometry of the
object is.
In this work we focus on semi-automating dataset creation through use of
synthetic data and apply this method to an important task -- object viewpoint
estimation. Using state-of-the-art rendering software we generate a large
labeled dataset of cars rendered densely in viewpoint space. We investigate the
effect of rendering parameters on estimation performance and show realism is
important. We show that generalizing from synthetic data is not harder than the
domain adaptation required between two real-image datasets and that combining
synthetic images with a small amount of real data improves estimation accuracy.
| no_new_dataset | 0.510435 |
1608.03075 | Sungheon Park | Sungheon Park, Jihye Hwang, Nojun Kwak | 3D Human Pose Estimation Using Convolutional Neural Networks with 2D
Pose Information | ECCV 2016 workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While there has been a success in 2D human pose estimation with convolutional
neural networks (CNNs), 3D human pose estimation has not been thoroughly
studied. In this paper, we tackle the 3D human pose estimation task with
end-to-end learning using CNNs. Relative 3D positions between one joint and the
other joints are learned via CNNs. The proposed method improves the performance
of CNN with two novel ideas. First, we added 2D pose information to estimate a
3D pose from an image by concatenating 2D pose estimation result with the
features from an image. Second, we have found that more accurate 3D poses are
obtained by combining information on relative positions with respect to
multiple joints, instead of just one root joint. Experimental results show that
the proposed method achieves comparable performance to the state-of-the-art
methods on Human 3.6m dataset.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2016 08:18:30 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 02:25:08 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Park",
"Sungheon",
""
],
[
"Hwang",
"Jihye",
""
],
[
"Kwak",
"Nojun",
""
]
] | TITLE: 3D Human Pose Estimation Using Convolutional Neural Networks with 2D
Pose Information
ABSTRACT: While there has been a success in 2D human pose estimation with convolutional
neural networks (CNNs), 3D human pose estimation has not been thoroughly
studied. In this paper, we tackle the 3D human pose estimation task with
end-to-end learning using CNNs. Relative 3D positions between one joint and the
other joints are learned via CNNs. The proposed method improves the performance
of CNN with two novel ideas. First, we added 2D pose information to estimate a
3D pose from an image by concatenating 2D pose estimation result with the
features from an image. Second, we have found that more accurate 3D poses are
obtained by combining information on relative positions with respect to
multiple joints, instead of just one root joint. Experimental results show that
the proposed method achieves comparable performance to the state-of-the-art
methods on Human 3.6m dataset.
| no_new_dataset | 0.947624 |
1608.07068 | Kuo-Hao Zeng | Kuo-Hao Zeng and Tseng-Hung Chen and Juan Carlos Niebles and Min Sun | Title Generation for User Generated Videos | 14 pages, 4 figures, ECCV2016 | null | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A great video title describes the most salient event compactly and captures
the viewer's attention. In contrast, video captioning tends to generate
sentences that describe the video as a whole. Although generating a video title
automatically is a very useful task, it is much less addressed than video
captioning. We address video title generation for the first time by proposing
two methods that extend state-of-the-art video captioners to this new task.
First, we make video captioners highlight sensitive by priming them with a
highlight detector. Our framework allows for jointly training a model for title
generation and video highlight localization. Second, we induce high sentence
diversity in video captioners, so that the generated titles are also diverse
and catchy. This means that a large number of sentences might be required to
learn the sentence structure of titles. Hence, we propose a novel sentence
augmentation method to train a captioner with additional sentence-only examples
that come without corresponding videos. We collected a large-scale Video Titles
in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos
and titles. On VTW, our methods consistently improve title prediction accuracy,
and achieve the best performance in both automatic and human evaluation.
Finally, our sentence augmentation method also outperforms the baselines on the
M-VAD dataset.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2016 09:49:23 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 17:36:13 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Zeng",
"Kuo-Hao",
""
],
[
"Chen",
"Tseng-Hung",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Sun",
"Min",
""
]
] | TITLE: Title Generation for User Generated Videos
ABSTRACT: A great video title describes the most salient event compactly and captures
the viewer's attention. In contrast, video captioning tends to generate
sentences that describe the video as a whole. Although generating a video title
automatically is a very useful task, it is much less addressed than video
captioning. We address video title generation for the first time by proposing
two methods that extend state-of-the-art video captioners to this new task.
First, we make video captioners highlight sensitive by priming them with a
highlight detector. Our framework allows for jointly training a model for title
generation and video highlight localization. Second, we induce high sentence
diversity in video captioners, so that the generated titles are also diverse
and catchy. This means that a large number of sentences might be required to
learn the sentence structure of titles. Hence, we propose a novel sentence
augmentation method to train a captioner with additional sentence-only examples
that come without corresponding videos. We collected a large-scale Video Titles
in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos
and titles. On VTW, our methods consistently improve title prediction accuracy,
and achieve the best performance in both automatic and human evaluation.
Finally, our sentence augmentation method also outperforms the baselines on the
M-VAD dataset.
| new_dataset | 0.915658 |
1609.02020 | Zhenyu Liao | Zhenyu Liao, Romain Couillet | Random matrices meet machine learning: a large dimensional analysis of
LS-SVM | wrong article submitted | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article proposes a performance analysis of kernel least squares support
vector machines (LS-SVMs) based on a random matrix approach, in the regime
where both the dimension of data $p$ and their number $n$ grow large at the
same rate. Under a two-class Gaussian mixture model for the input data, we
prove that the LS-SVM decision function is asymptotically normal with means and
covariances shown to depend explicitly on the derivatives of the kernel
function. This provides improved understanding along with new insights into the
internal workings of SVM-type methods for large datasets.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 15:39:24 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 07:26:00 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Liao",
"Zhenyu",
""
],
[
"Couillet",
"Romain",
""
]
] | TITLE: Random matrices meet machine learning: a large dimensional analysis of
LS-SVM
ABSTRACT: This article proposes a performance analysis of kernel least squares support
vector machines (LS-SVMs) based on a random matrix approach, in the regime
where both the dimension of data $p$ and their number $n$ grow large at the
same rate. Under a two-class Gaussian mixture model for the input data, we
prove that the LS-SVM decision function is asymptotically normal with means and
covariances shown to depend explicitly on the derivatives of the kernel
function. This provides improved understanding along with new insights into the
internal workings of SVM-type methods for large datasets.
| no_new_dataset | 0.950041 |
1609.02258 | Haishan Ye | Haishan Ye, Qiaoming Ye, Zhihua Zhang | Tighter bound of Sketched Generalized Matrix Approximation | null | null | null | null | cs.NA | http://creativecommons.org/licenses/by/4.0/ | Generalized matrix approximation plays a fundamental role in many machine
learning problems, such as CUR decomposition, kernel approximation, and matrix
low rank approximation. Especially with today's applications involving larger
and larger datasets, more and more efficient generalized matrix approximation
algorithms become a crucially important research issue. In this paper, we find
new sketching techniques to reduce the size of the original data matrix to
develop new matrix approximation algorithms. Our results derive a much tighter
bound for the approximation than previous works: we obtain a $(1+\epsilon)$
approximation ratio with small sketched dimensions which implies a more
efficient generalized matrix approximation.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 04:01:02 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Ye",
"Haishan",
""
],
[
"Ye",
"Qiaoming",
""
],
[
"Zhang",
"Zhihua",
""
]
] | TITLE: Tighter bound of Sketched Generalized Matrix Approximation
ABSTRACT: Generalized matrix approximation plays a fundamental role in many machine
learning problems, such as CUR decomposition, kernel approximation, and matrix
low rank approximation. Especially with today's applications involving larger
and larger datasets, more and more efficient generalized matrix approximation
algorithms become a crucially important research issue. In this paper, we find
new sketching techniques to reduce the size of the original data matrix to
develop new matrix approximation algorithms. Our results derive a much tighter
bound for the approximation than previous works: we obtain a $(1+\epsilon)$
approximation ratio with small sketched dimensions which implies a more
efficient generalized matrix approximation.
| no_new_dataset | 0.949623 |
1609.02281 | Kanji Tanaka | Kanji Tanaka | Deformable Map Matching for Uncertain Loop-Less Maps | 7 pages, 8 figures, Draft of a paper submitted to a conference | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the classical context of robotic mapping and localization, map matching is
typically defined as the task of finding a rigid transformation (i.e., 3DOF
rotation/translation on the 2D moving plane) that aligns the query and
reference maps built by mobile robots. This definition is valid in loop-rich
trajectories that enable a mapper robot to close many loops, for which precise
maps can be assumed. The same cannot be said about the newly emerging
autonomous navigation and driving systems, which typically operate in loop-less
trajectories that have no large loop (e.g., straight paths). In this paper, we
propose a solution that overcomes this limitation by merging the two maps. Our
study is motivated by the observation that even when there is no large loop in
either the query or reference map, many loops can often be obtained in the
merged map. We add two new aspects to map matching: (1) image retrieval with
discriminative deep convolutional neural network (DCNN) features, which
efficiently generates a small number of good initial alignment hypotheses; and
(2) map merge, which jointly deforms the two maps to minimize differences in
shape between them. To realize practical computation time, we also present a
preemption scheme that avoids excessive evaluation of useless map-matching
hypotheses. To verify our approach experimentally, we created a novel
collection of uncertain loop-less maps by utilizing the recently published
North Campus Long-Term (NCLT) dataset and its ground-truth GPS data. The
results obtained using these map collections confirm that our approach improves
on previous map-matching approaches.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 05:43:42 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Tanaka",
"Kanji",
""
]
] | TITLE: Deformable Map Matching for Uncertain Loop-Less Maps
ABSTRACT: In the classical context of robotic mapping and localization, map matching is
typically defined as the task of finding a rigid transformation (i.e., 3DOF
rotation/translation on the 2D moving plane) that aligns the query and
reference maps built by mobile robots. This definition is valid in loop-rich
trajectories that enable a mapper robot to close many loops, for which precise
maps can be assumed. The same cannot be said about the newly emerging
autonomous navigation and driving systems, which typically operate in loop-less
trajectories that have no large loop (e.g., straight paths). In this paper, we
propose a solution that overcomes this limitation by merging the two maps. Our
study is motivated by the observation that even when there is no large loop in
either the query or reference map, many loops can often be obtained in the
merged map. We add two new aspects to map matching: (1) image retrieval with
discriminative deep convolutional neural network (DCNN) features, which
efficiently generates a small number of good initial alignment hypotheses; and
(2) map merge, which jointly deforms the two maps to minimize differences in
shape between them. To realize practical computation time, we also present a
preemption scheme that avoids excessive evaluation of useless map-matching
hypotheses. To verify our approach experimentally, we created a novel
collection of uncertain loop-less maps by utilizing the recently published
North Campus Long-Term (NCLT) dataset and its ground-truth GPS data. The
results obtained using these map collections confirm that our approach improves
on previous map-matching approaches.
| new_dataset | 0.943504 |
1609.02284 | Jiyang Gao | Jiyang Gao, Ram Nevatia | Learning Action Concept Trees and Semantic Alignment Networks from
Image-Description Data | 16 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action classification in still images has been a popular research topic in
computer vision. Labelling large scale datasets for action classification
requires tremendous manual work, which is hard to scale up. Besides, the action
categories in such datasets are pre-defined and vocabularies are fixed. However
humans may describe the same action with different phrases, which leads to the
difficulty of vocabulary expansion for traditional fully-supervised methods. We
observe that large amounts of images with sentence descriptions are readily
available on the Internet. The sentence descriptions can be regarded as weak
labels for the images, which contain rich information and could be used to
learn flexible expressions of action categories. We propose a method to learn
an Action Concept Tree (ACT) and an Action Semantic Alignment (ASA) model for
classification from image-description data via a two-stage learning process. A
new dataset for the task of learning actions from descriptions is built.
Experimental results show that our method outperforms several baseline methods
significantly.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 05:53:31 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Gao",
"Jiyang",
""
],
[
"Nevatia",
"Ram",
""
]
] | TITLE: Learning Action Concept Trees and Semantic Alignment Networks from
Image-Description Data
ABSTRACT: Action classification in still images has been a popular research topic in
computer vision. Labelling large scale datasets for action classification
requires tremendous manual work, which is hard to scale up. Besides, the action
categories in such datasets are pre-defined and vocabularies are fixed. However,
humans may describe the same action with different phrases, which leads to the
difficulty of vocabulary expansion for traditional fully-supervised methods. We
observe that large amounts of images with sentence descriptions are readily
available on the Internet. The sentence descriptions can be regarded as weak
labels for the images, which contain rich information and could be used to
learn flexible expressions of action categories. We propose a method to learn
an Action Concept Tree (ACT) and an Action Semantic Alignment (ASA) model for
classification from image-description data via a two-stage learning process. A
new dataset for the task of learning actions from descriptions is built.
Experimental results show that our method outperforms several baseline methods
significantly.
| new_dataset | 0.959345 |
1609.02452 | Andreas Bulling | Sabrina Hoppe, Andreas Bulling | End-to-End Eye Movement Detection Using Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common computational methods for automated eye movement detection - i.e. the
task of detecting different types of eye movement in a continuous stream of
gaze data - are limited in that they either involve thresholding on
hand-crafted signal features, require individual detectors each only detecting
a single movement, or require pre-segmented data. We propose a novel approach
for eye movement detection that only involves learning a single detector
end-to-end, i.e. directly from the continuous gaze data stream and
simultaneously for different eye movements without any manual feature crafting
or segmentation. Our method is based on convolutional neural networks (CNN)
that recently demonstrated superior performance in a variety of tasks in
computer vision, signal processing, and machine learning. We further introduce
a novel multi-participant dataset that contains scripted and free-viewing
sequences of ground-truth annotated saccades, fixations, and smooth pursuits.
We show that our CNN-based method outperforms state-of-the-art baselines by a
large margin on this challenging dataset, thereby underlining the significant
potential of this approach for holistic, robust, and accurate eye movement
protocol analysis.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 14:58:15 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Hoppe",
"Sabrina",
""
],
[
"Bulling",
"Andreas",
""
]
] | TITLE: End-to-End Eye Movement Detection Using Convolutional Neural Networks
ABSTRACT: Common computational methods for automated eye movement detection - i.e. the
task of detecting different types of eye movement in a continuous stream of
gaze data - are limited in that they either involve thresholding on
hand-crafted signal features, require individual detectors each only detecting
a single movement, or require pre-segmented data. We propose a novel approach
for eye movement detection that only involves learning a single detector
end-to-end, i.e. directly from the continuous gaze data stream and
simultaneously for different eye movements without any manual feature crafting
or segmentation. Our method is based on convolutional neural networks (CNN)
that recently demonstrated superior performance in a variety of tasks in
computer vision, signal processing, and machine learning. We further introduce
a novel multi-participant dataset that contains scripted and free-viewing
sequences of ground-truth annotated saccades, fixations, and smooth pursuits.
We show that our CNN-based method outperforms state-of-the-art baselines by a
large margin on this challenging dataset, thereby underlining the significant
potential of this approach for holistic, robust, and accurate eye movement
protocol analysis.
| new_dataset | 0.960175 |
1609.02469 | Joseph Antony A | Joseph Antony, Kevin McGuinness, Noel E O Connor, Kieran Moran | Quantifying Radiographic Knee Osteoarthritis Severity using Deep
Convolutional Neural Networks | Included in ICPR 2016 proceedings | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new approach to automatically quantify the severity of
knee osteoarthritis (OA) from radiographs using deep convolutional neural
networks (CNN). Clinically, knee OA severity is assessed using Kellgren \&
Lawrence (KL) grades, a five point scale. Previous work on automatically
predicting KL grades from radiograph images was based on training shallow
classifiers using a variety of hand engineered features. We demonstrate that
classification accuracy can be significantly improved using deep convolutional
neural network models pre-trained on ImageNet and fine-tuned on knee OA images.
Furthermore, we argue that it is more appropriate to assess the accuracy of
automatic knee OA severity predictions using a continuous distance-based
evaluation metric like mean squared error than it is to use classification
accuracy. This leads to the formulation of the prediction of KL grades as a
regression problem and further improves accuracy. Results on a dataset of X-ray
images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable
improvement over the current state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 15:39:48 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Antony",
"Joseph",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"Connor",
"Noel E O",
""
],
[
"Moran",
"Kieran",
""
]
] | TITLE: Quantifying Radiographic Knee Osteoarthritis Severity using Deep
Convolutional Neural Networks
ABSTRACT: This paper proposes a new approach to automatically quantify the severity of
knee osteoarthritis (OA) from radiographs using deep convolutional neural
networks (CNN). Clinically, knee OA severity is assessed using Kellgren \&
Lawrence (KL) grades, a five point scale. Previous work on automatically
predicting KL grades from radiograph images was based on training shallow
classifiers using a variety of hand engineered features. We demonstrate that
classification accuracy can be significantly improved using deep convolutional
neural network models pre-trained on ImageNet and fine-tuned on knee OA images.
Furthermore, we argue that it is more appropriate to assess the accuracy of
automatic knee OA severity predictions using a continuous distance-based
evaluation metric like mean squared error than it is to use classification
accuracy. This leads to the formulation of the prediction of KL grades as a
regression problem and further improves accuracy. Results on a dataset of X-ray
images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable
improvement over the current state-of-the-art.
| no_new_dataset | 0.954563 |
1609.02521 | Rohit Babbar | Rohit Babbar and Bernhard Shoelkopf | DiSMEC - Distributed Sparse Machines for Extreme Multi-label
Classification | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme multi-label classification refers to supervised multi-label learning
involving hundreds of thousands or even millions of labels. Datasets in extreme
classification exhibit a fit to a power-law distribution, i.e. a large fraction of
labels have very few positive instances in the data distribution. Most
state-of-the-art approaches for extreme multi-label classification attempt to
capture correlation among labels by embedding the label matrix to a
low-dimensional linear sub-space. However, in the presence of power-law
distributed extremely large and diverse label spaces, structural assumptions
such as low rank can be easily violated.
In this work, we present DiSMEC, which is a large-scale distributed framework
for learning one-versus-rest linear classifiers coupled with explicit capacity
control to control model size. Unlike most state-of-the-art methods, DiSMEC
does not make any low rank assumptions on the label matrix. Using a double layer
of parallelization, DiSMEC can learn classifiers for datasets consisting of
hundreds of thousands of labels within a few hours. The explicit capacity control
mechanism filters out spurious parameters, which keeps the model compact in size
without losing prediction accuracy. We conduct an extensive empirical evaluation
on publicly available real-world datasets consisting of up to 670,000 labels. We
compare DiSMEC with recent state-of-the-art approaches, including SLEEC, which
is a leading approach for learning sparse local embeddings, and FastXML, which
is a tree-based approach optimizing a ranking-based loss function. On some of the
datasets, DiSMEC can significantly boost prediction accuracies - 10% better
compared to SLEEC and 15% better compared to FastXML, in absolute terms.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 18:17:25 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Babbar",
"Rohit",
""
],
[
"Shoelkopf",
"Bernhard",
""
]
] | TITLE: DiSMEC - Distributed Sparse Machines for Extreme Multi-label
Classification
ABSTRACT: Extreme multi-label classification refers to supervised multi-label learning
involving hundreds of thousands or even millions of labels. Datasets in extreme
classification exhibit a fit to a power-law distribution, i.e. a large fraction of
labels have very few positive instances in the data distribution. Most
state-of-the-art approaches for extreme multi-label classification attempt to
capture correlation among labels by embedding the label matrix to a
low-dimensional linear sub-space. However, in the presence of power-law
distributed extremely large and diverse label spaces, structural assumptions
such as low rank can be easily violated.
In this work, we present DiSMEC, which is a large-scale distributed framework
for learning one-versus-rest linear classifiers coupled with explicit capacity
control to control model size. Unlike most state-of-the-art methods, DiSMEC
does not make any low rank assumptions on the label matrix. Using a double layer
of parallelization, DiSMEC can learn classifiers for datasets consisting of
hundreds of thousands of labels within a few hours. The explicit capacity control
mechanism filters out spurious parameters, which keeps the model compact in size
without losing prediction accuracy. We conduct an extensive empirical evaluation
on publicly available real-world datasets consisting of up to 670,000 labels. We
compare DiSMEC with recent state-of-the-art approaches, including SLEEC, which
is a leading approach for learning sparse local embeddings, and FastXML, which
is a tree-based approach optimizing a ranking-based loss function. On some of the
datasets, DiSMEC can significantly boost prediction accuracies - 10% better
compared to SLEEC and 15% better compared to FastXML, in absolute terms.
| no_new_dataset | 0.947527 |
1609.02531 | Yu Sun | Matteo Bianchi, Jeannette Bohg, and Yu Sun | Latest Datasets and Technologies Presented in the Workshop on Grasping
and Manipulation Datasets | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports the activities and outcomes in the Workshop on Grasping
and Manipulation Datasets that was organized under the International Conference
on Robotics and Automation (ICRA) 2016. The half day workshop was packed with
nine invited talks, 12 interactive presentations, and one panel discussion with
ten panelists. This paper summarizes all the talks and presentations and recaps
what has been discussed in the panels session. This summary servers as a review
of recent developments in data collection in grasping and manipulation. Many of
the presentations describe ongoing efforts or explorations that could be
achieved and fully available in a year or two. The panel discussion not only
commented on the current approaches, but also indicates new directions and
focuses. The workshop clearly displayed the importance of quality datasets in
robotics and robotic grasping and manipulation field. Hopefully the workshop
could motivate larger efforts to create big datasets that are comparable with
big datasets in other communities such as computer vision.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 19:01:59 GMT"
}
] | 2016-09-09T00:00:00 | [
[
"Bianchi",
"Matteo",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Sun",
"Yu",
""
]
] | TITLE: Latest Datasets and Technologies Presented in the Workshop on Grasping
and Manipulation Datasets
ABSTRACT: This paper reports the activities and outcomes in the Workshop on Grasping
and Manipulation Datasets that was organized under the International Conference
on Robotics and Automation (ICRA) 2016. The half day workshop was packed with
nine invited talks, 12 interactive presentations, and one panel discussion with
ten panelists. This paper summarizes all the talks and presentations and recaps
what has been discussed in the panel session. This summary serves as a review
of recent developments in data collection in grasping and manipulation. Many of
the presentations describe ongoing efforts or explorations that could be
achieved and fully available in a year or two. The panel discussion not only
commented on the current approaches, but also indicated new directions and
focuses. The workshop clearly displayed the importance of quality datasets in
the robotics and robotic grasping and manipulation field. Hopefully the workshop
could motivate larger efforts to create big datasets that are comparable with
big datasets in other communities such as computer vision.
| no_new_dataset | 0.950732 |
0807.4729 | Adrian Melott | Adrian L. Melott (University of Kansas) | Long-term cycles in the history of life: Periodic biodiversity in the
Paleobiology Database | Published in PLoS ONE. 5 pages, 3 figures. Version with live links,
discussion available at
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0004044#top | PLoS ONE 3(12): e4044. (2008) | 10.1371/journal.pone.0004044 | null | q-bio.PE astro-ph physics.bio-ph physics.data-an physics.geo-ph q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series analysis of fossil biodiversity of marine invertebrates in the
Paleobiology Database (PBDB) shows a significant periodicity at approximately
63 My, in agreement with previous analyses based on the Sepkoski database. I
discuss how this result did not appear in a previous analysis of the PBDB. The
existence of the 63 My periodicity, despite very different treatment of
systematic error in both PBDB and Sepkoski databases strongly argues for
consideration of its reality in the fossil record. Cross-spectral analysis of
the two datasets finds that a 62 My periodicity coincides in phase by 1.6 My,
equivalent to better than the errors in either measurement. Consequently, the
two data sets not only contain the same strong periodicity, but its peaks and
valleys closely correspond in time. Two other spectral peaks appear in the PBDB
analysis, but appear to be artifacts associated with detrending and with the
increased interval length. Sampling-standardization procedures implemented by
the PBDB collaboration suggest that the signal is not an artifact of sampling
bias. Further work should focus on finding the cause of the 62 My periodicity.
| [
{
"version": "v1",
"created": "Tue, 29 Jul 2008 20:01:51 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Aug 2008 14:23:30 GMT"
},
{
"version": "v3",
"created": "Fri, 26 Sep 2008 19:56:34 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Nov 2008 13:36:31 GMT"
},
{
"version": "v5",
"created": "Sat, 13 Dec 2008 23:51:16 GMT"
},
{
"version": "v6",
"created": "Wed, 24 Dec 2008 14:59:10 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Melott",
"Adrian L.",
"",
"University of Kansas"
]
] | TITLE: Long-term cycles in the history of life: Periodic biodiversity in the
Paleobiology Database
ABSTRACT: Time series analysis of fossil biodiversity of marine invertebrates in the
Paleobiology Database (PBDB) shows a significant periodicity at approximately
63 My, in agreement with previous analyses based on the Sepkoski database. I
discuss how this result did not appear in a previous analysis of the PBDB. The
existence of the 63 My periodicity, despite very different treatment of
systematic error in both PBDB and Sepkoski databases strongly argues for
consideration of its reality in the fossil record. Cross-spectral analysis of
the two datasets finds that a 62 My periodicity coincides in phase by 1.6 My,
equivalent to better than the errors in either measurement. Consequently, the
two data sets not only contain the same strong periodicity, but its peaks and
valleys closely correspond in time. Two other spectral peaks appear in the PBDB
analysis, but appear to be artifacts associated with detrending and with the
increased interval length. Sampling-standardization procedures implemented by
the PBDB collaboration suggest that the signal is not an artifact of sampling
bias. Further work should focus on finding the cause of the 62 My periodicity.
| no_new_dataset | 0.938124 |
0911.1765 | Ion Mandoiu | Justin Kennedy, Ion I. Mandoiu, and Bogdan Pasaniuc | GEDI: Scalable Algorithms for Genotype Error Detection and Imputation | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide association studies generate very large datasets that require
scalable analysis algorithms. In this report we describe the GEDI software
package, which implements efficient algorithms for performing several common
tasks in the analysis of population genotype data, including genotype error
detection and correction, imputation of both randomly missing and untyped
genotypes, and genotype phasing. Experimental results show that GEDI achieves
high accuracy with a runtime scaling linearly with the number of markers and
samples. The open source C++ code of GEDI, released under the GNU General
Public License, is available for download at
http://dna.engr.uconn.edu/software/GEDI/
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2009 23:35:41 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Kennedy",
"Justin",
""
],
[
"Mandoiu",
"Ion I.",
""
],
[
"Pasaniuc",
"Bogdan",
""
]
] | TITLE: GEDI: Scalable Algorithms for Genotype Error Detection and Imputation
ABSTRACT: Genome-wide association studies generate very large datasets that require
scalable analysis algorithms. In this report we describe the GEDI software
package, which implements efficient algorithms for performing several common
tasks in the analysis of population genotype data, including genotype error
detection and correction, imputation of both randomly missing and untyped
genotypes, and genotype phasing. Experimental results show that GEDI achieves
high accuracy with a runtime scaling linearly with the number of markers and
samples. The open source C++ code of GEDI, released under the GNU General
Public License, is available for download at
http://dna.engr.uconn.edu/software/GEDI/
| no_new_dataset | 0.949529 |
1602.03291 | Habibur Rahman | Habibur Rahman and Lucas Joppa and Senjuti Basu Roy | Feature Based Task Recommendation in Crowdsourcing with Implicit
Observations | null | null | null | null | cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing research in crowdsourcing has investigated how to recommend tasks to
workers based on which task the workers have already completed, referred to as
{\em implicit feedback}. We, on the other hand, investigate the task
recommendation problem, where we leverage both implicit feedback and explicit
features of the task. We assume that we are given a set of workers, a set of
tasks, interactions (such as the number of times a worker has completed a
particular task), and the presence of explicit features of each task (such as,
task location). We intend to recommend tasks to the workers by exploiting the
implicit interactions, and the presence or absence of explicit features in the
tasks. We formalize the problem as an optimization problem, propose two
alternative problem formulations and respective solutions that exploit implicit
feedback, explicit features, as well as similarity between the tasks. We
compare the efficacy of our proposed solutions against multiple
state-of-the-art techniques using two large scale real world datasets.
| [
{
"version": "v1",
"created": "Wed, 10 Feb 2016 08:06:32 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Sep 2016 05:13:51 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Rahman",
"Habibur",
""
],
[
"Joppa",
"Lucas",
""
],
[
"Roy",
"Senjuti Basu",
""
]
] | TITLE: Feature Based Task Recommendation in Crowdsourcing with Implicit
Observations
ABSTRACT: Existing research in crowdsourcing has investigated how to recommend tasks to
workers based on which task the workers have already completed, referred to as
{\em implicit feedback}. We, on the other hand, investigate the task
recommendation problem, where we leverage both implicit feedback and explicit
features of the task. We assume that we are given a set of workers, a set of
tasks, interactions (such as the number of times a worker has completed a
particular task), and the presence of explicit features of each task (such as,
task location). We intend to recommend tasks to the workers by exploiting the
implicit interactions, and the presence or absence of explicit features in the
tasks. We formalize the problem as an optimization problem, propose two
alternative problem formulations and respective solutions that exploit implicit
feedback, explicit features, as well as similarity between the tasks. We
compare the efficacy of our proposed solutions against multiple
state-of-the-art techniques using two large scale real world datasets.
| no_new_dataset | 0.947186 |
1609.00496 | Shu Liu | Shu Liu, Bo Li, Yangyu Fan, Zhe Guo, Ashok Samal | Label distribution based facial attractiveness computation by deep
residual learning | 3 pages, 3 figures. The first two authors are parallel first author | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two challenges lie in the facial attractiveness computation research: the
lack of true attractiveness labels (scores), and the lack of an accurate face
representation. In order to address the first challenge, this paper recasts
facial attractiveness computation as a label distribution learning (LDL)
problem rather than a traditional single-label supervised learning task. In
this way, the negative influence of the label incomplete problem can be
reduced. Inspired by the recent promising work in face recognition using deep
neural networks to learn effective features, the second challenge is expected
to be solved from a deep learning point of view. A very deep residual network
is utilized to enable automatic learning of hierarchical aesthetics
representation. Integrating these two ideas, an end-to-end deep learning
framework is established. Our approach achieves the best results on a standard
benchmark SCUT-FBP dataset compared with other state-of-the-art work.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 08:08:39 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Sep 2016 09:06:31 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Liu",
"Shu",
""
],
[
"Li",
"Bo",
""
],
[
"Fan",
"Yangyu",
""
],
[
"Guo",
"Zhe",
""
],
[
"Samal",
"Ashok",
""
]
] | TITLE: Label distribution based facial attractiveness computation by deep
residual learning
ABSTRACT: Two challenges lie in the facial attractiveness computation research: the
lack of true attractiveness labels (scores), and the lack of an accurate face
representation. In order to address the first challenge, this paper recasts
facial attractiveness computation as a label distribution learning (LDL)
problem rather than a traditional single-label supervised learning task. In
this way, the negative influence of the label incomplete problem can be
reduced. Inspired by the recent promising work in face recognition using deep
neural networks to learn effective features, the second challenge is expected
to be solved from a deep learning point of view. A very deep residual network
is utilized to enable automatic learning of hierarchical aesthetics
representation. Integrating these two ideas, an end-to-end deep learning
framework is established. Our approach achieves the best results on a standard
benchmark SCUT-FBP dataset compared with other state-of-the-art work.
| no_new_dataset | 0.949482 |
1609.01962 | Arkaitz Zubiaga | Michal Lukasik, Kalina Bontcheva, Trevor Cohn, Arkaitz Zubiaga, Maria
Liakata, Rob Procter | Using Gaussian Processes for Rumour Stance Classification in Social
Media | null | null | null | null | cs.CL cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media tend to be rife with rumours while new reports are released
piecemeal during breaking news. Interestingly, one can mine multiple reactions
expressed by social media users in those situations, exploring their stance
towards rumours, ultimately enabling the flagging of highly disputed rumours as
being potentially false. In this work, we set out to develop an automated,
supervised classifier that uses multi-task learning to classify the stance
expressed in each individual tweet in a rumourous conversation as either
supporting, denying or questioning the rumour. Using a classifier based on
Gaussian Processes, and exploring its effectiveness on two datasets with very
different characteristics and varying distributions of stances, we show that
our approach consistently outperforms competitive baseline classifiers. Our
classifier is especially effective in estimating the distribution of different
types of stance associated with a given rumour, which we set forth as a desired
characteristic for a rumour-tracking system that will warn both ordinary users
of Twitter and professional news practitioners when a rumour is being rebutted.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 12:33:02 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Lukasik",
"Michal",
""
],
[
"Bontcheva",
"Kalina",
""
],
[
"Cohn",
"Trevor",
""
],
[
"Zubiaga",
"Arkaitz",
""
],
[
"Liakata",
"Maria",
""
],
[
"Procter",
"Rob",
""
]
] | TITLE: Using Gaussian Processes for Rumour Stance Classification in Social
Media
ABSTRACT: Social media tend to be rife with rumours while new reports are released
piecemeal during breaking news. Interestingly, one can mine multiple reactions
expressed by social media users in those situations, exploring their stance
towards rumours, ultimately enabling the flagging of highly disputed rumours as
being potentially false. In this work, we set out to develop an automated,
supervised classifier that uses multi-task learning to classify the stance
expressed in each individual tweet in a rumourous conversation as either
supporting, denying or questioning the rumour. Using a classifier based on
Gaussian Processes, and exploring its effectiveness on two datasets with very
different characteristics and varying distributions of stances, we show that
our approach consistently outperforms competitive baseline classifiers. Our
classifier is especially effective in estimating the distribution of different
types of stance associated with a given rumour, which we set forth as a desired
characteristic for a rumour-tracking system that will warn both ordinary users
of Twitter and professional news practitioners when a rumour is being rebutted.
| no_new_dataset | 0.948965 |
1609.01984 | Jinyoung Choi | Jinyoung Choi, Beom-Jin Lee, and Byoung-Tak Zhang | Human Body Orientation Estimation using Convolutional Neural Network | null | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personal robots are expected to interact with the user by recognizing the
user's face. However, in most of the service robot applications, the user needs
to move himself/herself to allow the robot to see him/her face to face. To
overcome such limitations, a method for estimating human body orientation is
required. Previous studies used various components such as feature extractors
and classification models to classify the orientation which resulted in low
performance. For a more robust and accurate approach, we propose a lightweight
convolutional neural network, an end-to-end system, for estimating
human body orientation. Our body orientation estimation model achieved 81.58%
and 94% accuracy with the benchmark dataset and our own dataset respectively.
The proposed method can be used in a wide range of service robot applications
which depend on the ability to estimate human body orientation. To show its
usefulness in service robot applications, we designed a simple robot
application which allows the robot to move towards the user's frontal plane.
With this, we demonstrated an improved face detection rate.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 13:53:26 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Choi",
"Jinyoung",
""
],
[
"Lee",
"Beom-Jin",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] | TITLE: Human Body Orientation Estimation using Convolutional Neural Network
ABSTRACT: Personal robots are expected to interact with the user by recognizing the
user's face. However, in most of the service robot applications, the user needs
to move himself/herself to allow the robot to see him/her face to face. To
overcome such limitations, a method for estimating human body orientation is
required. Previous studies used various components such as feature extractors
and classification models to classify the orientation which resulted in low
performance. For a more robust and accurate approach, we propose a lightweight
convolutional neural network, an end-to-end system, for estimating
human body orientation. Our body orientation estimation model achieved 81.58%
and 94% accuracy with the benchmark dataset and our own dataset respectively.
The proposed method can be used in a wide range of service robot applications
which depend on the ability to estimate human body orientation. To show its
usefulness in service robot applications, we designed a simple robot
application which allows the robot to move towards the user's frontal plane.
With this, we demonstrated an improved face detection rate.
| new_dataset | 0.525673 |
1609.02053 | Davide Zambrano | Davide Zambrano and Sander M. Bohte | Fast and Efficient Asynchronous Neural Computation with Adapting Spiking
Neural Networks | 14 pages, 9 figures | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological neurons communicate with a sparing exchange of pulses - spikes. It
is an open question how real spiking neurons produce the kind of powerful
neural computation that is possible with deep artificial neural networks, using
only so very few spikes to communicate. Building on recent insights in
neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on
adaptive spiking neurons. These spiking neurons efficiently encode information
in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while
homeostatically optimizing their firing rate. In the proposed paradigm of
spiking neuron computation, neural adaptation is tightly coupled to synaptic
plasticity, to ensure that downstream neurons can correctly decode upstream
spiking neurons. We show that this type of network is inherently able to carry
out asynchronous and event-driven neural computation, while performing
identical to corresponding artificial neural networks (ANNs). In particular, we
show that these adaptive spiking neurons can be drop-in replacements for ReLU
neurons in standard feedforward ANNs comprised of such units. We demonstrate
that this can also be successfully applied to a ReLU based deep convolutional
neural network for classifying the MNIST dataset. The ASNN thus outperforms
current Spiking Neural Network (SNN) implementations, while responding (up
to) an order of magnitude faster and using an order of magnitude fewer spikes.
Additionally, in a streaming setting where frames are continuously classified,
we show that the ASNN requires substantially fewer network updates as compared
to the corresponding ANN.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 16:30:01 GMT"
}
] | 2016-09-08T00:00:00 | [
[
"Zambrano",
"Davide",
""
],
[
"Bohte",
"Sander M.",
""
]
] | TITLE: Fast and Efficient Asynchronous Neural Computation with Adapting Spiking
Neural Networks
ABSTRACT: Biological neurons communicate with a sparing exchange of pulses - spikes. It
is an open question how real spiking neurons produce the kind of powerful
neural computation that is possible with deep artificial neural networks, using
only so very few spikes to communicate. Building on recent insights in
neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on
adaptive spiking neurons. These spiking neurons efficiently encode information
in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while
homeostatically optimizing their firing rate. In the proposed paradigm of
spiking neuron computation, neural adaptation is tightly coupled to synaptic
plasticity, to ensure that downstream neurons can correctly decode upstream
spiking neurons. We show that this type of network is inherently able to carry
out asynchronous and event-driven neural computation, while performing
identical to corresponding artificial neural networks (ANNs). In particular, we
show that these adaptive spiking neurons can be drop-in replacements for ReLU
neurons in standard feedforward ANNs comprised of such units. We demonstrate
that this can also be successfully applied to a ReLU based deep convolutional
neural network for classifying the MNIST dataset. The ASNN thus outperforms
current Spiking Neural Network (SNN) implementations, while responding (up
to) an order of magnitude faster and using an order of magnitude fewer spikes.
Additionally, in a streaming setting where frames are continuously classified,
we show that the ASNN requires substantially fewer network updates as compared
to the corresponding ANN.
| no_new_dataset | 0.949201 |
1608.04664 | Stefanos Eleftheriadis | Stefanos Eleftheriadis, Ognjen Rudovic, Marc P. Deisenroth, Maja
Pantic | Variational Gaussian Process Auto-Encoder for Ordinal Prediction of
Facial Action Units | null | null | null | null | stat.ML cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the task of simultaneous feature fusion and modeling of discrete
ordinal outputs. We propose a novel Gaussian process(GP) auto-encoder modeling
approach. In particular, we introduce GP encoders to project multiple observed
features onto a latent space, while GP decoders are responsible for
reconstructing the original features. Inference is performed in a novel
variational framework, where the recovered latent representations are further
constrained by the ordinal output labels. In this way, we seamlessly integrate
the ordinal structure in the learned manifold, while attaining robust fusion of
the input features. We demonstrate the representation abilities of our model on
benchmark datasets from machine learning and affect analysis. We further
evaluate the model on the tasks of feature fusion and joint ordinal prediction
of facial action units. Our experiments demonstrate the benefits of the
proposed approach compared to the state of the art.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2016 16:31:39 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2016 21:25:48 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Eleftheriadis",
"Stefanos",
""
],
[
"Rudovic",
"Ognjen",
""
],
[
"Deisenroth",
"Marc P.",
""
],
[
"Pantic",
"Maja",
""
]
] | TITLE: Variational Gaussian Process Auto-Encoder for Ordinal Prediction of
Facial Action Units
ABSTRACT: We address the task of simultaneous feature fusion and modeling of discrete
ordinal outputs. We propose a novel Gaussian process(GP) auto-encoder modeling
approach. In particular, we introduce GP encoders to project multiple observed
features onto a latent space, while GP decoders are responsible for
reconstructing the original features. Inference is performed in a novel
variational framework, where the recovered latent representations are further
constrained by the ordinal output labels. In this way, we seamlessly integrate
the ordinal structure in the learned manifold, while attaining robust fusion of
the input features. We demonstrate the representation abilities of our model on
benchmark datasets from machine learning and affect analysis. We further
evaluate the model on the tasks of feature fusion and joint ordinal prediction
of facial action units. Our experiments demonstrate the benefits of the
proposed approach compared to the state of the art.
| no_new_dataset | 0.947672 |
1609.00489 | Truyen Tran | Morakot Choetkiertikul, Hoa Khanh Dam, Truyen Tran, Trang Pham, Aditya
Ghose and Tim Menzies | A deep learning model for estimating story points | Submitted to ICSE'17 | null | null | null | cs.SE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there has been substantial research in software analytics for effort
estimation in traditional software projects, little work has been done for
estimation in agile projects, especially estimating user stories or issues.
Story points are the most common unit of measure used for estimating the effort
involved in implementing a user story or resolving an issue. In this paper, we
offer for the \emph{first} time a comprehensive dataset for story points-based
estimation that contains 23,313 issues from 16 open source projects. We also
propose a prediction model for estimating story points based on a novel
combination of two powerful deep learning architectures: long short-term memory
and recurrent highway network. Our prediction system is \emph{end-to-end}
trainable from raw input data to prediction outcomes without any manual feature
engineering. An empirical evaluation demonstrates that our approach
consistently outperforms three common effort estimation baselines and two
alternatives in both Mean Absolute Error and the Standardized Accuracy.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 07:42:29 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Sep 2016 06:18:04 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Choetkiertikul",
"Morakot",
""
],
[
"Dam",
"Hoa Khanh",
""
],
[
"Tran",
"Truyen",
""
],
[
"Pham",
"Trang",
""
],
[
"Ghose",
"Aditya",
""
],
[
"Menzies",
"Tim",
""
]
] | TITLE: A deep learning model for estimating story points
ABSTRACT: Although there has been substantial research in software analytics for effort
estimation in traditional software projects, little work has been done for
estimation in agile projects, especially estimating user stories or issues.
Story points are the most common unit of measure used for estimating the effort
involved in implementing a user story or resolving an issue. In this paper, we
offer for the \emph{first} time a comprehensive dataset for story points-based
estimation that contains 23,313 issues from 16 open source projects. We also
propose a prediction model for estimating story points based on a novel
combination of two powerful deep learning architectures: long short-term memory
and recurrent highway network. Our prediction system is \emph{end-to-end}
trainable from raw input data to prediction outcomes without any manual feature
engineering. An empirical evaluation demonstrates that our approach
consistently outperforms three common effort estimation baselines and two
alternatives in both Mean Absolute Error and the Standardized Accuracy.
| new_dataset | 0.954942 |
1609.01326 | Weichao Qiu | Weichao Qiu, Alan Yuille | UnrealCV: Connecting Computer Vision to Unreal Engine | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer graphics can not only generate synthetic images and ground truth but
it also offers the possibility of constructing virtual worlds in which: (i) an
agent can perceive, navigate, and take actions guided by AI algorithms, (ii)
properties of the worlds can be modified (e.g., material and reflectance),
(iii) physical simulations can be performed, and (iv) algorithms can be learnt
and evaluated. But creating realistic virtual worlds is not easy. The game
industry, however, has spent a lot of effort creating 3D worlds, which a player
can interact with. So researchers can build on these resources to create
virtual worlds, provided we can access and modify the internal data structures
of the games. To enable this we created an open-source plugin UnrealCV
(http://unrealcv.github.io) for a popular game engine Unreal Engine 4 (UE4). We
show two applications: (i) a proof of concept image dataset, and (ii) linking
Caffe with the virtual world to test deep network algorithms.
| [
{
"version": "v1",
"created": "Mon, 5 Sep 2016 21:09:33 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Qiu",
"Weichao",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: UnrealCV: Connecting Computer Vision to Unreal Engine
ABSTRACT: Computer graphics can not only generate synthetic images and ground truth but
it also offers the possibility of constructing virtual worlds in which: (i) an
agent can perceive, navigate, and take actions guided by AI algorithms, (ii)
properties of the worlds can be modified (e.g., material and reflectance),
(iii) physical simulations can be performed, and (iv) algorithms can be learnt
and evaluated. But creating realistic virtual worlds is not easy. The game
industry, however, has spent a lot of effort creating 3D worlds, which a player
can interact with. So researchers can build on these resources to create
virtual worlds, provided we can access and modify the internal data structures
of the games. To enable this we created an open-source plugin UnrealCV
(http://unrealcv.github.io) for a popular game engine Unreal Engine 4 (UE4). We
show two applications: (i) a proof of concept image dataset, and (ii) linking
Caffe with the virtual world to test deep network algorithms.
| new_dataset | 0.894052 |
1609.01345 | Andr\'as B\'odis-Szomor\'u | Andr\'as B\'odis-Szomor\'u, Hayko Riemenschneider, Luc Van Gool | Efficient Volumetric Fusion of Airborne and Street-Side Data for Urban
Reconstruction | To appear in ICPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Airborne acquisition and on-road mobile mapping provide complementary 3D
information of an urban landscape: the former acquires roof structures, ground,
and vegetation at a large scale, but lacks the facade and street-side details,
while the latter is incomplete for higher floors and often totally misses out
on pedestrian-only areas or undriven districts. In this work, we introduce an
approach that efficiently unifies a detailed street-side Structure-from-Motion
(SfM) or Multi-View Stereo (MVS) point cloud and a coarser but more complete
point cloud from airborne acquisition in a joint surface mesh. We propose a
point cloud blending and a volumetric fusion based on ray casting across a 3D
tetrahedralization (3DT), extended with data reduction techniques to handle
large datasets. To the best of our knowledge, we are the first to adopt a 3DT
approach for airborne/street-side data fusion. Our pipeline exploits typical
characteristics of airborne and ground data, and produces a seamless,
watertight mesh that is both complete and detailed. Experiments on 3D urban
data from multiple sources and different data densities show the effectiveness
and benefits of our approach.
| [
{
"version": "v1",
"created": "Mon, 5 Sep 2016 22:28:49 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Bódis-Szomorú",
"András",
""
],
[
"Riemenschneider",
"Hayko",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Efficient Volumetric Fusion of Airborne and Street-Side Data for Urban
Reconstruction
ABSTRACT: Airborne acquisition and on-road mobile mapping provide complementary 3D
information of an urban landscape: the former acquires roof structures, ground,
and vegetation at a large scale, but lacks the facade and street-side details,
while the latter is incomplete for higher floors and often totally misses out
on pedestrian-only areas or undriven districts. In this work, we introduce an
approach that efficiently unifies a detailed street-side Structure-from-Motion
(SfM) or Multi-View Stereo (MVS) point cloud and a coarser but more complete
point cloud from airborne acquisition in a joint surface mesh. We propose a
point cloud blending and a volumetric fusion based on ray casting across a 3D
tetrahedralization (3DT), extended with data reduction techniques to handle
large datasets. To the best of our knowledge, we are the first to adopt a 3DT
approach for airborne/street-side data fusion. Our pipeline exploits typical
characteristics of airborne and ground data, and produces a seamless,
watertight mesh that is both complete and detailed. Experiments on 3D urban
data from multiple sources and different data densities show the effectiveness
and benefits of our approach.
| no_new_dataset | 0.95418 |
1609.01388 | Yale Song | Yale Song, Miriam Redi, Jordi Vallmitjana, Alejandro Jaimes | To Click or Not To Click: Automatic Selection of Beautiful Thumbnails
from Videos | To appear in CIKM 2016 | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thumbnails play such an important role in online videos. As the most
representative snapshot, they capture the essence of a video and provide the
first impression to the viewers; ultimately, a great thumbnail makes a video
more attractive to click and watch. We present an automatic thumbnail selection
system that exploits two important characteristics commonly associated with
meaningful and attractive thumbnails: high relevance to video content and
superior visual aesthetic quality. Our system selects attractive thumbnails by
analyzing various visual quality and aesthetic metrics of video frames, and
performs a clustering analysis to determine the relevance to video content,
thus making the resulting thumbnails more representative of the video. On the
task of predicting thumbnails chosen by professional video editors, we
demonstrate the effectiveness of our system against six baseline methods, using
a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition,
we study what makes a frame a good thumbnail by analyzing the statistical
relationship between thumbnail frames and non-thumbnail frames in terms of
various image quality features. Our study suggests that the selection of a good
thumbnail is highly correlated with objective visual quality metrics, such as
the frame texture and sharpness, implying the possibility of building an
automatic thumbnail selection system based on visual aesthetics.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2016 04:33:34 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Song",
"Yale",
""
],
[
"Redi",
"Miriam",
""
],
[
"Vallmitjana",
"Jordi",
""
],
[
"Jaimes",
"Alejandro",
""
]
] | TITLE: To Click or Not To Click: Automatic Selection of Beautiful Thumbnails
from Videos
ABSTRACT: Thumbnails play such an important role in online videos. As the most
representative snapshot, they capture the essence of a video and provide the
first impression to the viewers; ultimately, a great thumbnail makes a video
more attractive to click and watch. We present an automatic thumbnail selection
system that exploits two important characteristics commonly associated with
meaningful and attractive thumbnails: high relevance to video content and
superior visual aesthetic quality. Our system selects attractive thumbnails by
analyzing various visual quality and aesthetic metrics of video frames, and
performs a clustering analysis to determine the relevance to video content,
thus making the resulting thumbnails more representative of the video. On the
task of predicting thumbnails chosen by professional video editors, we
demonstrate the effectiveness of our system against six baseline methods, using
a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition,
we study what makes a frame a good thumbnail by analyzing the statistical
relationship between thumbnail frames and non-thumbnail frames in terms of
various image quality features. Our study suggests that the selection of a good
thumbnail is highly correlated with objective visual quality metrics, such as
the frame texture and sharpness, implying the possibility of building an
automatic thumbnail selection system based on visual aesthetics.
| no_new_dataset | 0.932699 |
1609.01414 | Vinay Kumar N | N. Vinay Kumar, Pratheek, V. Vijaya Kantha, K. N. Govindaraju, and D.
S. Guru | Features Fusion for Classification of Logos | 10 pages, 5 figures, 9 tables | Procedia Computer Science, Volume 85, 2016, Pages 370-379 | 10.1016/j.procs.2016.05.245 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a logo classification system based on the appearance of logo
images is proposed. The proposed classification system makes use of global
characteristics of logo images for classification. Color, texture, and shape of
a logo wholly describe the global characteristics of logo images. The various
combinations of these characteristics are used for classification. The
combination contains only with single feature or with fusion of two features or
fusion of all three features considered at a time respectively. Further, the
system categorizes the logo image into: a logo image with fully text or with
fully symbols or containing both symbols and texts.. The K-Nearest Neighbour
(K-NN) classifier is used for classification. Due to the lack of color logo
image dataset in the literature, the same is created consisting 5044 color logo
images. Finally, the performance of the classification system is evaluated
through accuracy, precision, recall and F-measure computed from the confusion
matrix. The experimental results show that the most promising results are
obtained for fusion of features.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2016 07:29:56 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Kumar",
"N. Vinay",
""
],
[
"Pratheek",
"",
""
],
[
"Kantha",
"V. Vijaya",
""
],
[
"Govindaraju",
"K. N.",
""
],
[
"Guru",
"D. S.",
""
]
] | TITLE: Features Fusion for Classification of Logos
ABSTRACT: In this paper, a logo classification system based on the appearance of logo
images is proposed. The proposed classification system makes use of global
characteristics of logo images for classification. Color, texture, and shape of
a logo wholly describe the global characteristics of logo images. The various
combinations of these characteristics are used for classification. The
combinations consist of either a single feature, a fusion of two features, or a
fusion of all three features considered at a time. Further, the
system categorizes the logo image as one containing only text, only
symbols, or both symbols and text. The K-Nearest Neighbour
(K-NN) classifier is used for classification. Due to the lack of a color logo
image dataset in the literature, one is created consisting of 5044 color logo
images. Finally, the performance of the classification system is evaluated
through accuracy, precision, recall and F-measure computed from the confusion
matrix. The experimental results show that the most promising results are
obtained for fusion of features.
| no_new_dataset | 0.867934 |
1609.01483 | Sverre Holm | Saeed Mehdizadeh, Sebastien Muller, Gabriel Kiss, Tonni F. Johansen,
and Sverre Holm | Joint Beamforming and Feature Detection for Enhanced Visualization of
Spinal Bone Surfaces in Ultrasound Images | 12 figures | null | null | null | physics.med-ph physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a framework for extracting the bone surface from B-mode images
employing the eigenspace minimum variance (ESMV) beamformer and a ridge
detection method. We show that an ESMV beamformer with a rank-1 signal subspace
can preserve the bone anatomy and enhance the edges, despite an image which is
less visually appealing due to some speckle pattern distortion. The beamformed
images are post-processed using the phase symmetry (PS) technique. We validate
this framework by registering the ultrasound images of a vertebra (in a water
bath) against the corresponding Computed Tomography (CT) dataset. The results
show a bone localization error of the same order of magnitude as the standard
delay-and-sum (DAS) technique, but with approximately 20% smaller standard
deviation (STD) of the image intensity distribution around the bone surface.
This indicates a sharper bone surface detection. Further, the noise level
inside the bone shadow is reduced by 60%. In in-vivo experiments, this
framework is used for imaging the spinal anatomy. We show that PS images
obtained from this beamformer setup have sharper bone boundaries in comparison
with the standard DAS ones, and they are reasonably well separated from the
surrounding soft tissue.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2016 10:56:13 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Mehdizadeh",
"Saeed",
""
],
[
"Muller",
"Sebastien",
""
],
[
"Kiss",
"Gabriel",
""
],
[
"Johansen",
"Tonni F.",
""
],
[
"Holm",
"Sverre",
""
]
] | TITLE: Joint Beamforming and Feature Detection for Enhanced Visualization of
Spinal Bone Surfaces in Ultrasound Images
ABSTRACT: We propose a framework for extracting the bone surface from B-mode images
employing the eigenspace minimum variance (ESMV) beamformer and a ridge
detection method. We show that an ESMV beamformer with a rank-1 signal subspace
can preserve the bone anatomy and enhance the edges, despite an image which is
less visually appealing due to some speckle pattern distortion. The beamformed
images are post-processed using the phase symmetry (PS) technique. We validate
this framework by registering the ultrasound images of a vertebra (in a water
bath) against the corresponding Computed Tomography (CT) dataset. The results
show a bone localization error of the same order of magnitude as the standard
delay-and-sum (DAS) technique, but with approximately 20% smaller standard
deviation (STD) of the image intensity distribution around the bone surface.
This indicates a sharper bone surface detection. Further, the noise level
inside the bone shadow is reduced by 60%. In in-vivo experiments, this
framework is used for imaging the spinal anatomy. We show that PS images
obtained from this beamformer setup have sharper bone boundaries in comparison
with the standard DAS ones, and they are reasonably well separated from the
surrounding soft tissue.
| no_new_dataset | 0.955981 |
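
The beamforming entry above relies on minimum-variance weights projected onto a rank-1 signal subspace. A rough numpy sketch of that weight computation under standard eigenspace-MV assumptions (sample covariance with diagonal loading; the steering vector, loading factor, and per-pixel data layout are assumptions, and the phase-symmetry post-processing is omitted):

```python
# Sketch of rank-1 eigenspace minimum-variance (ESMV) weights for one image point.
# x: (channels, snapshots) received data, a: steering vector; loading is an assumption.
import numpy as np

def esmv_weights(x, a, loading=1e-2, rank=1):
    n_ch, n_snap = x.shape
    R = x @ x.conj().T / n_snap                                # sample covariance
    R = R + loading * np.trace(R).real / n_ch * np.eye(n_ch)  # diagonal loading
    Ri_a = np.linalg.solve(R, a)
    w_mv = Ri_a / (a.conj() @ Ri_a)                            # standard MV (Capon) weights
    _, vecs = np.linalg.eigh(R)                                # eigenvalues ascending
    Es = vecs[:, -rank:]                                       # dominant (signal) subspace
    return Es @ (Es.conj().T @ w_mv)                           # project weights onto it

def beamform(x, a):
    w = esmv_weights(x, a)
    return w.conj() @ x                                        # beamformed output per snapshot
```
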
1609.01484 | Sujoy Chatterjee | Sujoy Chatterjee, Enakshi Kundu and Anirban Mukhopadhyay | A Markov Chain based Ensemble Method for Crowdsourced Clustering | Works in Progress, Fourth AAAI Conference on Human Computation and
Crowdsourcing (HCOMP 2016), Austin, TX, USA | null | null | null | cs.HC cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the presence of multiple clustering solutions for the same dataset, a
clustering ensemble approach aims to yield a single clustering of the dataset
by achieving a consensus among the input clustering solutions. The goal of this
consensus is to improve the quality of clustering. It has been seen that some
image clustering tasks cannot be easily solved by a computer. But if these
images can be outsourced to the general public (crowd workers) to group them
based on some similar features, and opinions are collected from them, then this
task can be managed in an efficient and time-effective way. In this work, the
power of the crowd has been used to annotate the images so that multiple
clustering solutions can be obtained from them, and thereafter a Markov chain
based ensemble method is introduced to reach a consensus among the multiple
clustering solutions.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2016 10:58:34 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Chatterjee",
"Sujoy",
""
],
[
"Kundu",
"Enakshi",
""
],
[
"Mukhopadhyay",
"Anirban",
""
]
] | TITLE: A Markov Chain based Ensemble Method for Crowdsourced Clustering
ABSTRACT: In the presence of multiple clustering solutions for the same dataset, a
clustering ensemble approach aims to yield a single clustering of the dataset
by achieving a consensus among the input clustering solutions. The goal of this
consensus is to improve the quality of clustering. It has been seen that some
image clustering tasks cannot be easily solved by a computer. But if these
images can be outsourced to the general public (crowd workers) to group them
based on some similar features, and opinions are collected from them, then this
task can be managed in an efficient and time-effective way. In this work, the
power of the crowd has been used to annotate the images so that multiple
clustering solutions can be obtained from them, and thereafter a Markov chain
based ensemble method is introduced to reach a consensus among the multiple
clustering solutions.
| no_new_dataset | 0.951729 |
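
The crowdsourced-clustering entry above only states that a Markov chain is built over the crowd's clusterings; one common baseline reading — not the paper's exact construction — is to form a co-association matrix, normalize it into a random-walk transition matrix over items, and cut the strongly co-associated graph for the consensus partition:

```python
# Generic cluster-ensemble sketch: co-association matrix -> random-walk transition
# matrix over items -> consensus partition from strongly co-associated components.
# This is a baseline reading, not the paper's specific Markov chain construction.
import numpy as np
from scipy.sparse.csgraph import connected_components

def consensus_cluster(labelings, threshold=0.5):
    L = np.asarray(labelings)                      # shape: (n_solutions, n_items)
    n = L.shape[1]
    C = np.zeros((n, n))
    for lab in L:                                  # C[i, j] = fraction of solutions
        C += (lab[:, None] == lab[None, :])        # grouping items i and j together
    C /= len(L)
    P = C / C.sum(axis=1, keepdims=True)           # row-stochastic ("Markov") matrix
    n_clusters, consensus = connected_components(C >= threshold, directed=False)
    return consensus, P
```
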
1609.01571 | Shaul Oron | Shaul Oron, Tali Dekel, Tianfan Xue, William T. Freeman, Shai Avidan | Best-Buddies Similarity - Robust Template Matching using Mutual Nearest
Neighbors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method for template matching in unconstrained
environments. Its essence is the Best-Buddies Similarity (BBS), a useful,
robust, and parameter-free similarity measure between two sets of points. BBS
is based on counting the number of Best-Buddies Pairs (BBPs)--pairs of points
in source and target sets, where each point is the nearest neighbor of the
other. BBS has several key features that make it robust against complex
geometric deformations and high levels of outliers, such as those arising from
background clutter and occlusions. We study these properties, provide a
statistical analysis that justifies them, and demonstrate the consistent
success of BBS on a challenging real-world dataset while using different types
of features.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2016 14:24:36 GMT"
}
] | 2016-09-07T00:00:00 | [
[
"Oron",
"Shaul",
""
],
[
"Dekel",
"Tali",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Freeman",
"William T.",
""
],
[
"Avidan",
"Shai",
""
]
] | TITLE: Best-Buddies Similarity - Robust Template Matching using Mutual Nearest
Neighbors
ABSTRACT: We propose a novel method for template matching in unconstrained
environments. Its essence is the Best-Buddies Similarity (BBS), a useful,
robust, and parameter-free similarity measure between two sets of points. BBS
is based on counting the number of Best-Buddies Pairs (BBPs)--pairs of points
in source and target sets, where each point is the nearest neighbor of the
other. BBS has several key features that make it robust against complex
geometric deformations and high levels of outliers, such as those arising from
background clutter and occlusions. We study these properties, provide a
statistical analysis that justifies them, and demonstrate the consistent
success of BBS on a challenging real-world dataset while using different types
of features.
| no_new_dataset | 0.953923 |
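
The template-matching entry above defines BBS by counting mutual nearest neighbors between two point sets (in the paper the points combine appearance and location; here the feature space is left generic). A short sketch of that count, normalized here by the smaller set size:

```python
# Sketch of the Best-Buddies Similarity between two point sets P (n, d) and Q (m, d):
# count pairs that are mutual nearest neighbors, normalized by the smaller set size.
import numpy as np

def best_buddies_similarity(P, Q):
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)   # pairwise distances
    nn_pq = d.argmin(axis=1)      # nearest point in Q for every point in P
    nn_qp = d.argmin(axis=0)      # nearest point in P for every point in Q
    bbp = sum(int(nn_qp[j] == i) for i, j in enumerate(nn_pq))  # mutual NN pairs
    return bbp / min(len(P), len(Q))
```
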
1601.04485 | Jose Velasco | Jose Velasco, Daniel Pizarro, Javier Macias-Guarasa and Afsaneh Asaei | TDOA Matrices: Algebraic Properties and their Application to Robust
Denoising with Missing Data | null | IEEE Transactions on Signal Processing ( Volume: 64, Issue: 20,
Oct.15, 15 2016 ) | 10.1109/TSP.2016.2593690 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measuring the Time delay of Arrival (TDOA) between a set of sensors is the
basic setup for many applications, such as localization or signal beamforming.
This paper presents the set of TDOA matrices, which are built from noise-free
TDOA measurements, not requiring knowledge of the sensor array geometry. We
prove that TDOA matrices are rank-two and have a special SVD decomposition that
leads to a compact linear parametric representation. Properties of TDOA
matrices are applied in this paper to perform denoising, by finding the TDOA
matrix closest to the matrix composed with noisy measurements. The paper shows
that this problem admits a closed-form solution for TDOA measurements
contaminated with Gaussian noise which extends to the case of having missing
data. The paper also proposes a novel robust denoising method resistant to
outliers and missing data, inspired by recent advances in robust low-rank
estimation. Experiments on synthetic and real datasets evaluate TDOA-based
localization, both in terms of TDOA estimation accuracy and localization error.
| [
{
"version": "v1",
"created": "Mon, 18 Jan 2016 12:01:45 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2016 14:00:01 GMT"
}
] | 2016-09-06T00:00:00 | [
[
"Velasco",
"Jose",
""
],
[
"Pizarro",
"Daniel",
""
],
[
"Macias-Guarasa",
"Javier",
""
],
[
"Asaei",
"Afsaneh",
""
]
] | TITLE: TDOA Matrices: Algebraic Properties and their Application to Robust
Denoising with Missing Data
ABSTRACT: Measuring the Time delay of Arrival (TDOA) between a set of sensors is the
basic setup for many applications, such as localization or signal beamforming.
This paper presents the set of TDOA matrices, which are built from noise-free
TDOA measurements, not requiring knowledge of the sensor array geometry. We
prove that TDOA matrices are rank-two and have a special SVD decomposition that
leads to a compact linear parametric representation. Properties of TDOA
matrices are applied in this paper to perform denoising, by finding the TDOA
matrix closest to the matrix composed with noisy measurements. The paper shows
that this problem admits a closed-form solution for TDOA measurements
contaminated with Gaussian noise which extends to the case of having missing
data. The paper also proposes a novel robust denoising method resistant to
outliers and missing data, inspired by recent advances in robust low-rank
estimation. Experiments on synthetic and real datasets evaluate TDOA-based
localization, both in terms of TDOA estimation accuracy and localization error.
| no_new_dataset | 0.940517 |
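
The TDOA entry above builds on matrices with entries T[i, j] = t_i - t_j; the least-squares projection of a noisy measurement matrix onto that set reduces to row means of its antisymmetric part (with the delays fixed to zero mean). A small sketch of that denoising step — my reading of the Gaussian-noise closed form, not code from the paper, with the missing-data and robust variants omitted:

```python
# Least-squares denoising of a noisy TDOA measurement matrix M: project onto the
# set of matrices T[i, j] = t[i] - t[j], fixing sum(t) = 0 (gauge choice).
import numpy as np

def denoise_tdoa(M):
    A = 0.5 * (M - M.T)              # antisymmetric part of the measurements
    t = A.mean(axis=1)               # row means give the least-squares delays
    return t[:, None] - t[None, :], t

# Tiny self-check on synthetic data (illustrative only).
rng = np.random.default_rng(0)
t_true = rng.normal(size=6)
t_true -= t_true.mean()
noisy = (t_true[:, None] - t_true[None, :]) + 0.05 * rng.normal(size=(6, 6))
T_hat, t_hat = denoise_tdoa(noisy)
```
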