id stringlengths 9-16 | submitter stringlengths 3-64 ⌀ | authors stringlengths 5-6.63k | title stringlengths 7-245 | comments stringlengths 1-482 ⌀ | journal-ref stringlengths 4-382 ⌀ | doi stringlengths 9-151 ⌀ | report-no stringclasses 984 values | categories stringlengths 5-108 | license stringclasses 9 values | abstract stringlengths 83-3.41k | versions listlengths 1-20 | update_date timestamp[s] 2007-05-23 to 2025-04-11 | authors_parsed sequencelengths 1-427 | prompt stringlengths 166-3.49k | label stringclasses 2 values | prob float64 0.5-0.98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
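The example rows below follow this schema: one record per arXiv paper, carrying the raw metadata fields plus a derived `prompt` column (TITLE and ABSTRACT), a two-class `label` (`new_dataset` / `no_new_dataset`), and a confidence score `prob`. As a minimal, hedged sketch of how such a table could be consumed, the snippet below loads it with the Hugging Face `datasets` library; the dataset identifier, split name, and probability threshold are illustrative placeholders, not values given by this dump.

```python
# Minimal sketch, assuming the table above is published as a Hugging Face dataset.
# "user/arxiv-new-dataset-labels" and the 0.9 threshold are illustrative placeholders.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# Each record mirrors the columns in the header: arXiv metadata plus the
# derived prompt, label, and prob fields.
example = ds[0]
print(example["id"], example["label"], example["prob"])

# Keep only records labeled as introducing a new dataset with high confidence.
new_datasets = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)
print(len(new_datasets))
```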
1703.02002 | Md Mizanur Rahman | Mahmudur Rahman, Mizanur Rahman, Bogdan Carbunar, Duen Horng Chau | FairPlay: Fraud and Malware Detection in Google Play | Proceedings of the 2016 SIAM International Conference on Data Mining.
Society for Industrial and Applied Mathematics, 2016 | null | null | null | cs.SI cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fraudulent behaviors in the Google Android app market fuel search rank abuse and
malware proliferation. We present FairPlay, a novel system that uncovers both
malware and search rank fraud apps by picking out trails that fraudsters leave
behind. To identify suspicious apps, FairPlay's PCF algorithm correlates review
activities and uniquely combines detected review relations with linguistic and
behavioral signals gleaned from longitudinal Google Play app data. We
contribute a new longitudinal app dataset to the community, which consists of
over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year.
FairPlay achieves over 95% accuracy in classifying gold standard datasets of
malware, fraudulent, and legitimate apps. We show that 75% of the identified
malware apps engage in search rank fraud. FairPlay discovers hundreds of
fraudulent apps that currently evade Google Bouncer's detection technology, and
reveals a new type of attack campaign in which users are harassed into writing
positive reviews and into installing and reviewing other apps.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2017 17:51:16 GMT"
}
] | 2017-03-07T00:00:00 | [
[
"Rahman",
"Mahmudur",
""
],
[
"Rahman",
"Mizanur",
""
],
[
"Carbunar",
"Bogdan",
""
],
[
"Chau",
"Duen Horng",
""
]
] | TITLE: FairPlay: Fraud and Malware Detection in Google Play
ABSTRACT: Fraudulent behaviors in the Google Android app market fuel search rank abuse and
malware proliferation. We present FairPlay, a novel system that uncovers both
malware and search rank fraud apps by picking out trails that fraudsters leave
behind. To identify suspicious apps, FairPlay's PCF algorithm correlates review
activities and uniquely combines detected review relations with linguistic and
behavioral signals gleaned from longitudinal Google Play app data. We
contribute a new longitudinal app dataset to the community, which consists of
over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year.
FairPlay achieves over 95% accuracy in classifying gold standard datasets of
malware, fraudulent, and legitimate apps. We show that 75% of the identified
malware apps engage in search rank fraud. FairPlay discovers hundreds of
fraudulent apps that currently evade Google Bouncer's detection technology, and
reveals a new type of attack campaign in which users are harassed into writing
positive reviews and into installing and reviewing other apps.
| new_dataset | 0.949153 |
1703.02019 | Gourav Ganesh Shenoy | Gourav G. Shenoy, Erika H. Dsouza, Sandra K\"ubler | Performing Stance Detection on Twitter Data using Computational
Linguistics Techniques | 8 pages, 9 figures, 5 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As humans, we can often detect from a person's utterances whether he or she is
in favor of or against a given target entity (topic, product, another person,
etc.). But from the perspective of a computer, we need means to automatically
deduce the stance of the tweeter, given just the tweet text. In this paper, we
present our results of performing stance detection on Twitter data using a
supervised approach. We begin by extracting bag-of-words features to perform
classification using TIMBL, then try to optimize the features to improve
stance detection accuracy, followed by extending the dataset with two sets of
lexicons - arguing and MPQA subjectivity; next we explore the MALT parser and
construct features using its dependency triples; finally, we perform analysis
using the Scikit-learn Random Forest implementation.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2017 18:44:49 GMT"
}
] | 2017-03-07T00:00:00 | [
[
"Shenoy",
"Gourav G.",
""
],
[
"Dsouza",
"Erika H.",
""
],
[
"Kübler",
"Sandra",
""
]
] | TITLE: Performing Stance Detection on Twitter Data using Computational
Linguistics Techniques
ABSTRACT: As humans, we can often detect from a person's utterances whether he
or she is in favor of or against a given target entity (topic, product, another
person, etc.). But from the perspective of a computer, we need means to
automatically deduce the stance of the tweeter, given just the tweet text. In
this paper, we present our results of performing stance detection on Twitter
data using a supervised approach. We begin by extracting bag-of-words features
to perform classification using TIMBL, then try to optimize the features to
improve stance detection accuracy, followed by extending the dataset with two
sets of lexicons - arguing and MPQA subjectivity; next we explore the MALT
parser and construct features using its dependency triples; finally, we perform
analysis using the Scikit-learn Random Forest implementation.
| no_new_dataset | 0.944791 |
1606.04052 | Julien Perez | Julien Perez and Fei Liu | Dialog state tracking, a machine reading approach using Memory Network | 10 pages, 2 figures, 4 tables | null | null | null | cs.CL cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an end-to-end dialog system, the aim of dialog state tracking is to
accurately estimate a compact representation of the current dialog status from
a sequence of noisy observations produced by the speech recognition and the
natural language understanding modules. This paper introduces a novel method of
dialog state tracking based on the general paradigm of machine reading and
proposes to solve it using an End-to-End Memory Network, MemN2N, a
memory-enhanced neural network architecture. We evaluate the proposed approach
on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has
been converted for the occasion in order to frame the hidden state variable
inference as a question-answering task based on a sequence of utterances
extracted from a dialog. We show that the proposed tracker gives encouraging
results. Then, we propose to extend the DSTC-2 dataset with specific reasoning
capability requirements such as counting, list maintenance, yes-no question
answering and indefinite knowledge management. Finally, we present encouraging
results using our proposed MemN2N based tracking model.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 18:09:40 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2016 06:42:04 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Jun 2016 00:07:41 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Oct 2016 19:23:00 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Mar 2017 20:17:23 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Perez",
"Julien",
""
],
[
"Liu",
"Fei",
""
]
] | TITLE: Dialog state tracking, a machine reading approach using Memory Network
ABSTRACT: In an end-to-end dialog system, the aim of dialog state tracking is to
accurately estimate a compact representation of the current dialog status from
a sequence of noisy observations produced by the speech recognition and the
natural language understanding modules. This paper introduces a novel method of
dialog state tracking based on the general paradigm of machine reading and
proposes to solve it using an End-to-End Memory Network, MemN2N, a
memory-enhanced neural network architecture. We evaluate the proposed approach
on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has
been converted for the occasion in order to frame the hidden state variable
inference as a question-answering task based on a sequence of utterances
extracted from a dialog. We show that the proposed tracker gives encouraging
results. Then, we propose to extend the DSTC-2 dataset with specific reasoning
capability requirements such as counting, list maintenance, yes-no question
answering and indefinite knowledge management. Finally, we present encouraging
results using our proposed MemN2N based tracking model.
| no_new_dataset | 0.942981 |
1608.08128 | Xavier Gir\'o-i-Nieto | Alberto Montes, Amaia Salvador, Santiago Pascual and Xavier
Giro-i-Nieto | Temporal Activity Detection in Untrimmed Videos with Recurrent Neural
Networks | Best Poster Award at the 1st NIPS Workshop on Large Scale Computer
Vision Systems (Barcelona, December 2016). Source code available at
https://imatge-upc.github.io/activitynet-2016-cvprw/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This thesis explores different approaches using Convolutional and Recurrent
Neural Networks to classify and temporally localize activities in videos, and
proposes an implementation to achieve this. As a first step, features are
extracted from video frames using a state-of-the-art 3D Convolutional Neural
Network. These features are fed into a recurrent neural network that solves the
activity classification and temporal localization tasks in a simple and
flexible way. Different architectures and configurations have been tested in
order to achieve the best performance and learning on the provided video
dataset. In addition, different kinds of post-processing of the trained
network's output have been studied to achieve better results on the temporal
localization of activities in the videos. The results produced by the neural
network developed in this thesis have been submitted to the ActivityNet
Challenge 2016 at CVPR, achieving competitive results with a simple and
flexible architecture.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 16:14:52 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2016 16:25:11 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2017 23:07:00 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Montes",
"Alberto",
""
],
[
"Salvador",
"Amaia",
""
],
[
"Pascual",
"Santiago",
""
],
[
"Giro-i-Nieto",
"Xavier",
""
]
] | TITLE: Temporal Activity Detection in Untrimmed Videos with Recurrent Neural
Networks
ABSTRACT: This thesis explores different approaches using Convolutional and Recurrent
Neural Networks to classify and temporally localize activities in videos, and
proposes an implementation to achieve this. As a first step, features are
extracted from video frames using a state-of-the-art 3D Convolutional Neural
Network. These features are fed into a recurrent neural network that solves the
activity classification and temporal localization tasks in a simple and
flexible way. Different architectures and configurations have been tested in
order to achieve the best performance and learning on the provided video
dataset. In addition, different kinds of post-processing of the trained
network's output have been studied to achieve better results on the temporal
localization of activities in the videos. The results produced by the neural
network developed in this thesis have been submitted to the ActivityNet
Challenge 2016 at CVPR, achieving competitive results with a simple and
flexible architecture.
| no_new_dataset | 0.952042 |
1608.08139 | Xavier Gir\'o-i-Nieto | Cristian Reyes, Eva Mohedano, Kevin McGuinness, Noel E. O'Connor and
Xavier Giro-i-Nieto | Where is my Phone ? Personal Object Retrieval from Egocentric Images | Lifelogging Tools and Applications Workshop (LTA'16) at ACM
Multimedia 2016 | null | null | null | cs.IR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a retrieval pipeline and evaluation scheme for the problem
of finding the last appearance of personal objects in a large dataset of images
captured from a wearable camera. Each personal object is modelled by a small
set of images that define a query for a visual search engine. The retrieved
results are reranked considering the temporal timestamps of the images to
increase the relevance of the later detections. Finally, a temporal
interleaving of the results is introduced for robustness against false
detections. The Mean Reciprocal Rank is proposed as a metric to evaluate this
problem. This application could help in developing personal assistants
capable of helping users when they do not remember where they left their
personal belongings.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 16:41:52 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2017 23:13:09 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Reyes",
"Cristian",
""
],
[
"Mohedano",
"Eva",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"O'Connor",
"Noel E.",
""
],
[
"Giro-i-Nieto",
"Xavier",
""
]
] | TITLE: Where is my Phone ? Personal Object Retrieval from Egocentric Images
ABSTRACT: This work presents a retrieval pipeline and evaluation scheme for the problem
of finding the last appearance of personal objects in a large dataset of images
captured from a wearable camera. Each personal object is modelled by a small
set of images that define a query for a visual search engine. The retrieved
results are reranked considering the temporal timestamps of the images to
increase the relevance of the later detections. Finally, a temporal
interleaving of the results is introduced for robustness against false
detections. The Mean Reciprocal Rank is proposed as a metric to evaluate this
problem. This application could help in developing personal assistants
capable of helping users when they do not remember where they left their
personal belongings.
| no_new_dataset | 0.936401 |
1610.05755 | Nicolas Papernot | Nicolas Papernot, Mart\'in Abadi, \'Ulfar Erlingsson, Ian Goodfellow,
Kunal Talwar | Semi-supervised Knowledge Transfer for Deep Learning from Private
Training Data | Accepted to ICLR 17 as an oral | null | null | null | stat.ML cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some machine learning applications involve training data that is sensitive,
such as the medical histories of patients in a clinical trial. A model may
inadvertently and implicitly store some of its training data; careful analysis
of the model may therefore reveal sensitive information.
To address this problem, we demonstrate a generally applicable approach to
providing strong privacy guarantees for training data: Private Aggregation of
Teacher Ensembles (PATE). The approach combines, in a black-box fashion,
multiple models trained with disjoint datasets, such as records from different
subsets of users. Because they rely directly on sensitive data, these models
are not published, but instead used as "teachers" for a "student" model. The
student learns to predict an output chosen by noisy voting among all of the
teachers, and cannot directly access an individual teacher or the underlying
data or parameters. The student's privacy properties can be understood both
intuitively (since no single teacher and thus no single dataset dictates the
student's training) and formally, in terms of differential privacy. These
properties hold even if an adversary can not only query the student but also
inspect its internal workings.
Compared with previous work, the approach imposes only weak assumptions on
how teachers are trained: it applies to any model, including non-convex models
like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and
SVHN thanks to an improved privacy analysis and semi-supervised learning.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 19:37:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2016 13:18:56 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Nov 2016 00:18:03 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Mar 2017 18:56:43 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Papernot",
"Nicolas",
""
],
[
"Abadi",
"Martín",
""
],
[
"Erlingsson",
"Úlfar",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Talwar",
"Kunal",
""
]
] | TITLE: Semi-supervised Knowledge Transfer for Deep Learning from Private
Training Data
ABSTRACT: Some machine learning applications involve training data that is sensitive,
such as the medical histories of patients in a clinical trial. A model may
inadvertently and implicitly store some of its training data; careful analysis
of the model may therefore reveal sensitive information.
To address this problem, we demonstrate a generally applicable approach to
providing strong privacy guarantees for training data: Private Aggregation of
Teacher Ensembles (PATE). The approach combines, in a black-box fashion,
multiple models trained with disjoint datasets, such as records from different
subsets of users. Because they rely directly on sensitive data, these models
are not published, but instead used as "teachers" for a "student" model. The
student learns to predict an output chosen by noisy voting among all of the
teachers, and cannot directly access an individual teacher or the underlying
data or parameters. The student's privacy properties can be understood both
intuitively (since no single teacher and thus no single dataset dictates the
student's training) and formally, in terms of differential privacy. These
properties hold even if an adversary can not only query the student but also
inspect its internal workings.
Compared with previous work, the approach imposes only weak assumptions on
how teachers are trained: it applies to any model, including non-convex models
like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and
SVHN thanks to an improved privacy analysis and semi-supervised learning.
| no_new_dataset | 0.939748 |
1611.03427 | Keerthiram Murugesan | Keerthiram Murugesan, Jaime Carbonell | Multi-Task Multiple Kernel Relationship Learning | 17th SIAM International Conference on Data Mining (SDM 2017),
Houston, Texas, USA, 2017 | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel multitask multiple kernel learning framework that
efficiently learns the kernel weights leveraging the relationship across
multiple tasks. The idea is to automatically infer this task relationship in
the \textit{RKHS} space corresponding to the given base kernels. The problem is
formulated as a regularization-based approach called \textit{Multi-Task
Multiple Kernel Relationship Learning} (\textit{MK-MTRL}), which models the
task relationship matrix from the weights learned from latent feature spaces of
task-specific base kernels. Unlike in previous work, the proposed formulation
allows one to incorporate prior knowledge for simultaneously learning several
related tasks. We propose an alternating minimization algorithm to learn the
model parameters, kernel weights and task relationship matrix. In order to
tackle large-scale problems, we further propose a two-stage \textit{MK-MTRL}
online learning algorithm and show that it significantly reduces the
computational time, and also achieves performance comparable to that of the
joint learning framework. Experimental results on benchmark datasets show that
the proposed formulations outperform several state-of-the-art multitask
learning methods.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 17:54:22 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2017 22:09:54 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Murugesan",
"Keerthiram",
""
],
[
"Carbonell",
"Jaime",
""
]
] | TITLE: Multi-Task Multiple Kernel Relationship Learning
ABSTRACT: This paper presents a novel multitask multiple kernel learning framework that
efficiently learns the kernel weights leveraging the relationship across
multiple tasks. The idea is to automatically infer this task relationship in
the \textit{RKHS} space corresponding to the given base kernels. The problem is
formulated as a regularization-based approach called \textit{Multi-Task
Multiple Kernel Relationship Learning} (\textit{MK-MTRL}), which models the
task relationship matrix from the weights learned from latent feature spaces of
task-specific base kernels. Unlike in previous work, the proposed formulation
allows one to incorporate prior knowledge for simultaneously learning several
related tasks. We propose an alternating minimization algorithm to learn the
model parameters, kernel weights and task relationship matrix. In order to
tackle large-scale problems, we further propose a two-stage \textit{MK-MTRL}
online learning algorithm and show that it significantly reduces the
computational time, and also achieves performance comparable to that of the
joint learning framework. Experimental results on benchmark datasets show that
the proposed formulations outperform several state-of-the-art multitask
learning methods.
| no_new_dataset | 0.940953 |
1703.00123 | Jian Dai | Jian Dai, Fei He, Wang-Chien Lee, Gang Chen, Beng Chin Ooi | DTNC: A New Server-side Data Cleansing Framework for Cellular Trajectory
Services | null | null | null | null | cs.NI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is essential for the cellular network operators to provide cellular
location services to meet the needs of their users and mobile applications.
However, cellular locations, estimated by network-based methods at the
server-side, suffer from {\it high spatial errors} and {\it arbitrary missing
locations}. Moreover, auxiliary sensor data at the client-side are not
available to the operators. In this paper, we study the {\em cellular
trajectory cleansing problem} and propose an innovative data cleansing
framework, namely \underline{D}ynamic \underline{T}ransportation
\underline{N}etwork based \underline{C}leansing (DTNC) to improve the quality
of cellular locations delivered in online cellular trajectory services. We
maintain a dynamic transportation network (DTN), which associates a network
edge with a probabilistic distribution of travel times updated continuously. In
addition, we devise an object motion model, namely, {\em travel-time-aware
hidden semi-Markov model} ({\em TT-HsMM}), which is used to infer the most
probable traveled edge sequences on DTN. To validate our ideas, we conduct a
comprehensive evaluation using real-world cellular data provided by a major
cellular network operator and a GPS dataset collected by smartphones as the
ground truth. In the experiments, DTNC displays significant advantages over six
state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 03:41:40 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2017 07:42:42 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Dai",
"Jian",
""
],
[
"He",
"Fei",
""
],
[
"Lee",
"Wang-Chien",
""
],
[
"Chen",
"Gang",
""
],
[
"Ooi",
"Beng Chin",
""
]
] | TITLE: DTNC: A New Server-side Data Cleansing Framework for Cellular Trajectory
Services
ABSTRACT: It is essential for the cellular network operators to provide cellular
location services to meet the needs of their users and mobile applications.
However, cellular locations, estimated by network-based methods at the
server-side, suffer from {\it high spatial errors} and {\it arbitrary missing
locations}. Moreover, auxiliary sensor data at the client-side are not
available to the operators. In this paper, we study the {\em cellular
trajectory cleansing problem} and propose an innovative data cleansing
framework, namely \underline{D}ynamic \underline{T}ransportation
\underline{N}etwork based \underline{C}leansing (DTNC) to improve the quality
of cellular locations delivered in online cellular trajectory services. We
maintain a dynamic transportation network (DTN), which associates a network
edge with a probabilistic distribution of travel times updated continuously. In
addition, we devise an object motion model, namely, {\em travel-time-aware
hidden semi-Markov model} ({\em TT-HsMM}), which is used to infer the most
probable traveled edge sequences on DTN. To validate our ideas, we conduct a
comprehensive evaluation using real-world cellular data provided by a major
cellular network operator and a GPS dataset collected by smartphones as the
ground truth. In the experiments, DTNC displays significant advantages over six
state-of-the-art techniques.
| no_new_dataset | 0.945951 |
1703.00948 | Preeti Bhargava | Nemanja Spasojevic, Preeti Bhargava, Guoning Hu | DAWT: Densely Annotated Wikipedia Texts across multiple languages | 8 pages, 3 figures, 7 tables, WWW2017, WWW 2017 Companion proceedings | null | 10.1145/3041021.3053367 | null | cs.IR cs.AI cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we open up the DAWT dataset - Densely Annotated Wikipedia Texts
across multiple languages. The annotations include labeled text mentions
mapping to entities (represented by their Freebase machine ids) as well as the
type of the entity. The data set contains a total of 13.6M articles, 5.0B tokens,
13.8M mention entity co-occurrences. DAWT contains 4.8 times more anchor text
to entity links than originally present in the Wikipedia markup. Moreover, it
spans several languages including English, Spanish, Italian, German, French and
Arabic. We also present the methodology used to generate the dataset which
enriches Wikipedia markup in order to increase number of links. In addition to
the main dataset, we open up several derived datasets including mention entity
co-occurrence counts and entity embeddings, as well as mappings between
Freebase ids and Wikidata item ids. We also discuss two applications of these
datasets and hope that opening them up would prove useful for the Natural
Language Processing and Information Retrieval communities, as well as
facilitate multi-lingual research.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 20:55:20 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Spasojevic",
"Nemanja",
""
],
[
"Bhargava",
"Preeti",
""
],
[
"Hu",
"Guoning",
""
]
] | TITLE: DAWT: Densely Annotated Wikipedia Texts across multiple languages
ABSTRACT: In this work, we open up the DAWT dataset - Densely Annotated Wikipedia Texts
across multiple languages. The annotations include labeled text mentions
mapping to entities (represented by their Freebase machine ids) as well as the
type of the entity. The data set contains a total of 13.6M articles, 5.0B tokens,
13.8M mention entity co-occurrences. DAWT contains 4.8 times more anchor text
to entity links than originally present in the Wikipedia markup. Moreover, it
spans several languages including English, Spanish, Italian, German, French and
Arabic. We also present the methodology used to generate the dataset which
enriches Wikipedia markup in order to increase number of links. In addition to
the main dataset, we open up several derived datasets including mention entity
co-occurrence counts and entity embeddings, as well as mappings between
Freebase ids and Wikidata item ids. We also discuss two applications of these
datasets and hope that opening them up would prove useful for the Natural
Language Processing and Information Retrieval communities, as well as
facilitate multi-lingual research.
| new_dataset | 0.964321 |
1703.00989 | Reza Bonyadi Reza Bonyadi | Mohammad Reza Bonyadi, Quang M. Tieng, David C. Reutens | Optimization of distributions differences for classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce a new classification algorithm called Optimization
of Distributions Differences (ODD). The algorithm aims to find a transformation
from the feature space to a new space where the instances in the same class are
as close as possible to one another while the gravity centers of these classes
are as far as possible from one another. This aim is formulated as a
multiobjective optimization problem that is solved by a hybrid of an
evolutionary strategy and the Quasi-Newton method. The choice of the
transformation function is flexible and could be any continuous space function.
We experiment with a linear and a non-linear transformation in this paper. We
show that the algorithm can outperform 6 other state-of-the-art classification
methods, namely naive Bayes, support vector machines, linear discriminant
analysis, multi-layer perceptrons, decision trees, and k-nearest neighbors, in
12 standard classification datasets. Our results show that the method is less
sensitive to an imbalanced number of instances compared to these methods. We
also show that ODD maintains its performance better than other classification
methods in these datasets, hence, offers a better generalization ability.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 23:42:33 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Bonyadi",
"Mohammad Reza",
""
],
[
"Tieng",
"Quang M.",
""
],
[
"Reutens",
"David C.",
""
]
] | TITLE: Optimization of distributions differences for classification
ABSTRACT: In this paper we introduce a new classification algorithm called Optimization
of Distributions Differences (ODD). The algorithm aims to find a transformation
from the feature space to a new space where the instances in the same class are
as close as possible to one another while the gravity centers of these classes
are as far as possible from one another. This aim is formulated as a
multiobjective optimization problem that is solved by a hybrid of an
evolutionary strategy and the Quasi-Newton method. The choice of the
transformation function is flexible and could be any continuous space function.
We experiment with a linear and a non-linear transformation in this paper. We
show that the algorithm can outperform 6 other state-of-the-art classification
methods, namely naive Bayes, support vector machines, linear discriminant
analysis, multi-layer perceptrons, decision trees, and k-nearest neighbors, in
12 standard classification datasets. Our results show that the method is less
sensitive to an imbalanced number of instances compared to these methods. We
also show that ODD maintains its performance better than other classification
methods in these datasets, hence, offers a better generalization ability.
| no_new_dataset | 0.94625 |
1703.00994 | Keerthiram Murugesan | Keerthiram Murugesan, Jaime Carbonell, Yiming Yang | Co-Clustering for Multitask Learning | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a new multitask learning framework that learns a shared
representation among the tasks, incorporating both task and feature clusters.
The jointly-induced clusters yield a shared latent subspace where task
relationships are learned more effectively and more generally than in
state-of-the-art multitask learning methods. The proposed general framework
enables the derivation of more specific or restricted state-of-the-art
multitask methods. The paper also proposes a highly-scalable multitask learning
algorithm, based on the new framework, using conjugate gradient descent and
generalized \textit{Sylvester equations}. Experimental results on synthetic and
benchmark datasets show that the proposed method systematically outperforms
several state-of-the-art multitask learning methods.
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 00:03:14 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Murugesan",
"Keerthiram",
""
],
[
"Carbonell",
"Jaime",
""
],
[
"Yang",
"Yiming",
""
]
] | TITLE: Co-Clustering for Multitask Learning
ABSTRACT: This paper presents a new multitask learning framework that learns a shared
representation among the tasks, incorporating both task and feature clusters.
The jointly-induced clusters yield a shared latent subspace where task
relationships are learned more effectively and more generally than in
state-of-the-art multitask learning methods. The proposed general framework
enables the derivation of more specific or restricted state-of-the-art
multitask methods. The paper also proposes a highly-scalable multitask learning
algorithm, based on the new framework, using conjugate gradient descent and
generalized \textit{Sylvester equations}. Experimental results on synthetic and
benchmark datasets show that the proposed method systematically outperforms
several state-of-the-art multitask learning methods.
| no_new_dataset | 0.94699 |
1703.01049 | Ayan Sinha | Ayan Sinha, David F. Gleich and Karthik Ramani | Deconvolving Feedback Loops in Recommender Systems | Neural Information Processing Systems, 2016 | null | null | null | cs.SI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering is a popular technique to infer users' preferences on
new content based on the collective information of all users' preferences.
Recommender systems then use this information to make personalized suggestions
to users. When users accept these recommendations it creates a feedback loop in
the recommender system, and these loops iteratively influence the collaborative
filtering algorithm's predictions over time. We investigate whether it is
possible to identify items affected by these feedback loops. We state
sufficient assumptions to deconvolve the feedback loops while keeping the
inverse solution tractable. We furthermore develop a metric to unravel the
recommender system's influence on the entire user-item rating matrix. We use
this metric on synthetic and real-world datasets to (1) identify the extent to
which the recommender system affects the final rating matrix, (2) rank
frequently recommended items, and (3) distinguish whether a user's rated item
was recommended or an intrinsic preference. Our results indicate that it is
possible to recover the ratings matrix of intrinsic user preferences using a
single snapshot of the ratings matrix without any temporal information.
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 06:27:52 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Sinha",
"Ayan",
""
],
[
"Gleich",
"David F.",
""
],
[
"Ramani",
"Karthik",
""
]
] | TITLE: Deconvolving Feedback Loops in Recommender Systems
ABSTRACT: Collaborative filtering is a popular technique to infer users' preferences on
new content based on the collective information of all users' preferences.
Recommender systems then use this information to make personalized suggestions
to users. When users accept these recommendations it creates a feedback loop in
the recommender system, and these loops iteratively influence the collaborative
filtering algorithm's predictions over time. We investigate whether it is
possible to identify items affected by these feedback loops. We state
sufficient assumptions to deconvolve the feedback loops while keeping the
inverse solution tractable. We furthermore develop a metric to unravel the
recommender system's influence on the entire user-item rating matrix. We use
this metric on synthetic and real-world datasets to (1) identify the extent to
which the recommender system affects the final rating matrix, (2) rank
frequently recommended items, and (3) distinguish whether a user's rated item
was recommended or an intrinsic preference. Our results indicate that it is
possible to recover the ratings matrix of intrinsic user preferences using a
single snapshot of the ratings matrix without any temporal information.
| no_new_dataset | 0.94743 |
1703.01226 | Zakaria Laskar | Zakaria Laskar, and Juho Kannala | Context Aware Query Image Representation for Particular Object Retrieval | 14 pages, Extended version of a manuscript submitted to SCIA 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current models of image representation based on Convolutional Neural
Networks (CNN) have shown tremendous performance in image retrieval. Such
models are inspired by the information flow along the visual pathway in the
human visual cortex. We propose that in the field of particular object
retrieval, the process of extracting CNN representations from query images with
a given region of interest (ROI) can also be modelled by taking inspiration
from human vision. In particular, we show that making the CNN pay attention
to the ROI while extracting the query image representation leads to significant
improvement over the baseline methods on the challenging Oxford5k and Paris6k
datasets. Furthermore, we propose an extension to a recently introduced
encoding method for CNN representations, regional maximum activations of
convolutions (R-MAC). The proposed extension weights the regional
representations using a novel saliency measure prior to aggregation. This leads
to further improvement in retrieval accuracy.
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 16:14:53 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Laskar",
"Zakaria",
""
],
[
"Kannala",
"Juho",
""
]
] | TITLE: Context Aware Query Image Representation for Particular Object Retrieval
ABSTRACT: The current models of image representation based on Convolutional Neural
Networks (CNN) have shown tremendous performance in image retrieval. Such
models are inspired by the information flow along the visual pathway in the
human visual cortex. We propose that in the field of particular object
retrieval, the process of extracting CNN representations from query images with
a given region of interest (ROI) can also be modelled by taking inspiration
from human vision. In particular, we show that making the CNN pay attention
to the ROI while extracting the query image representation leads to significant
improvement over the baseline methods on the challenging Oxford5k and Paris6k
datasets. Furthermore, we propose an extension to a recently introduced
encoding method for CNN representations, regional maximum activations of
convolutions (R-MAC). The proposed extension weights the regional
representations using a novel saliency measure prior to aggregation. This leads
to further improvement in retrieval accuracy.
| no_new_dataset | 0.948058 |
1703.01229 | Lingxi Xie | Yan Wang, Lingxi Xie, Ya Zhang, Wenjun Zhang, Alan Yuille | Deep Collaborative Learning for Visual Recognition | Submitted to CVPR 2017 (10 pages, 5 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are playing an important role in state-of-the-art visual
recognition. To represent high-level visual concepts, modern networks are
equipped with large convolutional layers, which use a large number of filters
and contribute significantly to model complexity. For example, more than half
of the weights of AlexNet are stored in the first fully-connected layer (4,096
filters).
We formulate the function of a convolutional layer as learning a large visual
vocabulary, and propose an alternative way, namely Deep Collaborative Learning
(DCL), to reduce the computational complexity. We replace a convolutional layer
with a two-stage DCL module, in which we first construct a couple of smaller
convolutional layers individually, and then fuse them at each spatial position
to consider feature co-occurrence. In mathematics, DCL can be explained as an
efficient way of learning compositional visual concepts, in which the
vocabulary size increases exponentially while the model complexity only
increases linearly. We evaluate DCL on a wide range of visual recognition
tasks, including a series of multi-digit number classification datasets, and
some generic image classification datasets such as SVHN, CIFAR and ILSVRC2012.
We apply DCL to several state-of-the-art network structures, improving the
recognition accuracy meanwhile reducing the number of parameters (16.82% fewer
in AlexNet).
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 16:17:45 GMT"
}
] | 2017-03-06T00:00:00 | [
[
"Wang",
"Yan",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Zhang",
"Ya",
""
],
[
"Zhang",
"Wenjun",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Deep Collaborative Learning for Visual Recognition
ABSTRACT: Deep neural networks are playing an important role in state-of-the-art visual
recognition. To represent high-level visual concepts, modern networks are
equipped with large convolutional layers, which use a large number of filters
and contribute significantly to model complexity. For example, more than half
of the weights of AlexNet are stored in the first fully-connected layer (4,096
filters).
We formulate the function of a convolutional layer as learning a large visual
vocabulary, and propose an alternative way, namely Deep Collaborative Learning
(DCL), to reduce the computational complexity. We replace a convolutional layer
with a two-stage DCL module, in which we first construct a couple of smaller
convolutional layers individually, and then fuse them at each spatial position
to consider feature co-occurrence. In mathematics, DCL can be explained as an
efficient way of learning compositional visual concepts, in which the
vocabulary size increases exponentially while the model complexity only
increases linearly. We evaluate DCL on a wide range of visual recognition
tasks, including a series of multi-digit number classification datasets, and
some generic image classification datasets such as SVHN, CIFAR and ILSVRC2012.
We apply DCL to several state-of-the-art network structures, improving the
recognition accuracy meanwhile reducing the number of parameters (16.82% fewer
in AlexNet).
| no_new_dataset | 0.949012 |
1407.1507 | Sebastian Deorowicz | Sebastian Deorowicz and Marek Kokot and Szymon Grabowski and Agnieszka
Debudaj-Grabysz | KMC 2: Fast and resource-frugal $k$-mer counting | null | Bioinformatics 31 (10): 1569-1576 (2015) | 10.1093/bioinformatics/btv022 | null | cs.DS cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Building the histogram of occurrences of every $k$-symbol long
substring of nucleotide data is a standard step in many bioinformatics
applications, known under the name of $k$-mer counting. Its applications
include developing de Bruijn graph genome assemblers, fast multiple sequence
alignment and repeat detection. The tremendous amounts of NGS data require fast
algorithms for $k$-mer counting, preferably using moderate amounts of memory.
Results: We present a novel method for $k$-mer counting that, on large datasets, is
at least twice as fast as the strongest competitors (Jellyfish~2, KMC~1), using
about 12\,GB (or less) of RAM memory. Our disk-based method bears some
resemblance to MSPKmerCounter, yet replacing the original minimizers with
signatures (a carefully selected subset of all minimizers) and using $(k,
x)$-mers allows to significantly reduce the I/O, and a highly parallel overall
architecture allows to achieve unprecedented processing speeds. For example,
KMC~2 allows to count the 28-mers of a human reads collection with 44-fold
coverage (106\,GB of compressed size) in about 20 minutes, on a 6-core Intel i7
PC with an SSD.
Availability: KMC~2 is freely available at http://sun.aei.polsl.pl/kmc.
Contact: [email protected]
| [
{
"version": "v1",
"created": "Sun, 6 Jul 2014 15:39:05 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Deorowicz",
"Sebastian",
""
],
[
"Kokot",
"Marek",
""
],
[
"Grabowski",
"Szymon",
""
],
[
"Debudaj-Grabysz",
"Agnieszka",
""
]
] | TITLE: KMC 2: Fast and resource-frugal $k$-mer counting
ABSTRACT: Motivation: Building the histogram of occurrences of every $k$-symbol long
substring of nucleotide data is a standard step in many bioinformatics
applications, known under the name of $k$-mer counting. Its applications
include developing de Bruijn graph genome assemblers, fast multiple sequence
alignment and repeat detection. The tremendous amounts of NGS data require fast
algorithms for $k$-mer counting, preferably using moderate amounts of memory.
Results: We present a novel method for $k$-mer counting that, on large datasets, is
at least twice as fast as the strongest competitors (Jellyfish~2, KMC~1), using
about 12\,GB (or less) of RAM memory. Our disk-based method bears some
resemblance to MSPKmerCounter, yet replacing the original minimizers with
signatures (a carefully selected subset of all minimizers) and using $(k,
x)$-mers allows to significantly reduce the I/O, and a highly parallel overall
architecture allows to achieve unprecedented processing speeds. For example,
KMC~2 allows to count the 28-mers of a human reads collection with 44-fold
coverage (106\,GB of compressed size) in about 20 minutes, on a 6-core Intel i7
PC with an SSD.
Availability: KMC~2 is freely available at http://sun.aei.polsl.pl/kmc.
Contact: [email protected]
| no_new_dataset | 0.940353 |
1603.06958 | Sebastian Deorowicz | Sebastin Deorowicz and Agnieszka Debudaj-Grabysz and Adam Gudys | Aligning 415 519 proteins in less than two hours on PC | null | Scientific Reports, Article no. 33964 (2016) | 10.1038/srep33964 | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid development of modern sequencing platforms enabled an unprecedented
growth of protein families databases. The abundance of sets composed of
hundreds of thousands of sequences is a great challenge for multiple sequence
alignment algorithms. In the article we introduce FAMSA, a new progressive
algorithm designed for fast and accurate alignment of thousands of protein
sequences. Its features include the utilisation of longest common subsequence
measure for determining pairwise similarities, a novel method of gap costs
evaluation, and a new iterative refinement scheme. Importantly, its
implementation is highly optimised and parallelised to make the most of modern
computer platforms. Thanks to the above, quality indicators, namely
sum-of-pairs and total-column scores, show FAMSA to be superior to competing
algorithms like Clustal Omega or MAFFT for datasets exceeding a few thousand of
sequences. The quality does not compromise time and memory requirements which
are an order of magnitude lower than that of existing solutions. For example, a
family of 415 519 sequences was analysed in less than two hours and required
only 8GB of RAM.
FAMSA is freely available at http://sun.aei.polsl.pl/REFRESH/famsa.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 20:03:43 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Deorowicz",
"Sebastin",
""
],
[
"Debudaj-Grabysz",
"Agnieszka",
""
],
[
"Gudys",
"Adam",
""
]
] | TITLE: Aligning 415 519 proteins in less than two hours on PC
ABSTRACT: Rapid development of modern sequencing platforms enabled an unprecedented
growth of protein families databases. The abundance of sets composed of
hundreds of thousands of sequences is a great challenge for multiple sequence
alignment algorithms. In the article we introduce FAMSA, a new progressive
algorithm designed for fast and accurate alignment of thousands of protein
sequences. Its features include the utilisation of longest common subsequence
measure for determining pairwise similarities, a novel method of gap costs
evaluation, and a new iterative refinement scheme. Importantly, its
implementation is highly optimised and parallelised to make the most of modern
computer platforms. Thanks to the above, quality indicators, namely
sum-of-pairs and total-column scores, show FAMSA to be superior to competing
algorithms like Clustal Omega or MAFFT for datasets exceeding a few thousand of
sequences. The quality does not compromise time and memory requirements which
are an order of magnitude lower than that of existing solutions. For example, a
family of 415 519 sequences was analysed in less than two hours and required
only 8GB of RAM.
FAMSA is freely available at http://sun.aei.polsl.pl/REFRESH/famsa.
| no_new_dataset | 0.939913 |
1609.08546 | Jacob Varley | Jacob Varley, Chad DeChant, Adam Richardson, Joaqu\'in Ruales, Peter
Allen | Shape Completion Enabled Robotic Grasping | Under review at IEEE/RSJ International Conference on Intelligent
Robots and Systems(IROS) 2017 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work provides an architecture to enable robotic grasp planning via shape
completion. Shape completion is accomplished through the use of a 3D
convolutional neural network (CNN). The network is trained on our own new open
source dataset of over 440,000 3D exemplars captured from varying viewpoints.
At runtime, a 2.5D pointcloud captured from a single point of view is fed into
the CNN, which fills in the occluded regions of the scene, allowing grasps to
be planned and executed on the completed object. Runtime shape completion is
very rapid because most of the computational costs of shape completion are
borne during offline training. We explore how the quality of completions varies
based on several factors. These include whether or not the object being
completed existed in the training data and how many object models were used to
train the network. We also look at the ability of the network to generalize to
novel objects allowing the system to complete previously unseen objects at
runtime. Finally, experimentation is done both in simulation and on actual
robotic hardware to explore the relationship between completion quality and the
utility of the completed mesh model for grasping.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2016 17:40:06 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2017 18:19:56 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Varley",
"Jacob",
""
],
[
"DeChant",
"Chad",
""
],
[
"Richardson",
"Adam",
""
],
[
"Ruales",
"Joaquín",
""
],
[
"Allen",
"Peter",
""
]
] | TITLE: Shape Completion Enabled Robotic Grasping
ABSTRACT: This work provides an architecture to enable robotic grasp planning via shape
completion. Shape completion is accomplished through the use of a 3D
convolutional neural network (CNN). The network is trained on our own new open
source dataset of over 440,000 3D exemplars captured from varying viewpoints.
At runtime, a 2.5D pointcloud captured from a single point of view is fed into
the CNN, which fills in the occluded regions of the scene, allowing grasps to
be planned and executed on the completed object. Runtime shape completion is
very rapid because most of the computational costs of shape completion are
borne during offline training. We explore how the quality of completions varies
based on several factors. These include whether or not the object being
completed existed in the training data and how many object models were used to
train the network. We also look at the ability of the network to generalize to
novel objects allowing the system to complete previously unseen objects at
runtime. Finally, experimentation is done both in simulation and on actual
robotic hardware to explore the relationship between completion quality and the
utility of the completed mesh model for grasping.
| new_dataset | 0.957278 |
1611.08945 | Arvind Neelakantan | Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, Dario
Amodei | Learning a Natural Language Interface with Neural Programmer | Published as a conference paper at ICLR 2017 | null | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a natural language interface for database tables is a challenging
task that involves deep language understanding and multi-step reasoning. The
task is often approached by mapping natural language queries to logical forms
or programs that provide the desired response when executed on the database. To
our knowledge, this paper presents the first weakly supervised, end-to-end
neural network model to induce such programs on a real-world dataset. We
enhance the objective function of Neural Programmer, a neural network with
built-in discrete operations, and apply it on WikiTableQuestions, a natural
language question-answering dataset. The model is trained end-to-end with weak
supervision of question-answer pairs, and does not require domain-specific
grammars, rules, or annotations that are key elements in previous approaches to
program induction. The main experimental result in this paper is that a single
Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with
weak supervision. An ensemble of 15 models, with a trivial combination
technique, achieves 37.7% accuracy, which is competitive with the current
state-of-the-art accuracy of 37.1% obtained by a traditional natural language
semantic parser.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 00:54:34 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2017 16:18:14 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Feb 2017 14:43:12 GMT"
},
{
"version": "v4",
"created": "Thu, 2 Mar 2017 16:02:00 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Neelakantan",
"Arvind",
""
],
[
"Le",
"Quoc V.",
""
],
[
"Abadi",
"Martin",
""
],
[
"McCallum",
"Andrew",
""
],
[
"Amodei",
"Dario",
""
]
] | TITLE: Learning a Natural Language Interface with Neural Programmer
ABSTRACT: Learning a natural language interface for database tables is a challenging
task that involves deep language understanding and multi-step reasoning. The
task is often approached by mapping natural language queries to logical forms
or programs that provide the desired response when executed on the database. To
our knowledge, this paper presents the first weakly supervised, end-to-end
neural network model to induce such programs on a real-world dataset. We
enhance the objective function of Neural Programmer, a neural network with
built-in discrete operations, and apply it on WikiTableQuestions, a natural
language question-answering dataset. The model is trained end-to-end with weak
supervision of question-answer pairs, and does not require domain-specific
grammars, rules, or annotations that are key elements in previous approaches to
program induction. The main experimental result in this paper is that a single
Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with
weak supervision. An ensemble of 15 models, with a trivial combination
technique, achieves 37.7% accuracy, which is competitive with the current
state-of-the-art accuracy of 37.1% obtained by a traditional natural language
semantic parser.
| no_new_dataset | 0.944074 |
1703.00503 | Tianmin Shu | Tianmin Shu, Xiaofeng Gao, Michael S. Ryoo and Song-Chun Zhu | Learning Social Affordance Grammar from Videos: Transferring Human
Interactions to Human-Robot Interactions | The 2017 IEEE International Conference on Robotics and Automation
(ICRA) | null | null | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a general framework for learning social affordance
grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human
interactions, and transfer the grammar to humanoids to enable a real-time
motion inference for human-robot interaction (HRI). Based on Gibbs sampling,
our weakly supervised grammar learning can automatically construct a
hierarchical representation of an interaction with long-term joint sub-tasks of
both agents and short term atomic actions of individual agents. Based on a new
RGB-D video dataset with rich instances of human interactions, our experiments
of Baxter simulation, human evaluation, and real Baxter test demonstrate that
the model learned from limited training data successfully generates human-like
behaviors in unseen scenarios and outperforms both baselines.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 21:05:10 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Shu",
"Tianmin",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Ryoo",
"Michael S.",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: Learning Social Affordance Grammar from Videos: Transferring Human
Interactions to Human-Robot Interactions
ABSTRACT: In this paper, we present a general framework for learning social affordance
grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human
interactions, and transfer the grammar to humanoids to enable a real-time
motion inference for human-robot interaction (HRI). Based on Gibbs sampling,
our weakly supervised grammar learning can automatically construct a
hierarchical representation of an interaction with long-term joint sub-tasks of
both agents and short term atomic actions of individual agents. Based on a new
RGB-D video dataset with rich instances of human interactions, our experiments
of Baxter simulation, human evaluation, and real Baxter test demonstrate that
the model learned from limited training data successfully generates human-like
behaviors in unseen scenarios and outperforms both baselines.
| new_dataset | 0.961353 |
1703.00512 | Randal Olson | Randal S. Olson, William La Cava, Patryk Orzechowski, Ryan J.
Urbanowicz, Jason H. Moore | PMLB: A Large Benchmark Suite for Machine Learning Evaluation and
Comparison | 14 pages, 5 figures, submitted for review to JMLR | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The selection, development, or comparison of machine learning methods in data
mining can be a difficult task based on the target problem and goals of a
particular study. Numerous publicly available real-world and simulated
benchmark datasets have emerged from different sources, but their organization
and adoption as standards have been inconsistent. As such, selecting and
curating specific benchmarks remains an unnecessary burden on machine learning
practitioners and data scientists. The present study introduces an accessible,
curated, and developing public benchmark resource to facilitate identification
of the strengths and weaknesses of different machine learning methodologies. We
compare meta-features among the current set of benchmark datasets in this
resource to characterize the diversity of available data. Finally, we apply a
number of established machine learning methods to the entire benchmark suite
and analyze how datasets and algorithms cluster in terms of performance. This
work is an important first step towards understanding the limitations of
popular benchmarking suites and developing a resource that connects existing
benchmarking standards to more diverse and efficient standards in the future.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 21:20:11 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Olson",
"Randal S.",
""
],
[
"La Cava",
"William",
""
],
[
"Orzechowski",
"Patryk",
""
],
[
"Urbanowicz",
"Ryan J.",
""
],
[
"Moore",
"Jason H.",
""
]
] | TITLE: PMLB: A Large Benchmark Suite for Machine Learning Evaluation and
Comparison
ABSTRACT: The selection, development, or comparison of machine learning methods in data
mining can be a difficult task based on the target problem and goals of a
particular study. Numerous publicly available real-world and simulated
benchmark datasets have emerged from different sources, but their organization
and adoption as standards have been inconsistent. As such, selecting and
curating specific benchmarks remains an unnecessary burden on machine learning
practitioners and data scientists. The present study introduces an accessible,
curated, and developing public benchmark resource to facilitate identification
of the strengths and weaknesses of different machine learning methodologies. We
compare meta-features among the current set of benchmark datasets in this
resource to characterize the diversity of available data. Finally, we apply a
number of established machine learning methods to the entire benchmark suite
and analyze how datasets and algorithms cluster in terms of performance. This
work is an important first step towards understanding the limitations of
popular benchmarking suites and developing a resource that connects existing
benchmarking standards to more diverse and efficient standards in the future.
| no_new_dataset | 0.935876 |
1703.00551 | Md Amirul Islam | Md Amirul Islam, Shujon Naha, Mrigank Rochan, Neil Bruce, Yang Wang | Label Refinement Network for Coarse-to-Fine Semantic Segmentation | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of semantic image segmentation using deep
convolutional neural networks. We propose a novel network architecture called
the label refinement network that predicts segmentation labels in a
coarse-to-fine fashion at several resolutions. The segmentation labels at a
coarse resolution are used together with convolutional features to obtain finer
resolution segmentation labels. We define loss functions at several stages in
the network to provide supervision at different stages. Our experimental
results on several standard datasets demonstrate that the proposed model
provides an effective way of producing pixel-wise dense image labeling.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 23:42:30 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Islam",
"Md Amirul",
""
],
[
"Naha",
"Shujon",
""
],
[
"Rochan",
"Mrigank",
""
],
[
"Bruce",
"Neil",
""
],
[
"Wang",
"Yang",
""
]
] | TITLE: Label Refinement Network for Coarse-to-Fine Semantic Segmentation
ABSTRACT: We consider the problem of semantic image segmentation using deep
convolutional neural networks. We propose a novel network architecture called
the label refinement network that predicts segmentation labels in a
coarse-to-fine fashion at several resolutions. The segmentation labels at a
coarse resolution are used together with convolutional features to obtain finer
resolution segmentation labels. We define loss functions at several stages in
the network to provide supervision at different stages. Our experimental
results on several standard datasets demonstrate that the proposed model
provides an effective way of producing pixel-wise dense image labeling.
| no_new_dataset | 0.959837 |
1703.00552 | Kanji Tanaka | Murase Tomoya, Tanaka Kanji | Change Detection under Global Viewpoint Uncertainty | 8 pages, 9 figures, technical report | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of change detection from a novel perspective
of long-term map learning. We are particularly interested in designing an
approach that can scale to large maps and that can function under global
uncertainty in the viewpoint (i.e., GPS-denied situations). Our approach, which
utilizes a compact bag-of-words (BoW) scene model, makes several contributions
to the problem:
1) Two kinds of prior information are extracted from the view sequence map
and used for change detection. Further, we propose a novel type of prior,
called a motion prior, to predict the relative motions of stationary objects and
for anomaly ego-motion detection. The proposed prior is also useful for
distinguishing stationary from non-stationary objects.
2) A small set of good reference images (e.g., 10) are efficiently retrieved
from the view sequence map by employing the recently developed
Bag-of-Local-Convolutional-Features (BoLCF) scene model.
3) Change detection is reformulated as a scene retrieval over these reference
images to find changed objects using a novel spatial Bag-of-Words (SBoW) scene
model. Evaluations conducted of individual techniques and also their
combinations on a challenging dataset of highly dynamic scenes in the publicly
available Malaga dataset verify their efficacy.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 23:51:03 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Tomoya",
"Murase",
""
],
[
"Kanji",
"Tanaka",
""
]
] | TITLE: Change Detection under Global Viewpoint Uncertainty
ABSTRACT: This paper addresses the problem of change detection from a novel perspective
of long-term map learning. We are particularly interested in designing an
approach that can scale to large maps and that can function under global
uncertainty in the viewpoint (i.e., GPS-denied situations). Our approach, which
utilizes a compact bag-of-words (BoW) scene model, makes several contributions
to the problem:
1) Two kinds of prior information are extracted from the view sequence map
and used for change detection. Further, we propose a novel type of prior,
called a motion prior, to predict the relative motions of stationary objects and
for anomaly ego-motion detection. The proposed prior is also useful for
distinguishing stationary from non-stationary objects.
2) A small set of good reference images (e.g., 10) are efficiently retrieved
from the view sequence map by employing the recently developed
Bag-of-Local-Convolutional-Features (BoLCF) scene model.
3) Change detection is reformulated as a scene retrieval over these reference
images to find changed objects using a novel spatial Bag-of-Words (SBoW) scene
model. Evaluations of the individual techniques and of their combinations,
conducted on a challenging set of highly dynamic scenes from the publicly
available Malaga dataset, verify their efficacy.
| no_new_dataset | 0.944485 |
1703.00633 | Christos Bampis | Christos G. Bampis and Alan C. Bovik | Learning to Predict Streaming Video QoE: Distortions, Rebuffering and
Memory | under review in Transactions on Image Processing | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile streaming video data accounts for a large and increasing percentage of
wireless network traffic. The available bandwidths of modern wireless networks
are often unstable, leading to difficulties in delivering smooth, high-quality
video. Streaming service providers such as Netflix and YouTube attempt to adapt
their systems to adjust in response to these bandwidth limitations by changing
the video bitrate or, failing that, allowing playback interruptions
(rebuffering). Being able to predict end users' quality of experience (QoE)
resulting from these adjustments could lead to perceptually-driven network
resource allocation strategies that would deliver streaming content of higher
quality to clients, while being cost effective for providers. Existing
objective QoE models only consider the effects on user QoE of video quality
changes or playback interruptions. For streaming applications, adaptive network
strategies may involve a combination of dynamic bitrate allocation along with
playback interruptions when the available bandwidth reaches a very low value.
Towards effectively predicting user QoE, we propose Video Assessment of
TemporaL Artifacts and Stalls (Video ATLAS): a machine learning framework where
we combine a number of QoE-related features, including objective quality
features, rebuffering-aware features and memory-driven features to make QoE
predictions. We evaluated our learning-based QoE prediction model on the
recently designed LIVE-Netflix Video QoE Database which consists of practical
playout patterns, where the videos are afflicted by both quality changes and
rebuffering events, and found that it provides improved performance over
state-of-the-art video quality metrics while generalizing well on different
datasets. The proposed algorithm is made publicly available at
http://live.ece.utexas.edu/research/Quality/VideoATLAS release_v2.rar.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 05:45:26 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Bampis",
"Christos G.",
""
],
[
"Bovik",
"Alan C.",
""
]
] | TITLE: Learning to Predict Streaming Video QoE: Distortions, Rebuffering and
Memory
ABSTRACT: Mobile streaming video data accounts for a large and increasing percentage of
wireless network traffic. The available bandwidths of modern wireless networks
are often unstable, leading to difficulties in delivering smooth, high-quality
video. Streaming service providers such as Netflix and YouTube attempt to adapt
their systems to adjust in response to these bandwidth limitations by changing
the video bitrate or, failing that, allowing playback interruptions
(rebuffering). Being able to predict end users' quality of experience (QoE)
resulting from these adjustments could lead to perceptually-driven network
resource allocation strategies that would deliver streaming content of higher
quality to clients, while being cost effective for providers. Existing
objective QoE models only consider the effects on user QoE of video quality
changes or playback interruptions. For streaming applications, adaptive network
strategies may involve a combination of dynamic bitrate allocation along with
playback interruptions when the available bandwidth reaches a very low value.
Towards effectively predicting user QoE, we propose Video Assessment of
TemporaL Artifacts and Stalls (Video ATLAS): a machine learning framework where
we combine a number of QoE-related features, including objective quality
features, rebuffering-aware features and memory-driven features to make QoE
predictions. We evaluated our learning-based QoE prediction model on the
recently designed LIVE-Netflix Video QoE Database which consists of practical
playout patterns, where the videos are afflicted by both quality changes and
rebuffering events, and found that it provides improved performance over
state-of-the-art video quality metrics while generalizing well on different
datasets. The proposed algorithm is made publicly available at
http://live.ece.utexas.edu/research/Quality/VideoATLAS release_v2.rar.
| no_new_dataset | 0.950549 |
1703.00768 | He Jiang | He Jiang, Xiaochen Li, Zijiang Yang, Jifeng Xuan | What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in
System and Integration Testing | 12 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by new software development processes and testing in clouds, system
and integration testing nowadays tends to produce an enormous number of alarms.
Such test alarms lay an almost unbearable burden on software testing engineers
who have to manually analyze the causes of these alarms. The causes are
critical because they decide which stakeholders are responsible to fix the bugs
detected during the testing. In this paper, we present a novel approach that
aims to relieve the burden by automating the procedure. Our approach, called
Cause Analysis Model, exploits information retrieval techniques to efficiently
infer test alarm causes based on test logs. We have developed a prototype and
evaluated our tool on two industrial datasets with more than 14,000 test
alarms. Experiments on the two datasets show that our tool achieves an accuracy
of 58.3% and 65.8%, respectively, which outperforms the baseline algorithms by
up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1s per
cause analysis. Due to the attractive experimental results, our industrial
partner, a leading information and communication technology company in the
world, has deployed the tool and it achieves an average accuracy of 72% after
two months of running, nearly three times more accurate than a previous
strategy based on regular expressions.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 12:54:26 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Jiang",
"He",
""
],
[
"Li",
"Xiaochen",
""
],
[
"Yang",
"Zijiang",
""
],
[
"Xuan",
"Jifeng",
""
]
] | TITLE: What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in
System and Integration Testing
ABSTRACT: Driven by new software development processes and testing in clouds, system
and integration testing nowadays tends to produce an enormous number of alarms.
Such test alarms lay an almost unbearable burden on software testing engineers
who have to manually analyze the causes of these alarms. The causes are
critical because they decide which stakeholders are responsible to fix the bugs
detected during the testing. In this paper, we present a novel approach that
aims to relieve the burden by automating the procedure. Our approach, called
Cause Analysis Model, exploits information retrieval techniques to efficiently
infer test alarm causes based on test logs. We have developed a prototype and
evaluated our tool on two industrial datasets with more than 14,000 test
alarms. Experiments on the two datasets show that our tool achieves an accuracy
of 58.3% and 65.8%, respectively, which outperforms the baseline algorithms by
up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1s per
cause analysis. Due to the attractive experimental results, our industrial
partner, a leading information and communication technology company in the
world, has deployed the tool and it achieves an average accuracy of 72% after
two months of running, nearly three times more accurate than a previous
strategy based on regular expressions.
| no_new_dataset | 0.943764 |
1703.00818 | Matthew Guzdial | Kristin Siu, Matthew Guzdial, and Mark O. Riedl | Evaluating Singleplayer and Multiplayer in Human Computation Games | 10 pages, 4 figures, 2 tables | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human computation games (HCGs) can provide novel solutions to intractable
computational problems, help enable scientific breakthroughs, and provide
datasets for artificial intelligence. However, our knowledge about how to
design and deploy HCGs that appeal to players and solve problems effectively is
incomplete. We present an investigatory HCG based on Super Mario Bros. We used
this game in a human subjects study to investigate how different social
conditions---singleplayer and multiplayer---and scoring
mechanics---collaborative and competitive---affect players' subjective
experiences, accuracy at the task, and the completion rate. In doing so, we
demonstrate a novel design approach for HCGs, and discuss the benefits and
tradeoffs of these mechanics in HCG design.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 15:01:59 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Siu",
"Kristin",
""
],
[
"Guzdial",
"Matthew",
""
],
[
"Riedl",
"Mark O.",
""
]
] | TITLE: Evaluating Singleplayer and Multiplayer in Human Computation Games
ABSTRACT: Human computation games (HCGs) can provide novel solutions to intractable
computational problems, help enable scientific breakthroughs, and provide
datasets for artificial intelligence. However, our knowledge about how to
design and deploy HCGs that appeal to players and solve problems effectively is
incomplete. We present an investigatory HCG based on Super Mario Bros. We used
this game in a human subjects study to investigate how different social
conditions---singleplayer and multiplayer---and scoring
mechanics---collaborative and competitive---affect players' subjective
experiences, accuracy at the task, and the completion rate. In doing so, we
demonstrate a novel design approach for HCGs, and discuss the benefits and
tradeoffs of these mechanics in HCG design.
| no_new_dataset | 0.940463 |
1703.00845 | Luis Angel Contreras-Toledo | Luis Contreras and Walterio Mayol-Cuevas | Towards CNN Map Compression for camera relocalisation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a study on the use of Convolutional Neural Networks for
camera relocalisation and its application to map compression. We follow state
of the art visual relocalisation results and evaluate response to different
data inputs -- namely, depth, grayscale, RGB, spatial position and combinations
of these. We use a CNN map representation and introduce the notion of CNN map
compression by using a smaller CNN architecture. We evaluate our proposal on a
series of publicly available datasets. This formulation allows us to improve
relocalisation accuracy by increasing the number of training trajectories while
maintaining a constant-size CNN.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 16:12:29 GMT"
}
] | 2017-03-03T00:00:00 | [
[
"Contreras",
"Luis",
""
],
[
"Mayol-Cuevas",
"Walterio",
""
]
] | TITLE: Towards CNN Map Compression for camera relocalisation
ABSTRACT: This paper presents a study on the use of Convolutional Neural Networks for
camera relocalisation and its application to map compression. We follow state
of the art visual relocalisation results and evaluate response to different
data inputs -- namely, depth, grayscale, RGB, spatial position and combinations
of these. We use a CNN map representation and introduce the notion of CNN map
compression by using a smaller CNN architecture. We evaluate our proposal on a
series of publicly available datasets. This formulation allows us to improve
relocalisation accuracy by increasing the number of training trajectories while
maintaining a constant-size CNN.
| no_new_dataset | 0.951414 |
1403.2123 | Emiliano De Cristofaro | Julien Freudiger and Emiliano De Cristofaro and Alex Brito | Privacy-Friendly Collaboration for Cyber Threat Mitigation | This paper has been withdrawn as it has been superseded by
arXiv:1502.05337 | null | null | null | cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sharing of security data across organizational boundaries has often been
advocated as a promising way to enhance cyber threat mitigation. However,
collaborative security faces a number of important challenges, including
privacy, trust, and liability concerns with the potential disclosure of
sensitive data. In this paper, we focus on data sharing for predictive
blacklisting, i.e., forecasting attack sources based on past attack
information. We propose a novel privacy-enhanced data sharing approach in which
organizations estimate collaboration benefits without disclosing their
datasets, organize into coalitions of allied organizations, and securely share
data within these coalitions. We study how different partner selection
strategies affect prediction accuracy by experimenting on a real-world dataset
of 2 billion IP addresses and observe up to a 105% prediction improvement.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2014 01:28:11 GMT"
},
{
"version": "v2",
"created": "Sat, 17 May 2014 22:38:15 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Nov 2014 13:13:15 GMT"
},
{
"version": "v4",
"created": "Wed, 1 Mar 2017 15:30:47 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Freudiger",
"Julien",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Brito",
"Alex",
""
]
] | TITLE: Privacy-Friendly Collaboration for Cyber Threat Mitigation
ABSTRACT: Sharing of security data across organizational boundaries has often been
advocated as a promising way to enhance cyber threat mitigation. However,
collaborative security faces a number of important challenges, including
privacy, trust, and liability concerns with the potential disclosure of
sensitive data. In this paper, we focus on data sharing for predictive
blacklisting, i.e., forecasting attack sources based on past attack
information. We propose a novel privacy-enhanced data sharing approach in which
organizations estimate collaboration benefits without disclosing their
datasets, organize into coalitions of allied organizations, and securely share
data within these coalitions. We study how different partner selection
strategies affect prediction accuracy by experimenting on a real-world dataset
of 2 billion IP addresses and observe up to a 105% prediction improvement.
| no_new_dataset | 0.950041 |
1504.04804 | Yuechao Pan | Yuechao Pan, Yangzihao Wang, Yuduo Wu, Carl Yang and John D. Owens | Multi-GPU Graph Analytics | 12 pages. Final version submitted to IPDPS 2017 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a single-node, multi-GPU programmable graph processing library
that allows programmers to easily extend single-GPU graph algorithms to achieve
scalable performance on large graphs with billions of edges. Directly using the
single-GPU implementations, our design only requires programmers to specify a
few algorithm-dependent concerns, hiding most multi-GPU related implementation
details. We analyze the theoretical and practical limits to scalability in the
context of varying graph primitives and datasets. We describe several
optimizations, such as direction optimizing traversal, and a just-enough memory
allocation scheme, for better performance and smaller memory consumption.
Compared to previous work, we achieve best-of-class performance across
operations and datasets, including excellent strong and weak scalability on
most primitives as we increase the number of GPUs in the system.
| [
{
"version": "v1",
"created": "Sun, 19 Apr 2015 07:12:04 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2016 01:27:31 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Oct 2016 22:21:07 GMT"
},
{
"version": "v4",
"created": "Wed, 1 Mar 2017 09:07:57 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Pan",
"Yuechao",
""
],
[
"Wang",
"Yangzihao",
""
],
[
"Wu",
"Yuduo",
""
],
[
"Yang",
"Carl",
""
],
[
"Owens",
"John D.",
""
]
] | TITLE: Multi-GPU Graph Analytics
ABSTRACT: We present a single-node, multi-GPU programmable graph processing library
that allows programmers to easily extend single-GPU graph algorithms to achieve
scalable performance on large graphs with billions of edges. Directly using the
single-GPU implementations, our design only requires programmers to specify a
few algorithm-dependent concerns, hiding most multi-GPU related implementation
details. We analyze the theoretical and practical limits to scalability in the
context of varying graph primitives and datasets. We describe several
optimizations, such as direction optimizing traversal, and a just-enough memory
allocation scheme, for better performance and smaller memory consumption.
Compared to previous work, we achieve best-of-class performance across
operations and datasets, including excellent strong and weak scalability on
most primitives as we increase the number of GPUs in the system.
| no_new_dataset | 0.938857 |
1606.00182 | G\'eraud Le Falher | G\'eraud Le Falher, Nicol\`o Cesa-Bianchi, Claudio Gentile, Fabio
Vitale | On the Troll-Trust Model for Edge Sign Prediction in Social Networks | v5: accepted to AISTATS 2017 | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the problem of edge sign prediction, we are given a directed graph
(representing a social network), and our task is to predict the binary labels
of the edges (i.e., the positive or negative nature of the social
relationships). Many successful heuristics for this problem are based on the
troll-trust features, estimating at each node the fraction of outgoing and
incoming positive/negative edges. We show that these heuristics can be
understood, and rigorously analyzed, as approximators to the Bayes optimal
classifier for a simple probabilistic model of the edge labels. We then show
that the maximum likelihood estimator for this model approximately corresponds
to the predictions of a Label Propagation algorithm run on a transformed
version of the original social graph. Extensive experiments on a number of
real-world datasets show that this algorithm is competitive against
state-of-the-art classifiers in terms of both accuracy and scalability.
Finally, we show that troll-trust features can also be used to derive online
learning algorithms which have theoretical guarantees even when edges are
adversarially labeled.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 09:16:46 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2016 13:39:36 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2016 16:47:46 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Oct 2016 09:39:59 GMT"
},
{
"version": "v5",
"created": "Tue, 28 Feb 2017 21:33:41 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Falher",
"Géraud Le",
""
],
[
"Cesa-Bianchi",
"Nicolò",
""
],
[
"Gentile",
"Claudio",
""
],
[
"Vitale",
"Fabio",
""
]
] | TITLE: On the Troll-Trust Model for Edge Sign Prediction in Social Networks
ABSTRACT: In the problem of edge sign prediction, we are given a directed graph
(representing a social network), and our task is to predict the binary labels
of the edges (i.e., the positive or negative nature of the social
relationships). Many successful heuristics for this problem are based on the
troll-trust features, estimating at each node the fraction of outgoing and
incoming positive/negative edges. We show that these heuristics can be
understood, and rigorously analyzed, as approximators to the Bayes optimal
classifier for a simple probabilistic model of the edge labels. We then show
that the maximum likelihood estimator for this model approximately corresponds
to the predictions of a Label Propagation algorithm run on a transformed
version of the original social graph. Extensive experiments on a number of
real-world datasets show that this algorithm is competitive against
state-of-the-art classifiers in terms of both accuracy and scalability.
Finally, we show that troll-trust features can also be used to derive online
learning algorithms which have theoretical guarantees even when edges are
adversarially labeled.
| no_new_dataset | 0.942507 |
1702.01933 | Shubham Pachori | Shubham Pachori, Ameya Deshpande, Shanmuganathan Raman | Hashing in the Zero Shot Framework with Domain Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Techniques to learn hash codes which can store and retrieve large dimensional
multimedia data efficiently have attracted broad research interests in the
recent years. With rapid explosion of newly emerged concepts and online data,
existing supervised hashing algorithms suffer from the problem of scarcity of
ground truth annotations due to the high cost of obtaining manual annotations.
Therefore, we propose an algorithm to learn a hash function from training
images belonging to `seen' classes which can efficiently encode images of
`unseen' classes to binary codes. Specifically, we project the image features
from visual space and semantic features from semantic space into a common
Hamming subspace. Earlier works to generate hash codes have tried to relax the
discrete constraints on hash codes and solve the continuous optimization
problem. However, it often leads to quantization errors. In this work, we use
the max-margin classifier to learn an efficient hash function. To address the
concern of domain-shift which may arise due to the introduction of new classes,
we also introduce an unsupervised domain adaptation model in the proposed
hashing framework. Results on the three datasets show the advantage of using
domain adaptation in learning a high-quality hash function and the superiority
of our method in image retrieval performance as compared to several
state-of-the-art hashing methods.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2017 09:22:11 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2017 19:43:41 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Pachori",
"Shubham",
""
],
[
"Deshpande",
"Ameya",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] | TITLE: Hashing in the Zero Shot Framework with Domain Adaptation
ABSTRACT: Techniques to learn hash codes which can store and retrieve large dimensional
multimedia data efficiently have attracted broad research interests in the
recent years. With rapid explosion of newly emerged concepts and online data,
existing supervised hashing algorithms suffer from the problem of scarcity of
ground truth annotations due to the high cost of obtaining manual annotations.
Therefore, we propose an algorithm to learn a hash function from training
images belonging to `seen' classes which can efficiently encode images of
`unseen' classes to binary codes. Specifically, we project the image features
from visual space and semantic features from semantic space into a common
Hamming subspace. Earlier works to generate hash codes have tried to relax the
discrete constraints on hash codes and solve the continuous optimization
problem. However, it often leads to quantization errors. In this work, we use
the max-margin classifier to learn an efficient hash function. To address the
concern of domain-shift which may arise due to the introduction of new classes,
we also introduce an unsupervised domain adaptation model in the proposed
hashing framework. Results on the three datasets show the advantage of using
domain adaptation in learning a high-quality hash function and the superiority
of our method in image retrieval performance as compared to several
state-of-the-art hashing methods.
| no_new_dataset | 0.946001 |
1702.05373 | Gregory Cohen | Gregory Cohen, Saeed Afshar, Jonathan Tapson, Andr\'e van Schaik | EMNIST: an extension of MNIST to handwritten letters | The dataset is now available for download from
https://www.westernsydney.edu.au/bens/home/reproducible_research/emnist. This
link is also included in the revised article | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The MNIST dataset has become a standard benchmark for learning,
classification and computer vision systems. Contributing to its widespread
adoption are the understandable and intuitive nature of the task, its
relatively small size and storage requirements and the accessibility and
ease-of-use of the database itself. The MNIST database was derived from a
larger dataset known as the NIST Special Database 19 which contains digits,
uppercase and lowercase handwritten letters. This paper introduces a variant of
the full NIST dataset, which we have called Extended MNIST (EMNIST), which
follows the same conversion paradigm used to create the MNIST dataset. The
result is a set of datasets that constitute more challenging classification
tasks involving letters and digits, and that share the same image structure
and parameters as the original MNIST task, allowing for direct compatibility
with all existing classifiers and systems. Benchmark results are presented
along with a validation of the conversion process through the comparison of the
classification results on converted NIST digits and the MNIST digits.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2017 15:06:14 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2017 08:55:36 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Cohen",
"Gregory",
""
],
[
"Afshar",
"Saeed",
""
],
[
"Tapson",
"Jonathan",
""
],
[
"van Schaik",
"André",
""
]
] | TITLE: EMNIST: an extension of MNIST to handwritten letters
ABSTRACT: The MNIST dataset has become a standard benchmark for learning,
classification and computer vision systems. Contributing to its widespread
adoption are the understandable and intuitive nature of the task, its
relatively small size and storage requirements and the accessibility and
ease-of-use of the database itself. The MNIST database was derived from a
larger dataset known as the NIST Special Database 19 which contains digits,
uppercase and lowercase handwritten letters. This paper introduces a variant of
the full NIST dataset, which we have called Extended MNIST (EMNIST), which
follows the same conversion paradigm used to create the MNIST dataset. The
result is a set of datasets that constitute more challenging classification
tasks involving letters and digits, and that share the same image structure
and parameters as the original MNIST task, allowing for direct compatibility
with all existing classifiers and systems. Benchmark results are presented
along with a validation of the conversion process through the comparison of the
classification results on converted NIST digits and the MNIST digits.
| new_dataset | 0.670177 |
1703.00037 | Peter Darch | Peter T. Darch | Managing the Public to Manage Data: Citizen Science and Astronomy | 16 pages, 0 figures, published in International Journal of Digital
Curation | International Journal of Digital Curation, 2014, 9(1), 25-40 | 10.2218/ijdc.v9i1.298 | null | astro-ph.IM cs.HC | http://creativecommons.org/licenses/by/4.0/ | Citizen science projects recruit members of the public as volunteers to
process and produce datasets. These datasets must win the trust of the
scientific community. The task of securing credibility involves, in part,
applying standard scientific procedures to clean these datasets. However,
effective management of volunteer behavior also makes a significant
contribution to enhancing data quality. Through a case study of Galaxy Zoo, a
citizen science project set up to generate datasets based on volunteer
classifications of galaxy morphologies, this paper explores how those involved
in running the project manage volunteers. The paper focuses on how methods for
crediting volunteer contributions motivate volunteers to provide higher quality
contributions and to behave in a way that better corresponds to statistical
assumptions made when combining volunteer contributions into datasets. These
methods have made a significant contribution to the success of the project in
securing trust in these datasets, which have been well used by other
scientists. Implications for practice are then presented for citizen science
projects, providing a list of considerations to guide choices regarding how to
credit volunteer contributions to improve the quality and trustworthiness of
citizen science-produced datasets.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 20:00:26 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Darch",
"Peter T.",
""
]
] | TITLE: Managing the Public to Manage Data: Citizen Science and Astronomy
ABSTRACT: Citizen science projects recruit members of the public as volunteers to
process and produce datasets. These datasets must win the trust of the
scientific community. The task of securing credibility involves, in part,
applying standard scientific procedures to clean these datasets. However,
effective management of volunteer behavior also makes a significant
contribution to enhancing data quality. Through a case study of Galaxy Zoo, a
citizen science project set up to generate datasets based on volunteer
classifications of galaxy morphologies, this paper explores how those involved
in running the project manage volunteers. The paper focuses on how methods for
crediting volunteer contributions motivate volunteers to provide higher quality
contributions and to behave in a way that better corresponds to statistical
assumptions made when combining volunteer contributions into datasets. These
methods have made a significant contribution to the success of the project in
securing trust in these datasets, which have been well used by other
scientists. Implications for practice are then presented for citizen science
projects, providing a list of considerations to guide choices regarding how to
credit volunteer contributions to improve the quality and trustworthiness of
citizen science-produced datasets.
| no_new_dataset | 0.943764 |
1703.00039 | Hiromitsu Mizutani | Hiromitsu Mizutani (1) and Ryota Kanai (1) ((1) Araya Inc.) | A description length approach to determining the number of k-means
clusters | 27 pages, 6 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an asymptotic criterion to determine the optimal number of
clusters in k-means. We consider k-means as data compression, and propose to
adopt the number of clusters that minimizes the estimated description length
after compression. Here we report two types of compression ratio based on two
ways to quantify the description length of data after compression. This
approach further offers a way to evaluate whether clusters obtained with
k-means have a hierarchical structure by examining whether multi-stage
compression can further reduce the description length. We applied our criteria
for determining the number of clusters to synthetic data and empirical
neuroimaging data, to observe the behavior of the criteria across different
types of data sets and the suitability of the two types of criteria for different
datasets. We found that our method can offer reasonable clustering results that
are useful for dimension reduction. While our numerical results revealed a
dependency of our criteria on various aspects of the dataset, such as its
dimensionality, the description length approach proposed here provides useful
guidance for determining the number of clusters in a principled manner when the
underlying properties of the data are unknown and can only be inferred from
observation of the data.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 20:05:08 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Mizutani",
"Hiromitsu",
"",
"Araya Inc"
],
[
"Kanai",
"Ryota",
"",
"Araya Inc"
]
] | TITLE: A description length approach to determining the number of k-means
clusters
ABSTRACT: We present an asymptotic criterion to determine the optimal number of
clusters in k-means. We consider k-means as data compression, and propose to
adopt the number of clusters that minimizes the estimated description length
after compression. Here we report two types of compression ratio based on two
ways to quantify the description length of data after compression. This
approach further offers a way to evaluate whether clusters obtained with
k-means have a hierarchical structure by examining whether multi-stage
compression can further reduce the description length. We applied our criteria
for determining the number of clusters to synthetic data and empirical
neuroimaging data, to observe the behavior of the criteria across different
types of data sets and the suitability of the two types of criteria for different
datasets. We found that our method can offer reasonable clustering results that
are useful for dimension reduction. While our numerical results revealed a
dependency of our criteria on various aspects of the dataset, such as its
dimensionality, the description length approach proposed here provides useful
guidance for determining the number of clusters in a principled manner when the
underlying properties of the data are unknown and can only be inferred from
observation of the data.
| no_new_dataset | 0.94887 |
1703.00069 | Yi-Hsuan Tsai | Yi-Hsuan Tsai, Xiaohui Shen, Zhe Lin, Kalyan Sunkavalli, Xin Lu,
Ming-Hsuan Yang | Deep Image Harmonization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compositing is one of the most common operations in photo editing. To
generate realistic composites, the appearances of foreground and background
need to be adjusted to make them compatible. Previous approaches to harmonize
composites have focused on learning statistical relationships between
hand-crafted appearance features of the foreground and background, which is
unreliable especially when the contents in the two layers are vastly different.
In this work, we propose an end-to-end deep convolutional neural network for
image harmonization, which can capture both the context and semantic
information of the composite images during harmonization. We also introduce an
efficient way to collect large-scale and high-quality training data that can
facilitate the training process. Experiments on the synthesized dataset and
real composite images show that the proposed network outperforms previous
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 21:58:45 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Tsai",
"Yi-Hsuan",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Lin",
"Zhe",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Lu",
"Xin",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] | TITLE: Deep Image Harmonization
ABSTRACT: Compositing is one of the most common operations in photo editing. To
generate realistic composites, the appearances of foreground and background
need to be adjusted to make them compatible. Previous approaches to harmonize
composites have focused on learning statistical relationships between
hand-crafted appearance features of the foreground and background, which is
unreliable especially when the contents in the two layers are vastly different.
In this work, we propose an end-to-end deep convolutional neural network for
image harmonization, which can capture both the context and semantic
information of the composite images during harmonization. We also introduce an
efficient way to collect large-scale and high-quality training data that can
facilitate the training process. Experiments on the synthesized dataset and
real composite images show that the proposed network outperforms previous
state-of-the-art methods.
| no_new_dataset | 0.949529 |
1703.00196 | Yihang Lou | Yan Bai, Feng Gao, Yihang Lou, Shiqi Wang, Tiejun Huang, Ling-Yu Duan | Incorporating Intra-Class Variance to Fine-Grained Visual Recognition | 6 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained visual recognition aims to capture discriminative
characteristics amongst visually similar categories. State-of-the-art research
has significantly improved fine-grained recognition performance through deep
metric learning with triplet networks. However, the impact of intra-category
variance on recognition performance and robust feature representation has not
been well studied. In this paper, we propose to leverage intra-class variance
in the metric learning of a triplet network to improve the
performance of fine-grained recognition. Through partitioning training images
within each category into a few groups, we form the triplet samples across
different categories as well as different groups, which is called Group
Sensitive TRiplet Sampling (GS-TRS). Accordingly, the triplet loss function is
strengthened by incorporating intra-class variance with GS-TRS, which may
contribute to the optimization objective of the triplet network. Extensive
experiments over benchmark datasets CompCar and VehicleID show that the
proposed GS-TRS has significantly outperformed state-of-the-art approaches in
both classification and retrieval tasks.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 09:41:02 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Bai",
"Yan",
""
],
[
"Gao",
"Feng",
""
],
[
"Lou",
"Yihang",
""
],
[
"Wang",
"Shiqi",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Duan",
"Ling-Yu",
""
]
] | TITLE: Incorporating Intra-Class Variance to Fine-Grained Visual Recognition
ABSTRACT: Fine-grained visual recognition aims to capture discriminative
characteristics amongst visually similar categories. State-of-the-art research
has significantly improved fine-grained recognition performance through deep
metric learning with triplet networks. However, the impact of intra-category
variance on recognition performance and robust feature representation has not
been well studied. In this paper, we propose to leverage intra-class variance
in the metric learning of a triplet network to improve the
performance of fine-grained recognition. Through partitioning training images
within each category into a few groups, we form the triplet samples across
different categories as well as different groups, which is called Group
Sensitive TRiplet Sampling (GS-TRS). Accordingly, the triplet loss function is
strengthened by incorporating intra-class variance with GS-TRS, which may
contribute to the optimization objective of the triplet network. Extensive
experiments over benchmark datasets CompCar and VehicleID show that the
proposed GS-TRS has significantly outperformed state-of-the-art approaches in
both classification and retrieval tasks.
| no_new_dataset | 0.9462 |
1703.00291 | Line K\"uhnel | Line K\"uhnel and Stefan Sommer | Stochastic Development Regression on Non-Linear Manifolds | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a regression model for data on non-linear manifolds. The model
describes the relation between a set of manifold valued observations, such as
shapes of anatomical objects, and Euclidean explanatory variables. The approach
is based on stochastic development of Euclidean diffusion processes to the
manifold. Defining the data distribution as the transition distribution of the
mapped stochastic process, parameters of the model, the non-linear analogue of
design matrix and intercept, are found via maximum likelihood. The model is
intrinsically related to the geometry encoded in the connection of the
manifold. We propose an estimation procedure which applies the Laplace
approximation of the likelihood function. A simulation study of the performance
of the model is performed and the model is applied to a real dataset of Corpus
Callosum shapes.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 13:32:27 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Kühnel",
"Line",
""
],
[
"Sommer",
"Stefan",
""
]
] | TITLE: Stochastic Development Regression on Non-Linear Manifolds
ABSTRACT: We introduce a regression model for data on non-linear manifolds. The model
describes the relation between a set of manifold valued observations, such as
shapes of anatomical objects, and Euclidean explanatory variables. The approach
is based on stochastic development of Euclidean diffusion processes to the
manifold. Defining the data distribution as the transition distribution of the
mapped stochastic process, parameters of the model, the non-linear analogue of
design matrix and intercept, are found via maximum likelihood. The model is
intrinsically related to the geometry encoded in the connection of the
manifold. We propose an estimation procedure which applies the Laplace
approximation of the likelihood function. A simulation study of the performance
of the model is performed and the model is applied to a real dataset of Corpus
Callosum shapes.
| no_new_dataset | 0.945248 |
1703.00298 | Thomas Rinsma | Thomas Rinsma | Automatic Library Version Identification, an Exploration of Techniques | 9 pages, short technical report | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is the result of a two month research internship on the topic of
library version identification. In this paper, ideas and techniques from
literature in the area of binary comparison and fingerprinting are outlined and
applied to the problem of (version) identification of shared libraries and of
libraries within statically linked binary executables. Six comparison
techniques are chosen and implemented in an open-source tool which in turn
makes use of the open-source radare2 framework for signature generation. The
effectiveness of the techniques is empirically analyzed by comparing both
artificial and real sample files against a reference dataset of multiple
versions of dozens of libraries. The results show that out of these techniques,
readable string--based techniques perform the best and that one of these
techniques correctly identifies multiple libraries contained in a stripped
statically linked executable file.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 13:58:52 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Rinsma",
"Thomas",
""
]
] | TITLE: Automatic Library Version Identification, an Exploration of Techniques
ABSTRACT: This paper is the result of a two month research internship on the topic of
library version identification. In this paper, ideas and techniques from
literature in the area of binary comparison and fingerprinting are outlined and
applied to the problem of (version) identification of shared libraries and of
libraries within statically linked binary executables. Six comparison
techniques are chosen and implemented in an open-source tool which in turn
makes use of the open-source radare2 framework for signature generation. The
effectiveness of the techniques is empirically analyzed by comparing both
artificial and real sample files against a reference dataset of multiple
versions of dozens of libraries. The results show that out of these techniques,
readable string--based techniques perform the best and that one of these
techniques correctly identifies multiple libraries contained in a stripped
statically linked executable file.
| no_new_dataset | 0.940024 |
1703.00304 | Angelos Valsamis | Angelos Valsamis, Alexandros Psychas, Fotis Aisopos, Andreas Menychtas
and Theodora Varvarigou | Second Screen User Profiling and Multi-level Smart Recommendations in
the context of Social TVs | In: Wu TT., Gennari R., Huang YM., Xie H., Cao Y. (eds) Emerging
Technologies for Education. SETE 2016 | Lecture Notes in Computer Science, vol 10108. Springer, Cham,
2017, pp 514-525 | 10.1007/978-3-319-52836-6_55 | null | cs.MM cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of Social TV, the increasing popularity of first and second
screen users, interacting and posting content online, illustrates new business
opportunities and related technical challenges, in order to enrich user
experience in such environments. The SAM (Socializing Around Media) project uses
Social Media-connected infrastructure to deal with the aforementioned
challenges, providing intelligent user context management models and mechanisms
capturing social patterns, to apply collaborative filtering techniques and
personalized recommendations in this direction. This paper presents the
Context Management mechanism of SAM, running in a Social TV environment to
provide smart recommendations for first and second screen content. The work
presented is evaluated using a real movie rating dataset found online, to
validate SAM's approach in terms of both effectiveness and efficiency.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 14:06:44 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Valsamis",
"Angelos",
""
],
[
"Psychas",
"Alexandros",
""
],
[
"Aisopos",
"Fotis",
""
],
[
"Menychtas",
"Andreas",
""
],
[
"Varvarigou",
"Theodora",
""
]
] | TITLE: Second Screen User Profiling and Multi-level Smart Recommendations in
the context of Social TVs
ABSTRACT: In the context of Social TV, the increasing popularity of first and second
screen users, interacting and posting content online, illustrates new business
opportunities and related technical challenges, in order to enrich user
experience in such environments. The SAM (Socializing Around Media) project uses
Social Media-connected infrastructure to deal with the aforementioned
challenges, providing intelligent user context management models and mechanisms
capturing social patterns, to apply collaborative filtering techniques and
personalized recommendations in this direction. This paper presents the
Context Management mechanism of SAM, running in a Social TV environment to
provide smart recommendations for first and second screen content. The work
presented is evaluated using a real movie rating dataset found online, to
validate SAM's approach in terms of both effectiveness and efficiency.
| no_new_dataset | 0.952706 |
1703.00397 | Sampoorna Biswas | Sampoorna Biswas, Laks V.S. Lakshmanan, Senjuti Basu Ray | Combating the Cold Start User Problem in Model Based Collaborative
Filtering | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For tackling the well known cold-start user problem in model-based
recommender systems, one approach is to recommend a few items to a cold-start
user and use the feedback to learn a profile. The learned profile can then be
used to make good recommendations to the cold user. In the absence of a good
initial profile, the recommendations are like random probes, but if not chosen
judiciously, both bad recommendations and too many recommendations may turn off
a user. We formalize the cold-start user problem by asking what are the $b$
best items we should recommend to a cold-start user, in order to learn her
profile most accurately, where $b$, a given budget, is typically a small
number. We formalize the problem as an optimization problem and present
multiple non-trivial results, including NP-hardness as well as hardness of
approximation. We furthermore show that the objective function, i.e., the least
square error of the learned profile w.r.t. the true user profile, is neither
submodular nor supermodular, suggesting efficient approximations are unlikely
to exist. Finally, we discuss several scalable heuristic approaches for
identifying the $b$ best items to recommend to the user and experimentally
evaluate their performance on 4 real datasets. Our experiments show that our
proposed accelerated algorithms significantly outperform the prior art in
running time, while achieving similar error in the learned user profile as
well as in the rating predictions.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2017 03:06:09 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Biswas",
"Sampoorna",
""
],
[
"Lakshmanan",
"Laks V. S.",
""
],
[
"Ray",
"Senjuti Basu",
""
]
] | TITLE: Combating the Cold Start User Problem in Model Based Collaborative
Filtering
ABSTRACT: For tackling the well known cold-start user problem in model-based
recommender systems, one approach is to recommend a few items to a cold-start
user and use the feedback to learn a profile. The learned profile can then be
used to make good recommendations to the cold user. In the absence of a good
initial profile, the recommendations are like random probes, but if not chosen
judiciously, both bad recommendations and too many recommendations may turn off
a user. We formalize the cold-start user problem by asking what are the $b$
best items we should recommend to a cold-start user, in order to learn her
profile most accurately, where $b$, a given budget, is typically a small
number. We formalize the problem as an optimization problem and present
multiple non-trivial results, including NP-hardness as well as hardness of
approximation. We furthermore show that the objective function, i.e., the least
square error of the learned profile w.r.t. the true user profile, is neither
submodular nor supermodular, suggesting efficient approximations are unlikely
to exist. Finally, we discuss several scalable heuristic approaches for
identifying the $b$ best items to recommend to the user and experimentally
evaluate their performance on 4 real datasets. Our experiments show that our
proposed accelerated algorithms significantly outperform the prior art in
runnning time, while achieving similar error in the learned user profile as
well as in the rating predictions.
| no_new_dataset | 0.947769 |
1703.00426 | Francois Chollet | Cezary Kaliszyk, Fran\c{c}ois Chollet, Christian Szegedy | HolStep: A Machine Learning Dataset for Higher-order Logic Theorem
Proving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large computer-understandable proofs consist of millions of intermediate
logical steps. The vast majority of such steps originate from manually selected
and manually guided heuristics applied to intermediate goals. So far, machine
learning has generally not been used to filter or generate these steps. In this
paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for
the purpose of developing new machine learning-based theorem-proving
strategies. We make this dataset publicly available under the BSD license. We
propose various machine learning tasks that can be performed on this dataset,
and discuss their significance for theorem proving. We also benchmark a set of
simple baseline machine learning models suited for the tasks (including
logistic regression, convolutional neural networks and recurrent neural
networks). The results of our baseline models show the promise of applying
machine learning to HOL theorem proving.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 18:20:19 GMT"
}
] | 2017-03-02T00:00:00 | [
[
"Kaliszyk",
"Cezary",
""
],
[
"Chollet",
"François",
""
],
[
"Szegedy",
"Christian",
""
]
] | TITLE: HolStep: A Machine Learning Dataset for Higher-order Logic Theorem
Proving
ABSTRACT: Large computer-understandable proofs consist of millions of intermediate
logical steps. The vast majority of such steps originate from manually selected
and manually guided heuristics applied to intermediate goals. So far, machine
learning has generally not been used to filter or generate these steps. In this
paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for
the purpose of developing new machine learning-based theorem-proving
strategies. We make this dataset publicly available under the BSD license. We
propose various machine learning tasks that can be performed on this dataset,
and discuss their significance for theorem proving. We also benchmark a set of
simple baseline machine learning models suited for the tasks (including
logistic regression, convolutional neural networks and recurrent neural
networks). The results of our baseline models show the promise of applying
machine learning to HOL theorem proving.
| new_dataset | 0.959383 |
1605.05045 | Raffaello Camoriano | Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale,
Lorenzo Rosasco, Giorgio Metta | Incremental Robot Learning of New Objects with Fixed Update Time | 8 pages, 3 figures | null | null | null | stat.ML cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider object recognition in the context of lifelong learning, where a
robotic agent learns to discriminate between a growing number of object classes
as it accumulates experience about the environment. We propose an incremental
variant of the Regularized Least Squares for Classification (RLSC) algorithm,
and exploit its structure to seamlessly add new classes to the learned model.
The presented algorithm addresses the problem of having an unbalanced
proportion of training examples per class, which occurs when new objects are
presented to the system for the first time.
We evaluate our algorithm on both a machine learning benchmark dataset and
two challenging object recognition tasks in a robotic setting. Empirical
evidence shows that our approach achieves comparable or higher classification
performance than its batch counterpart when classes are unbalanced, while being
significantly faster.
| [
{
"version": "v1",
"created": "Tue, 17 May 2016 07:50:58 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2017 20:50:38 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Feb 2017 16:53:19 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Camoriano",
"Raffaello",
""
],
[
"Pasquale",
"Giulia",
""
],
[
"Ciliberto",
"Carlo",
""
],
[
"Natale",
"Lorenzo",
""
],
[
"Rosasco",
"Lorenzo",
""
],
[
"Metta",
"Giorgio",
""
]
] | TITLE: Incremental Robot Learning of New Objects with Fixed Update Time
ABSTRACT: We consider object recognition in the context of lifelong learning, where a
robotic agent learns to discriminate between a growing number of object classes
as it accumulates experience about the environment. We propose an incremental
variant of the Regularized Least Squares for Classification (RLSC) algorithm,
and exploit its structure to seamlessly add new classes to the learned model.
The presented algorithm addresses the problem of having an unbalanced
proportion of training examples per class, which occurs when new objects are
presented to the system for the first time.
We evaluate our algorithm on both a machine learning benchmark dataset and
two challenging object recognition tasks in a robotic setting. Empirical
evidence shows that our approach achieves comparable or higher classification
performance than its batch counterpart when classes are unbalanced, while being
significantly faster.
| no_new_dataset | 0.953708 |
1605.08803 | Laurent Dinh | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | Density estimation using Real NVP | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | null | null | cs.LG cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations.
| [
{
"version": "v1",
"created": "Fri, 27 May 2016 21:24:32 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2016 21:37:10 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Feb 2017 23:21:10 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Dinh",
"Laurent",
""
],
[
"Sohl-Dickstein",
"Jascha",
""
],
[
"Bengio",
"Samy",
""
]
] | TITLE: Density estimation using Real NVP
ABSTRACT: Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations.
| no_new_dataset | 0.950732 |
1607.01097 | Scott Yang | Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri and
Scott Yang | AdaNet: Adaptive Structural Learning of Artificial Neural Networks | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present new algorithms for adaptively learning artificial neural networks.
Our algorithms (AdaNet) adaptively learn both the structure of the network and
its weights. They are based on a solid theoretical analysis, including
data-dependent generalization guarantees that we prove and discuss in detail.
We report the results of large-scale experiments with one of our algorithms on
several binary classification tasks extracted from the CIFAR-10 dataset. The
results demonstrate that our algorithm can automatically learn network
structures with very competitive performance accuracies when compared with
those achieved for neural networks found by standard approaches.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2016 02:51:33 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2016 00:46:26 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Feb 2017 02:58:11 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Cortes",
"Corinna",
""
],
[
"Gonzalvo",
"Xavi",
""
],
[
"Kuznetsov",
"Vitaly",
""
],
[
"Mohri",
"Mehryar",
""
],
[
"Yang",
"Scott",
""
]
] | TITLE: AdaNet: Adaptive Structural Learning of Artificial Neural Networks
ABSTRACT: We present new algorithms for adaptively learning artificial neural networks.
Our algorithms (AdaNet) adaptively learn both the structure of the network and
its weights. They are based on a solid theoretical analysis, including
data-dependent generalization guarantees that we prove and discuss in detail.
We report the results of large-scale experiments with one of our algorithms on
several binary classification tasks extracted from the CIFAR-10 dataset. The
results demonstrate that our algorithm can automatically learn network
structures with very competitive performance accuracies when compared with
those achieved for neural networks found by standard approaches.
| no_new_dataset | 0.948442 |
1610.06454 | Tsendsuren Munkhdalai | Tsendsuren Munkhdalai and Hong Yu | Reasoning with Memory Augmented Neural Networks for Language
Comprehension | Accepted at ICLR 2017 | null | null | null | cs.CL cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hypothesis testing is an important cognitive process that supports human
reasoning. In this paper, we introduce a computational hypothesis testing
approach based on memory augmented neural networks. Our approach involves a
hypothesis testing loop that reconsiders and progressively refines a previously
formed hypothesis in order to generate new hypotheses to test. We apply the
proposed approach to the language comprehension task by using Neural Semantic
Encoders (NSE). Our NSE models achieve state-of-the-art results, showing an
absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by
single and ensemble systems on standard machine comprehension benchmarks such
as the Children's Book Test (CBT) and Who-Did-What (WDW) news article datasets.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2016 15:17:04 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2017 17:06:17 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Munkhdalai",
"Tsendsuren",
""
],
[
"Yu",
"Hong",
""
]
] | TITLE: Reasoning with Memory Augmented Neural Networks for Language
Comprehension
ABSTRACT: Hypothesis testing is an important cognitive process that supports human
reasoning. In this paper, we introduce a computational hypothesis testing
approach based on memory augmented neural networks. Our approach involves a
hypothesis testing loop that reconsiders and progressively refines a previously
formed hypothesis in order to generate new hypotheses to test. We apply the
proposed approach to the language comprehension task by using Neural Semantic
Encoders (NSE). Our NSE models achieve state-of-the-art results, showing an
absolute improvement of 1.2% to 2.6% accuracy over previous results obtained by
single and ensemble systems on standard machine comprehension benchmarks such
as the Children's Book Test (CBT) and Who-Did-What (WDW) news article datasets.
| no_new_dataset | 0.94887 |
1610.07442 | Mohsen Ghafoorian | Mohsen Ghafoorian, Nico Karssemeijer, Tom Heskes, Mayra Bergkamp,
Joost Wissink, Jiri Obels, Karlijn Keizer, Frank-Erik de Leeuw, Bram van
Ginneken, Elena Marchiori and Bram Platel | Deep Multi-scale Location-aware 3D Convolutional Neural Networks for
Automated Detection of Lacunes of Presumed Vascular Origin | 11 pages, 7 figures | Neuroimage Clin 14 (2017) 391-399 | 10.1016/j.nicl.2017.01.033 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lacunes of presumed vascular origin (lacunes) are associated with an
increased risk of stroke, gait impairment, and dementia and are a primary
imaging feature of the small vessel disease. Quantification of lacunes may be
of great importance to elucidate the mechanisms behind neuro-degenerative
disorders and is recommended as part of study standards for small vessel
disease research. However, due to the different appearance of lacunes in
various brain regions and the existence of other similar-looking structures,
such as perivascular spaces, manual annotation is a difficult, elaborative and
subjective task, which can potentially be greatly improved by reliable and
consistent computer-aided detection (CAD) routines.
In this paper, we propose an automated two-stage method using deep
convolutional neural networks (CNN). We show that this method has good
performance and can considerably benefit readers. We first use a fully
convolutional neural network to detect initial candidates. In the second step,
we employ a 3D CNN as a false positive reduction tool. As the location
information is important to the analysis of candidate structures, we further
equip the network with contextual information using multi-scale analysis and
integration of explicit location features. We trained, validated and tested our
networks on a large dataset of 1075 cases obtained from two different studies.
Subsequently, we conducted an observer study with four trained observers and
compared our method with them using a free-response operating characteristic
analysis. Shown on a test set of 111 cases, the resulting CAD system exhibits
performance similar to the trained human observers and achieves a sensitivity
of 0.974 with 0.13 false positives per slice. A feasibility study also showed
that a trained human observer would considerably benefit once aided by the CAD
system.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 14:51:47 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2016 13:14:32 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Ghafoorian",
"Mohsen",
""
],
[
"Karssemeijer",
"Nico",
""
],
[
"Heskes",
"Tom",
""
],
[
"Bergkamp",
"Mayra",
""
],
[
"Wissink",
"Joost",
""
],
[
"Obels",
"Jiri",
""
],
[
"Keizer",
"Karlijn",
""
],
[
"de Leeuw",
"Frank-Erik",
""
],
[
"van Ginneken",
"Bram",
""
],
[
"Marchiori",
"Elena",
""
],
[
"Platel",
"Bram",
""
]
] | TITLE: Deep Multi-scale Location-aware 3D Convolutional Neural Networks for
Automated Detection of Lacunes of Presumed Vascular Origin
ABSTRACT: Lacunes of presumed vascular origin (lacunes) are associated with an
increased risk of stroke, gait impairment, and dementia and are a primary
imaging feature of the small vessel disease. Quantification of lacunes may be
of great importance to elucidate the mechanisms behind neuro-degenerative
disorders and is recommended as part of study standards for small vessel
disease research. However, due to the different appearance of lacunes in
various brain regions and the existence of other similar-looking structures,
such as perivascular spaces, manual annotation is a difficult, elaborative and
subjective task, which can potentially be greatly improved by reliable and
consistent computer-aided detection (CAD) routines.
In this paper, we propose an automated two-stage method using deep
convolutional neural networks (CNN). We show that this method has good
performance and can considerably benefit readers. We first use a fully
convolutional neural network to detect initial candidates. In the second step,
we employ a 3D CNN as a false positive reduction tool. As the location
information is important to the analysis of candidate structures, we further
equip the network with contextual information using multi-scale analysis and
integration of explicit location features. We trained, validated and tested our
networks on a large dataset of 1075 cases obtained from two different studies.
Subsequently, we conducted an observer study with four trained observers and
compared our method with them using a free-response operating characteristic
analysis. Shown on a test set of 111 cases, the resulting CAD system exhibits
performance similar to the trained human observers and achieves a sensitivity
of 0.974 with 0.13 false positives per slice. A feasibility study also showed
that a trained human observer would considerably benefit once aided by the CAD
system.
| no_new_dataset | 0.839734 |
1612.03079 | Daniel Crankshaw | Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph
E. Gonzalez, Ion Stoica | Clipper: A Low-Latency Online Prediction Serving System | null | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning is being deployed in a growing number of applications which
demand real-time, accurate, and robust predictions under heavy query load.
However, most machine learning frameworks and systems only address model
training and not deployment.
In this paper, we introduce Clipper, a general-purpose low-latency prediction
serving system. Interposing between end-user applications and a wide range of
machine learning frameworks, Clipper introduces a modular architecture to
simplify model deployment across frameworks and applications. Furthermore, by
introducing caching, batching, and adaptive model selection techniques, Clipper
reduces prediction latency and improves prediction throughput, accuracy, and
robustness without modifying the underlying machine learning frameworks. We
evaluate Clipper on four common machine learning benchmark datasets and
demonstrate its ability to meet the latency, accuracy, and throughput demands
of online serving applications. Finally, we compare Clipper to the TensorFlow
Serving system and demonstrate that we are able to achieve comparable
throughput and latency while enabling model composition and online learning to
improve accuracy and render more robust predictions.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 16:29:16 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2017 17:21:33 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Crankshaw",
"Daniel",
""
],
[
"Wang",
"Xin",
""
],
[
"Zhou",
"Giulio",
""
],
[
"Franklin",
"Michael J.",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Stoica",
"Ion",
""
]
] | TITLE: Clipper: A Low-Latency Online Prediction Serving System
ABSTRACT: Machine learning is being deployed in a growing number of applications which
demand real-time, accurate, and robust predictions under heavy query load.
However, most machine learning frameworks and systems only address model
training and not deployment.
In this paper, we introduce Clipper, a general-purpose low-latency prediction
serving system. Interposing between end-user applications and a wide range of
machine learning frameworks, Clipper introduces a modular architecture to
simplify model deployment across frameworks and applications. Furthermore, by
introducing caching, batching, and adaptive model selection techniques, Clipper
reduces prediction latency and improves prediction throughput, accuracy, and
robustness without modifying the underlying machine learning frameworks. We
evaluate Clipper on four common machine learning benchmark datasets and
demonstrate its ability to meet the latency, accuracy, and throughput demands
of online serving applications. Finally, we compare Clipper to the TensorFlow
Serving system and demonstrate that we are able to achieve comparable
throughput and latency while enabling model composition and online learning to
improve accuracy and render more robust predictions.
| no_new_dataset | 0.945751 |
1701.06796 | Gaurav Pandey | Gaurav Pandey and Ambedkar Dukkipati | Discriminative Neural Topic Models | 6 pages, 9 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a neural network based approach for learning topics from text and
image datasets. The model makes no assumptions about the conditional
distribution of the observed features given the latent topics. This allows us
to perform topic modelling efficiently using sentences of documents and patches
of images as observed features, rather than limiting ourselves to words.
Moreover, the proposed approach is online, and hence can be used for streaming
data. Furthermore, since the approach utilizes neural networks, it can be
implemented on GPU with ease, and hence it is very scalable.
| [
{
"version": "v1",
"created": "Tue, 24 Jan 2017 10:29:31 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2017 14:17:16 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Pandey",
"Gaurav",
""
],
[
"Dukkipati",
"Ambedkar",
""
]
] | TITLE: Discriminative Neural Topic Models
ABSTRACT: We propose a neural network based approach for learning topics from text and
image datasets. The model makes no assumptions about the conditional
distribution of the observed features given the latent topics. This allows us
to perform topic modelling efficiently using sentences of documents and patches
of images as observed features, rather than limiting ourselves to words.
Moreover, the proposed approach is online, and hence can be used for streaming
data. Furthermore, since the approach utilizes neural networks, it can be
implemented on GPU with ease, and hence it is very scalable.
| no_new_dataset | 0.949248 |
1702.08540 | Yazhou Yang | Yazhou Yang and Marco Loog | Active Learning Using Uncertainty Information | 6 pages, 1 figure, International Conference on Pattern Recognition
(ICPR) 2016, Cancun, Mexico | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many active learning methods belong to the retraining-based approaches, which
select one unlabeled instance, add it to the training set with its possible
labels, retrain the classification model, and evaluate the criteria that we
base our selection on. However, since the true label of the selected instance
is unknown, these methods resort to calculating the average-case or worst-case
performance with respect to the unknown label. In this paper, we propose a
different method to solve this problem. In particular, our method aims to make
use of the uncertainty information to enhance the performance of
retraining-based models. We apply our method to two state-of-the-art algorithms
and carry out extensive experiments on a wide variety of real-world datasets.
The results clearly demonstrate the effectiveness of the proposed method and
indicate it can reduce human labeling efforts in many real-life applications.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 21:33:47 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Yang",
"Yazhou",
""
],
[
"Loog",
"Marco",
""
]
] | TITLE: Active Learning Using Uncertainty Information
ABSTRACT: Many active learning methods belong to the retraining-based approaches, which
select one unlabeled instance, add it to the training set with its possible
labels, retrain the classification model, and evaluate the criteria that we
base our selection on. However, since the true label of the selected instance
is unknown, these methods resort to calculating the average-case or worst-case
performance with respect to the unknown label. In this paper, we propose a
different method to solve this problem. In particular, our method aims to make
use of the uncertainty information to enhance the performance of
retraining-based models. We apply our method to two state-of-the-art algorithms
and carry out extensive experiments on a wide variety of real-world datasets.
The results clearly demonstrate the effectiveness of the proposed method and
indicate it can reduce human labeling efforts in many real-life applications.
| no_new_dataset | 0.94868 |
1702.08658 | Shengjia Zhao | Shengjia Zhao, Jiaming Song, Stefano Ermon | Towards Deeper Understanding of Variational Autoencoding Models | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new family of optimization criteria for variational
auto-encoding models, generalizing the standard evidence lower bound. We
provide conditions under which they recover the data distribution and learn
latent features, and formally show that common issues such as blurry samples
and uninformative latent features arise when these conditions are not met.
Based on these new insights, we propose a new sequential VAE model that can
generate sharp samples on the LSUN image dataset based on pixel-wise
reconstruction loss, and propose an optimization criterion that encourages
unsupervised learning of informative latent features.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 06:04:23 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Zhao",
"Shengjia",
""
],
[
"Song",
"Jiaming",
""
],
[
"Ermon",
"Stefano",
""
]
] | TITLE: Towards Deeper Understanding of Variational Autoencoding Models
ABSTRACT: We propose a new family of optimization criteria for variational
auto-encoding models, generalizing the standard evidence lower bound. We
provide conditions under which they recover the data distribution and learn
latent features, and formally show that common issues such as blurry samples
and uninformative latent features arise when these conditions are not met.
Based on these new insights, we propose a new sequential VAE model that can
generate sharp samples on the LSUN image dataset based on pixel-wise
reconstruction loss, and propose an optimization criterion that encourages
unsupervised learning of informative latent features.
| no_new_dataset | 0.947186 |
1702.08681 | Hao Yang Dr | Hao Yang, Joey Tianyi Zhou, Jianfei Cai and Yew Soon Ong | MIML-FCN+: Multi-instance Multi-label Learning via Fully Convolutional
Networks with Privileged Information | Accepted in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-instance multi-label (MIML) learning has many interesting applications
in computer vision, including multi-object recognition and automatic image
tagging. In these applications, additional information such as bounding-boxes,
image captions and descriptions is often available during the training phase,
which is referred to as privileged information (PI). However, as existing works on
learning using PI only consider instance-level PI (privileged instances), they
fail to make use of bag-level PI (privileged bags) available in MIML learning.
Therefore, in this paper, we propose a two-stream fully convolutional network,
named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML
learning with privileged bags. Compared to the previous works on PI, the
proposed MIML-FCN+ utilizes the readily available privileged bags, instead of
hard-to-obtain privileged instances, making the system more general and
practical in real world applications. As the proposed PI loss is convex and SGD
compatible and the framework itself is a fully convolutional network, MIML-FCN+
can be easily integrated with state-of-the-art deep learning networks.
Moreover, the flexibility of convolutional layers allows us to exploit
structured correlations among instances to facilitate more effective training
and testing. Experimental results on three benchmark datasets demonstrate the
effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods
in the application of multi-object recognition.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 07:54:22 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Yang",
"Hao",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Ong",
"Yew Soon",
""
]
] | TITLE: MIML-FCN+: Multi-instance Multi-label Learning via Fully Convolutional
Networks with Privileged Information
ABSTRACT: Multi-instance multi-label (MIML) learning has many interesting applications
in computer vision, including multi-object recognition and automatic image
tagging. In these applications, additional information such as bounding-boxes,
image captions and descriptions is often available during the training phase,
which is referred to as privileged information (PI). However, as existing works on
learning using PI only consider instance-level PI (privileged instances), they
fail to make use of bag-level PI (privileged bags) available in MIML learning.
Therefore, in this paper, we propose a two-stream fully convolutional network,
named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML
learning with privileged bags. Compared to the previous works on PI, the
proposed MIML-FCN+ utilizes the readily available privileged bags, instead of
hard-to-obtain privileged instances, making the system more general and
practical in real world applications. As the proposed PI loss is convex and SGD
compatible and the framework itself is a fully convolutional network, MIML-FCN+
can be easily integrated with state of-the-art deep learning networks.
Moreover, the flexibility of convolutional layers allows us to exploit
structured correlations among instances to facilitate more effective training
and testing. Experimental results on three benchmark datasets demonstrate the
effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods
in the application of multi-object recognition.
| no_new_dataset | 0.949902 |
1702.08740 | Ziang Yan | Ziang Yan, Jian Liang, Weishen Pan, Jin Li, Changshui Zhang | Weakly- and Semi-Supervised Object Detection with
Expectation-Maximization Algorithm | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection when provided image-level labels instead of instance-level
labels (i.e., bounding boxes) during training is an important problem in
computer vision, since large scale image datasets with instance-level labels
are extremely costly to obtain. In this paper, we address this challenging
problem by developing an Expectation-Maximization (EM) based object detection
method using deep convolutional neural networks (CNNs). Our method is
applicable to both the weakly-supervised and semi-supervised settings.
Extensive experiments on PASCAL VOC 2007 benchmark show that (1) in the weakly
supervised setting, our method provides significant detection performance
improvement over current state-of-the-art methods, (2) having access to a small
number of strongly (instance-level) annotated images, our method can almost
match the performance of the fully supervised Fast RCNN. We share our source
code at https://github.com/ZiangYan/EM-WSD.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 11:03:39 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Yan",
"Ziang",
""
],
[
"Liang",
"Jian",
""
],
[
"Pan",
"Weishen",
""
],
[
"Li",
"Jin",
""
],
[
"Zhang",
"Changshui",
""
]
] | TITLE: Weakly- and Semi-Supervised Object Detection with
Expectation-Maximization Algorithm
ABSTRACT: Object detection when provided image-level labels instead of instance-level
labels (i.e., bounding boxes) during training is an important problem in
computer vision, since large scale image datasets with instance-level labels
are extremely costly to obtain. In this paper, we address this challenging
problem by developing an Expectation-Maximization (EM) based object detection
method using deep convolutional neural networks (CNNs). Our method is
applicable to both the weakly-supervised and semi-supervised settings.
Extensive experiments on PASCAL VOC 2007 benchmark show that (1) in the weakly
supervised setting, our method provides significant detection performance
improvement over current state-of-the-art methods, (2) having access to a small
number of strongly (instance-level) annotated images, our method can almost
match the performance of the fully supervised Fast RCNN. We share our source
code at https://github.com/ZiangYan/EM-WSD.
| no_new_dataset | 0.950641 |
1702.08745 | Paulo Adeodato Prof. | Paulo J. L. Adeodato, F\'abio C. Pereira and Rosalvo F. Oliveira Neto | Optimal Categorical Attribute Transformation for Granularity Change in
Relational Databases for Binary Decision Problems in Educational Data Mining | 5 pages, 2 figures, 2 tables | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach for transforming data granularity in
hierarchical databases for binary decision problems by applying regression to
categorical attributes at the lower grain levels. Attributes from a lower
hierarchy entity in the relational database have their information content
optimized through regression on the categories histogram trained on a small
exclusive labelled sample, instead of the usual mode category of the
distribution. The paper validates the approach on a binary decision task for
assessing the quality of secondary schools focusing on how logistic regression
transforms the students' and teachers' attributes into school attributes.
Experiments were carried out on Brazilian schools public datasets via 10-fold
cross-validation comparison of the ranking score produced also by logistic
regression. The proposed approach achieved higher performance than the usual
distribution mode transformation and equal to the expert weighting approach
measured by the maximum Kolmogorov-Smirnov distance and the area under the ROC
curve at 0.01 significance level.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 11:13:17 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Adeodato",
"Paulo J. L.",
""
],
[
"Pereira",
"Fábio C.",
""
],
[
"Neto",
"Rosalvo F. Oliveira",
""
]
] | TITLE: Optimal Categorical Attribute Transformation for Granularity Change in
Relational Databases for Binary Decision Problems in Educational Data Mining
ABSTRACT: This paper presents an approach for transforming data granularity in
hierarchical databases for binary decision problems by applying regression to
categorical attributes at the lower grain levels. Attributes from a lower
hierarchy entity in the relational database have their information content
optimized through regression on the categories histogram trained on a small
exclusive labelled sample, instead of the usual mode category of the
distribution. The paper validates the approach on a binary decision task for
assessing the quality of secondary schools focusing on how logistic regression
transforms the students' and teachers' attributes into school attributes.
Experiments were carried out on Brazilian schools public datasets via 10-fold
cross-validation comparison of the ranking score produced also by logistic
regression. The proposed approach achieved higher performance than the usual
distribution mode transformation and equal to the expert weighting approach
measured by the maximum Kolmogorov-Smirnov distance and the area under the ROC
curve at 0.01 significance level.
| no_new_dataset | 0.956917 |
1702.08798 | Shanshan Huang | Shanshan Huang, Yichao Xiong, Ya Zhang and Jia Wang | Unsupervised Triplet Hashing for Fast Image Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashing has played a pivotal role in large-scale image retrieval. With the
development of Convolutional Neural Network (CNN), hashing learning has shown
great promise. But existing methods are mostly tuned for classification, which
are not optimized for retrieval tasks, especially for instance-level retrieval.
In this study, we propose a novel hashing method for large-scale image
retrieval. Considering the difficulty in obtaining labeled datasets for the image
retrieval task at large scale, we propose a novel CNN-based unsupervised
hashing method, namely Unsupervised Triplet Hashing (UTH). The unsupervised
hashing network is designed under the following three principles: 1) more
discriminative representations for image retrieval; 2) minimum quantization
loss between the original real-valued feature descriptors and the learned hash
codes; 3) maximum information entropy for the learned hash codes. Extensive
experiments on CIFAR-10, MNIST and In-shop datasets have shown that UTH
outperforms several state-of-the-art unsupervised hashing methods in terms of
retrieval accuracy.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 14:26:14 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Huang",
"Shanshan",
""
],
[
"Xiong",
"Yichao",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Jia",
""
]
] | TITLE: Unsupervised Triplet Hashing for Fast Image Retrieval
ABSTRACT: Hashing has played a pivotal role in large-scale image retrieval. With the
development of Convolutional Neural Network (CNN), hashing learning has shown
great promise. But existing methods are mostly tuned for classification, which
are not optimized for retrieval tasks, especially for instance-level retrieval.
In this study, we propose a novel hashing method for large-scale image
retrieval. Considering the difficulty in obtaining labeled datasets for the image
retrieval task at large scale, we propose a novel CNN-based unsupervised
hashing method, namely Unsupervised Triplet Hashing (UTH). The unsupervised
hashing network is designed under the following three principles: 1) more
discriminative representations for image retrieval; 2) minimum quantization
loss between the original real-valued feature descriptors and the learned hash
codes; 3) maximum information entropy for the learned hash codes. Extensive
experiments on CIFAR-10, MNIST and In-shop datasets have shown that UTH
outperforms several state-of-the-art unsupervised hashing methods in terms of
retrieval accuracy.
| no_new_dataset | 0.950273 |
1702.08884 | Raphael Petegrosso | Raphael Petegrosso, Wei Zhang, Zhuliu Li, Yousef Saad and Rui Kuang | Low-rank Label Propagation for Semi-supervised Learning with 100
Millions Samples | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of semi-supervised learning crucially relies on the scalability
to a huge amount of unlabelled data that are needed to capture the underlying
manifold structure for better classification. Since computing the pairwise
similarity between the training data is prohibitively expensive in most kinds
of input data, currently, there is no general ready-to-use semi-supervised
learning method/tool available for learning with tens of millions or more data
points. In this paper, we adopted the idea of two low-rank label propagation
algorithms, GLNP (Global Linear Neighborhood Propagation) and Kernel Nystr\"om
Approximation, and implemented the parallelized version of the two algorithms
accelerated with Nesterov's accelerated projected gradient descent for Big-data
Label Propagation (BigLP).
The parallel algorithms are tested on five real datasets ranging from 7000 to
10,000,000 in size and a simulation dataset of 100,000,000 samples. In the
experiments, the implementation can scale up to datasets with 100,000,000
samples and hundreds of features and the algorithms also significantly improved
the prediction accuracy when only a very small percentage of the data is
labeled. The results demonstrate that the BigLP implementation is highly
scalable to big data and effective in utilizing the unlabeled data for
semi-supervised learning.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 17:48:21 GMT"
}
] | 2017-03-01T00:00:00 | [
[
"Petegrosso",
"Raphael",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Zhuliu",
""
],
[
"Saad",
"Yousef",
""
],
[
"Kuang",
"Rui",
""
]
] | TITLE: Low-rank Label Propagation for Semi-supervised Learning with 100
Millions Samples
ABSTRACT: The success of semi-supervised learning crucially relies on the scalability
to a huge amount of unlabelled data that are needed to capture the underlying
manifold structure for better classification. Since computing the pairwise
similarity between the training data is prohibitively expensive in most kinds
of input data, currently, there is no general ready-to-use semi-supervised
learning method/tool available for learning with tens of millions or more data
points. In this paper, we adopted the idea of two low-rank label propagation
algorithms, GLNP (Global Linear Neighborhood Propagation) and Kernel Nystr\"om
Approximation, and implemented the parallelized version of the two algorithms
accelerated with Nesterov's accelerated projected gradient descent for Big-data
Label Propagation (BigLP).
The parallel algorithms are tested on five real datasets ranging from 7000 to
10,000,000 in size and a simulation dataset of 100,000,000 samples. In the
experiments, the implementation can scale up to datasets with 100,000,000
samples and hundreds of features and the algorithms also significantly improved
the prediction accuracy when only a very small percentage of the data is
labeled. The results demonstrate that the BigLP implementation is highly
scalable to big data and effective in utilizing the unlabeled data for
semi-supervised learning.
| no_new_dataset | 0.94699 |
1507.03927 | Houwu Chen | Houwu Chen, Jiwu Shu | SkyHash: a Hash Opinion Dynamics Model | This paper has been withdrawn by the author due to a crucial
theoretic defect | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes the first hash opinion dynamics model, named SkyHash,
that can help a P2P network quickly reach consensus on hash opinion. The model
consists of a bit layer and a hash layer: each time a node shapes its new
opinion, the bit layer determines each bit of a pseudo hash, and the hash
layer chooses the hash opinion with the minimum Hamming distance to the pseudo
hash. With simulations, we conducted a comprehensive study on the convergence
speed of the model by taking into account impacts of various configurations
such as network size, node degree, hash size, and initial hash density.
Evaluation demonstrates that using our model, consensus can be quickly reached
even in large networks. We also developed a denial-of-service (DoS) proof
extension for our model. Experiments on the SNAP dataset of the Wikipedia
who-votes-on-whom network demonstrate that besides the ability to refuse known
ill-behaved nodes, the DoS-proof extended model also outperforms Bitcoin by
producing consensus in 45 seconds, and tolerating DoS attacks committed by up to
0.9% of the top influential nodes.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 17:03:56 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jul 2015 10:25:24 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jul 2015 07:44:57 GMT"
},
{
"version": "v4",
"created": "Sat, 17 Oct 2015 11:15:55 GMT"
},
{
"version": "v5",
"created": "Tue, 17 Nov 2015 15:47:38 GMT"
},
{
"version": "v6",
"created": "Sun, 26 Feb 2017 23:22:50 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Chen",
"Houwu",
""
],
[
"Shu",
"Jiwu",
""
]
] | TITLE: SkyHash: a Hash Opinion Dynamics Model
ABSTRACT: This paper proposes the first hash opinion dynamics model, named SkyHash,
that can help a P2P network quickly reach consensus on hash opinion. The model
consists of a bit layer and a hash layer: each time a node shapes its new
opinion, the bit layer determines each bit of a pseudo hash, and the hash
layer chooses the hash opinion with the minimum Hamming distance to the pseudo
hash. With simulations, we conducted a comprehensive study on the convergence
speed of the model by taking into account impacts of various configurations
such as network size, node degree, hash size, and initial hash density.
Evaluation demonstrates that using our model, consensus can be quickly reached
even in large networks. We also developed a denial-of-service (DoS) proof
extension for our model. Experiments on the SNAP dataset of the Wikipedia
who-votes-on-whom network demonstrate that besides the ability to refuse known
ill-behaved nodes, the DoS-proof extended model also outperforms Bitcoin by
producing consensus in 45 seconds, and tolerating DoS attacks committed by up to
0.9% of the top influential nodes.
| no_new_dataset | 0.952042 |
1510.03164 | Purushottam Kar | Shuai Li and Purushottam Kar | Context-Aware Bandits | The paper has been withdrawn as the work has been superseded | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an efficient Context-Aware clustering of Bandits (CAB) algorithm,
which can capture collaborative effects. CAB can be easily deployed in a
real-world recommendation system, where multi-armed bandits have been shown to
perform well in particular with respect to the cold-start problem. CAB utilizes
a context-aware clustering augmented by exploration-exploitation strategies.
CAB dynamically clusters the users based on the content universe under
consideration. We give a theoretical analysis in the standard stochastic
multi-armed bandits setting. We show the efficiency of our approach on
production and real-world datasets, demonstrate the scalability, and, more
importantly, the significantly increased prediction performance against several
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 07:04:16 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Nov 2015 05:47:32 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2016 16:18:43 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Jun 2016 20:51:08 GMT"
},
{
"version": "v5",
"created": "Sun, 26 Feb 2017 15:53:30 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Li",
"Shuai",
""
],
[
"Kar",
"Purushottam",
""
]
] | TITLE: Context-Aware Bandits
ABSTRACT: We propose an efficient Context-Aware clustering of Bandits (CAB) algorithm,
which can capture collaborative effects. CAB can be easily deployed in a
real-world recommendation system, where multi-armed bandits have been shown to
perform well in particular with respect to the cold-start problem. CAB utilizes
a context-aware clustering augmented by exploration-exploitation strategies.
CAB dynamically clusters the users based on the content universe under
consideration. We give a theoretical analysis in the standard stochastic
multi-armed bandits setting. We show the efficiency of our approach on
production and real-world datasets, demonstrate the scalability, and, more
importantly, the significantly increased prediction performance against several
state-of-the-art methods.
| no_new_dataset | 0.943191 |
1605.08074 | Kun Tu | Kun Tu, Bruno Ribeiro, Ananthram Swami, Don Towsley | Temporal Clustering in Dynamic Networks with Tensor Decomposition | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic networks are increasingly being used to model real-world datasets. A
challenging task in their analysis is to detect and characterize clusters, which
is useful for analyzing real-world data, for example by detecting evolving communities
in networks. We propose a temporal clustering framework based on a set of
network generative models to address this problem. We use PARAFAC decomposition
to learn network models from datasets. We then use $K$-means for clustering, the
Silhouette criterion to determine the number of clusters, and a similarity
score to order the clusters and retain the significant ones. In order to
address the time-dependent aspect of these clusters, we propose a segmentation
algorithm to detect their formations, dissolutions and lifetimes. Synthetic
networks with ground truth and real-world datasets are used to test our method
against state-of-the-art, and the results show that our method has better
performance in clustering and lifetime detection than previous methods.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 21:07:14 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2017 06:18:34 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Feb 2017 05:54:43 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Tu",
"Kun",
""
],
[
"Ribeiro",
"Bruno",
""
],
[
"Swami",
"Ananthram",
""
],
[
"Towsley",
"Don",
""
]
] | TITLE: Temporal Clustering in Dynamic Networks with Tensor Decomposition
ABSTRACT: Dynamic networks are increasingly being used to model real-world datasets. A
challenging task in their analysis is to detect and characterize clusters, which
is useful for analyzing real-world data, for example by detecting evolving communities
in networks. We propose a temporal clustering framework based on a set of
network generative models to address this problem. We use PARAFAC decomposition
to learn network models from datasets. We then use $K$-means for clustering, the
Silhouette criterion to determine the number of clusters, and a similarity
score to order the clusters and retain the significant ones. In order to
address the time-dependent aspect of these clusters, we propose a segmentation
algorithm to detect their formations, dissolutions and lifetimes. Synthetic
networks with ground truth and real-world datasets are used to test our method
against state-of-the-art, and the results show that our method has better
performance in clustering and lifetime detection than previous methods.
| no_new_dataset | 0.950088 |
1606.04582 | Minjoon Seo | Minjoon Seo, Sewon Min, Ali Farhadi, Hannaneh Hajishirzi | Query-Reduction Networks for Question Answering | Published as a conference paper at ICLR 2017. Title of the paper has
changed from "Query-Regression Networks for Machine Comprehension" | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of question answering when reasoning over
multiple facts is required. We propose Query-Reduction Network (QRN), a variant
of Recurrent Neural Network (RNN) that effectively handles both short-term
(local) and long-term (global) sequential dependencies to reason over multiple
facts. QRN considers the context sentences as a sequence of state-changing
triggers, and reduces the original query to a more informed query as it
observes each trigger (context sentence) through time. Our experiments show
that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and
in a real goal-oriented dialog dataset. In addition, QRN formulation allows
parallelization on RNN's time axis, saving an order of magnitude in time
complexity for training and inference.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 21:54:46 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2016 21:54:45 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Nov 2016 10:07:22 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Dec 2016 00:05:06 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Feb 2017 22:04:54 GMT"
},
{
"version": "v6",
"created": "Fri, 24 Feb 2017 19:59:01 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Seo",
"Minjoon",
""
],
[
"Min",
"Sewon",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] | TITLE: Query-Reduction Networks for Question Answering
ABSTRACT: In this paper, we study the problem of question answering when reasoning over
multiple facts is required. We propose Query-Reduction Network (QRN), a variant
of Recurrent Neural Network (RNN) that effectively handles both short-term
(local) and long-term (global) sequential dependencies to reason over multiple
facts. QRN considers the context sentences as a sequence of state-changing
triggers, and reduces the original query to a more informed query as it
observes each trigger (context sentence) through time. Our experiments show
that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and
in a real goal-oriented dialog dataset. In addition, QRN formulation allows
parallelization on RNN's time axis, saving an order of magnitude in time
complexity for training and inference.
| no_new_dataset | 0.947381 |
1608.03544 | Claudio Gentile | Claudio Gentile, Shuai Li, Purushottam Kar, Alexandros Karatzoglou,
Evans Etrue, Giovanni Zappella | On Context-Dependent Clustering of Bandits | null | null | null | null | cs.LG cs.AI cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a novel cluster-of-bandit algorithm CAB for collaborative
recommendation tasks that implements the underlying feedback sharing mechanism
by estimating the neighborhood of users in a context-dependent manner. CAB
makes sharp departures from the state of the art by incorporating collaborative
effects into inference as well as learning processes in a manner that
seamlessly interleaves explore-exploit tradeoffs and collaborative steps. We
prove regret bounds under various assumptions on the data, which exhibit a
crisp dependence on the expected number of clusters over the users, a natural
measure of the statistical difficulty of the learning task. Experiments on
production and real-world datasets show that CAB offers significantly increased
prediction performance against a representative pool of state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2016 14:13:28 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2017 17:16:22 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Gentile",
"Claudio",
""
],
[
"Li",
"Shuai",
""
],
[
"Kar",
"Purushottam",
""
],
[
"Karatzoglou",
"Alexandros",
""
],
[
"Etrue",
"Evans",
""
],
[
"Zappella",
"Giovanni",
""
]
] | TITLE: On Context-Dependent Clustering of Bandits
ABSTRACT: We investigate a novel cluster-of-bandit algorithm CAB for collaborative
recommendation tasks that implements the underlying feedback sharing mechanism
by estimating the neighborhood of users in a context-dependent manner. CAB
makes sharp departures from the state of the art by incorporating collaborative
effects into inference as well as learning processes in a manner that
seamlessly interleaves explore-exploit tradeoffs and collaborative steps. We
prove regret bounds under various assumptions on the data, which exhibit a
crisp dependence on the expected number of clusters over the users, a natural
measure of the statistical difficulty of the learning task. Experiments on
production and real-world datasets show that CAB offers significantly increased
prediction performance against a representative pool of state-of-the-art
methods.
| no_new_dataset | 0.944638 |
1608.05745 | Edward Choi | Edward Choi, Mohammad Taha Bahadori, Joshua A. Kulas, Andy Schuetz,
Walter F. Stewart, Jimeng Sun | RETAIN: An Interpretable Predictive Model for Healthcare using Reverse
Time Attention Mechanism | Accepted at Neural Information Processing Systems (NIPS) 2016 | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accuracy and interpretability are two dominant features of successful
predictive models. Typically, a choice must be made in favor of complex black
box models such as recurrent neural networks (RNN) for accuracy versus less
accurate but more interpretable traditional models such as logistic regression.
This tradeoff poses challenges in medicine where both accuracy and
interpretability are important. We addressed this challenge by developing the
REverse Time AttentIoN model (RETAIN) for application to Electronic Health
Records (EHR) data. RETAIN achieves high accuracy while remaining clinically
interpretable and is based on a two-level neural attention model that detects
influential past visits and significant clinical variables within those visits
(e.g. key diagnoses). RETAIN mimics physician practice by attending the EHR
data in a reverse time order so that recent clinical visits are likely to
receive higher attention. RETAIN was tested on a large health system EHR
dataset with 14 million visits completed by 263K patients over an 8 year period
and demonstrated predictive accuracy and computational scalability comparable
to state-of-the-art methods such as RNN, and ease of interpretability
comparable to traditional models.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 21:54:46 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2016 06:03:43 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2016 19:45:03 GMT"
},
{
"version": "v4",
"created": "Sun, 26 Feb 2017 15:13:31 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Choi",
"Edward",
""
],
[
"Bahadori",
"Mohammad Taha",
""
],
[
"Kulas",
"Joshua A.",
""
],
[
"Schuetz",
"Andy",
""
],
[
"Stewart",
"Walter F.",
""
],
[
"Sun",
"Jimeng",
""
]
] | TITLE: RETAIN: An Interpretable Predictive Model for Healthcare using Reverse
Time Attention Mechanism
ABSTRACT: Accuracy and interpretability are two dominant features of successful
predictive models. Typically, a choice must be made in favor of complex black
box models such as recurrent neural networks (RNN) for accuracy versus less
accurate but more interpretable traditional models such as logistic regression.
This tradeoff poses challenges in medicine where both accuracy and
interpretability are important. We addressed this challenge by developing the
REverse Time AttentIoN model (RETAIN) for application to Electronic Health
Records (EHR) data. RETAIN achieves high accuracy while remaining clinically
interpretable and is based on a two-level neural attention model that detects
influential past visits and significant clinical variables within those visits
(e.g. key diagnoses). RETAIN mimics physician practice by attending the EHR
data in a reverse time order so that recent clinical visits are likely to
receive higher attention. RETAIN was tested on a large health system EHR
dataset with 14 million visits completed by 263K patients over an 8 year period
and demonstrated predictive accuracy and computational scalability comparable
to state-of-the-art methods such as RNN, and ease of interpretability
comparable to traditional models.
| no_new_dataset | 0.950134 |
1608.06902 | Joachim Ott | Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio | Recurrent Neural Networks With Limited Numerical Precision | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many
machine learning tasks but their demand on resources in terms of memory and
computational power are often high. Therefore, there is a great interest in
optimizing the computations performed with these models especially when
considering development of specialized low-power hardware for deep networks.
One way of reducing the computational needs is to limit the numerical precision
of the network weights and biases. This has led to different proposed rounding
methods which have been applied so far to only Convolutional Neural Networks
and Fully-Connected Networks. This paper addresses the question of how to best
reduce weight precision during training in the case of RNNs. We present results
from the use of different stochastic and deterministic reduced precision
training methods applied to three major RNN types which are then tested on
several datasets. The results show that the weight binarization methods do not
work with the RNNs. However, the stochastic and deterministic ternarization,
and pow2-ternarization methods gave rise to low-precision RNNs that produce
similar or even higher accuracy on certain datasets, therefore providing a path
towards training more efficient implementations of RNNs in specialized
hardware.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2016 17:15:29 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2017 14:01:40 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Ott",
"Joachim",
""
],
[
"Lin",
"Zhouhan",
""
],
[
"Zhang",
"Ying",
""
],
[
"Liu",
"Shih-Chii",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Recurrent Neural Networks With Limited Numerical Precision
ABSTRACT: Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many
machine learning tasks, but their demands on resources in terms of memory and
computational power are often high. Therefore, there is a great interest in
optimizing the computations performed with these models especially when
considering development of specialized low-power hardware for deep networks.
One way of reducing the computational needs is to limit the numerical precision
of the network weights and biases. This has led to different proposed rounding
methods which have been applied so far to only Convolutional Neural Networks
and Fully-Connected Networks. This paper addresses the question of how to best
reduce weight precision during training in the case of RNNs. We present results
from the use of different stochastic and deterministic reduced precision
training methods applied to three major RNN types which are then tested on
several datasets. The results show that the weight binarization methods do not
work with the RNNs. However, the stochastic and deterministic ternarization,
and pow2-ternarization methods gave rise to low-precision RNNs that produce
similar or even higher accuracy on certain datasets, therefore providing a path
towards training more efficient implementations of RNNs in specialized
hardware.
| no_new_dataset | 0.947672 |
1609.00222 | Hande Alemdar | Hande Alemdar and Vincent Leroy and Adrien Prost-Boucle and
Fr\'ed\'eric P\'etrot | Ternary Neural Networks for Resource-Efficient AI Applications | null | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The computation and storage requirements for Deep Neural Networks (DNNs) are
usually high. This issue limits their deployability on ubiquitous computing
devices such as smart phones, wearables and autonomous drones. In this paper,
we propose ternary neural networks (TNNs) in order to make deep learning more
resource-efficient. We train these TNNs using a teacher-student approach based
on a novel, layer-wise greedy methodology. Thanks to our two-stage training
procedure, the teacher network is still able to use state-of-the-art methods
such as dropout and batch normalization to increase accuracy and reduce
training time. Using only ternary weights and activations, the student ternary
network learns to mimic the behavior of its teacher network without using any
multiplication. Unlike its -1,1 binary counterparts, a ternary neural network
inherently prunes the smaller weights by setting them to zero during training.
This makes them sparser and thus more energy-efficient. We design a
purpose-built hardware architecture for TNNs and implement it on FPGA and ASIC.
We evaluate TNNs on several benchmark datasets and demonstrate up to 3.1x
better energy efficiency with respect to the state of the art while also
improving accuracy.
| [
{
"version": "v1",
"created": "Thu, 1 Sep 2016 13:08:47 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2017 09:44:34 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Alemdar",
"Hande",
""
],
[
"Leroy",
"Vincent",
""
],
[
"Prost-Boucle",
"Adrien",
""
],
[
"Pétrot",
"Frédéric",
""
]
] | TITLE: Ternary Neural Networks for Resource-Efficient AI Applications
ABSTRACT: The computation and storage requirements for Deep Neural Networks (DNNs) are
usually high. This issue limits their deployability on ubiquitous computing
devices such as smart phones, wearables and autonomous drones. In this paper,
we propose ternary neural networks (TNNs) in order to make deep learning more
resource-efficient. We train these TNNs using a teacher-student approach based
on a novel, layer-wise greedy methodology. Thanks to our two-stage training
procedure, the teacher network is still able to use state-of-the-art methods
such as dropout and batch normalization to increase accuracy and reduce
training time. Using only ternary weights and activations, the student ternary
network learns to mimic the behavior of its teacher network without using any
multiplication. Unlike its -1,1 binary counterparts, a ternary neural network
inherently prunes the smaller weights by setting them to zero during training.
This makes them sparser and thus more energy-efficient. We design a
purpose-built hardware architecture for TNNs and implement it on FPGA and ASIC.
We evaluate TNNs on several benchmark datasets and demonstrate up to 3.1x
better energy efficiency with respect to the state of the art while also
improving accuracy.
| no_new_dataset | 0.951414 |
1610.03454 | Weiran Wang | Weiran Wang, Xinchen Yan, Honglak Lee, Karen Livescu | Deep Variational Canonical Correlation Analysis | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present deep variational canonical correlation analysis (VCCA), a deep
multi-view learning model that extends the latent variable model interpretation
of linear CCA to nonlinear observation models parameterized by deep neural
networks. We derive variational lower bounds of the data likelihood by
parameterizing the posterior probability of the latent variables from the view
that is available at test time. We also propose a variant of VCCA called
VCCA-private that can, in addition to the "common variables" underlying both
views, extract the "private variables" within each view, and disentangles the
shared and private information for multi-view data without hard supervision.
Experimental results on real-world datasets show that our methods are
competitive across domains.
| [
{
"version": "v1",
"created": "Tue, 11 Oct 2016 18:22:05 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2016 16:29:11 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Feb 2017 03:39:12 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Wang",
"Weiran",
""
],
[
"Yan",
"Xinchen",
""
],
[
"Lee",
"Honglak",
""
],
[
"Livescu",
"Karen",
""
]
] | TITLE: Deep Variational Canonical Correlation Analysis
ABSTRACT: We present deep variational canonical correlation analysis (VCCA), a deep
multi-view learning model that extends the latent variable model interpretation
of linear CCA to nonlinear observation models parameterized by deep neural
networks. We derive variational lower bounds of the data likelihood by
parameterizing the posterior probability of the latent variables from the view
that is available at test time. We also propose a variant of VCCA called
VCCA-private that can, in addition to the "common variables" underlying both
views, extract the "private variables" within each view, and disentangles the
shared and private information for multi-view data without hard supervision.
Experimental results on real-world datasets show that our methods are
competitive across domains.
| no_new_dataset | 0.9463 |
1611.01702 | Adji Bousso Dieng | Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley | TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency | International Conference on Learning Representations | null | null | null | cs.CL cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based
language model designed to directly capture the global semantic meaning
relating words in a document via latent topics. Because of their sequential
nature, RNNs are good at capturing the local structure of a word sequence -
both semantic and syntactic - but might face difficulty remembering long-range
dependencies. Intuitively, these long-range dependencies are of semantic
nature. In contrast, latent topic models are able to capture the global
underlying semantic structure of a document but do not account for word
ordering. The proposed TopicRNN model integrates the merits of RNNs and latent
topic models: it captures local (syntactic) dependencies using an RNN and
global (semantic) dependencies using latent topics. Unlike previous work on
contextual RNN language modeling, our model is learned end-to-end. Empirical
results on word prediction show that TopicRNN outperforms existing contextual
RNN baselines. In addition, TopicRNN can be used as an unsupervised feature
extractor for documents. We do this for sentiment analysis on the IMDB movie
review dataset and report an error rate of $6.28\%$. This is comparable to the
state-of-the-art $5.91\%$ resulting from a semi-supervised approach. Finally,
TopicRNN also yields sensible topics, making it a useful alternative to
document models such as latent Dirichlet allocation.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 21:25:07 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2017 03:03:38 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Dieng",
"Adji B.",
""
],
[
"Wang",
"Chong",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Paisley",
"John",
""
]
] | TITLE: TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency
ABSTRACT: In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based
language model designed to directly capture the global semantic meaning
relating words in a document via latent topics. Because of their sequential
nature, RNNs are good at capturing the local structure of a word sequence -
both semantic and syntactic - but might face difficulty remembering long-range
dependencies. Intuitively, these long-range dependencies are of semantic
nature. In contrast, latent topic models are able to capture the global
underlying semantic structure of a document but do not account for word
ordering. The proposed TopicRNN model integrates the merits of RNNs and latent
topic models: it captures local (syntactic) dependencies using an RNN and
global (semantic) dependencies using latent topics. Unlike previous work on
contextual RNN language modeling, our model is learned end-to-end. Empirical
results on word prediction show that TopicRNN outperforms existing contextual
RNN baselines. In addition, TopicRNN can be used as an unsupervised feature
extractor for documents. We do this for sentiment analysis on the IMDB movie
review dataset and report an error rate of $6.28\%$. This is comparable to the
state-of-the-art $5.91\%$ resulting from a semi-supervised approach. Finally,
TopicRNN also yields sensible topics, making it a useful alternative to
document models such as latent Dirichlet allocation.
| no_new_dataset | 0.949623 |
1611.03641 | Oded Avraham | Oded Avraham and Yoav Goldberg | Improving Reliability of Word Similarity Evaluation by Redesigning
Annotation Task and Performance Measure | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a new method for creating and using gold-standard datasets for
word similarity evaluation. Our goal is to improve the reliability of the
evaluation, and we do this by redesigning the annotation task to achieve higher
inter-rater agreement, and by defining a performance measure which takes the
reliability of each annotation decision in the dataset into account.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 10:06:29 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2017 18:38:56 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Avraham",
"Oded",
""
],
[
"Goldberg",
"Yoav",
""
]
] | TITLE: Improving Reliability of Word Similarity Evaluation by Redesigning
Annotation Task and Performance Measure
ABSTRACT: We suggest a new method for creating and using gold-standard datasets for
word similarity evaluation. Our goal is to improve the reliability of the
evaluation, and we do this by redesigning the annotation task to achieve higher
inter-rater agreement, and by defining a performance measure which takes the
reliability of each annotation decision in the dataset into account.
| no_new_dataset | 0.951188 |
1611.07065 | Joachim Ott | Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio | Recurrent Neural Networks With Limited Numerical Precision | NIPS 2016 EMDNN Workshop paper, condensed version of arXiv:1608.06902 | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many
machine learning tasks, but their demands on resources in terms of memory and
computational power are often high. Therefore, there is a great interest in
optimizing the computations performed with these models especially when
considering development of specialized low-power hardware for deep networks.
One way of reducing the computational needs is to limit the numerical precision
of the network weights and biases, and this will be addressed for the case of
RNNs. We present results from the use of different stochastic and deterministic
reduced precision training methods applied to two major RNN types, which are
then tested on three datasets. The results show that the stochastic and
deterministic ternarization, pow2-ternarization, and exponential quantization
methods gave rise to low-precision RNNs that produce similar or even higher
accuracy on certain datasets, therefore providing a path towards training more
efficient implementations of RNNs in specialized hardware.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 21:24:45 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2017 14:13:25 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Ott",
"Joachim",
""
],
[
"Lin",
"Zhouhan",
""
],
[
"Zhang",
"Ying",
""
],
[
"Liu",
"Shih-Chii",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Recurrent Neural Networks With Limited Numerical Precision
ABSTRACT: Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many
machine learning tasks, but their demands on resources in terms of memory and
computational power are often high. Therefore, there is a great interest in
optimizing the computations performed with these models especially when
considering development of specialized low-power hardware for deep networks.
One way of reducing the computational needs is to limit the numerical precision
of the network weights and biases, and this will be addressed for the case of
RNNs. We present results from the use of different stochastic and deterministic
reduced precision training methods applied to two major RNN types, which are
then tested on three datasets. The results show that the stochastic and
deterministic ternarization, pow2-ternarization, and exponential quantization
methods gave rise to low-precision RNNs that produce similar or even higher
accuracy on certain datasets, therefore providing a path towards training more
efficient implementations of RNNs in specialized hardware.
| no_new_dataset | 0.94699 |
1701.04175 | Chuong Nguyen | Chuong V. Nguyen, Michael Milford, Robert Mahony | 3D tracking of water hazards with polarized stereo cameras | 7 pages, ICRA 2017 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current self-driving car systems operate well in sunny weather but struggle
in adverse conditions. One of the most commonly encountered adverse conditions
involves water on the road caused by rain, sleet, melting snow or flooding.
While some advances have been made in using conventional RGB camera and LIDAR
technology for detecting water hazards, other sources of information such as
polarization offer a promising and potentially superior approach to this
problem in terms of performance and cost. In this paper, we present a novel
stereo-polarization system for detecting and tracking water hazards based on
polarization and color variation of reflected light, with consideration of the
effect of polarized light from sky as function of reflection and azimuth
angles. To evaluate this system, we present a new large `water on road'
datasets spanning approximately 2 km of driving in various on-road and off-road
conditions and demonstrate for the first time reliable water detection and
tracking over a wide range of realistic car driving water conditions using
polarized vision as the primary sensing modality. Our system successfully
detects water hazards at distances of more than 100 m. Finally, we discuss several
interesting challenges and propose future research directions for further
improving robust autonomous car perception in hazardous wet conditions using
polarization sensors.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2017 05:47:30 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2017 07:36:42 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Nguyen",
"Chuong V.",
""
],
[
"Milford",
"Michael",
""
],
[
"Mahony",
"Robert",
""
]
] | TITLE: 3D tracking of water hazards with polarized stereo cameras
ABSTRACT: Current self-driving car systems operate well in sunny weather but struggle
in adverse conditions. One of the most commonly encountered adverse conditions
involves water on the road caused by rain, sleet, melting snow or flooding.
While some advances have been made in using conventional RGB camera and LIDAR
technology for detecting water hazards, other sources of information such as
polarization offer a promising and potentially superior approach to this
problem in terms of performance and cost. In this paper, we present a novel
stereo-polarization system for detecting and tracking water hazards based on
polarization and color variation of reflected light, with consideration of the
effect of polarized light from the sky as a function of reflection and azimuth
angles. To evaluate this system, we present a new large `water on road'
dataset spanning approximately 2 km of driving in various on-road and off-road
conditions and demonstrate for the first time reliable water detection and
tracking over a wide range of realistic car driving water conditions using
polarized vision as the primary sensing modality. Our system successfully
detects water hazards at distances of more than 100 m. Finally, we discuss several
interesting challenges and propose future research directions for further
improving robust autonomous car perception in hazardous wet conditions using
polarization sensors.
| new_dataset | 0.962391 |
1702.06166 | Tammo Rukat | Tammo Rukat and Chris C. Holmes and Michalis K. Titsias and
Christopher Yau | Bayesian Boolean Matrix Factorisation | null | null | null | null | stat.ML cs.LG cs.NA q-bio.GN q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean matrix factorisation aims to decompose a binary data matrix into an
approximate Boolean product of two low rank, binary matrices: one containing
meaningful patterns, the other quantifying how the observations can be
expressed as a combination of these patterns. We introduce the OrMachine, a
probabilistic generative model for Boolean matrix factorisation and derive a
Metropolised Gibbs sampler that facilitates efficient parallel posterior
inference. On real world and simulated data, our method outperforms all
currently existing approaches for Boolean matrix factorisation and completion.
This is the first method to provide full posterior inference for Boolean matrix
factorisation, which is relevant in applications, e.g. for controlling false
positive rates in collaborative filtering and, crucially, improves the
interpretability of the inferred patterns. The proposed algorithm scales to
large datasets as we demonstrate by analysing single cell gene expression data
in 1.3 million mouse brain cells across 11 thousand genes on commodity
hardware.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 20:31:39 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2017 14:17:44 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Rukat",
"Tammo",
""
],
[
"Holmes",
"Chris C.",
""
],
[
"Titsias",
"Michalis K.",
""
],
[
"Yau",
"Christopher",
""
]
] | TITLE: Bayesian Boolean Matrix Factorisation
ABSTRACT: Boolean matrix factorisation aims to decompose a binary data matrix into an
approximate Boolean product of two low rank, binary matrices: one containing
meaningful patterns, the other quantifying how the observations can be
expressed as a combination of these patterns. We introduce the OrMachine, a
probabilistic generative model for Boolean matrix factorisation and derive a
Metropolised Gibbs sampler that facilitates efficient parallel posterior
inference. On real world and simulated data, our method outperforms all
currently existing approaches for Boolean matrix factorisation and completion.
This is the first method to provide full posterior inference for Boolean matrix
factorisation, which is relevant in applications, e.g. for controlling false
positive rates in collaborative filtering and, crucially, improves the
interpretability of the inferred patterns. The proposed algorithm scales to
large datasets as we demonstrate by analysing single cell gene expression data
in 1.3 million mouse brain cells across 11 thousand genes on commodity
hardware.
| no_new_dataset | 0.948298 |
1702.06270 | Fengli Xu | Fengli Xu, Zhen Tu, Yong Li, Pengyu Zhang, Xiaoming Fu, Depeng Jin | Trajectory Recovery From Ash: User Privacy Is NOT Preserved in
Aggregated Mobility Data | 10 pages, 11 figures, accepted in WWW 2017 | null | 10.1145/3038912.3052620 | null | cs.CY cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human mobility data has been ubiquitously collected through cellular networks
and mobile applications, and publicly released for academic research and
commercial purposes for the last decade. Since releasing individuals' mobility
records usually gives rise to privacy issues, dataset owners tend to only
publish aggregated mobility data, such as the number of users covered by a
cellular tower at a specific timestamp, which is believed to be sufficient for
preserving users' privacy. However, in this paper, we argue and prove that even
publishing aggregated mobility data could lead to a privacy breach in
individuals' trajectories. We develop an attack system that is able to exploit
the uniqueness and regularity of human mobility to recover individuals'
trajectories from the aggregated mobility data without any prior knowledge. By
conducting experiments on two real-world datasets collected from both mobile
application and cellular network, we reveal that the attack system is able to
recover users' trajectories with an accuracy of about 73%-91% at the scale of tens of
thousands to hundreds of thousands users, which indicates severe privacy
leakage in such datasets. Through the investigation on aggregated mobility
data, our work recognizes a novel privacy problem in publishing statistical data,
which calls for immediate attention from both academia and industry.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 05:24:43 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2017 02:04:55 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Xu",
"Fengli",
""
],
[
"Tu",
"Zhen",
""
],
[
"Li",
"Yong",
""
],
[
"Zhang",
"Pengyu",
""
],
[
"Fu",
"Xiaoming",
""
],
[
"Jin",
"Depeng",
""
]
] | TITLE: Trajectory Recovery From Ash: User Privacy Is NOT Preserved in
Aggregated Mobility Data
ABSTRACT: Human mobility data has been ubiquitously collected through cellular networks
and mobile applications, and publicly released for academic research and
commercial purposes for the last decade. Since releasing individuals' mobility
records usually gives rise to privacy issues, dataset owners tend to only
publish aggregated mobility data, such as the number of users covered by a
cellular tower at a specific timestamp, which is believed to be sufficient for
preserving users' privacy. However, in this paper, we argue and prove that even
publishing aggregated mobility data could lead to a privacy breach in
individuals' trajectories. We develop an attack system that is able to exploit
the uniqueness and regularity of human mobility to recover individuals'
trajectories from the aggregated mobility data without any prior knowledge. By
conducting experiments on two real-world datasets collected from both mobile
application and cellular network, we reveal that the attack system is able to
recover users' trajectories with an accuracy of about 73%-91% at the scale of tens of
thousands to hundreds of thousands users, which indicates severe privacy
leakage in such datasets. Through the investigation on aggregated mobility
data, our work recognizes a novel privacy problem in publishing statistical data,
which calls for immediate attention from both academia and industry.
| no_new_dataset | 0.949342 |
1702.06295 | Armen Aghajanyan | Armen Aghajanyan | Convolution Aware Initialization | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Initialization of parameters in deep neural networks has been shown to have a
big impact on the performance of the networks (Mishkin & Matas, 2015). The
initialization scheme devised by He et al. allowed convolution activations to
carry a constrained mean which allowed deep networks to be trained effectively
(He et al., 2015a). Orthogonal initializations and more generally orthogonal
matrices in standard recurrent networks have been proved to eradicate the
vanishing and exploding gradient problem (Pascanu et al., 2012). The majority of
current initialization schemes do not fully take into account the intrinsic
structure of the convolution operator. Using the duality of the Fourier
transform and the convolution operator, Convolution Aware Initialization builds
orthogonal filters in the Fourier space, and using the inverse Fourier
transform represents them in the standard space. With Convolution Aware
Initialization we noticed not only higher accuracy and lower loss, but also faster
convergence. We achieve a new state of the art on the CIFAR10 dataset, and
achieve close to state of the art on various other tasks.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 09:01:46 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2017 06:00:34 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Feb 2017 17:38:58 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Aghajanyan",
"Armen",
""
]
] | TITLE: Convolution Aware Initialization
ABSTRACT: Initialization of parameters in deep neural networks has been shown to have a
big impact on the performance of the networks (Mishkin & Matas, 2015). The
initialization scheme devised by He et al. allowed convolution activations to
carry a constrained mean which allowed deep networks to be trained effectively
(He et al., 2015a). Orthogonal initializations and more generally orthogonal
matrices in standard recurrent networks have been proved to eradicate the
vanishing and exploding gradient problem (Pascanu et al., 2012). The majority of
current initialization schemes do not fully take into account the intrinsic
structure of the convolution operator. Using the duality of the Fourier
transform and the convolution operator, Convolution Aware Initialization builds
orthogonal filters in the Fourier space, and using the inverse Fourier
transform represents them in the standard space. With Convolution Aware
Initialization we noticed not only higher accuracy and lower loss, but also faster
convergence. We achieve a new state of the art on the CIFAR10 dataset, and
achieve close to state of the art on various other tasks.
| no_new_dataset | 0.952838 |
1702.07772 | Aneeq Zia | Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric L. Sarin and Irfan
Essa | Video and Accelerometer-Based Motion Analysis for Automated Surgical
Skills Assessment | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Basic surgical skills of suturing and knot tying are an essential
part of medical training. Having an automated system for surgical skills
assessment could help save experts time and improve training efficiency. There
have been some recent attempts at automated surgical skills assessment using
either video analysis or acceleration data. In this paper, we present a novel
approach for automated assessment of OSATS based surgical skills and provide an
analysis of different features on multi-modal data (video and accelerometer
data). Methods: We conduct the largest study, to the best of our knowledge, for
basic surgical skills assessment on a dataset that contained video and
accelerometer data for suturing and knot-tying tasks. We introduce "entropy
based" features - Approximate Entropy (ApEn) and Cross-Approximate Entropy
(XApEn), which quantify the amount of predictability and regularity of
fluctuations in time-series data. The proposed features are compared to
existing methods of Sequential Motion Texture (SMT), Discrete Cosine Transform
(DCT) and Discrete Fourier Transform (DFT), for surgical skills assessment.
Results: We report average performance of different features across all
applicable OSATS criteria for suturing and knot tying tasks. Our analysis shows
that the proposed entropy-based features outperform previous state-of-the-art
methods using video data. For accelerometer data, our method performs better
for suturing only. We also show that fusion of video and acceleration features
can improve overall performance with the proposed entropy features achieving
highest accuracy. Conclusions: Automated surgical skills assessment can be
achieved with high accuracy using the proposed entropy features. Such a system
can significantly improve the efficiency of surgical training in medical
schools and teaching hospitals.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 21:30:31 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Zia",
"Aneeq",
""
],
[
"Sharma",
"Yachna",
""
],
[
"Bettadapura",
"Vinay",
""
],
[
"Sarin",
"Eric L.",
""
],
[
"Essa",
"Irfan",
""
]
] | TITLE: Video and Accelerometer-Based Motion Analysis for Automated Surgical
Skills Assessment
ABSTRACT: Purpose: Basic surgical skills of suturing and knot tying are an essential
part of medical training. Having an automated system for surgical skills
assessment could help save experts time and improve training efficiency. There
have been some recent attempts at automated surgical skills assessment using
either video analysis or acceleration data. In this paper, we present a novel
approach for automated assessment of OSATS based surgical skills and provide an
analysis of different features on multi-modal data (video and accelerometer
data). Methods: We conduct the largest study, to the best of our knowledge, for
basic surgical skills assessment on a dataset that contained video and
accelerometer data for suturing and knot-tying tasks. We introduce "entropy
based" features - Approximate Entropy (ApEn) and Cross-Approximate Entropy
(XApEn), which quantify the amount of predictability and regularity of
fluctuations in time-series data. The proposed features are compared to
existing methods of Sequential Motion Texture (SMT), Discrete Cosine Transform
(DCT) and Discrete Fourier Transform (DFT), for surgical skills assessment.
Results: We report average performance of different features across all
applicable OSATS criteria for suturing and knot tying tasks. Our analysis shows
that the proposed entropy-based features outperform previous state-of-the-art
methods using video data. For accelerometer data, our method performs better
for suturing only. We also show that fusion of video and acceleration features
can improve overall performance with the proposed entropy features achieving
highest accuracy. Conclusions: Automated surgical skills assessment can be
achieved with high accuracy using the proposed entropy features. Such a system
can significantly improve the efficiency of surgical training in medical
schools and teaching hospitals.
| no_new_dataset | 0.950732 |
1702.07784 | Emiliano De Cristofaro | Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De
Cristofaro, Gianluca Stringhini, Athena Vakali | Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying | WWW Cybersafety Workshop 2017 | null | null | null | cs.SI cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years, online aggression and abusive behaviors have
occurred in many different forms and on a variety of platforms. In extreme
cases, these incidents have evolved into hate, discrimination, and bullying,
and even materialized into real-world threats and attacks against individuals
or groups. In this paper, we study the Gamergate controversy. Started in August
2014 in the online gaming world, it quickly spread across various social
networking platforms, ultimately leading to many incidents of cyberbullying and
cyberaggression. We focus on Twitter, presenting a measurement study of a
dataset of 340k unique users and 1.6M tweets to study the properties of these
users, the content they post, and how they differ from random Twitter users. We
find that users involved in this "Twitter war" tend to have more friends and
followers, are generally more engaged and post tweets with negative sentiment,
less joy, and more hate than random users. We also perform preliminary
measurements on how the Twitter suspension mechanism deals with such abusive
behaviors. While we focus on Gamergate, our methodology to collect and analyze
tweets related to aggressive and bullying activities is of independent
interest.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 22:14:30 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Chatzakou",
"Despoina",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Stringhini",
"Gianluca",
""
],
[
"Vakali",
"Athena",
""
]
] | TITLE: Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying
ABSTRACT: Over the past few years, online aggression and abusive behaviors have
occurred in many different forms and on a variety of platforms. In extreme
cases, these incidents have evolved into hate, discrimination, and bullying,
and even materialized into real-world threats and attacks against individuals
or groups. In this paper, we study the Gamergate controversy. Started in August
2014 in the online gaming world, it quickly spread across various social
networking platforms, ultimately leading to many incidents of cyberbullying and
cyberaggression. We focus on Twitter, presenting a measurement study of a
dataset of 340k unique users and 1.6M tweets to study the properties of these
users, the content they post, and how they differ from random Twitter users. We
find that users involved in this "Twitter war" tend to have more friends and
followers, are generally more engaged and post tweets with negative sentiment,
less joy, and more hate than random users. We also perform preliminary
measurements on how the Twitter suspension mechanism deals with such abusive
behaviors. While we focus on Gamergate, our methodology to collect and analyze
tweets related to aggressive and bullying activities is of independent
interest.
| new_dataset | 0.967502 |
1702.07790 | Mark Harmon | Mark Harmon, Diego Klabjan | Activation Ensembles for Deep Neural Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many activation functions have been proposed in the past, but selecting an
adequate one requires trial and error. We propose a new methodology of
designing activation functions within a neural network at each layer. We call
this technique an "activation ensemble" because it allows the use of multiple
activation functions at each layer. This is done by introducing additional
variables, $\alpha$, at each activation layer of a network to allow for
multiple activation functions to be active at each neuron. By design,
activations with larger $\alpha$ values at a neuron is equivalent to having the
largest magnitude. Hence, those higher magnitude activations are "chosen" by
the network. We implement the activation ensembles on a variety of datasets
using an array of Feed Forward and Convolutional Neural Networks. By using the
activation ensemble, we achieve superior results compared to traditional
techniques. In addition, because of the flexibility of this methodology, we
more deeply explore activation functions and the features that they capture.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 22:30:29 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Harmon",
"Mark",
""
],
[
"Klabjan",
"Diego",
""
]
] | TITLE: Activation Ensembles for Deep Neural Networks
ABSTRACT: Many activation functions have been proposed in the past, but selecting an
adequate one requires trial and error. We propose a new methodology of
designing activation functions within a neural network at each layer. We call
this technique an "activation ensemble" because it allows the use of multiple
activation functions at each layer. This is done by introducing additional
variables, $\alpha$, at each activation layer of a network to allow for
multiple activation functions to be active at each neuron. By design,
activations with larger $\alpha$ values at a neuron are equivalent to having the
largest magnitude. Hence, those higher magnitude activations are "chosen" by
the network. We implement the activation ensembles on a variety of datasets
using an array of Feed Forward and Convolutional Neural Networks. By using the
activation ensemble, we achieve superior results compared to traditional
techniques. In addition, because of the flexibility of this methodology, we
more deeply explore activation functions and the features that they capture.
| no_new_dataset | 0.954732 |
1702.07908 | Sabri Pllana | Andre Viebke, Suejb Memeti, Sabri Pllana, Ajith Abraham | CHAOS: A Parallelization Scheme for Training Convolutional Neural
Networks on Intel Xeon Phi | The Journal of Supercomputing, 2017 | null | 10.1007/s11227-017-1994-x | null | cs.DC cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning is an important component of big-data analytic tools and
intelligent applications, such as self-driving cars, computer vision, speech
recognition, or precision medicine. However, the training process is
computationally intensive, and often requires a large amount of time if
performed sequentially. Modern parallel computing systems provide the
capability to reduce the required training time of deep neural networks. In
this paper, we present our parallelization scheme for training convolutional
neural networks (CNN) named Controlled Hogwild with Arbitrary Order of
Synchronization (CHAOS). Major features of CHAOS include the support for thread
and vector parallelism, non-instant updates of weight parameters during
back-propagation without a significant delay, and implicit synchronization in
arbitrary order. CHAOS is tailored for parallel computing systems that are
accelerated with the Intel Xeon Phi. We evaluate our parallelization approach
empirically using measurement techniques and performance modeling for various
numbers of threads and CNN architectures. Experimental results for the MNIST
dataset of handwritten digits using the total number of threads on the Xeon Phi
show speedups of up to 103x compared to the execution on one thread of the Xeon
Phi, 14x compared to the sequential execution on Intel Xeon E5, and 58x
compared to the sequential execution on Intel Core i5.
| [
{
"version": "v1",
"created": "Sat, 25 Feb 2017 15:48:44 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Viebke",
"Andre",
""
],
[
"Memeti",
"Suejb",
""
],
[
"Pllana",
"Sabri",
""
],
[
"Abraham",
"Ajith",
""
]
] | TITLE: CHAOS: A Parallelization Scheme for Training Convolutional Neural
Networks on Intel Xeon Phi
ABSTRACT: Deep learning is an important component of big-data analytic tools and
intelligent applications, such as self-driving cars, computer vision, speech
recognition, or precision medicine. However, the training process is
computationally intensive, and often requires a large amount of time if
performed sequentially. Modern parallel computing systems provide the
capability to reduce the required training time of deep neural networks. In
this paper, we present our parallelization scheme for training convolutional
neural networks (CNN) named Controlled Hogwild with Arbitrary Order of
Synchronization (CHAOS). Major features of CHAOS include the support for thread
and vector parallelism, non-instant updates of weight parameters during
back-propagation without a significant delay, and implicit synchronization in
arbitrary order. CHAOS is tailored for parallel computing systems that are
accelerated with the Intel Xeon Phi. We evaluate our parallelization approach
empirically using measurement techniques and performance modeling for various
numbers of threads and CNN architectures. Experimental results for the MNIST
dataset of handwritten digits using the total number of threads on the Xeon Phi
show speedups of up to 103x compared to the execution on one thread of the Xeon
Phi, 14x compared to the sequential execution on Intel Xeon E5, and 58x
compared to the sequential execution on Intel Core i5.
| no_new_dataset | 0.951233 |
1702.07942 | Laurent Duval | Camille Couprie, Laurent Duval, Maxime Moreaud, Sophie H\'enon,
M\'elinda Tebib, Vincent Souchon | BARCHAN: Blob Alignment for Robust CHromatographic ANalysis | 15 pages, published in the Special issue for RIVA 2016, 40th
International Symposium on Capillary Chromatography and 13th GCxGC Symposium | Journal of Chromatography A, Volume 1484, February 2017, Pages
65-72 | 10.1016/j.chroma.2017.01.003 | null | cs.CV physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comprehensive two-dimensional gas chromatography (GCxGC) plays a central role
in the elucidation of complex samples. The automation of the identification
of peak areas is of prime interest to obtain a fast and repeatable analysis of
chromatograms. To determine the concentration of compounds or pseudo-compounds,
templates of blobs are defined and superimposed on a reference chromatogram.
The templates then need to be modified when different chromatograms are
recorded. In this study, we present a chromatogram and template alignment
method based on peak registration called BARCHAN. Peaks are identified using a
robust mathematical morphology tool. The alignment is performed by a
probabilistic estimation of a rigid transformation along the first dimension,
and a non-rigid transformation in the second dimension, taking into account
noise, outliers and missing peaks in a fully automated way. Resulting aligned
chromatograms and masks are presented on two datasets. The proposed algorithm
proves to be fast and reliable. It significantly reduces the time to results
for GCxGC analysis.
| [
{
"version": "v1",
"created": "Sat, 25 Feb 2017 19:59:39 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Couprie",
"Camille",
""
],
[
"Duval",
"Laurent",
""
],
[
"Moreaud",
"Maxime",
""
],
[
"Hénon",
"Sophie",
""
],
[
"Tebib",
"Mélinda",
""
],
[
"Souchon",
"Vincent",
""
]
] | TITLE: BARCHAN: Blob Alignment for Robust CHromatographic ANalysis
ABSTRACT: Comprehensive two-dimensional gas chromatography (GCxGC) plays a central role
in the elucidation of complex samples. The automation of the identification
of peak areas is of prime interest to obtain a fast and repeatable analysis of
chromatograms. To determine the concentration of compounds or pseudo-compounds,
templates of blobs are defined and superimposed on a reference chromatogram.
The templates then need to be modified when different chromatograms are
recorded. In this study, we present a chromatogram and template alignment
method based on peak registration called BARCHAN. Peaks are identified using a
robust mathematical morphology tool. The alignment is performed by a
probabilistic estimation of a rigid transformation along the first dimension,
and a non-rigid transformation in the second dimension, taking into account
noise, outliers and missing peaks in a fully automated way. Resulting aligned
chromatograms and masks are presented on two datasets. The proposed algorithm
proves to be fast and reliable. It significantly reduces the time to results
for GCxGC analysis.
| no_new_dataset | 0.947381 |
1702.07983 | Yanran Li | Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu
Song, Yoshua Bengio | Maximum-Likelihood Augmented Discrete Generative Adversarial Networks | 11 pages, 3 figures | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the successes in capturing continuous distributions, the application
of generative adversarial networks (GANs) to discrete settings, like natural
language tasks, is rather restricted. The fundamental reason is the difficulty
of back-propagation through discrete random variables combined with the
inherent instability of the GAN training objective. To address these problems,
we propose Maximum-Likelihood Augmented Discrete Generative Adversarial
Networks. Instead of directly optimizing the GAN objective, we derive a novel
and low-variance objective using the discriminator's output that
corresponds to the log-likelihood. Compared with the original, the new
objective is proved to be consistent in theory and beneficial in practice. The
experimental results on various discrete datasets demonstrate the effectiveness
of the proposed approach.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2017 03:19:13 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Che",
"Tong",
""
],
[
"Li",
"Yanran",
""
],
[
"Zhang",
"Ruixiang",
""
],
[
"Hjelm",
"R Devon",
""
],
[
"Li",
"Wenjie",
""
],
[
"Song",
"Yangqiu",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Maximum-Likelihood Augmented Discrete Generative Adversarial Networks
ABSTRACT: Despite the successes in capturing continuous distributions, the application
of generative adversarial networks (GANs) to discrete settings, like natural
language tasks, is rather restricted. The fundamental reason is the difficulty
of back-propagation through discrete random variables combined with the
inherent instability of the GAN training objective. To address these problems,
we propose Maximum-Likelihood Augmented Discrete Generative Adversarial
Networks. Instead of directly optimizing the GAN objective, we derive a novel
and low-variance objective using the discriminator's output that
corresponds to the log-likelihood. Compared with the original, the new
objective is proved to be consistent in theory and beneficial in practice. The
experimental results on various discrete datasets demonstrate the effectiveness
of the proposed approach.
| no_new_dataset | 0.95222 |
1702.08014 | Simon Kohl | Simon Kohl, David Bonekamp, Heinz-Peter Schlemmer, Kaneschka Yaqubi,
Markus Hohenfellner, Boris Hadaschik, Jan-Philipp Radtke and Klaus Maier-Hein | Adversarial Networks for the Detection of Aggressive Prostate Cancer | 8 pages, 3 figures; under review as a conference paper at MICCAI 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation constitutes an integral part of medical image analyses
for which breakthroughs in the field of deep learning were of high relevance.
The large number of trainable parameters of deep neural networks however
renders them inherently data hungry, a characteristic that heavily challenges
the medical imaging community. Though interestingly, with the de facto standard
training of fully convolutional networks (FCNs) for semantic segmentation being
agnostic towards the `structure' of the predicted label maps, valuable
complementary information about the global quality of the segmentation lies
idle. In order to tap into this potential, we propose utilizing an adversarial
network which discriminates between expert and generated annotations in order
to train FCNs for semantic segmentation. Because the adversary constitutes a
learned parametrization of what makes a good segmentation at a global level, we
hypothesize that the method holds particular advantages for segmentation tasks
on complex structured, small datasets. This holds true in our experiments: We
learn to segment aggressive prostate cancer utilizing MRI images of 152
patients and show that the proposed scheme is superior over the de facto
standard in terms of the detection sensitivity and the dice-score for
aggressive prostate cancer. The achieved relative gains are shown to be
particularly pronounced in the small dataset limit.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2017 10:08:49 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Kohl",
"Simon",
""
],
[
"Bonekamp",
"David",
""
],
[
"Schlemmer",
"Heinz-Peter",
""
],
[
"Yaqubi",
"Kaneschka",
""
],
[
"Hohenfellner",
"Markus",
""
],
[
"Hadaschik",
"Boris",
""
],
[
"Radtke",
"Jan-Philipp",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] | TITLE: Adversarial Networks for the Detection of Aggressive Prostate Cancer
ABSTRACT: Semantic segmentation constitutes an integral part of medical image analyses
for which breakthroughs in the field of deep learning were of high relevance.
The large number of trainable parameters of deep neural networks however
renders them inherently data hungry, a characteristic that heavily challenges
the medical imaging community. Though interestingly, with the de facto standard
training of fully convolutional networks (FCNs) for semantic segmentation being
agnostic towards the `structure' of the predicted label maps, valuable
complementary information about the global quality of the segmentation lies
idle. In order to tap into this potential, we propose utilizing an adversarial
network which discriminates between expert and generated annotations in order
to train FCNs for semantic segmentation. Because the adversary constitutes a
learned parametrization of what makes a good segmentation at a global level, we
hypothesize that the method holds particular advantages for segmentation tasks
on complex structured, small datasets. This holds true in our experiments: We
learn to segment aggressive prostate cancer utilizing MRI images of 152
patients and show that the proposed scheme is superior over the de facto
standard in terms of the detection sensitivity and the dice-score for
aggressive prostate cancer. The achieved relative gains are shown to be
particularly pronounced in the small dataset limit.
| no_new_dataset | 0.944074 |
1702.08070 | William Rowe | William Rowe, Paul D. Dobson, Bede Constantinides, and Mark Platt | PubTree: A Hierarchical Search Tool for the MEDLINE Database | 7 pages, 2 figures | null | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keeping track of the ever-increasing body of scientific literature is an
escalating challenge. We present PubTree, a hierarchical search tool that
efficiently searches the PubMed/MEDLINE dataset based upon a decision tree
constructed using >26 million abstracts. The tool is implemented as a webpage,
where users are asked a series of eighteen questions to locate pertinent
articles. The implementation of this hierarchical search tool highlights issues
endemic with document retrieval. However, the construction of this tree
indicates that with future developments hierarchical search could become an
effective tool (or adjunct) in the mining of biological literature.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2017 19:09:59 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Rowe",
"William",
""
],
[
"Dobson",
"Paul D.",
""
],
[
"Constantinides",
"Bede",
""
],
[
"Platt",
"Mark",
""
]
] | TITLE: PubTree: A Hierarchical Search Tool for the MEDLINE Database
ABSTRACT: Keeping track of the ever-increasing body of scientific literature is an
escalating challenge. We present PubTree, a hierarchical search tool that
efficiently searches the PubMed/MEDLINE dataset based upon a decision tree
constructed using >26 million abstracts. The tool is implemented as a webpage,
where users are asked a series of eighteen questions to locate pertinent
articles. The implementation of this hierarchical search tool highlights issues
endemic with document retrieval. However, the construction of this tree
indicates that with future developments hierarchical search could become an
effective tool (or adjunct) in the mining of biological literature.
| no_new_dataset | 0.944587 |
1702.08097 | Tianlang Chen | Tianlang Chen, Yuxiao Chen, Jiebo Luo | A Selfie is Worth a Thousand Words: Mining Personal Patterns behind User
Selfie-posting Behaviours | WWW 2017 Companion | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Selfies have become increasingly fashionable in the social media era. People
are willing to share their selfies on various social media platforms such as
Facebook, Instagram and Flickr. The popularity of selfies has caught
researchers' attention, especially psychologists'. In computer vision and
machine learning areas, little attention has been paid to this phenomenon as a
valuable data source. In this paper, we focus on exploring the deeper personal
patterns behind people's different kinds of selfie-posting behaviours. We
develop this work based on a dataset of WeChat, one of the most extensively
used instant messaging platforms in China. In particular, we first propose an
unsupervised approach to classify the images posted by users. Based on the
classification result, we construct three types of user-level features that
reflect user preference, activity and posting habit. Based on these features,
for a series of selfie related tasks, we build classifiers that can accurately
predict two sets of users with opposite selfie-posting behaviours. We have
found that people's interest, activity and posting habit have a great influence
on their selfie-posting behaviours. For example, the classification accuracy
between selfie-posting addicts and non-addicts reaches 89.36%. We also prove that
using users' image information to predict these behaviours achieves better
performance than using text information. More importantly, for each set of
users with a specific selfie-posting behaviour, we extract and visualize
significant personal patterns about them. In addition, we cluster users and
extract their high-level attributes, revealing the correlation between these
attributes and users' selfie-posting behaviours. In the end, we demonstrate
that users' selfie-posting behaviour, as a good predictor, could predict their
different preferences toward these high-level attributes accurately.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2017 22:12:09 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Chen",
"Tianlang",
""
],
[
"Chen",
"Yuxiao",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: A Selfie is Worth a Thousand Words: Mining Personal Patterns behind User
Selfie-posting Behaviours
ABSTRACT: Selfies have become increasingly fashionable in the social media era. People
are willing to share their selfies on various social media platforms such as
Facebook, Instagram and Flickr. The popularity of selfies has caught
researchers' attention, especially psychologists'. In computer vision and
machine learning areas, little attention has been paid to this phenomenon as a
valuable data source. In this paper, we focus on exploring the deeper personal
patterns behind people's different kinds of selfie-posting behaviours. We
develop this work based on a dataset of WeChat, one of the most extensively
used instant messaging platforms in China. In particular, we first propose an
unsupervised approach to classify the images posted by users. Based on the
classification result, we construct three types of user-level features that
reflect user preference, activity and posting habit. Based on these features,
for a series of selfie related tasks, we build classifiers that can accurately
predict two sets of users with opposite selfie-posting behaviours. We have
found that people's interest, activity and posting habit have a great influence
on their selfie-posting behaviours. For example, the classification accuracy
between selfie-posting addicts and non-addicts reaches 89.36%. We also prove that
using users' image information to predict these behaviours achieves better
performance than using text information. More importantly, for each set of
users with a specific selfie-posting behaviour, we extract and visualize
significant personal patterns about them. In addition, we cluster users and
extract their high-level attributes, revealing the correlation between these
attributes and users' selfie-posting behaviours. In the end, we demonstrate
that users' selfie-posting behaviour, as a good predictor, could predict their
different preferences toward these high-level attributes accurately.
| no_new_dataset | 0.933249 |
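The record above (1702.08097) builds user-level features from unsupervised image categories and trains classifiers that separate users with opposite selfie-posting behaviours. The Python snippet below is a rough, hypothetical sketch of that kind of pipeline: the data is synthetic, the category count is invented, and scikit-learn logistic regression merely stands in for whatever classifier the paper actually uses.

```python
# Hypothetical sketch: per-user image-category histograms as features for a binary
# posting-behaviour classifier. All numbers below are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_categories = 1000, 20                        # assumed sizes

labels = rng.integers(0, 2, size=n_users)               # 0 / 1 = two opposite behaviours (toy)
counts = rng.poisson(lam=3.0, size=(n_users, n_categories)).astype(float)
counts[labels == 1, 0] += rng.poisson(lam=10.0, size=int((labels == 1).sum()))

totals = counts.sum(axis=1, keepdims=True)               # overall posting activity
features = np.hstack([counts / np.maximum(totals, 1.0),  # category preference
                      np.log1p(totals)])                  # activity level

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```

On real data, the per-user counts would come from applying the unsupervised image classifier to each user's posted photos.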
1702.08192 | Christian Wachinger | Christian Wachinger, Martin Reuter, Tassilo Klein | DeepNAT: Deep Convolutional Neural Network for Segmenting Neuroanatomy | Accepted for publication in NeuroImage, special issue "Brain
Segmentation and Parcellation", 2017 | null | 10.1016/j.neuroimage.2017.02.035 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce DeepNAT, a 3D Deep convolutional neural network for the
automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance
images. DeepNAT is an end-to-end learning-based approach to brain segmentation
that jointly learns an abstract feature representation and a multi-class
classification. We propose a 3D patch-based approach, where we predict not only
the center voxel of the patch but also its neighbors, which is formulated
as multi-task learning. To address a class imbalance problem, we arrange two
networks hierarchically, where the first one separates foreground from
background, and the second one identifies 25 brain structures on the
foreground. Since patches lack spatial context, we augment them with
coordinates. To this end, we introduce a novel intrinsic parameterization of
the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As
network architecture, we use three convolutional layers with pooling, batch
normalization, and non-linearities, followed by fully connected layers with
dropout. The final segmentation is inferred from the probabilistic output of
the network with a 3D fully connected conditional random field, which ensures
label agreement between close voxels. The roughly 2.7 million parameters in the
network are learned with stochastic gradient descent. Our results show that
DeepNAT compares favorably to state-of-the-art methods. Finally, the purely
learning-based method may have a high potential for the adaptation to young,
old, or diseased brains by fine-tuning the pre-trained network with a small
training sample on the target application, where the availability of larger
datasets with manual annotations may boost the overall segmentation accuracy in
the future.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 08:53:31 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Wachinger",
"Christian",
""
],
[
"Reuter",
"Martin",
""
],
[
"Klein",
"Tassilo",
""
]
] | TITLE: DeepNAT: Deep Convolutional Neural Network for Segmenting Neuroanatomy
ABSTRACT: We introduce DeepNAT, a 3D Deep convolutional neural network for the
automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance
images. DeepNAT is an end-to-end learning-based approach to brain segmentation
that jointly learns an abstract feature representation and a multi-class
classification. We propose a 3D patch-based approach, where we predict not only
the center voxel of the patch but also its neighbors, which is formulated
as multi-task learning. To address a class imbalance problem, we arrange two
networks hierarchically, where the first one separates foreground from
background, and the second one identifies 25 brain structures on the
foreground. Since patches lack spatial context, we augment them with
coordinates. To this end, we introduce a novel intrinsic parameterization of
the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As
network architecture, we use three convolutional layers with pooling, batch
normalization, and non-linearities, followed by fully connected layers with
dropout. The final segmentation is inferred from the probabilistic output of
the network with a 3D fully connected conditional random field, which ensures
label agreement between close voxels. The roughly 2.7 million parameters in the
network are learned with stochastic gradient descent. Our results show that
DeepNAT compares favorably to state-of-the-art methods. Finally, the purely
learning-based method may have a high potential for the adaptation to young,
old, or diseased brains by fine-tuning the pre-trained network with a small
training sample on the target application, where the availability of larger
datasets with manual annotations may boost the overall segmentation accuracy in
the future.
| no_new_dataset | 0.950641 |
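The DeepNAT record above spells out a concrete architecture: a patch-based 3-D CNN with three convolutional layers (with pooling, batch normalization and non-linearities) followed by fully connected layers with dropout, and patches augmented with coordinate features. The PyTorch sketch below mirrors that outline only loosely; the patch size, channel widths, plain (x, y, z) coordinates and the single network (instead of the hierarchical foreground/structure pair) are illustrative assumptions, not the published configuration.

```python
# Rough PyTorch sketch of a patch-based 3-D segmentation classifier in the spirit of
# the record above; sizes, widths and the plain (x, y, z) coordinate features are
# illustrative choices, not the DeepNAT configuration (which uses spectral coordinates).
import torch
import torch.nn as nn

class PatchNet3D(nn.Module):
    def __init__(self, n_classes=26, patch=16, coord_dim=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(64, 128, 3, padding=1), nn.BatchNorm3d(128), nn.ReLU(), nn.MaxPool3d(2),
        )
        flat = 128 * (patch // 8) ** 3
        self.classifier = nn.Sequential(
            nn.Linear(flat + coord_dim, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes),       # e.g. 25 structures plus background
        )

    def forward(self, patch, coords):
        h = self.features(patch).flatten(1)  # conv features of the 3-D patch
        h = torch.cat([h, coords], dim=1)    # augment with location features
        return self.classifier(h)            # per-patch class logits

net = PatchNet3D()
x = torch.randn(4, 1, 16, 16, 16)            # 4 toy T1-weighted patches
c = torch.rand(4, 3)                          # toy normalised coordinates
print(net(x, c).shape)                        # -> torch.Size([4, 26])
```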
1702.08210 | Shenghui Wang | Rob Koopman, Shenghui Wang, Andrea Scharnhorst | Contextualization of topics: Browsing through the universe of
bibliographic information | Special Issue of Scientometrics: Same data - different results?
Towards a comparative approach to the identification of thematic structures
in science | null | 10.1007/s11192-017-2303-4 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes how semantic indexing can help to generate a contextual
overview of topics and visually compare clusters of articles. The method was
originally developed for an innovative information exploration tool, called
Ariadne, which operates on bibliographic databases with tens of millions of
records. In this paper, the method behind Ariadne is further developed and
applied to the research question of the special issue "Same data, different
results" - the better understanding of topic (re-)construction by different
bibliometric approaches. For the case of the Astro dataset of 111,616 articles
in astronomy and astrophysics, a new instantiation of the interactive exploring
tool, LittleAriadne, has been created. This paper contributes to the overall
challenge to delineate and define topics in two different ways. First, we
produce two clustering solutions based on vector representations of articles in
a lexical space. These vectors are built on semantic indexing of entities
associated with those articles. Second, we discuss how LittleAriadne can be
used to browse through the network of topical terms, authors, journals,
citations and various cluster solutions of the Astro dataset. More
specifically, we treat the assignment of an article to the different clustering
solutions as an additional element of its bibliographic record. Keeping the
principle of semantic indexing on the level of such an extended list of
entities of the bibliographic record, LittleAriadne in turn provides a
visualization of the context of a specific clustering solution. It also conveys
the similarity of article clusters produced by different algorithms, hence
representing a complementary approach to other possible means of comparison.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 10:01:08 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Koopman",
"Rob",
""
],
[
"Wang",
"Shenghui",
""
],
[
"Scharnhorst",
"Andrea",
""
]
] | TITLE: Contextualization of topics: Browsing through the universe of
bibliographic information
ABSTRACT: This paper describes how semantic indexing can help to generate a contextual
overview of topics and visually compare clusters of articles. The method was
originally developed for an innovative information exploration tool, called
Ariadne, which operates on bibliographic databases with tens of millions of
records. In this paper, the method behind Ariadne is further developed and
applied to the research question of the special issue "Same data, different
results" - the better understanding of topic (re-)construction by different
bibliometric approaches. For the case of the Astro dataset of 111,616 articles
in astronomy and astrophysics, a new instantiation of the interactive exploring
tool, LittleAriadne, has been created. This paper contributes to the overall
challenge to delineate and define topics in two different ways. First, we
produce two clustering solutions based on vector representations of articles in
a lexical space. These vectors are built on semantic indexing of entities
associated with those articles. Second, we discuss how LittleAriadne can be
used to browse through the network of topical terms, authors, journals,
citations and various cluster solutions of the Astro dataset. More
specifically, we treat the assignment of an article to the different clustering
solutions as an additional element of its bibliographic record. Keeping the
principle of semantic indexing on the level of such an extended list of
entities of the bibliographic record, LittleAriadne in turn provides a
visualization of the context of a specific clustering solution. It also conveys
the similarity of article clusters produced by different algorithms, hence
representing a complementary approach to other possible means of comparison.
| no_new_dataset | 0.939192 |
1702.08236 | Timotheos Aslanidis | Stavros Birmpilis, Timotheos Aslanidis | A Critical Improvement On Open Shop Scheduling Algorithm For Routing In
Interconnection Networks | null | International Journal of Computer Networks & Communications
(IJCNC) Vol.9, No.1, January 2017 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past years, Interconnection Networks have been used quite often and
especially in applications where parallelization is critical. Message packets
transmitted through such networks can be interrupted using buffers in order to
maximize network usage and minimize the time required for all messages to reach
their destination. However, preempting a packet will result in topology
reconfiguration and consequently in time cost. The problem of scheduling
message packets through such a network is referred to as PBS and is known to be
NP-Hard. In this paper we have improved, critically, variations of polynomially
solvable instances of Open Shop to approximate PBS. We have combined these
variations and called the induced algorithm IHSA, Improved Hybridic Scheduling
Algorithm. We ran experiments to establish the efficiency of IHSA and found
that in all datasets used it produces schedules very close to the optimal. In
addition, we tested IHSA with datasets that follow non-uniform distributions
and provided statistical data which better illustrates its performance. To
further establish the efficiency of IHSA we ran tests to compare it to SGA,
another algorithm which when tested in the past has yielded excellent results.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 11:18:00 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Birmpilis",
"Stavros",
""
],
[
"Aslanidis",
"Timotheos",
""
]
] | TITLE: A Critical Improvement On Open Shop Scheduling Algorithm For Routing In
Interconnection Networks
ABSTRACT: In the past years, Interconnection Networks have been used quite often and
especially in applications where parallelization is critical. Message packets
transmitted through such networks can be interrupted using buffers in order to
maximize network usage and minimize the time required for all messages to reach
their destination. However, preempting a packet will result in topology
reconfiguration and consequently in time cost. The problem of scheduling
message packets through such a network is referred to as PBS and is known to be
NP-Hard. In this paper we have improved, critically, variations of polynomially
solvable instances of Open Shop to approximate PBS. We have combined these
variations and called the induced algorithm IHSA, Improved Hybridic Scheduling
Algorithm. We ran experiments to establish the efficiency of IHSA and found
that in all datasets used it produces schedules very close to the optimal. In
addition, we tested IHSA with datasets that follow non-uniform distributions
and provided statistical data which better illustrates its performance. To
further establish the efficiency of IHSA we ran tests to compare it to SGA,
another algorithm which when tested in the past has yielded excellent results.
| no_new_dataset | 0.946349 |
1702.08319 | Hanwang Zhang | Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, Tat-Seng Chua | Visual Translation Embedding Network for Visual Relation Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual relations, such as "person ride bike" and "bike next to car", offer a
comprehensive scene understanding of an image, and have already shown their
great utility in connecting computer vision and natural language. However, due
to the challenging combinatorial complexity of modeling
subject-predicate-object relation triplets, very little work has been done to
localize and predict visual relations. Inspired by the recent advances in
relational representation learning of knowledge bases and convolutional object
detection networks, we propose a Visual Translation Embedding network (VTransE)
for visual relation detection. VTransE places objects in a low-dimensional
relation space where a relation can be modeled as a simple vector translation,
i.e., subject + predicate $\approx$ object. We propose a novel feature
extraction layer that enables object-relation knowledge transfer in a
fully-convolutional fashion that supports training and inference in a single
forward/backward pass. To the best of our knowledge, VTransE is the first
end-to-end relation detection network. We demonstrate the effectiveness of
VTransE over other state-of-the-art methods on two large-scale datasets: Visual
Relationship and Visual Genome. Note that even though VTransE is a purely
visual model, it is still competitive with Lu's multi-modal model with
language priors.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 15:16:47 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Zhang",
"Hanwang",
""
],
[
"Kyaw",
"Zawlin",
""
],
[
"Chang",
"Shih-Fu",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Visual Translation Embedding Network for Visual Relation Detection
ABSTRACT: Visual relations, such as "person ride bike" and "bike next to car", offer a
comprehensive scene understanding of an image, and have already shown their
great utility in connecting computer vision and natural language. However, due
to the challenging combinatorial complexity of modeling
subject-predicate-object relation triplets, very little work has been done to
localize and predict visual relations. Inspired by the recent advances in
relational representation learning of knowledge bases and convolutional object
detection networks, we propose a Visual Translation Embedding network (VTransE)
for visual relation detection. VTransE places objects in a low-dimensional
relation space where a relation can be modeled as a simple vector translation,
i.e., subject + predicate $\approx$ object. We propose a novel feature
extraction layer that enables object-relation knowledge transfer in a
fully-convolutional fashion that supports training and inference in a single
forward/backward pass. To the best of our knowledge, VTransE is the first
end-to-end relation detection network. We demonstrate the effectiveness of
VTransE over other state-of-the-art methods on two large-scale datasets: Visual
Relationship and Visual Genome. Note that even though VTransE is a purely
visual model, it is still competitive with Lu's multi-modal model with
language priors.
| no_new_dataset | 0.942718 |
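The VTransE record above rests on a simple algebraic idea: in the learned relation space, subject + predicate is approximately equal to object. The toy numpy example below demonstrates only that translation principle; the embeddings and triples are synthetic, and nothing here reflects the paper's feature-extraction layer or detection network.

```python
# Toy numpy illustration of the translation-embedding idea "subject + predicate ~ object".
# Embeddings and triples are invented; the real model learns projections from visual
# features inside an end-to-end detection network.
import numpy as np

rng = np.random.default_rng(1)
dim = 8
predicates = ["ride", "next_to", "hold"]
pred_vec = {p: rng.normal(size=dim) for p in predicates}

def make_pair(pred):
    """Create a (subject, object) embedding pair consistent with one predicate."""
    s = rng.normal(size=dim)
    return s, s + pred_vec[pred] + 0.05 * rng.normal(size=dim)   # o ~ s + t_p

def predict_predicate(s, o):
    """Choose the predicate whose translation best explains o - s."""
    scores = {p: -np.linalg.norm(s + t - o) for p, t in pred_vec.items()}
    return max(scores, key=scores.get)

correct = 0
for _ in range(200):
    true_p = rng.choice(predicates)
    s, o = make_pair(true_p)
    correct += predict_predicate(s, o) == true_p
print("toy accuracy:", correct / 200)
```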
1702.08349 | P{\aa}l Sunds{\o}y | P\r{a}l Sunds{\o}y | Big Data for Social Sciences: Measuring patterns of human behavior
through large-scale mobile phone data | 166 pages, PHD thesis | null | null | null | cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Through seven publications this dissertation shows how anonymized mobile
phone data can contribute to the social good and provide insights into human
behaviour on a large scale. The size of the datasets analysed ranges from 500
million to 300 billion phone records, covering millions of people. The key
contributions are two-fold:
1. Big Data for Social Good: Through prediction algorithms the results show
how mobile phone data can be useful to predict important socio-economic
indicators, such as income, illiteracy and poverty in developing countries.
Such knowledge can be used to identify where vulnerable groups in society are,
reduce economic shocks and is a critical component for monitoring poverty rates
over time. Further, the dissertation demonstrates how mobile phone data can be
used to better understand human behaviour during large shocks in society,
exemplified by an analysis of data from the terror attack in Norway and a
natural disaster on the south-coast in Bangladesh. This work leads to an
increased understanding of how information spreads, and how millions of people
move around. The intention is to identify displaced people faster, cheaper and
more accurately than existing survey-based methods.
2. Big Data for efficient marketing: Finally, the dissertation offers an
insight into how anonymised mobile phone data can be used to map out large
social networks, covering millions of people, to understand how products spread
inside these networks. Results show that by including social patterns and
machine learning techniques in a large-scale marketing experiment in Asia, the
adoption rate is increased by 13 times compared to the approach used by
experienced marketers. A data-driven and scientific approach to marketing,
through more tailored campaigns, contributes to less irrelevant offers for the
customers, and better cost efficiency for the companies.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 16:09:48 GMT"
}
] | 2017-02-28T00:00:00 | [
[
"Sundsøy",
"Pål",
""
]
] | TITLE: Big Data for Social Sciences: Measuring patterns of human behavior
through large-scale mobile phone data
ABSTRACT: Through seven publications this dissertation shows how anonymized mobile
phone data can contribute to the social good and provide insights into human
behaviour on a large scale. The size of the datasets analysed ranges from 500
million to 300 billion phone records, covering millions of people. The key
contributions are two-fold:
1. Big Data for Social Good: Through prediction algorithms the results show
how mobile phone data can be useful to predict important socio-economic
indicators, such as income, illiteracy and poverty in developing countries.
Such knowledge can be used to identify where vulnerable groups in society are,
reduce economic shocks and is a critical component for monitoring poverty rates
over time. Further, the dissertation demonstrates how mobile phone data can be
used to better understand human behaviour during large shocks in society,
exemplified by an analysis of data from the terror attack in Norway and a
natural disaster on the south-coast in Bangladesh. This work leads to an
increased understanding of how information spreads, and how millions of people
move around. The intention is to identify displaced people faster, cheaper and
more accurately than existing survey-based methods.
2. Big Data for efficient marketing: Finally, the dissertation offers an
insight into how anonymised mobile phone data can be used to map out large
social networks, covering millions of people, to understand how products spread
inside these networks. Results show that by including social patterns and
machine learning techniques in a large-scale marketing experiment in Asia, the
adoption rate is increased by 13 times compared to the approach used by
experienced marketers. A data-driven and scientific approach to marketing,
through more tailored campaigns, contributes to less irrelevant offers for the
customers, and better cost efficiency for the companies.
| no_new_dataset | 0.933552 |
1511.05943 | Dipan Pal | Dipan K. Pal, Marios Savvides | Unitary-Group Invariant Kernels and Features from Transformed Unlabeled
Data | 11 page main paper (including references), 2 page supplementary, for
a total of 13 pages. Submitted for review at ICLR 2016 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of representations invariant to common transformations of the data
is important to learning. Most techniques have focused on local approximate
invariance implemented within expensive optimization frameworks lacking
explicit theoretical guarantees. In this paper, we study kernels that are
invariant to the unitary group while having theoretical guarantees in
addressing practical issues such as (1) unavailability of transformed versions
of labelled data and (2) not observing all transformations. We present a
theoretically motivated alternate approach to the invariant kernel SVM. Unlike
previous approaches to the invariant SVM, the proposed formulation solves both
issues mentioned. We also present a kernel extension of a recent technique to
extract linear unitary-group invariant features addressing both issues and
extend some guarantees regarding invariance and stability. We present
experiments on the UCI ML datasets to illustrate and validate our methods.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 20:48:18 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Pal",
"Dipan K.",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: Unitary-Group Invariant Kernels and Features from Transformed Unlabeled
Data
ABSTRACT: The study of representations invariant to common transformations of the data
is important to learning. Most techniques have focused on local approximate
invariance implemented within expensive optimization frameworks lacking
explicit theoretical guarantees. In this paper, we study kernels that are
invariant to the unitary group while having theoretical guarantees in
addressing practical issues such as (1) unavailability of transformed versions
of labelled data and (2) not observing all transformations. We present a
theoretically motivated alternate approach to the invariant kernel SVM. Unlike
previous approaches to the invariant SVM, the proposed formulation solves both
issues mentioned. We also present a kernel extension of a recent technique to
extract linear unitary-group invariant features addressing both issues and
extend some guarantees regarding invariance and stability. We present
experiments on the UCI ML datasets to illustrate and validate our methods.
| no_new_dataset | 0.9463 |
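The record above concerns kernels invariant to the unitary group. One standard way to obtain invariance to a finite subgroup, shown in the hedged sketch below, is to average a base kernel over the group acting on one argument. The RBF base kernel, the planar rotation group and all parameters are choices made here for illustration, not the paper's construction.

```python
# Minimal numpy sketch of making a kernel invariant to a finite unitary (here: rotation)
# group by averaging the base kernel over the group: K_inv(x, y) = mean_g K(R_g x, y).
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

n = 8                                              # cyclic group of n planar rotations
angles = 2 * np.pi * np.arange(n) / n
rotations = [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]) for a in angles]

def k_inv(x, y):
    return np.mean([rbf(R @ x, y) for R in rotations])

x = np.array([1.0, 0.3])
y = np.array([-0.4, 0.9])
R = rotations[3]                                   # any element of the group
print(k_inv(x, y), k_inv(R @ x, y))                # identical up to float error
```

Because the transformations form a group, averaging over all of them makes the kernel exactly invariant to any group element applied to the first argument.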
1701.08837 | Dipan Pal | Dipan K. Pal, Vishnu Boddeti, Marios Savvides | Emergence of Selective Invariance in Hierarchical Feed Forward Networks | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many theories have emerged which investigate how invariance is generated in
hierarchical networks through simple schemes such as max and mean pooling.
The restriction to max/mean pooling in theoretical and empirical studies has
diverted attention away from a more general way of generating invariance to
nuisance transformations. We conjecture that hierarchically building
selective invariance (i.e. carefully choosing the range of the transformation
to be invariant to at each layer of a hierarchical network) is important
for pattern recognition. We utilize a novel pooling layer called adaptive
pooling to find linear pooling weights within networks. These networks with the
learnt pooling weights have performances on object categorization tasks that
are comparable to max/mean pooling networks. Interestingly, adaptive pooling
can converge to mean pooling (when initialized with random pooling weights),
find more general linear pooling schemes or even decide not to pool at all. We
illustrate the general notion of selective invariance through object
categorization experiments on large-scale datasets such as SVHN and ILSVRC
2012.
| [
{
"version": "v1",
"created": "Mon, 30 Jan 2017 21:44:27 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Pal",
"Dipan K.",
""
],
[
"Boddeti",
"Vishnu",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: Emergence of Selective Invariance in Hierarchical Feed Forward Networks
ABSTRACT: Many theories have emerged which investigate how invariance is generated in
hierarchical networks through simple schemes such as max and mean pooling.
The restriction to max/mean pooling in theoretical and empirical studies has
diverted attention away from a more general way of generating invariance to
nuisance transformations. We conjecture that hierarchically building
selective invariance (i.e. carefully choosing the range of the transformation
to be invariant to at each layer of a hierarchical network) is important
for pattern recognition. We utilize a novel pooling layer called adaptive
pooling to find linear pooling weights within networks. These networks with the
learnt pooling weights have performances on object categorization tasks that
are comparable to max/mean pooling networks. Interestingly, adaptive pooling
can converge to mean pooling (when initialized with random pooling weights),
find more general linear pooling schemes or even decide not to pool at all. We
illustrate the general notion of selective invariance through object
categorization experiments on large-scale datasets such as SVHN and ILSVRC
2012.
| no_new_dataset | 0.95452 |
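The record above replaces fixed max/mean pooling with a pooling layer whose linear weights are learned. The short numpy sketch below illustrates that notion on a toy problem: a linear pooling unit trained by gradient descent recovers uniform (mean-pooling) weights when the targets are mean-pooled values, echoing the observation that adaptive pooling can converge to mean pooling. The region size, learning rate and task are arbitrary.

```python
# Tiny numpy illustration of "adaptive pooling": a pooling unit that outputs a learnable
# linear combination of its inputs, with mean pooling as the special case of uniform weights.
import numpy as np

rng = np.random.default_rng(0)
n = 9                                   # pooling region of 3x3 activations, flattened

def adaptive_pool(x, w):
    return x @ w                        # linear pooling: weighted sum of activations

# Toy task: learn pooling weights that reproduce mean-pooling targets.
X = rng.normal(size=(500, n))
y = X.mean(axis=1)                      # targets produced by mean pooling
w = rng.normal(size=n) * 0.1            # random initial pooling weights

lr = 0.1
for _ in range(2000):
    err = adaptive_pool(X, w) - y
    w -= lr * X.T @ err / len(X)        # gradient of the mean squared error

print("learned weights:", np.round(w, 3))           # approx 1/n everywhere
print("uniform (mean-pooling) weight:", 1 / n)
```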
1702.00615 | Xuanyang Xi | Xuanyang Xi, Yongkang Luo, Fengfu Li, Peng Wang and Hong Qiao | A Fast and Compact Saliency Score Regression Network Based on Fully
Convolutional Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual saliency detection aims at identifying the most visually distinctive
parts in an image, and serves as a pre-processing step for a variety of
computer vision and image processing tasks. To this end, the saliency detection
procedure must be as fast and compact as possible and optimally process input
images in a real time manner. It is an essential application requirement for
the saliency detection task. However, contemporary detection methods often
utilize some complicated procedures to pursue feeble improvements on the
detection precision, which typically takes hundreds of milliseconds and makes them
hard to apply in practice. In this paper, we tackle this problem by
proposing a fast and compact saliency score regression network which employs
fully convolutional network, a special deep convolutional neural network, to
estimate the saliency of objects in images. It is an extremely simplified
end-to-end deep neural network without any pre-processings and
post-processings. When given an image, the network can directly predict a dense
full-resolution saliency map (image-to-image prediction). It works like a
compact pipeline which effectively simplifies the detection procedure. Our
method is evaluated on six public datasets, and experimental results show that
it can achieve comparable or better precision performance than the
state-of-the-art methods while achieving a significant improvement in detection speed
(35 FPS, processing in real time).
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2017 11:07:51 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2017 14:15:31 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Xi",
"Xuanyang",
""
],
[
"Luo",
"Yongkang",
""
],
[
"Li",
"Fengfu",
""
],
[
"Wang",
"Peng",
""
],
[
"Qiao",
"Hong",
""
]
] | TITLE: A Fast and Compact Saliency Score Regression Network Based on Fully
Convolutional Network
ABSTRACT: Visual saliency detection aims at identifying the most visually distinctive
parts in an image, and serves as a pre-processing step for a variety of
computer vision and image processing tasks. To this end, the saliency detection
procedure must be as fast and compact as possible and optimally process input
images in a real time manner. It is an essential application requirement for
the saliency detection task. However, contemporary detection methods often
utilize some complicated procedures to pursue feeble improvements on the
detection precision, which typically takes hundreds of milliseconds and makes them
hard to apply in practice. In this paper, we tackle this problem by
proposing a fast and compact saliency score regression network which employs
fully convolutional network, a special deep convolutional neural network, to
estimate the saliency of objects in images. It is an extremely simplified
end-to-end deep neural network without any pre-processings and
post-processings. When given an image, the network can directly predict a dense
full-resolution saliency map (image-to-image prediction). It works like a
compact pipeline which effectively simplifies the detection procedure. Our
method is evaluated on six public datasets, and experimental results show that
it can achieve comparable or better precision performance than the
state-of-the-art methods while achieving a significant improvement in detection speed
(35 FPS, processing in real time).
| no_new_dataset | 0.949201 |
1702.06506 | Aayush Bansal | Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, Deva Ramanan | PixelNet: Representation of the pixels, by the pixels, and for the
pixels | Project Page: http://www.cs.cmu.edu/~aayushb/pixelNet/. arXiv admin
note: substantial text overlap with arXiv:1609.06694 | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore design principles for general pixel-level prediction problems,
from low-level edge detection to mid-level surface normal estimation to
high-level semantic segmentation. Convolutional predictors, such as the
fully-convolutional network (FCN), have achieved remarkable success by
exploiting the spatial redundancy of neighboring pixels through convolutional
processing. Though computationally efficient, we point out that such approaches
are not statistically efficient during learning precisely because spatial
redundancy limits the information learned from neighboring pixels. We
demonstrate that stratified sampling of pixels allows one to (1) add diversity
during batch updates, speeding up learning; (2) explore complex nonlinear
predictors, improving accuracy; and (3) efficiently train state-of-the-art
models tabula rasa (i.e., "from scratch") for diverse pixel-labeling tasks. Our
single architecture produces state-of-the-art results for semantic segmentation
on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset,
and edge detection on BSDS.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2017 18:20:30 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Bansal",
"Aayush",
""
],
[
"Chen",
"Xinlei",
""
],
[
"Russell",
"Bryan",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Ramanan",
"Deva",
""
]
] | TITLE: PixelNet: Representation of the pixels, by the pixels, and for the
pixels
ABSTRACT: We explore design principles for general pixel-level prediction problems,
from low-level edge detection to mid-level surface normal estimation to
high-level semantic segmentation. Convolutional predictors, such as the
fully-convolutional network (FCN), have achieved remarkable success by
exploiting the spatial redundancy of neighboring pixels through convolutional
processing. Though computationally efficient, we point out that such approaches
are not statistically efficient during learning precisely because spatial
redundancy limits the information learned from neighboring pixels. We
demonstrate that stratified sampling of pixels allows one to (1) add diversity
during batch updates, speeding up learning; (2) explore complex nonlinear
predictors, improving accuracy; and (3) efficiently train state-of-the-art
models tabula rasa (i.e., "from scratch") for diverse pixel-labeling tasks. Our
single architecture produces state-of-the-art results for semantic segmentation
on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset,
and edge detection on BSDS.
| no_new_dataset | 0.948442 |
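The PixelNet record above argues for stratified sampling of pixels when building batches instead of using every spatially redundant pixel. A minimal sketch of that sampling step follows; the batch shape, the number of classes and the pixels-per-image budget are assumptions made for illustration.

```python
# Sketch of the stratified pixel-sampling idea: instead of computing a loss on every
# pixel of every image (highly redundant), draw a small random subset of pixels per
# image for each batch update.
import numpy as np

rng = np.random.default_rng(0)

def sample_pixels(label_maps, pixels_per_image=2000):
    """Return (image_idx, row, col) index arrays for a sparse per-image pixel sample."""
    b, h, w = label_maps.shape
    img_idx, rows, cols = [], [], []
    for i in range(b):
        flat = rng.choice(h * w, size=pixels_per_image, replace=False)
        img_idx.append(np.full(pixels_per_image, i))
        rows.append(flat // w)
        cols.append(flat % w)
    return np.concatenate(img_idx), np.concatenate(rows), np.concatenate(cols)

labels = rng.integers(0, 21, size=(4, 224, 224))        # toy batch of segmentation maps
ii, rr, cc = sample_pixels(labels)
sampled_targets = labels[ii, rr, cc]                     # targets for the sparse loss
print(sampled_targets.shape)                             # (8000,) instead of 4 * 224 * 224
```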
1702.07099 | Dezhi Fang | Dezhi Fang, Matthew Keezer, Jacob Williams, Kshitij Kulkarni, Robert
Pienta, Duen Horng Chau | Carina: Interactive Million-Node Graph Visualization using Web Browser
Technologies | null | null | 10.1145/3041021.3054234 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | We are working on a scalable, interactive visualization system, called
Carina, for people to explore million-node graphs. By using the latest web browser
technologies, Carina offers fast graph rendering via WebGL, and works across
desktop (via Electron) and mobile platforms. Different from most existing graph
visualization tools, Carina does not store the full graph in RAM, enabling it
to work with graphs with up to 69M edges. We are working to improve and
open-source Carina, to offer researchers and practitioners a new, scalable way
to explore and visualize large graph datasets.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 05:22:16 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2017 18:52:52 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Fang",
"Dezhi",
""
],
[
"Keezer",
"Matthew",
""
],
[
"Williams",
"Jacob",
""
],
[
"Kulkarni",
"Kshitij",
""
],
[
"Pienta",
"Robert",
""
],
[
"Chau",
"Duen Horng",
""
]
] | TITLE: Carina: Interactive Million-Node Graph Visualization using Web Browser
Technologies
ABSTRACT: We are working on a scalable, interactive visualization system, called
Carina, for people to explore million-node graphs. By using the latest web browser
technologies, Carina offers fast graph rendering via WebGL, and works across
desktop (via Electron) and mobile platforms. Different from most existing graph
visualization tools, Carina does not store the full graph in RAM, enabling it
to work with graphs with up to 69M edges. We are working to improve and
open-source Carina, to offer researchers and practitioners a new, scalable way
to explore and visualize large graph datasets.
| no_new_dataset | 0.945601 |
1702.07371 | Sunil Kumar | Tanu Srivastava, Raj Shree Singh, Sunil Kumar, Pavan Chakraborty | Feasibility of Principal Component Analysis in hand gesture recognition
system | conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays actions are increasingly being handled in electronic ways, instead
of physical interaction. From earlier times biometrics is used in the
authentication of a person. It recognizes a person by using a human trait
associated with it like eyes (by calculating the distance between the eyes) and
using hand gestures, fingerprint detection, face detection etc. Advantages of
using these traits for identification are that they uniquely identify a person
and cannot be forgotten or lost. These are unique features of a human being
which are being used widely to make the human life simpler. Hand gesture
recognition system is a powerful tool that supports efficient interaction
between the user and the computer. The main goal of hand gesture recognition
research is to create a system which can recognise specific hand gestures and
use them to convey useful information for device control. This paper presents
an experimental study over the feasibility of principal component analysis in
hand gesture recognition system. PCA is a powerful tool for analyzing data. The
primary goal of PCA is dimensionality reduction. Frames are extracted from the
Sheffield KInect Gesture (SKIG) dataset. The implementation is done by creating
a training set and then training the recognizer. It uses an eigenspace built by
processing the eigenvalues and eigenvectors of the images in the training set.
Euclidean distance with the threshold value is used as similarity metric to
recognize the gestures. The experimental results show that PCA is feasible to
be used for hand gesture recognition system.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 19:34:25 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Srivastava",
"Tanu",
""
],
[
"Singh",
"Raj Shree",
""
],
[
"Kumar",
"Sunil",
""
],
[
"Chakraborty",
"Pavan",
""
]
] | TITLE: Feasibility of Principal Component Analysis in hand gesture recognition
system
ABSTRACT: Nowadays actions are increasingly being handled in electronic ways, instead
of physical interaction. From earlier times biometrics is used in the
authentication of a person. It recognizes a person by using a human trait
associated with it like eyes (by calculating the distance between the eyes) and
using hand gestures, fingerprint detection, face detection etc. Advantages of
using these traits for identification are that they uniquely identify a person
and cannot be forgotten or lost. These are unique features of a human being
which are being used widely to make the human life simpler. Hand gesture
recognition system is a powerful tool that supports efficient interaction
between the user and the computer. The main goal of hand gesture recognition
research is to create a system which can recognise specific hand gestures and
use them to convey useful information for device control. This paper presents
an experimental study over the feasibility of principal component analysis in
hand gesture recognition system. PCA is a powerful tool for analyzing data. The
primary goal of PCA is dimensionality reduction. Frames are extracted from the
Sheffield KInect Gesture (SKIG) dataset. The implementation is done by creating
a training set and then training the recognizer. It uses an eigenspace built by
processing the eigenvalues and eigenvectors of the images in the training set.
Euclidean distance with the threshold value is used as similarity metric to
recognize the gestures. The experimental results show that PCA is feasible to
be used for hand gesture recognition system.
| no_new_dataset | 0.947866 |
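The record above spells out the eigenspace recipe: extract frames, compute the eigenvalues and eigenvectors of the training images, project into the PCA subspace, and recognise a gesture by Euclidean distance under a threshold. The self-contained numpy sketch below follows that recipe on synthetic frames; the frame size, component count and threshold are invented, and real use would start from SKIG frames.

```python
# Self-contained numpy sketch of PCA-based gesture recognition: build an eigenspace from
# flattened training frames, project a query frame, and accept the nearest training
# gesture only if its Euclidean distance falls below a threshold. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, h, w, k = 60, 32, 32, 10                 # toy frame size and number of components

frames = rng.normal(size=(n_train, h * w))        # stand-in for flattened gesture frames
labels = rng.integers(0, 6, size=n_train)         # toy gesture labels

mean = frames.mean(axis=0)
centered = frames - mean
# Principal axes via SVD of the centered data (rows = samples).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:k]                                    # top-k eigenvectors of the covariance

train_proj = centered @ basis.T                   # training frames in the eigenspace

def recognise(frame, threshold=50.0):
    q = (frame - mean) @ basis.T
    d = np.linalg.norm(train_proj - q, axis=1)    # Euclidean distances in the eigenspace
    j = int(np.argmin(d))
    return (labels[j], d[j]) if d[j] < threshold else (None, d[j])

print(recognise(frames[0]))                       # a training frame matches at distance ~0
```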
1702.07386 | Shibani Santurkar | Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin,
Hayk Saribekyan, Yaron Meirovitch and Nir Shavit | Toward Streaming Synapse Detection with Compositional ConvNets | 10 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connectomics is an emerging field in neuroscience that aims to reconstruct
the 3-dimensional morphology of neurons from electron microscopy (EM) images.
Recent studies have successfully demonstrated the use of convolutional neural
networks (ConvNets) for segmenting cell membranes to individuate neurons.
However, there has been comparatively little success in high-throughput
identification of the intercellular synaptic connections required for deriving
connectivity graphs.
In this study, we take a compositional approach to segmenting synapses,
modeling them explicitly as an intercellular cleft co-located with an
asymmetric vesicle density along a cell membrane. Instead of requiring a deep
network to learn all natural combinations of this compositionality, we train
lighter networks to model the simpler marginal distributions of membranes,
clefts and vesicles from just 100 electron microscopy samples. These feature
maps are then combined with simple rules-based heuristics derived from prior
biological knowledge.
Our approach to synapse detection is both more accurate than previous
state-of-the-art (7% higher recall and 5% higher F1-score) and yields a 20-fold
speed-up compared to the previous fastest implementations. We demonstrate by
reconstructing the first complete, directed connectome from the largest
available anisotropic microscopy dataset (245 GB) of mouse somatosensory cortex
(S1) in just 9.7 hours on a single shared-memory CPU system. We believe that
this work marks an important step toward the goal of a microscope-pace
streaming connectomics pipeline.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2017 20:48:13 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Santurkar",
"Shibani",
""
],
[
"Budden",
"David",
""
],
[
"Matveev",
"Alexander",
""
],
[
"Berlin",
"Heather",
""
],
[
"Saribekyan",
"Hayk",
""
],
[
"Meirovitch",
"Yaron",
""
],
[
"Shavit",
"Nir",
""
]
] | TITLE: Toward Streaming Synapse Detection with Compositional ConvNets
ABSTRACT: Connectomics is an emerging field in neuroscience that aims to reconstruct
the 3-dimensional morphology of neurons from electron microscopy (EM) images.
Recent studies have successfully demonstrated the use of convolutional neural
networks (ConvNets) for segmenting cell membranes to individuate neurons.
However, there has been comparatively little success in high-throughput
identification of the intercellular synaptic connections required for deriving
connectivity graphs.
In this study, we take a compositional approach to segmenting synapses,
modeling them explicitly as an intercellular cleft co-located with an
asymmetric vesicle density along a cell membrane. Instead of requiring a deep
network to learn all natural combinations of this compositionality, we train
lighter networks to model the simpler marginal distributions of membranes,
clefts and vesicles from just 100 electron microscopy samples. These feature
maps are then combined with simple rules-based heuristics derived from prior
biological knowledge.
Our approach to synapse detection is both more accurate than previous
state-of-the-art (7% higher recall and 5% higher F1-score) and yields a 20-fold
speed-up compared to the previous fastest implementations. We demonstrate by
reconstructing the first complete, directed connectome from the largest
available anisotropic microscopy dataset (245 GB) of mouse somatosensory cortex
(S1) in just 9.7 hours on a single shared-memory CPU system. We believe that
this work marks an important step toward the goal of a microscope-pace
streaming connectomics pipeline.
| no_new_dataset | 0.9455 |
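The record above combines separately learned probability maps for membranes, clefts and vesicles using rules-based heuristics derived from prior biological knowledge. The numpy/scipy fragment below is one hedged way to express such a rule (a confident cleft pixel near a membrane with high local vesicle density); the thresholds, radii and random stand-in maps are chosen purely for illustration.

```python
# Illustrative numpy/scipy sketch of combining per-pixel probability maps with a simple
# rules-based heuristic for synapse candidates. Thresholds and radii are invented.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
shape = (256, 256)
membrane_p = rng.random(shape)        # stand-ins for the three ConvNet probability maps
cleft_p = rng.random(shape)
vesicle_p = rng.random(shape)

def synapse_mask(membrane_p, cleft_p, vesicle_p,
                 cleft_thr=0.9, membrane_thr=0.8, vesicle_thr=0.55,
                 membrane_radius=3, vesicle_window=11):
    near_membrane = ndimage.binary_dilation(membrane_p > membrane_thr,
                                            iterations=membrane_radius)
    vesicle_density = ndimage.uniform_filter(vesicle_p, size=vesicle_window)
    return (cleft_p > cleft_thr) & near_membrane & (vesicle_density > vesicle_thr)

mask = synapse_mask(membrane_p, cleft_p, vesicle_p)
print("candidate synapse pixels:", int(mask.sum()))
```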
1702.07451 | Patrick Wang | Patrick Wang and Kenneth Morton and Peter Torrione and Leslie Collins | Viewpoint Adaptation for Rigid Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An object detector performs suboptimally when applied to image data taken
from a viewpoint different from the one with which it was trained. In this
paper, we present a viewpoint adaptation algorithm that allows a trained
single-view object detector to be adapted to a new, distinct viewpoint. We
first illustrate how a feature space transformation can be inferred from a
known homography between the source and target viewpoints. Second, we show that
a variety of trained classifiers can be modified to behave as if that
transformation were applied to each testing instance. The proposed algorithm is
evaluated on a person detection task using images from the PETS 2007 and CAVIAR
datasets, as well as from a new synthetic multi-view person detection dataset.
It yields substantial performance improvements when adapting single-view person
detectors to new viewpoints, and simultaneously reduces computational
complexity. This work has the potential to improve detection performance for
cameras viewing objects from arbitrary viewpoints, while simplifying data
collection and feature extraction.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 02:37:15 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Wang",
"Patrick",
""
],
[
"Morton",
"Kenneth",
""
],
[
"Torrione",
"Peter",
""
],
[
"Collins",
"Leslie",
""
]
] | TITLE: Viewpoint Adaptation for Rigid Object Detection
ABSTRACT: An object detector performs suboptimally when applied to image data taken
from a viewpoint different from the one with which it was trained. In this
paper, we present a viewpoint adaptation algorithm that allows a trained
single-view object detector to be adapted to a new, distinct viewpoint. We
first illustrate how a feature space transformation can be inferred from a
known homography between the source and target viewpoints. Second, we show that
a variety of trained classifiers can be modified to behave as if that
transformation were applied to each testing instance. The proposed algorithm is
evaluated on a person detection task using images from the PETS 2007 and CAVIAR
datasets, as well as from a new synthetic multi-view person detection dataset.
It yields substantial performance improvements when adapting single-view person
detectors to new viewpoints, and simultaneously reduces computational
complexity. This work has the potential to improve detection performance for
cameras viewing objects from arbitrary viewpoints, while simplifying data
collection and feature extraction.
| new_dataset | 0.958421 |
1702.07462 | Kun He Prof. | Kun He, Yingru Li, Sucheta Soundarajan, John E. Hopcroft | Hidden Community Detection in Social Networks | 10 pages, 6 figures, 4 tables, submitted to KDD 2017 | null | null | null | cs.SI physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new paradigm that is important for community detection in the
realm of network analysis. Networks contain a set of strong, dominant
communities, which interfere with the detection of weak, natural community
structure. When most of the members of the weak communities also belong to
stronger communities, they are extremely hard to uncover. We call the weak
communities the hidden community structure.
We present a novel approach called HICODE (HIdden COmmunity DEtection) that
identifies the hidden community structure as well as the dominant community
structure. By weakening the strength of the dominant structure, one can uncover
the hidden structure beneath. Likewise, by reducing the strength of the hidden
structure, one can more accurately identify the dominant structure. In this
way, HICODE tackles both tasks simultaneously.
Extensive experiments on real-world networks demonstrate that HICODE
outperforms several state-of-the-art community detection methods in uncovering
both the dominant and the hidden structure. In the Facebook university social
networks, we find multiple non-redundant sets of communities that are strongly
associated with residential hall, year of registration or career position of
the faculties or students, while the state-of-the-art algorithms mainly locate
the dominant ground truth category. Due to the difficulty of labeling
all ground truth communities in real-world datasets, HICODE provides a
promising approach to pinpoint the existing latent communities and uncover
communities for which there is no ground truth. Finding this unknown structure
is an extremely important community detection problem.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 04:52:30 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"He",
"Kun",
""
],
[
"Li",
"Yingru",
""
],
[
"Soundarajan",
"Sucheta",
""
],
[
"Hopcroft",
"John E.",
""
]
] | TITLE: Hidden Community Detection in Social Networks
ABSTRACT: We introduce a new paradigm that is important for community detection in the
realm of network analysis. Networks contain a set of strong, dominant
communities, which interfere with the detection of weak, natural community
structure. When most of the members of the weak communities also belong to
stronger communities, they are extremely hard to uncover. We call the weak
communities the hidden community structure.
We present a novel approach called HICODE (HIdden COmmunity DEtection) that
identifies the hidden community structure as well as the dominant community
structure. By weakening the strength of the dominant structure, one can uncover
the hidden structure beneath. Likewise, by reducing the strength of the hidden
structure, one can more accurately identify the dominant structure. In this
way, HICODE tackles both tasks simultaneously.
Extensive experiments on real-world networks demonstrate that HICODE
outperforms several state-of-the-art community detection methods in uncovering
both the dominant and the hidden structure. In the Facebook university social
networks, we find multiple non-redundant sets of communities that are strongly
associated with residential hall, year of registration or career position of
the faculties or students, while the state-of-the-art algorithms mainly locate
the dominant ground truth category. Due to the difficulty of labeling
all ground truth communities in real-world datasets, HICODE provides a
promising approach to pinpoint the existing latent communities and uncover
communities for which there is no ground truth. Finding this unknown structure
is an extremely important community detection problem.
| no_new_dataset | 0.94545 |
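The HICODE record above is built on weakening the detected dominant structure so that hidden structure can surface, and vice versa. The networkx sketch below shows the weaken-and-redetect loop in its simplest form (one pass, a fixed down-weighting factor, greedy modularity as the base detector); it illustrates the idea and is not the HICODE algorithm itself.

```python
# networkx sketch of weaken-and-redetect: find the dominant communities, down-weight
# their internal edges, then run detection again so weaker structure can surface.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy graph with two planted "dominant" blocks plus random cross edges.
G = nx.planted_partition_graph(2, 50, p_in=0.3, p_out=0.05, seed=1)
nx.set_edge_attributes(G, 1.0, "weight")

dominant = greedy_modularity_communities(G, weight="weight")
print("dominant layer sizes:", [len(c) for c in dominant])

# Weaken edges that fall inside any detected dominant community.
for comm in dominant:
    comm = set(comm)
    for u, v in G.edges():
        if u in comm and v in comm:
            G[u][v]["weight"] *= 0.2           # reduction factor chosen arbitrarily

hidden = greedy_modularity_communities(G, weight="weight")
print("layer found after weakening:", [len(c) for c in hidden])
```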
1702.07474 | Fei Han | Fei Han, Xue Yang, Christopher Reardon, Yu Zhang, Hao Zhang | Simultaneous Feature and Body-Part Learning for Real-Time Robot
Awareness of Human Behaviors | 8 pages, 6 figures, accepted by ICRA'17 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot awareness of human actions is an essential research problem in robotics
with many important real-world applications, including human-robot
collaboration and teaming. Over the past few years, depth sensors have become a
standard device widely used by intelligent robots for 3D perception, which can
also offer human skeletal data in 3D space. Several methods based on skeletal
data were designed to enable robot awareness of human actions with satisfactory
accuracy. However, previous methods treated all body parts and features equally
important, without the capability to identify discriminative body parts and
features. In this paper, we propose a novel simultaneous Feature And Body-part
Learning (FABL) approach that simultaneously identifies discriminative body
parts and features, and efficiently integrates all available information
together to enable real-time robot awareness of human behaviors. We formulate
FABL as a regression-like optimization problem with structured
sparsity-inducing norms to model interrelationships of body parts and features.
We also develop an optimization algorithm to solve the formulated problem,
which possesses a theoretical guarantee to find the optimal solution. To
evaluate FABL, three experiments were performed using public benchmark
datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter
robot in practical assistive living applications. Experimental results show
that our FABL approach obtains a high recognition accuracy with a processing
speed of the order-of-magnitude of 10e4 Hz, which makes FABL a promising method
to enable real-time robot awareness of human behaviors in practical robotics
applications.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 06:35:10 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Han",
"Fei",
""
],
[
"Yang",
"Xue",
""
],
[
"Reardon",
"Christopher",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Hao",
""
]
] | TITLE: Simultaneous Feature and Body-Part Learning for Real-Time Robot
Awareness of Human Behaviors
ABSTRACT: Robot awareness of human actions is an essential research problem in robotics
with many important real-world applications, including human-robot
collaboration and teaming. Over the past few years, depth sensors have become a
standard device widely used by intelligent robots for 3D perception, which can
also offer human skeletal data in 3D space. Several methods based on skeletal
data were designed to enable robot awareness of human actions with satisfactory
accuracy. However, previous methods treated all body parts and features equally
important, without the capability to identify discriminative body parts and
features. In this paper, we propose a novel simultaneous Feature And Body-part
Learning (FABL) approach that simultaneously identifies discriminative body
parts and features, and efficiently integrates all available information
together to enable real-time robot awareness of human behaviors. We formulate
FABL as a regression-like optimization problem with structured
sparsity-inducing norms to model interrelationships of body parts and features.
We also develop an optimization algorithm to solve the formulated problem,
which possesses a theoretical guarantee to find the optimal solution. To
evaluate FABL, three experiments were performed using public benchmark
datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter
robot in practical assistive living applications. Experimental results show
that our FABL approach obtains a high recognition accuracy with a processing
speed of the order-of-magnitude of 10e4 Hz, which makes FABL a promising method
to enable real-time robot awareness of human behaviors in practical robotics
applications.
| no_new_dataset | 0.945601 |
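The FABL record above formulates body-part and feature selection as a regression-like problem with structured sparsity-inducing norms. As a generic, hedged stand-in for that family of objectives, the numpy snippet below solves a group-lasso-regularised least-squares problem by proximal gradient descent, where each feature group could be read as one body part. The data, group layout and regularisation weight are synthetic, and the exact FABL objective and solver are not reproduced.

```python
# Generic numpy sketch of regression with a group-sparsity (group-lasso) penalty,
# solved by proximal gradient descent with block soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
n, d, t, n_groups = 200, 40, 3, 8             # samples, features, outputs, feature groups
groups = np.array_split(np.arange(d), n_groups)

X = rng.normal(size=(n, d))
W_true = np.zeros((d, t))
for g in groups[:3]:                           # only 3 groups are truly informative
    W_true[g] = rng.normal(size=(len(g), t))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

lam = 20.0
step = 1.0 / (np.linalg.norm(X, 2) ** 2)       # 1 / Lipschitz constant of the smooth part
W = np.zeros((d, t))
for _ in range(500):
    W -= step * X.T @ (X @ W - Y)              # gradient step on the squared loss
    for g in groups:                           # proximal step: block soft-thresholding
        norm = np.linalg.norm(W[g])
        W[g] = 0.0 if norm <= lam * step else W[g] * (1 - lam * step / norm)

active = [i for i, g in enumerate(groups) if np.linalg.norm(W[g]) > 1e-8]
print("groups kept by the group-sparsity penalty:", active)
```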
1702.07508 | Lianwen Jin | Songxuan Lai, Lianwen Jin, Weixin Yang | Toward high-performance online HCCR: a CNN approach with DropDistortion,
path signature and spatial stochastic max-pooling | 10 pages, 7 figures | null | 10.1016/j.patrec.2017.02.011 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an investigation of several techniques that increase the
accuracy of online handwritten Chinese character recognition (HCCR). We propose
a new training strategy named DropDistortion to train a deep convolutional
neural network (DCNN) with distorted samples. DropDistortion gradually lowers
the degree of character distortion during training, which allows the DCNN to
better generalize. Path signature is used to extract effective features for
online characters. Further improvement is achieved by employing spatial
stochastic max-pooling as a method of feature map distortion and model
averaging. Experiments were carried out on three publicly available datasets,
namely CASIA-OLHWDB 1.0, CASIA-OLHWDB 1.1, and the ICDAR2013 online HCCR
competition dataset. The proposed techniques yield state-of-the-art recognition
accuracies of 97.67%, 97.30%, and 97.99%, respectively.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 09:26:15 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Lai",
"Songxuan",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Yang",
"Weixin",
""
]
] | TITLE: Toward high-performance online HCCR: a CNN approach with DropDistortion,
path signature and spatial stochastic max-pooling
ABSTRACT: This paper presents an investigation of several techniques that increase the
accuracy of online handwritten Chinese character recognition (HCCR). We propose
a new training strategy named DropDistortion to train a deep convolutional
neural network (DCNN) with distorted samples. DropDistortion gradually lowers
the degree of character distortion during training, which allows the DCNN to
better generalize. Path signature is used to extract effective features for
online characters. Further improvement is achieved by employing spatial
stochastic max-pooling as a method of feature map distortion and model
averaging. Experiments were carried out on three publicly available datasets,
namely CASIA-OLHWDB 1.0, CASIA-OLHWDB 1.1, and the ICDAR2013 online HCCR
competition dataset. The proposed techniques yield state-of-the-art recognition
accuracies of 97.67%, 97.30%, and 97.99%, respectively.
| no_new_dataset | 0.948728 |
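The record above trains with distorted samples while gradually lowering the degree of distortion. The small Python sketch below shows one possible schedule together with a toy distortion of an online-handwriting point trajectory; the linear annealing rule, the affine jitter and the magnitudes are assumptions, not the paper's exact procedure.

```python
# Numpy sketch of a DropDistortion-style schedule: distort training samples and anneal
# the distortion magnitude toward zero as training proceeds.
import numpy as np

rng = np.random.default_rng(0)

def distortion_scale(epoch, total_epochs, max_scale=0.3):
    """Linearly lower the distortion strength over training, reaching 0 at the end."""
    return max_scale * (1.0 - epoch / total_epochs)

def distort(points, scale):
    """Apply a small random affine transform plus jitter to an (n, 2) point trajectory."""
    a = np.eye(2) + scale * rng.normal(size=(2, 2)) * 0.2
    return points @ a.T + scale * rng.normal(size=points.shape) * 0.05

trajectory = np.cumsum(rng.normal(size=(50, 2)), axis=0)   # toy pen trajectory
total = 10
for epoch in range(total):
    s = distortion_scale(epoch, total)
    batch = distort(trajectory, s)                          # fed to the classifier in training
    print(f"epoch {epoch}: distortion scale {s:.2f}, "
          f"mean displacement {np.abs(batch - trajectory).mean():.3f}")
```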
1702.07617 | Chen Wu | Chen Wu, Rodrigo Tobar, Kevin Vinsen, Andreas Wicenec, Dave Pallot,
Baoqiang Lao, Ruonan Wang, Tao An, Mark Boulton, Ian Cooper, Richard Dodson,
Markus Dolensky, Ying Mei, Feng Wang | DALiuGE: A Graph Execution Framework for Harnessing the Astronomical
Data Deluge | 31 pages, 12 figures, currently under review by Astronomy and
Computing | null | null | null | cs.DC physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for
processing large astronomical datasets at a scale required by the Square
Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex
data reduction pipelines consisting of both data sets and algorithmic
components and an implementation run-time to execute such pipelines on
distributed resources. By mapping the logical view of a pipeline to its
physical realisation, DALiuGE separates the concerns of multiple stakeholders,
allowing them to collectively optimise large-scale data processing solutions in
a coherent manner. The execution in DALiuGE is data-activated, where each
individual data item autonomously triggers the processing on itself. Such
decentralisation also makes the execution framework very scalable and flexible,
supporting pipeline sizes ranging from less than ten tasks running on a laptop
to tens of millions of concurrent tasks on the second fastest supercomputer in
the world. DALiuGE has been used in production for reducing interferometry data
sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide
Spectral Radioheliograph; and is being developed as the execution framework
prototype for the Science Data Processor (SDP) consortium of the Square
Kilometre Array (SKA) telescope. This paper presents a technical overview of
DALiuGE and discusses case studies from the CHILES and MUSER projects that use
DALiuGE to execute production pipelines. In a companion paper, we provide
in-depth analysis of DALiuGE's scalability to very large numbers of tasks on
two supercomputing facilities.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 14:54:45 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Wu",
"Chen",
""
],
[
"Tobar",
"Rodrigo",
""
],
[
"Vinsen",
"Kevin",
""
],
[
"Wicenec",
"Andreas",
""
],
[
"Pallot",
"Dave",
""
],
[
"Lao",
"Baoqiang",
""
],
[
"Wang",
"Ruonan",
""
],
[
"An",
"Tao",
""
],
[
"Boulton",
"Mark",
""
],
[
"Cooper",
"Ian",
""
],
[
"Dodson",
"Richard",
""
],
[
"Dolensky",
"Markus",
""
],
[
"Mei",
"Ying",
""
],
[
"Wang",
"Feng",
""
]
] | TITLE: DALiuGE: A Graph Execution Framework for Harnessing the Astronomical
Data Deluge
ABSTRACT: The Data Activated Liu Graph Engine - DALiuGE - is an execution framework for
processing large astronomical datasets at a scale required by the Square
Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex
data reduction pipelines consisting of both data sets and algorithmic
components and an implementation run-time to execute such pipelines on
distributed resources. By mapping the logical view of a pipeline to its
physical realisation, DALiuGE separates the concerns of multiple stakeholders,
allowing them to collectively optimise large-scale data processing solutions in
a coherent manner. The execution in DALiuGE is data-activated, where each
individual data item autonomously triggers the processing on itself. Such
decentralisation also makes the execution framework very scalable and flexible,
supporting pipeline sizes ranging from less than ten tasks running on a laptop
to tens of millions of concurrent tasks on the second fastest supercomputer in
the world. DALiuGE has been used in production for reducing interferometry data
sets from the Karl E. Jansky Very Large Array and the Mingantu Ultrawide
Spectral Radioheliograph; and is being developed as the execution framework
prototype for the Science Data Processor (SDP) consortium of the Square
Kilometre Array (SKA) telescope. This paper presents a technical overview of
DALiuGE and discusses case studies from the CHILES and MUSER projects that use
DALiuGE to execute production pipelines. In a companion paper, we provide
in-depth analysis of DALiuGE's scalability to very large numbers of tasks on
two supercomputing facilities.
| no_new_dataset | 0.94743 |
1702.07627 | Ge Ma | Ge Ma, Zhi Wang, Miao Zhang, Jiahui Ye, Minghua Chen and Wenwu Zhu | Understanding Performance of Edge Content Caching for Mobile Video
Streaming | 13 pages, 19 figures | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's Internet has witnessed an increase in the popularity of mobile video
streaming, which is expected to exceed 3/4 of the global mobile data traffic by
2019. To satisfy the considerable amount of mobile video requests, video
service providers have been pushing their content delivery infrastructure to
edge networks--from regional CDN servers to peer CDN servers (e.g.,
smartrouters in users' homes)--to cache content and serve users with storage
and network resources nearby. Among the edge network content caching paradigms,
Wi-Fi access point caching and cellular base station caching have become two
mainstream solutions. Thus, understanding the effectiveness and performance of
these solutions for large-scale mobile video delivery is important. However,
the characteristics and request patterns of mobile video streaming are unclear
in practical wireless networks. In this paper, we use real-world datasets
containing 50 million trace items of nearly 2 million users viewing more than
0.3 million unique videos using mobile devices in a metropolis in China over 2
weeks, not only to understand the request patterns and user behaviors in mobile
video streaming, but also to evaluate the effectiveness of Wi-Fi and
cellular-based edge content caching solutions. To understand performance of
edge content caching for mobile video streaming, we first present temporal and
spatial video request patterns, and we analyze their impacts on caching
performance using frequency-domain and entropy analysis approaches. We then
study the behaviors of mobile video users, including their mobility and
geographical migration behaviors. Using trace-driven experiments, we compare
strategies for edge content caching including LRU and LFU, in terms of
supporting mobile video requests. Moreover, we design an efficient caching
strategy based on the measurement insights and experimentally evaluate its
performance.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 15:28:20 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Ma",
"Ge",
""
],
[
"Wang",
"Zhi",
""
],
[
"Zhang",
"Miao",
""
],
[
"Ye",
"Jiahui",
""
],
[
"Chen",
"Minghua",
""
],
[
"Zhu",
"Wenwu",
""
]
] | TITLE: Understanding Performance of Edge Content Caching for Mobile Video
Streaming
ABSTRACT: Today's Internet has witnessed an increase in the popularity of mobile video
streaming, which is expected to exceed 3/4 of the global mobile data traffic by
2019. To satisfy the considerable amount of mobile video requests, video
service providers have been pushing their content delivery infrastructure to
edge networks--from regional CDN servers to peer CDN servers (e.g.,
smartrouters in users' homes)--to cache content and serve users with storage
and network resources nearby. Among the edge network content caching paradigms,
Wi-Fi access point caching and cellular base station caching have become two
mainstream solutions. Thus, understanding the effectiveness and performance of
these solutions for large-scale mobile video delivery is important. However,
the characteristics and request patterns of mobile video streaming are unclear
in practical wireless networks. In this paper, we use real-world datasets
containing 50 million trace items of nearly 2 million users viewing more than
0.3 million unique videos using mobile devices in a metropolis in China over 2
weeks, not only to understand the request patterns and user behaviors in mobile
video streaming, but also to evaluate the effectiveness of Wi-Fi and
cellular-based edge content caching solutions. To understand performance of
edge content caching for mobile video streaming, we first present temporal and
spatial video request patterns, and we analyze their impacts on caching
performance using frequency-domain and entropy analysis approaches. We then
study the behaviors of mobile video users, including their mobility and
geographical migration behaviors. Using trace-driven experiments, we compare
strategies for edge content caching including LRU and LFU, in terms of
supporting mobile video requests. Moreover, we design an efficient caching
strategy based on the measurement insights and experimentally evaluate its
performance.
| no_new_dataset | 0.939582 |
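The caching record above compares LRU and LFU eviction over request traces. As a hedged illustration of what that comparison involves (the trace format, cache capacity, and hit-ratio metric here are assumptions, not the paper's exact setup), a minimal simulation could be:

```python
from collections import OrderedDict, defaultdict

class LRUCache:
    """Evict the least-recently-requested video when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def request(self, video_id):
        hit = video_id in self.store
        if hit:
            self.store.move_to_end(video_id)        # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)      # drop the stalest entry
            self.store[video_id] = True
        return hit

class LFUCache:
    """Evict the least-frequently-requested video when capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity, self.freq, self.store = capacity, defaultdict(int), set()

    def request(self, video_id):
        self.freq[video_id] += 1
        hit = video_id in self.store
        if not hit:
            if len(self.store) >= self.capacity:
                self.store.remove(min(self.store, key=self.freq.__getitem__))
            self.store.add(video_id)
        return hit

def hit_ratio(cache, trace):
    """Fraction of requests in the trace served from the cache."""
    return sum(cache.request(v) for v in trace) / len(trace)

# Example: hit_ratio(LRUCache(1000), trace) vs. hit_ratio(LFUCache(1000), trace)
```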
1702.07670 | Amirali Aghazadeh | Amirali Aghazadeh and Mohammad Golbabaee and Andrew S. Lan and Richard
G. Baraniuk | Insense: Incoherent Sensor Selection for Sparse Signals | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensor selection refers to the problem of intelligently selecting a small
subset of a collection of available sensors to reduce the sensing cost while
preserving signal acquisition performance. The majority of sensor selection
algorithms find the subset of sensors that best recovers an arbitrary signal
from a number of linear measurements that is larger than the dimension of the
signal. In this paper, we develop a new sensor selection algorithm for sparse
(or near sparse) signals that finds a subset of sensors that best recovers such
signals from a number of measurements that is much smaller than the dimension
of the signal. Existing sensor selection algorithms cannot be applied in such
situations. Our proposed Incoherent Sensor Selection (Insense) algorithm
minimizes a coherence-based cost function that is adapted from recent results
in sparse recovery theory. Using six datasets, including two real-world
datasets on microbial diagnostics and structural health monitoring, we
demonstrate the superior performance of Insense for sparse-signal sensor
selection.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2017 16:42:23 GMT"
}
] | 2017-02-27T00:00:00 | [
[
"Aghazadeh",
"Amirali",
""
],
[
"Golbabaee",
"Mohammad",
""
],
[
"Lan",
"Andrew S.",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Insense: Incoherent Sensor Selection for Sparse Signals
ABSTRACT: Sensor selection refers to the problem of intelligently selecting a small
subset of a collection of available sensors to reduce the sensing cost while
preserving signal acquisition performance. The majority of sensor selection
algorithms find the subset of sensors that best recovers an arbitrary signal
from a number of linear measurements that is larger than the dimension of the
signal. In this paper, we develop a new sensor selection algorithm for sparse
(or near sparse) signals that finds a subset of sensors that best recovers such
signals from a number of measurements that is much smaller than the dimension
of the signal. Existing sensor selection algorithms cannot be applied in such
situations. Our proposed Incoherent Sensor Selection (Insense) algorithm
minimizes a coherence-based cost function that is adapted from recent results
in sparse recovery theory. Using six datasets, including two real-world
datasets on microbial diagnostics and structural health monitoring, we
demonstrate the superior performance of Insense for sparse-signal sensor
selection.
| no_new_dataset | 0.949389 |
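The Insense record above centers on a coherence-based cost over candidate sensors (rows of a sensing matrix). The snippet below only illustrates that objective with a naive greedy selection; it is not the authors' algorithm, and the random sensing matrix and k value in the usage comment are placeholders.

```python
import numpy as np

def mutual_coherence(Phi, eps=1e-12):
    """Largest absolute normalized inner product between distinct columns of Phi."""
    cols = Phi / (np.linalg.norm(Phi, axis=0, keepdims=True) + eps)
    gram = np.abs(cols.T @ cols)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

def greedy_incoherent_rows(Phi, k):
    """Naive illustration: grow a set of k rows (sensors), each step keeping the
    candidate whose addition leaves the selected submatrix least coherent."""
    selected, remaining = [], list(range(Phi.shape[0]))
    while len(selected) < k and remaining:
        best = min(remaining, key=lambda r: mutual_coherence(Phi[selected + [r], :]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: pick 20 of 100 candidate sensors for 256-dimensional sparse signals.
# Phi = np.random.randn(100, 256)
# chosen = greedy_incoherent_rows(Phi, k=20)
```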
1611.00910 | Suhansanu Kumar | Suhansanu Kumar, Hari Sundaram | Task-driven sampling of attributed networks | 16 pages | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces new techniques for sampling attributed networks to
support standard Data Mining tasks. The problem is important for two reasons.
First, it is commonplace to perform data mining tasks such as clustering and
classification of network attributes (attributes of the nodes, including social
media posts). Furthermore, the extraordinarily large size of real-world
networks necessitates that we work with a smaller graph sample. Second, while
random sampling will provide an unbiased estimate of content, random access is
often unavailable for many networks. Hence, network samplers such as Snowball
sampling, Forest Fire, Random Walk, Metropolis-Hastings Random Walk are widely
used; however, these attribute-agnostic samplers were designed to capture
salient properties of network structure, not node content. The latter is
critical for clustering and classification tasks. There are three contributions
of this paper. First, we introduce several attribute-aware samplers based on
Information Theoretic principles. Second, we prove that these samplers have a
bias towards capturing new content, and are equivalent to uniform sampling in
the limit. Finally, our experimental results over large real-world datasets and
synthetic benchmarks are insightful: attribute-aware samplers outperform both
random sampling and baseline attribute-agnostic samplers by a wide margin in
clustering and classification tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2016 08:21:15 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2017 21:14:49 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Kumar",
"Suhansanu",
""
],
[
"Sundaram",
"Hari",
""
]
] | TITLE: Task-driven sampling of attributed networks
ABSTRACT: This paper introduces new techniques for sampling attributed networks to
support standard Data Mining tasks. The problem is important for two reasons.
First, it is commonplace to perform data mining tasks such as clustering and
classification of network attributes (attributes of the nodes, including social
media posts). Furthermore, the extraordinarily large size of real-world
networks necessitates that we work with a smaller graph sample. Second, while
random sampling will provide an unbiased estimate of content, random access is
often unavailable for many networks. Hence, network samplers such as Snowball
sampling, Forest Fire, Random Walk, Metropolis-Hastings Random Walk are widely
used; however, these attribute-agnostic samplers were designed to capture
salient properties of network structure, not node content. The latter is
critical for clustering and classification tasks. There are three contributions
of this paper. First, we introduce several attribute-aware samplers based on
Information Theoretic principles. Second, we prove that these samplers have a
bias towards capturing new content, and are equivalent to uniform sampling in
the limit. Finally, our experimental results over large real-world datasets and
synthetic benchmarks are insightful: attribute-aware samplers outperform both
random sampling and baseline attribute-agnostic samplers by a wide margin in
clustering and classification tasks.
| no_new_dataset | 0.950595 |
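One of the attribute-agnostic baselines named in the sampling abstract above is the Metropolis-Hastings Random Walk, which corrects a simple random walk's bias toward high-degree nodes. A compact sketch of that baseline (networkx is used only for graph access; the synthetic graph and walk length are placeholders) is:

```python
import random
import networkx as nx

def mhrw_sample(G, start, n_steps, seed=None):
    """Metropolis-Hastings Random Walk: accept a move from u to neighbor v with
    probability min(1, deg(u)/deg(v)), giving a uniform stationary distribution."""
    rng = random.Random(seed)
    current, visited = start, [start]
    for _ in range(n_steps):
        neighbors = list(G.neighbors(current))
        if not neighbors:
            break
        candidate = rng.choice(neighbors)
        if rng.random() <= G.degree(current) / G.degree(candidate):
            current = candidate
        visited.append(current)
    return visited

# Example on a synthetic graph:
# G = nx.barabasi_albert_graph(10_000, 3)
# nodes_seen = mhrw_sample(G, start=0, n_steps=5_000, seed=42)
```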
1611.04311 | Giulio Cimini | Matteo Serri, Guido Caldarelli, Giulio Cimini | How the interbank market becomes systemically dangerous: an agent-based
network model of financial distress propagation | null | Journal of Network Theory in Finance 3(1), 1-18 (2017) | 10.21314/JNTF.2017.025 | null | q-fin.RM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assessing the stability of economic systems is a fundamental research focus
in economics, one that has become increasingly interdisciplinary in the currently
troubled economic situation. In particular, much attention has been devoted to
the interbank lending market as an important diffusion channel for financial
distress during the recent crisis. In this work we study the stability of the
interbank market to exogenous shocks using an agent-based network framework.
Our model encompasses several ingredients that have been recognized in the
literature as pro-cyclical triggers of financial distress in the banking
system: credit and liquidity shocks through bilateral exposures, liquidity
hoarding due to counterparty creditworthiness deterioration, target leveraging
policies and fire-sales spillovers. But we exclude the possibility of central
authorities' intervention. We implement this framework on a dataset of 183
European banks that were publicly traded between 2004 and 2013. We document the
extreme fragility of the interbank lending market up to 2008, when a systemic
crisis leads to total depletion of market equity with an increasing speed of
market collapse. After the crisis instead the system is more resilient to
systemic events in terms of residual market equity. However, the speed at which
the crisis breaks out reaches a new maximum in 2011, and never goes back to
values observed before 2007. Our analysis points to the key role of the crisis
outbreak speed, which sets the maximum delay for central authorities'
intervention to be effective.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 10:01:35 GMT"
}
] | 2017-02-24T00:00:00 | [
[
"Serri",
"Matteo",
""
],
[
"Caldarelli",
"Guido",
""
],
[
"Cimini",
"Giulio",
""
]
] | TITLE: How the interbank market becomes systemically dangerous: an agent-based
network model of financial distress propagation
ABSTRACT: Assessing the stability of economic systems is a fundamental research focus
in economics, one that has become increasingly interdisciplinary in the currently
troubled economic situation. In particular, much attention has been devoted to
the interbank lending market as an important diffusion channel for financial
distress during the recent crisis. In this work we study the stability of the
interbank market to exogenous shocks using an agent-based network framework.
Our model encompasses several ingredients that have been recognized in the
literature as pro-cyclical triggers of financial distress in the banking
system: credit and liquidity shocks through bilateral exposures, liquidity
hoarding due to counterparty creditworthiness deterioration, target leveraging
policies and fire-sales spillovers. But we exclude the possibility of central
authorities' intervention. We implement this framework on a dataset of 183
European banks that were publicly traded between 2004 and 2013. We document the
extreme fragility of the interbank lending market up to 2008, when a systemic
crisis leads to total depletion of market equity with an increasing speed of
market collapse. After the crisis instead the system is more resilient to
systemic events in terms of residual market equity. However, the speed at which
the crisis breaks out reaches a new maximum in 2011, and never goes back to
values observed before 2007. Our analysis points to the key role of the crisis
outbreak speed, which sets the maximum delay for central authorities'
intervention to be effective.
| no_new_dataset | 0.94366 |