id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.00930 | Jacques Wainer | Jacques Wainer | Comparison of 14 different families of classification algorithms on 115
binary datasets | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tested 14 very different classification algorithms (random forest,
gradient boosting machines, SVMs with linear, polynomial, and RBF kernels, 1-hidden-layer
neural nets, extreme learning machines, k-nearest neighbors and a bagging of
knn, naive Bayes, learning vector quantization, elastic net logistic
regression, sparse linear discriminant analysis, and a boosting of linear
classifiers) on 115 real-life binary datasets. We followed the Demsar analysis
and found that the three best classifiers (random forest, gbm and RBF SVM) are
not significantly different from each other. We also argue that a change of
less than 0.0112 in the error rate should be considered an irrelevant
change, and we used a Bayesian ANOVA analysis to conclude that, with high
probability, the differences between these three classifiers are not of practical
consequence. We also measured the execution time of "standard implementations"
of these algorithms and concluded that RBF SVM is the fastest (significantly
so) both in training time and in training plus testing time.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 23:01:25 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Wainer",
"Jacques",
""
]
] | TITLE: Comparison of 14 different families of classification algorithms on 115
binary datasets
ABSTRACT: We tested 14 very different classification algorithms (random forest,
gradient boosting machines, SVMs with linear, polynomial, and RBF kernels, 1-hidden-layer
neural nets, extreme learning machines, k-nearest neighbors and a bagging of
knn, naive Bayes, learning vector quantization, elastic net logistic
regression, sparse linear discriminant analysis, and a boosting of linear
classifiers) on 115 real-life binary datasets. We followed the Demsar analysis
and found that the three best classifiers (random forest, gbm and RBF SVM) are
not significantly different from each other. We also argue that a change of
less than 0.0112 in the error rate should be considered an irrelevant
change, and we used a Bayesian ANOVA analysis to conclude that, with high
probability, the differences between these three classifiers are not of practical
consequence. We also measured the execution time of "standard implementations"
of these algorithms and concluded that RBF SVM is the fastest (significantly
so) both in training time and in training plus testing time.
| no_new_dataset | 0.951863 |
1606.01160 | Dong Huang | Dong Huang and Jian-Huang Lai and Chang-Dong Wang | Robust Ensemble Clustering Using Probability Trajectories | The MATLAB code and experimental data of this work are available at:
https://www.researchgate.net/publication/284259332 | IEEE Transactions on Knowledge and Data Engineering, 2016, vol.28,
no.5, pp.1312-1326 | 10.1109/TKDE.2015.2503753 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although many successful ensemble clustering approaches have been developed
in recent years, there are still two limitations to most of the existing
approaches. First, they mostly overlook the issue of uncertain links, which may
mislead the overall consensus process. Second, they generally lack the ability
to incorporate global information to refine the local links. To address these
two limitations, in this paper, we propose a novel ensemble clustering approach
based on sparse graph representation and probability trajectory analysis. In
particular, we present the elite neighbor selection strategy to identify the
uncertain links by locally adaptive thresholds and build a sparse graph with a
small number of probably reliable links. We argue that a small number of
probably reliable links can lead to significantly better consensus results than
using all graph links regardless of their reliability. The random walk process
driven by a new transition probability matrix is utilized to explore the global
information in the graph. We derive a novel and dense similarity measure from
the sparse graph by analyzing the probability trajectories of the random
walkers, based on which two consensus functions are further proposed.
Experimental results on multiple real-world datasets demonstrate the
effectiveness and efficiency of our approach.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 16:09:32 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Huang",
"Dong",
""
],
[
"Lai",
"Jian-Huang",
""
],
[
"Wang",
"Chang-Dong",
""
]
] | TITLE: Robust Ensemble Clustering Using Probability Trajectories
ABSTRACT: Although many successful ensemble clustering approaches have been developed
in recent years, there are still two limitations to most of the existing
approaches. First, they mostly overlook the issue of uncertain links, which may
mislead the overall consensus process. Second, they generally lack the ability
to incorporate global information to refine the local links. To address these
two limitations, in this paper, we propose a novel ensemble clustering approach
based on sparse graph representation and probability trajectory analysis. In
particular, we present the elite neighbor selection strategy to identify the
uncertain links by locally adaptive thresholds and build a sparse graph with a
small number of probably reliable links. We argue that a small number of
probably reliable links can lead to significantly better consensus results than
using all graph links regardless of their reliability. The random walk process
driven by a new transition probability matrix is utilized to explore the global
information in the graph. We derive a novel and dense similarity measure from
the sparse graph by analyzing the probability trajectories of the random
walkers, based on which two consensus functions are further proposed.
Experimental results on multiple real-world datasets demonstrate the
effectiveness and efficiency of our approach.
| no_new_dataset | 0.948155 |
1606.01161 | Jiang Guo | Jiang Guo, Wanxiang Che, Haifeng Wang and Ting Liu | Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task
Learning | 11 pages, 4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various treebanks have been released for dependency parsing. Although
treebanks may belong to different languages or have different annotation
schemes, they contain syntactic knowledge that has the potential to benefit each
other. This paper presents a universal framework for exploiting these
multi-typed treebanks to improve parsing with deep multi-task learning. We
consider two kinds of treebanks as sources: the multilingual universal treebanks
and the monolingual heterogeneous treebanks. Multiple treebanks are trained
jointly and interact through multi-level parameter sharing. Experiments on
several benchmark datasets in various languages demonstrate that our approach
can make effective use of arbitrary source treebanks to improve target parsing
models.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 16:09:52 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Guo",
"Jiang",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Wang",
"Haifeng",
""
],
[
"Liu",
"Ting",
""
]
] | TITLE: Exploiting Multi-typed Treebanks for Parsing with Deep Multi-task
Learning
ABSTRACT: Various treebanks have been released for dependency parsing. Although
treebanks may belong to different languages or have different annotation
schemes, they contain syntactic knowledge that has the potential to benefit each
other. This paper presents a universal framework for exploiting these
multi-typed treebanks to improve parsing with deep multi-task learning. We
consider two kinds of treebanks as sources: the multilingual universal treebanks
and the monolingual heterogeneous treebanks. Multiple treebanks are trained
jointly and interact through multi-level parameter sharing. Experiments on
several benchmark datasets in various languages demonstrate that our approach
can make effective use of arbitrary source treebanks to improve target parsing
models.
| no_new_dataset | 0.955194 |
1606.01178 | Md. Reza | Md. Alimoor Reza and Jana Kosecka | Reinforcement Learning for Semantic Segmentation in Indoor Scenes | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Future advancements in robot autonomy and sophistication of robotics tasks
rest on robust, efficient, and task-dependent semantic understanding of the
environment. Semantic segmentation is the problem of simultaneous segmentation
and categorization of a partition of sensory data. The majority of current
approaches tackle this using multi-class segmentation and labeling in a
Conditional Random Field (CRF) framework or by generating multiple object
hypotheses and combining them sequentially. In practical settings, the subset
of semantic labels that are needed depends on the task and particular scene, and
labelling every single pixel is not always necessary. We pursue these
observations in developing a more modular and flexible approach to multi-class
parsing of RGBD data based on learning strategies for combining independent
binary object-vs-background segmentations in place of the usual monolithic
multi-label CRF approach. Parameters for the independent binary segmentation
models can be learned very efficiently, and the combination strategy---learned
using reinforcement learning---can be set independently and can vary over
different tasks and environments. Accuracy is comparable to state-of-the-art
methods on a subset of the NYU-V2 dataset of indoor scenes, while providing
additional flexibility and modularity.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 16:35:58 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Reza",
"Md. Alimoor",
""
],
[
"Kosecka",
"Jana",
""
]
] | TITLE: Reinforcement Learning for Semantic Segmentation in Indoor Scenes
ABSTRACT: Future advancements in robot autonomy and sophistication of robotics tasks
rest on robust, efficient, and task-dependent semantic understanding of the
environment. Semantic segmentation is the problem of simultaneous segmentation
and categorization of a partition of sensory data. The majority of current
approaches tackle this using multi-class segmentation and labeling in a
Conditional Random Field (CRF) framework or by generating multiple object
hypotheses and combining them sequentially. In practical settings, the subset
of semantic labels that are needed depends on the task and particular scene, and
labelling every single pixel is not always necessary. We pursue these
observations in developing a more modular and flexible approach to multi-class
parsing of RGBD data based on learning strategies for combining independent
binary object-vs-background segmentations in place of the usual monolithic
multi-label CRF approach. Parameters for the independent binary segmentation
models can be learned very efficiently, and the combination strategy---learned
using reinforcement learning---can be set independently and can vary over
different tasks and environments. Accuracy is comparable to state-of-the-art
methods on a subset of the NYU-V2 dataset of indoor scenes, while providing
additional flexibility and modularity.
| no_new_dataset | 0.945901 |
1606.01208 | Yali Cui | Biao Leng, Yali Cui, Jianyuan Wang, Zhang Xiong, Shlomo Havlin, Daqing
Li | Gravitational scaling in Beijing Subway Network | null | null | null | null | physics.soc-ph cs.CY physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, with the availability of various traffic datasets, human mobility
has been studied in different contexts. Researchers attempt to understand the
collective behaviors of human movement with respect to the spatio-temporal
distribution in traffic dynamics, from which a gravitational scaling law
characterizing the relation between the traffic flow, population and distance
has been found. However, most studies focus on the integrated properties of
gravitational scaling, neglecting its dynamical evolution during different
hours of a day. Investigating the hourly traffic flow data of Beijing subway
network, based on the hop-count distance of passengers, we find that the
scaling exponent of the gravitational law is smaller in the Beijing subway system
compared to that reported for the Seoul subway system. This means that traffic
demand in Beijing is much stronger and less sensitive to the travel distance.
Furthermore, we analyzed the temporal evolution of the scaling exponents on
weekdays and weekends. Our findings may help to understand and improve
traffic congestion control in different subway systems.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 18:17:50 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Leng",
"Biao",
""
],
[
"Cui",
"Yali",
""
],
[
"Wang",
"Jianyuan",
""
],
[
"Xiong",
"Zhang",
""
],
[
"Havlin",
"Shlomo",
""
],
[
"Li",
"Daqing",
""
]
] | TITLE: Gravitational scaling in Beijing Subway Network
ABSTRACT: Recently, with the availability of various traffic datasets, human mobility
has been studied in different contexts. Researchers attempt to understand the
collective behaviors of human movement with respect to the spatio-temporal
distribution in traffic dynamics, from which a gravitational scaling law
characterizing the relation between the traffic flow, population and distance
has been found. However, most studies focus on the integrated properties of
gravitational scaling, neglecting its dynamical evolution during different
hours of a day. Investigating the hourly traffic flow data of Beijing subway
network, based on the hop-count distance of passengers, we find that the
scaling exponent of the gravitational law is smaller in the Beijing subway system
compared to that reported for the Seoul subway system. This means that traffic
demand in Beijing is much stronger and less sensitive to the travel distance.
Furthermore, we analyzed the temporal evolution of the scaling exponents on
weekdays and weekends. Our findings may help to understand and improve
traffic congestion control in different subway systems.
| no_new_dataset | 0.947769 |
1606.01219 | Steven H. H. Ding | Steven H. H. Ding, Benjamin C. M. Fung, Farkhund Iqbal, William K.
Cheung | Learning Stylometric Representations for Authorship Analysis | null | null | null | null | cs.CL cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Authorship analysis (AA) is the study of unveiling the hidden properties of
authors from a body of exponentially exploding textual data. It extracts an
author's identity and sociolinguistic characteristics based on the writing
styles reflected in the text. It is an essential process for various areas, such
as cybercrime investigation, psycholinguistics, political socialization, etc.
However, most of the previous techniques critically depend on the manual
feature engineering process. Consequently, the choice of feature set has been
shown to be scenario- or dataset-dependent. In this paper, to mimic the human
sentence composition process using a neural network approach, we propose to
incorporate different categories of linguistic features into distributed
representation of words in order to simultaneously learn the writing style
representations based on unlabeled texts for authorship analysis. In
particular, the proposed models allow topical, lexical, syntactical, and
character-level feature vectors of each document to be extracted as
stylometrics. We evaluate the performance of our approach on the problems of
authorship characterization and authorship verification with the Twitter,
novel, and essay datasets. The experiments suggest that our proposed text
representation outperforms the bag-of-lexical-n-grams, Latent Dirichlet
Allocation, Latent Semantic Analysis, PVDM, PVDBOW, and word2vec
representations.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2016 18:42:14 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Ding",
"Steven H. H.",
""
],
[
"Fung",
"Benjamin C. M.",
""
],
[
"Iqbal",
"Farkhund",
""
],
[
"Cheung",
"William K.",
""
]
] | TITLE: Learning Stylometric Representations for Authorship Analysis
ABSTRACT: Authorship analysis (AA) is the study of unveiling the hidden properties of
authors from a body of exponentially exploding textual data. It extracts an
author's identity and sociolinguistic characteristics based on the writing
styles reflected in the text. It is an essential process for various areas, such
as cybercrime investigation, psycholinguistics, political socialization, etc.
However, most of the previous techniques critically depend on the manual
feature engineering process. Consequently, the choice of feature set has been
shown to be scenario- or dataset-dependent. In this paper, to mimic the human
sentence composition process using a neural network approach, we propose to
incorporate different categories of linguistic features into distributed
representation of words in order to simultaneously learn the writing style
representations based on unlabeled texts for authorship analysis. In
particular, the proposed models allow topical, lexical, syntactical, and
character-level feature vectors of each document to be extracted as
stylometrics. We evaluate the performance of our approach on the problems of
authorship characterization and authorship verification with the Twitter,
novel, and essay datasets. The experiments suggest that our proposed text
representation outperforms the bag-of-lexical-n-grams, Latent Dirichlet
Allocation, Latent Semantic Analysis, PVDM, PVDBOW, and word2vec
representations.
| no_new_dataset | 0.947186 |
1511.03339 | Liang-Chieh Chen | Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, Alan L. Yuille | Attention to Scale: Scale-aware Semantic Image Segmentation | 14 pages. Accepted to appear at CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incorporating multi-scale features in fully convolutional neural networks
(FCNs) has been a key element to achieving state-of-the-art performance on
semantic image segmentation. One common way to extract multi-scale features is
to feed multiple resized input images to a shared deep network and then merge
the resulting features for pixelwise classification. In this work, we propose
an attention mechanism that learns to softly weight the multi-scale features at
each pixel location. We adapt a state-of-the-art semantic image segmentation
model, which we jointly train with multi-scale input images and the attention
model. The proposed attention model not only outperforms average- and
max-pooling, but allows us to diagnostically visualize the importance of
features at different positions and scales. Moreover, we show that adding extra
supervision to the output at each scale is essential to achieving excellent
performance when merging multi-scale features. We demonstrate the effectiveness
of our model with extensive experiments on three challenging datasets,
including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.
| [
{
"version": "v1",
"created": "Tue, 10 Nov 2015 23:53:57 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2016 02:02:21 GMT"
}
] | 2016-06-03T00:00:00 | [
[
"Chen",
"Liang-Chieh",
""
],
[
"Yang",
"Yi",
""
],
[
"Wang",
"Jiang",
""
],
[
"Xu",
"Wei",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Attention to Scale: Scale-aware Semantic Image Segmentation
ABSTRACT: Incorporating multi-scale features in fully convolutional neural networks
(FCNs) has been a key element to achieving state-of-the-art performance on
semantic image segmentation. One common way to extract multi-scale features is
to feed multiple resized input images to a shared deep network and then merge
the resulting features for pixelwise classification. In this work, we propose
an attention mechanism that learns to softly weight the multi-scale features at
each pixel location. We adapt a state-of-the-art semantic image segmentation
model, which we jointly train with multi-scale input images and the attention
model. The proposed attention model not only outperforms average- and
max-pooling, but allows us to diagnostically visualize the importance of
features at different positions and scales. Moreover, we show that adding extra
supervision to the output at each scale is essential to achieving excellent
performance when merging multi-scale features. We demonstrate the effectiveness
of our model with extensive experiments on three challenging datasets,
including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.
| no_new_dataset | 0.948775 |
1604.00971 | Dominik Traxl | Dominik Traxl, Niklas Boers and J\"urgen Kurths | Deep Graphs - a general framework to represent and analyze heterogeneous
complex systems across scales | 27 pages, 6 figures, 4 tables. For associated Python software
package, see https://github.com/deepgraph/deepgraph/ . Due to length
limitations the abstract appearing here is shorter than that in the PDF file.
To be published in "Chaos: An Interdisciplinary Journal of Nonlinear Science" | Chaos 26, 065303 (2016) | 10.1063/1.4952963 | null | physics.data-an cs.SI physics.ao-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network theory has proven to be a powerful tool in describing and analyzing
systems by modelling the relations between their constituent objects. In recent
years great progress has been made by augmenting `traditional' network theory.
However, existing network representations still lack crucial features in order
to serve as a general data analysis tool. These include, most importantly, an
explicit association of information with possibly heterogeneous types of
objects and relations, and a conclusive representation of the properties of
groups of nodes as well as the interactions between such groups on different
scales. In this paper, we introduce a collection of definitions resulting in a
framework that, on the one hand, entails and unifies existing network
representations (e.g., network of networks, multilayer networks), and on the
other hand, generalizes and extends them by incorporating the above features.
To implement these features, we first specify the nodes and edges of a finite
graph as sets of properties. Second, the mathematical concept of partition
lattices is transferred to network theory in order to demonstrate how
partitioning the node and edge set of a graph into supernodes and superedges
allows one to aggregate, compute and allocate information on and between arbitrary
groups of nodes. The derived partition lattice of a graph, which we denote by
deep graph, constitutes a concise, yet comprehensive representation that
enables the expression and analysis of heterogeneous properties, relations and
interactions on all scales of a complex system in a self-contained manner.
Furthermore, to be able to utilize existing network-based methods and models,
we derive different representations of multilayer networks from our framework
and demonstrate the advantages of our representation. We exemplify an
application of deep graphs using a real world dataset of precipitation
measurements.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 18:22:09 GMT"
}
] | 2016-06-03T00:00:00 | [
[
"Traxl",
"Dominik",
""
],
[
"Boers",
"Niklas",
""
],
[
"Kurths",
"Jürgen",
""
]
] | TITLE: Deep Graphs - a general framework to represent and analyze heterogeneous
complex systems across scales
ABSTRACT: Network theory has proven to be a powerful tool in describing and analyzing
systems by modelling the relations between their constituent objects. In recent
years great progress has been made by augmenting `traditional' network theory.
However, existing network representations still lack crucial features in order
to serve as a general data analysis tool. These include, most importantly, an
explicit association of information with possibly heterogeneous types of
objects and relations, and a conclusive representation of the properties of
groups of nodes as well as the interactions between such groups on different
scales. In this paper, we introduce a collection of definitions resulting in a
framework that, on the one hand, entails and unifies existing network
representations (e.g., network of networks, multilayer networks), and on the
other hand, generalizes and extends them by incorporating the above features.
To implement these features, we first specify the nodes and edges of a finite
graph as sets of properties. Second, the mathematical concept of partition
lattices is transferred to network theory in order to demonstrate how
partitioning the node and edge set of a graph into supernodes and superedges
allows one to aggregate, compute and allocate information on and between arbitrary
groups of nodes. The derived partition lattice of a graph, which we denote by
deep graph, constitutes a concise, yet comprehensive representation that
enables the expression and analysis of heterogeneous properties, relations and
interactions on all scales of a complex system in a self-contained manner.
Furthermore, to be able to utilize existing network-based methods and models,
we derive different representations of multilayer networks from our framework
and demonstrate the advantages of our representation. We exemplify an
application of deep graphs using a real world dataset of precipitation
measurements.
| no_new_dataset | 0.944485 |
1606.00480 | Vinh Nguyen | Vinh Nguyen, Jyoti Leeka, Olivier Bodenreider, Amit Sheth | A Formal Graph Model for RDF and Its Implementation | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Formalizing an RDF abstract graph model to be compatible with the RDF formal
semantics has remained one of the foundational problems in the Semantic Web. In
this paper, we propose a new formal graph model for RDF datasets. This model
allows us to express the current model-theoretic semantics in the form of a
graph. We also propose the concepts of resource path and triple path as well as
an algorithm for traversing the new graph. We demonstrate the feasibility of
this graph model through two implementations: one is a new graph engine called
GraphKE, and the other is extended from RDF-3X to show that existing systems
can also benefit from this model. In order to evaluate the empirical aspect of
our graph model, we choose the shortest path algorithm and implement it in
GraphKE and RDF-3X. Our experiments on both engines for finding the
shortest paths in the YAGO2S-SP dataset give decent performance in terms of
execution time. The empirical results show that our graph model with
well-defined semantics can be effectively implemented.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 21:51:38 GMT"
}
] | 2016-06-03T00:00:00 | [
[
"Nguyen",
"Vinh",
""
],
[
"Leeka",
"Jyoti",
""
],
[
"Bodenreider",
"Olivier",
""
],
[
"Sheth",
"Amit",
""
]
] | TITLE: A Formal Graph Model for RDF and Its Implementation
ABSTRACT: Formalizing an RDF abstract graph model to be compatible with the RDF formal
semantics has remained one of the foundational problems in the Semantic Web. In
this paper, we propose a new formal graph model for RDF datasets. This model
allows us to express the current model-theoretic semantics in the form of a
graph. We also propose the concepts of resource path and triple path as well as
an algorithm for traversing the new graph. We demonstrate the feasibility of
this graph model through two implementations: one is a new graph engine called
GraphKE, and the other is extended from RDF-3X to show that existing systems
can also benefit from this model. In order to evaluate the empirical aspect of
our graph model, we choose the shortest path algorithm and implement it in
GraphKE and RDF-3X. Our experiments on both engines for finding the
shortest paths in the YAGO2S-SP dataset give decent performance in terms of
execution time. The empirical results show that our graph model with
well-defined semantics can be effectively implemented.
| no_new_dataset | 0.948965 |
1606.00538 | Ludovic Trottier | Ludovic Trottier, Philippe Gigu\`ere, Brahim Chaib-draa | Dictionary Learning for Robotic Grasp Recognition and Detection | Submitted at the 2016 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2016) | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to grasp ordinary and potentially never-seen objects is an
important feature in both domestic and industrial robotics. For a system to
accomplish this, it must autonomously identify grasping locations by using
information from various sensors, such as the Microsoft Kinect 3D camera. Despite
much progress, significant work still remains to be done in this field. To
this effect, we propose a dictionary learning and sparse representation (DLSR)
framework for representing RGBD images from 3D sensors in the context of
determining such good grasping locations. In contrast to previously proposed
approaches that relied on sophisticated regularization or very large datasets,
the derived perception system has a fast training phase and can work with small
datasets. It is also theoretically grounded for dealing with masked-out entries,
which are common with 3D sensors. We contribute by presenting a comparative
study of several DLSR approach combinations for recognizing and detecting grasp
candidates on the standard Cornell dataset. Importantly, experimental results
show a performance improvement of 1.69% in detection and 3.16% in recognition
over the current state-of-the-art convolutional neural network (CNN). Even though
the most popular vision-based approach nowadays is the CNN, this suggests that DLSR is
also a viable alternative with interesting advantages that CNNs do not have.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 05:20:14 GMT"
}
] | 2016-06-03T00:00:00 | [
[
"Trottier",
"Ludovic",
""
],
[
"Giguère",
"Philippe",
""
],
[
"Chaib-draa",
"Brahim",
""
]
] | TITLE: Dictionary Learning for Robotic Grasp Recognition and Detection
ABSTRACT: The ability to grasp ordinary and potentially never-seen objects is an
important feature in both domestic and industrial robotics. For a system to
accomplish this, it must autonomously identify grasping locations by using
information from various sensors, such as the Microsoft Kinect 3D camera. Despite
much progress, significant work still remains to be done in this field. To
this effect, we propose a dictionary learning and sparse representation (DLSR)
framework for representing RGBD images from 3D sensors in the context of
determining such good grasping locations. In contrast to previously proposed
approaches that relied on sophisticated regularization or very large datasets,
the derived perception system has a fast training phase and can work with small
datasets. It is also theoretically grounded for dealing with masked-out entries,
which are common with 3D sensors. We contribute by presenting a comparative
study of several DLSR approach combinations for recognizing and detecting grasp
candidates on the standard Cornell dataset. Importantly, experimental results
show a performance improvement of 1.69% in detection and 3.16% in recognition
over the current state-of-the-art convolutional neural network (CNN). Even though
the most popular vision-based approach nowadays is the CNN, this suggests that DLSR is
also a viable alternative with interesting advantages that CNNs do not have.
| no_new_dataset | 0.947962 |
1606.00625 | Yu Liu | Yu Liu, Jianlong Fu, Tao Mei and Chang Wen Chen | Storytelling of Photo Stream with Bidirectional Multi-thread Recurrent
Neural Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual storytelling aims to generate human-level narrative language (i.e., a
natural paragraph with multiple sentences) from a photo stream. A typical
photo story consists of a global timeline with multi-thread local storylines,
where each storyline occurs in a different scene. Such a complex structure
leads to large content gaps at scene transitions between consecutive photos.
Most existing image/video captioning methods can only achieve limited
performance, because the units in traditional recurrent neural networks (RNN)
tend to "forget" the previous state when the visual sequence is inconsistent.
In this paper, we propose a novel visual storytelling approach with
Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on
the mined local storylines, a skip gated recurrent unit (sGRU) with delay
control is proposed to maintain longer range visual information. Second, by
using sGRU as basic units, the BMRNN is trained to align the local storylines
into the global sequential timeline. Third, a new training scheme with a
storyline-constrained objective function is proposed by jointly considering
both global and local matches. Experiments on three standard storytelling
datasets show that the BMRNN model outperforms the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 11:13:04 GMT"
}
] | 2016-06-03T00:00:00 | [
[
"Liu",
"Yu",
""
],
[
"Fu",
"Jianlong",
""
],
[
"Mei",
"Tao",
""
],
[
"Chen",
"Chang Wen",
""
]
] | TITLE: Storytelling of Photo Stream with Bidirectional Multi-thread Recurrent
Neural Network
ABSTRACT: Visual storytelling aims to generate human-level narrative language (i.e., a
natural paragraph with multiple sentences) from a photo stream. A typical
photo story consists of a global timeline with multi-thread local storylines,
where each storyline occurs in a different scene. Such a complex structure
leads to large content gaps at scene transitions between consecutive photos.
Most existing image/video captioning methods can only achieve limited
performance, because the units in traditional recurrent neural networks (RNN)
tend to "forget" the previous state when the visual sequence is inconsistent.
In this paper, we propose a novel visual storytelling approach with
Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on
the mined local storylines, a skip gated recurrent unit (sGRU) with delay
control is proposed to maintain longer range visual information. Second, by
using sGRU as basic units, the BMRNN is trained to align the local storylines
into the global sequential timeline. Third, a new training scheme with a
storyline-constrained objective function is proposed by jointly considering
both global and local matches. Experiments on three standard storytelling
datasets show that the BMRNN model outperforms the state-of-the-art methods.
| no_new_dataset | 0.948822 |
1508.05038 | Chris Thomas | Christopher Thomas and Adriana Kovashka | Seeing Behind the Camera: Identifying the Authorship of a Photograph | Dataset downloadable at http://www.cs.pitt.edu/~chris/photographer To
Appear in CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the novel problem of identifying the photographer behind a
photograph. To explore the feasibility of current computer vision techniques to
address this problem, we created a new dataset of over 180,000 images taken by
41 well-known photographers. Using this dataset, we examined the effectiveness
of a variety of features (low and high-level, including CNN features) at
identifying the photographer. We also trained a new deep convolutional neural
network for this task. Our results show that high-level features greatly
outperform low-level features. We provide qualitative results using these
learned models that give insight into our method's ability to distinguish
between photographers, and allow us to draw interesting conclusions about what
specific photographers shoot. We also demonstrate two applications of our
method.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 16:45:17 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Nov 2015 06:38:08 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Jun 2016 01:09:08 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Thomas",
"Christopher",
""
],
[
"Kovashka",
"Adriana",
""
]
] | TITLE: Seeing Behind the Camera: Identifying the Authorship of a Photograph
ABSTRACT: We introduce the novel problem of identifying the photographer behind a
photograph. To explore the feasibility of current computer vision techniques to
address this problem, we created a new dataset of over 180,000 images taken by
41 well-known photographers. Using this dataset, we examined the effectiveness
of a variety of features (low and high-level, including CNN features) at
identifying the photographer. We also trained a new deep convolutional neural
network for this task. Our results show that high-level features greatly
outperform low-level features. We provide qualitative results using these
learned models that give insight into our method's ability to distinguish
between photographers, and allow us to draw interesting conclusions about what
specific photographers shoot. We also demonstrate two applications of our
method.
| new_dataset | 0.959078 |
1606.00110 | Chris Thomas | Christopher Thomas | OpenSalicon: An Open Source Implementation of the Salicon Saliency Model | Github Repository: https://github.com/CLT29/OpenSALICON | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this technical report, we present our publicly downloadable implementation
of the SALICON saliency model. At the time of this writing, SALICON is one of
the top-performing saliency models on the MIT 300 fixation prediction dataset,
which evaluates how well an algorithm is able to predict where humans would
look in a given image. Recently, numerous models have achieved state-of-the-art
performance on this benchmark, but none of the top 5 performing models
(including SALICON) are available for download. To address this issue, we have
created a publicly downloadable implementation of the SALICON model. It is our
hope that our model will engender further research in visual attention modeling
by providing a baseline for comparison of other algorithms and a platform for
extending this implementation. The model we provide supports both training and
testing, enabling researchers to quickly fine-tune the model on their own
dataset. We also provide a pre-trained model and code for those users who only
need to generate saliency maps for images without training their own model.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 04:28:10 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Thomas",
"Christopher",
""
]
] | TITLE: OpenSalicon: An Open Source Implementation of the Salicon Saliency Model
ABSTRACT: In this technical report, we present our publicly downloadable implementation
of the SALICON saliency model. At the time of this writing, SALICON is one of
the top performing saliency models on the MIT 300 fixation prediction dataset
which evaluates how well an algorithm is able to predict where humans would
look in a given image. Recently, numerous models have achieved state-of-the-art
performance on this benchmark, but none of the top 5 performing models
(including SALICON) are available for download. To address this issue, we have
created a publicly downloadable implementation of the SALICON model. It is our
hope that our model will engender further research in visual attention modeling
by providing a baseline for comparison of other algorithms and a platform for
extending this implementation. The model we provide supports both training and
testing, enabling researchers to quickly fine-tune the model on their own
dataset. We also provide a pre-trained model and code for those users who only
need to generate saliency maps for images without training their own model.
| no_new_dataset | 0.943191 |
1606.00136 | Ichiro Takeuchi Prof. | Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi | Efficiently Bounding Optimal Solutions after Small Data Modification in
Large-Scale Empirical Risk Minimization | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study large-scale classification problems in changing environments where a
small part of the dataset is modified, and the effect of the data modification
must be quickly incorporated into the classifier. When the entire dataset is
large, even if the amount of the data modification is fairly small, the
computational cost of re-training the classifier would be prohibitively large.
In this paper, we propose a novel method for efficiently incorporating such a
data modification effect into the classifier without actually re-training it.
The proposed method provides bounds on the unknown optimal classifier with the
cost only proportional to the size of the data modification. We demonstrate
through numerical experiments that the proposed method provides sufficiently
tight bounds with negligible computational costs, especially when a small part
of the dataset is modified in a large-scale classification problem.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 06:56:17 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Hanada",
"Hiroyuki",
""
],
[
"Shibagaki",
"Atsushi",
""
],
[
"Sakuma",
"Jun",
""
],
[
"Takeuchi",
"Ichiro",
""
]
] | TITLE: Efficiently Bounding Optimal Solutions after Small Data Modification in
Large-Scale Empirical Risk Minimization
ABSTRACT: We study large-scale classification problems in changing environments where a
small part of the dataset is modified, and the effect of the data modification
must be quickly incorporated into the classifier. When the entire dataset is
large, even if the amount of the data modification is fairly small, the
computational cost of re-training the classifier would be prohibitively large.
In this paper, we propose a novel method for efficiently incorporating such a
data modification effect into the classifier without actually re-training it.
The proposed method provides bounds on the unknown optimal classifier with the
cost only proportional to the size of the data modification. We demonstrate
through numerical experiments that the proposed method provides sufficiently
tight bounds with negligible computational costs, especially when a small part
of the dataset is modified in a large-scale classification problem.
| no_new_dataset | 0.94868 |
1606.00210 | Duc Tam Hoang | Duc Tam Hoang and Shamil Chollampatt and Hwee Tou Ng | Exploiting N-Best Hypotheses to Improve an SMT Approach to Grammatical
Error Correction | Accepted for presentation at IJCAI-16 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grammatical error correction (GEC) is the task of detecting and correcting
grammatical errors in texts written by second language learners. The
statistical machine translation (SMT) approach to GEC, in which sentences
written by second language learners are translated to grammatically correct
sentences, has achieved state-of-the-art accuracy. However, the SMT approach is
unable to utilize global context. In this paper, we propose a novel approach to
improve the accuracy of GEC, by exploiting the n-best hypotheses generated by
an SMT approach. Specifically, we build a classifier to score the edits in the
n-best hypotheses. The classifier can be used to select appropriate edits or
re-rank the n-best hypotheses. We apply these methods to a state-of-the-art GEC
system that uses the SMT approach. Our experiments show that our methods
achieve statistically significant improvements in accuracy over the best
published results on a benchmark test dataset on GEC.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 10:32:28 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Hoang",
"Duc Tam",
""
],
[
"Chollampatt",
"Shamil",
""
],
[
"Ng",
"Hwee Tou",
""
]
] | TITLE: Exploiting N-Best Hypotheses to Improve an SMT Approach to Grammatical
Error Correction
ABSTRACT: Grammatical error correction (GEC) is the task of detecting and correcting
grammatical errors in texts written by second language learners. The
statistical machine translation (SMT) approach to GEC, in which sentences
written by second language learners are translated to grammatically correct
sentences, has achieved state-of-the-art accuracy. However, the SMT approach is
unable to utilize global context. In this paper, we propose a novel approach to
improve the accuracy of GEC, by exploiting the n-best hypotheses generated by
an SMT approach. Specifically, we build a classifier to score the edits in the
n-best hypotheses. The classifier can be used to select appropriate edits or
re-rank the n-best hypotheses. We apply these methods to a state-of-the-art GEC
system that uses the SMT approach. Our experiments show that our methods
achieve statistically significant improvements in accuracy over the best
published results on a benchmark test dataset on GEC.
| no_new_dataset | 0.949295 |
1606.00298 | Keunwoo Choi Mr | Keunwoo Choi, George Fazekas, Mark Sandler | Automatic tagging using deep convolutional neural networks | Accepted to ISMIR (International Society of Music Information
Retrieval) Conference 2016 | null | null | null | cs.SD cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a content-based automatic music tagging algorithm using fully
convolutional neural networks (FCNs). We evaluate different architectures
consisting of 2D convolutional layers and subsampling layers only. In the
experiments, we measure the AUC-ROC scores of the architectures with different
complexities and input types using the MagnaTagATune dataset, where a 4-layer
architecture shows state-of-the-art performance with mel-spectrogram input.
Furthermore, we evaluated the performance of the architectures with varying
numbers of layers on a larger dataset (the Million Song Dataset), and found that
deeper models outperformed the 4-layer architecture. The experiments show that
mel-spectrogram is an effective time-frequency representation for automatic
tagging and that more complex models benefit from more training data.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 14:18:08 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Choi",
"Keunwoo",
""
],
[
"Fazekas",
"George",
""
],
[
"Sandler",
"Mark",
""
]
] | TITLE: Automatic tagging using deep convolutional neural networks
ABSTRACT: We present a content-based automatic music tagging algorithm using fully
convolutional neural networks (FCNs). We evaluate different architectures
consisting of 2D convolutional layers and subsampling layers only. In the
experiments, we measure the AUC-ROC scores of the architectures with different
complexities and input types using the MagnaTagATune dataset, where a 4-layer
architecture shows state-of-the-art performance with mel-spectrogram input.
Furthermore, we evaluated the performance of the architectures with varying
numbers of layers on a larger dataset (the Million Song Dataset), and found that
deeper models outperformed the 4-layer architecture. The experiments show that
mel-spectrogram is an effective time-frequency representation for automatic
tagging and that more complex models benefit from more training data.
| no_new_dataset | 0.944022 |
1606.00372 | Rami Al-Rfou | Rami Al-Rfou and Marc Pickett and Javier Snaider and Yun-hsuan Sung
and Brian Strope and Ray Kurzweil | Conversational Contextual Cues: The Case of Personalization and History
for Response Ranking | 10 pages, 6 figures | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the task of modeling open-domain, multi-turn, unstructured,
multi-participant conversational dialogue. We specifically study the effect of
incorporating different elements of the conversation. Unlike previous efforts,
which focused on modeling messages and responses, we extend the modeling to
long context and participant's history. Our system does not rely on handwritten
rules or engineered features; instead, we train deep neural networks on a large
conversational dataset. In particular, we exploit the structure of Reddit
comments and posts to extract 2.1 billion messages and 133 million
conversations. We evaluate our models on the task of predicting the next
response in a conversation, and we find that modeling both context and
participants improves prediction accuracy.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 18:01:14 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Al-Rfou",
"Rami",
""
],
[
"Pickett",
"Marc",
""
],
[
"Snaider",
"Javier",
""
],
[
"Sung",
"Yun-hsuan",
""
],
[
"Strope",
"Brian",
""
],
[
"Kurzweil",
"Ray",
""
]
] | TITLE: Conversational Contextual Cues: The Case of Personalization and History
for Response Ranking
ABSTRACT: We investigate the task of modeling open-domain, multi-turn, unstructured,
multi-participant conversational dialogue. We specifically study the effect of
incorporating different elements of the conversation. Unlike previous efforts,
which focused on modeling messages and responses, we extend the modeling to
long context and participant's history. Our system does not rely on handwritten
rules or engineered features; instead, we train deep neural networks on a large
conversational dataset. In particular, we exploit the structure of Reddit
comments and posts to extract 2.1 billion messages and 133 million
conversations. We evaluate our models on the task of predicting the next
response in a conversation, and we find that modeling both context and
participants improves prediction accuracy.
| no_new_dataset | 0.94743 |
1606.00399 | Tianyi Zhou | Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin | Scaling Submodular Maximization via Pruned Submodularity Graphs | null | null | null | null | cs.LG math.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new random pruning method (called "submodular sparsification
(SS)") to reduce the cost of submodular maximization. The pruning is applied
via a "submodularity graph" over the $n$ ground elements, where each directed
edge is associated with a pairwise dependency defined by the submodular
function. In each step, SS prunes a $1-1/\sqrt{c}$ (for $c>1$) fraction of the
nodes using weights on edges computed based on only a small number ($O(\log
n)$) of randomly sampled nodes. The algorithm requires $\log_{\sqrt{c}}n$ steps
with a small and highly parallelizable per-step computation. An accuracy-speed
tradeoff parameter $c$, set as $c = 8$, leads to a fast shrink rate
$\sqrt{2}/4$ and small iteration complexity $\log_{2\sqrt{2}}n$. Analysis shows
that w.h.p., the greedy algorithm on the pruned set of size $O(\log^2 n)$ can
achieve a guarantee similar to that of processing the original dataset. In news
and video summarization tasks, SS is able to substantially reduce both
computational costs and memory usage, while maintaining (or even slightly
exceeding) the quality of the original (and much more costly) greedy algorithm.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2016 18:58:36 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Zhou",
"Tianyi",
""
],
[
"Ouyang",
"Hua",
""
],
[
"Chang",
"Yi",
""
],
[
"Bilmes",
"Jeff",
""
],
[
"Guestrin",
"Carlos",
""
]
] | TITLE: Scaling Submodular Maximization via Pruned Submodularity Graphs
ABSTRACT: We propose a new random pruning method (called "submodular sparsification
(SS)") to reduce the cost of submodular maximization. The pruning is applied
via a "submodularity graph" over the $n$ ground elements, where each directed
edge is associated with a pairwise dependency defined by the submodular
function. In each step, SS prunes a $1-1/\sqrt{c}$ (for $c>1$) fraction of the
nodes using weights on edges computed based on only a small number ($O(\log
n)$) of randomly sampled nodes. The algorithm requires $\log_{\sqrt{c}}n$ steps
with a small and highly parallelizable per-step computation. An accuracy-speed
tradeoff parameter $c$, set as $c = 8$, leads to a fast shrink rate
$\sqrt{2}/4$ and small iteration complexity $\log_{2\sqrt{2}}n$. Analysis shows
that w.h.p., the greedy algorithm on the pruned set of size $O(\log^2 n)$ can
achieve a guarantee similar to that of processing the original dataset. In news
and video summarization tasks, SS is able to substantially reduce both
computational costs and memory usage, while maintaining (or even slightly
exceeding) the quality of the original (and much more costly) greedy algorithm.
| no_new_dataset | 0.946941 |
1606.00405 | Carlo Maria Zw\"olf | Carlo Maria Zw\"olf, Nicolas Moreau, Marie-Lise Dubernet | New model for datasets citation and extraction reproducibility in VAMDC | 48 pages | null | 10.1016/j.jms.2016.04.009 | null | cs.DL physics.atom-ph physics.chem-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper we present a new paradigm for the identification of datasets
extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science
infrastructure. Such identification includes information on the origin and
version of the datasets, references associated with individual data in the
datasets, as well as timestamps linked to the extraction procedure. This
paradigm is described through the modifications of the language used to
exchange data within the VAMDC and through the services that will implement
those modifications. This new paradigm should enforce traceability of datasets,
favour reproducibility of datasets extraction, and facilitate the systematic
citation of the authors having originally measured and/or calculated the
extracted atomic and molecular data.
| [
{
"version": "v1",
"created": "Mon, 9 May 2016 19:14:35 GMT"
}
] | 2016-06-02T00:00:00 | [
[
"Zwölf",
"Carlo Maria",
""
],
[
"Moreau",
"Nicolas",
""
],
[
"Dubernet",
"Marie-Lise",
""
]
] | TITLE: New model for datasets citation and extraction reproducibility in VAMDC
ABSTRACT: In this paper we present a new paradigm for the identification of datasets
extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science
infrastructure. Such identification includes information on the origin and
version of the datasets, references associated with individual data in the
datasets, as well as timestamps linked to the extraction procedure. This
paradigm is described through the modifications of the language used to
exchange data within the VAMDC and through the services that will implement
those modifications. This new paradigm should enforce traceability of datasets,
favour reproducibility of dataset extraction, and facilitate the systematic
citation of the authors having originally measured and/or calculated the
extracted atomic and molecular data.
| no_new_dataset | 0.955361 |
1502.03473 | Shuai Li | Shuai Li and Alexandros Karatzoglou and Claudio Gentile | Collaborative Filtering Bandits | The 39th SIGIR (SIGIR 2016) | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical collaborative filtering and content-based filtering methods try to
learn a static recommendation model given training data. These approaches are
far from ideal in highly dynamic recommendation domains such as news
recommendation and computational advertisement, where the set of items and
users is very fluid. In this work, we investigate an adaptive clustering
technique for content recommendation based on exploration-exploitation
strategies in contextual multi-armed bandit settings. Our algorithm takes into
account the collaborative effects that arise due to the interaction of the
users with the items, by dynamically grouping users based on the items under
consideration and, at the same time, grouping items based on the similarity of
the clusterings induced over the users. The resulting algorithm thus takes
advantage of preference patterns in the data in a way akin to collaborative
filtering methods. We provide an empirical analysis on medium-size real-world
datasets, showing scalability and increased prediction performance (as measured
by click-through rate) over state-of-the-art methods for clustering bandits. We
also provide a regret analysis within a standard linear stochastic noise
setting.
| [
{
"version": "v1",
"created": "Wed, 11 Feb 2015 22:28:14 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Mar 2015 17:51:41 GMT"
},
{
"version": "v3",
"created": "Thu, 7 May 2015 17:03:39 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Dec 2015 17:24:07 GMT"
},
{
"version": "v5",
"created": "Wed, 30 Mar 2016 10:29:12 GMT"
},
{
"version": "v6",
"created": "Wed, 11 May 2016 15:17:30 GMT"
},
{
"version": "v7",
"created": "Tue, 31 May 2016 18:47:03 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Li",
"Shuai",
""
],
[
"Karatzoglou",
"Alexandros",
""
],
[
"Gentile",
"Claudio",
""
]
] | TITLE: Collaborative Filtering Bandits
ABSTRACT: Classical collaborative filtering and content-based filtering methods try to
learn a static recommendation model given training data. These approaches are
far from ideal in highly dynamic recommendation domains such as news
recommendation and computational advertisement, where the set of items and
users is very fluid. In this work, we investigate an adaptive clustering
technique for content recommendation based on exploration-exploitation
strategies in contextual multi-armed bandit settings. Our algorithm takes into
account the collaborative effects that arise due to the interaction of the
users with the items, by dynamically grouping users based on the items under
consideration and, at the same time, grouping items based on the similarity of
the clusterings induced over the users. The resulting algorithm thus takes
advantage of preference patterns in the data in a way akin to collaborative
filtering methods. We provide an empirical analysis on medium-size real-world
datasets, showing scalability and increased prediction performance (as measured
by click-through rate) over state-of-the-art methods for clustering bandits. We
also provide a regret analysis within a standard linear stochastic noise
setting.
| no_new_dataset | 0.948965 |
1602.06291 | Shalini Ghosh | Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, Larry
Heck | Contextual LSTM (CLSTM) models for Large scale NLP tasks | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Documents exhibit sequential structure at multiple levels of abstraction
(e.g., sentences, paragraphs, sections). These abstractions constitute a
natural hierarchy for representing the context in which to infer the meaning of
words and larger fragments of text. In this paper, we present CLSTM (Contextual
LSTM), an extension of the recurrent neural network LSTM (Long-Short Term
Memory) model, where we incorporate contextual features (e.g., topics) into the
model. We evaluate CLSTM on three specific NLP tasks: word prediction, next
sentence selection, and sentence topic prediction. Results from experiments run
on two corpora, English documents in Wikipedia and a subset of articles from a
recent snapshot of English Google News, indicate that using both words and
topics as features improves performance of the CLSTM models over baseline LSTM
models for these tasks. For example, on the next sentence selection task, we get
relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the
Google News dataset. This clearly demonstrates the significant benefit of using
context appropriately in natural language (NL) tasks. This has implications for
a wide variety of NL applications like question answering, sentence completion,
paraphrase generation, and next utterance prediction in dialog systems.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2016 20:52:08 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 17:19:09 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Ghosh",
"Shalini",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Strope",
"Brian",
""
],
[
"Roy",
"Scott",
""
],
[
"Dean",
"Tom",
""
],
[
"Heck",
"Larry",
""
]
] | TITLE: Contextual LSTM (CLSTM) models for Large scale NLP tasks
ABSTRACT: Documents exhibit sequential structure at multiple levels of abstraction
(e.g., sentences, paragraphs, sections). These abstractions constitute a
natural hierarchy for representing the context in which to infer the meaning of
words and larger fragments of text. In this paper, we present CLSTM (Contextual
LSTM), an extension of the recurrent neural network LSTM (Long-Short Term
Memory) model, where we incorporate contextual features (e.g., topics) into the
model. We evaluate CLSTM on three specific NLP tasks: word prediction, next
sentence selection, and sentence topic prediction. Results from experiments run
on two corpora, English documents in Wikipedia and a subset of articles from a
recent snapshot of English Google News, indicate that using both words and
topics as features improves performance of the CLSTM models over baseline LSTM
models for these tasks. For example, on the next sentence selection task, we get
relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the
Google News dataset. This clearly demonstrates the significant benefit of using
context appropriately in natural language (NL) tasks. This has implications for
a wide variety of NL applications like question answering, sentence completion,
paraphrase generation, and next utterance prediction in dialog systems.
| no_new_dataset | 0.951233 |
1603.02501 | Harish Ramaswamy | Harish G. Ramaswamy and Clayton Scott and Ambuj Tewari | Mixture Proportion Estimation via Kernel Embedding of Distributions | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixture proportion estimation (MPE) is the problem of estimating the weight
of a component distribution in a mixture, given samples from the mixture and
component. This problem constitutes a key part in many "weakly supervised
learning" problems like learning with positive and unlabelled samples, learning
with label noise, anomaly detection and crowdsourcing. While there have been
several methods proposed to solve this problem, to the best of our knowledge no
efficient algorithm with a proven convergence rate towards the true proportion
exists for this problem. We fill this gap by constructing a provably correct
algorithm for MPE, and derive convergence rates under certain assumptions on
the distribution. Our method is based on embedding distributions onto an RKHS,
and implementing it only requires solving a simple convex quadratic programming
problem a few times. We run our algorithm on several standard classification
datasets, and demonstrate that it performs comparably to or better than other
algorithms on most datasets.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 12:43:29 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 16:41:44 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Ramaswamy",
"Harish G.",
""
],
[
"Scott",
"Clayton",
""
],
[
"Tewari",
"Ambuj",
""
]
] | TITLE: Mixture Proportion Estimation via Kernel Embedding of Distributions
ABSTRACT: Mixture proportion estimation (MPE) is the problem of estimating the weight
of a component distribution in a mixture, given samples from the mixture and
component. This problem constitutes a key part in many "weakly supervised
learning" problems like learning with positive and unlabelled samples, learning
with label noise, anomaly detection and crowdsourcing. While there have been
several methods proposed to solve this problem, to the best of our knowledge no
efficient algorithm with a proven convergence rate towards the true proportion
exists for this problem. We fill this gap by constructing a provably correct
algorithm for MPE, and derive convergence rates under certain assumptions on
the distribution. Our method is based on embedding distributions onto an RKHS,
and implementing it only requires solving a simple convex quadratic programming
problem a few times. We run our algorithm on several standard classification
datasets, and demonstrate that it performs comparably to or better than other
algorithms on most datasets.
| no_new_dataset | 0.9462 |
1605.09346 | Jean-Baptiste Alayrac | Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K.
Dokania, Simon Lacoste-Julien | Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs | Appears in Proceedings of the 33rd International Conference on
Machine Learning (ICML 2016). 31 pages | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose several improvements on the block-coordinate
Frank-Wolfe (BCFW) algorithm from Lacoste-Julien et al. (2013) recently used to
optimize the structured support vector machine (SSVM) objective in the context
of structured prediction, though it has wider applications. The key intuition
behind our improvements is that the estimates of block gaps maintained by BCFW
reveal the block suboptimality that can be used as an adaptive criterion.
First, we sample objects at each iteration of BCFW in an adaptive non-uniform
way via gap-based sampling. Second, we incorporate pairwise and away-step
variants of Frank-Wolfe into the block-coordinate setting. Third, we cache
oracle calls with a cache-hit criterion based on the block gaps. Fourth, we
provide the first method to compute an approximate regularization path for
SSVM. Finally, we provide an exhaustive empirical evaluation of all our methods
on four structured prediction datasets.
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 18:15:30 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Osokin",
"Anton",
""
],
[
"Alayrac",
"Jean-Baptiste",
""
],
[
"Lukasewitz",
"Isabella",
""
],
[
"Dokania",
"Puneet K.",
""
],
[
"Lacoste-Julien",
"Simon",
""
]
] | TITLE: Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs
ABSTRACT: In this paper, we propose several improvements on the block-coordinate
Frank-Wolfe (BCFW) algorithm from Lacoste-Julien et al. (2013) recently used to
optimize the structured support vector machine (SSVM) objective in the context
of structured prediction, though it has wider applications. The key intuition
behind our improvements is that the estimates of block gaps maintained by BCFW
reveal the block suboptimality that can be used as an adaptive criterion.
First, we sample objects at each iteration of BCFW in an adaptive non-uniform
way via gap-based sampling. Second, we incorporate pairwise and away-step
variants of Frank-Wolfe into the block-coordinate setting. Third, we cache
oracle calls with a cache-hit criterion based on the block gaps. Fourth, we
provide the first method to compute an approximate regularization path for
SSVM. Finally, we provide an exhaustive empirical evaluation of all our methods
on four structured prediction datasets.
| no_new_dataset | 0.947817 |
1605.09452 | Yang Liu | Yang Liu, Minh Hoai, Mang Shao, Tae-Kyun Kim | Latent Bi-constraint SVM for Video-based Object Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the task of recognizing objects from video input. This important
problem is relatively unexplored, compared with image-based object recognition.
To this end, we make the following contributions. First, we introduce two
comprehensive datasets for video-based object recognition. Second, we propose
Latent Bi-constraint SVM (LBSVM), a maximum-margin framework for video-based
object recognition. LBSVM is based on Structured-Output SVM, but extends it to
handle noisy video data and ensure consistency of the output decision
throughout time. We apply LBSVM to recognize office objects and museum
sculptures, and we demonstrate its benefits over image-based, set-based, and
other video-based object recognition.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 00:34:37 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Hoai",
"Minh",
""
],
[
"Shao",
"Mang",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: Latent Bi-constraint SVM for Video-based Object Recognition
ABSTRACT: We address the task of recognizing objects from video input. This important
problem is relatively unexplored, compared with image-based object recognition.
To this end, we make the following contributions. First, we introduce two
comprehensive datasets for video-based object recognition. Second, we propose
Latent Bi-constraint SVM (LBSVM), a maximum-margin framework for video-based
object recognition. LBSVM is based on Structured-Output SVM, but extends it to
handle noisy video data and ensure consistency of the output decision
throughout time. We apply LBSVM to recognize office objects and museum
sculptures, and we demonstrate its benefits over image-based, set-based, and
other video-based object recognition.
| new_dataset | 0.953708 |
1605.09458 | Hui Shen | Hui Shen, Dehua Li, Hong Wu, Zhaoxiang Zang | Training Auto-encoders Effectively via Eliminating Task-irrelevant Input
Variables | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auto-encoders are often used as building blocks of deep network classifiers to
learn feature extractors, but task-irrelevant information in the input data may
lead to bad extractors and result in poor generalization performance of the
network. In this paper, we show that the performance of auto-encoders can be
noticeably improved by dropping the task-irrelevant input variables.
Specifically, an importance-based variable selection method is proposed to find
the task-irrelevant input variables and drop them. It first estimates the
importance of each variable, and then drops the variables whose importance
value is lower than a threshold. To obtain better performance, the method can
be employed for each layer of stacked auto-encoders. Experimental results show
that, when combined with our method, stacked denoising auto-encoders achieve
significantly improved performance on three challenging datasets.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 00:58:47 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Shen",
"Hui",
""
],
[
"Li",
"Dehua",
""
],
[
"Wu",
"Hong",
""
],
[
"Zang",
"Zhaoxiang",
""
]
] | TITLE: Training Auto-encoders Effectively via Eliminating Task-irrelevant Input
Variables
ABSTRACT: Auto-encoders are often used as building blocks of deep network classifiers to
learn feature extractors, but task-irrelevant information in the input data may
lead to bad extractors and result in poor generalization performance of the
network. In this paper, we show that the performance of auto-encoders can be
noticeably improved by dropping the task-irrelevant input variables.
Specifically, an importance-based variable selection method is proposed to find
the task-irrelevant input variables and drop them. It first estimates the
importance of each variable, and then drops the variables whose importance
value is lower than a threshold. To obtain better performance, the method can
be employed for each layer of stacked auto-encoders. Experimental results show
that, when combined with our method, stacked denoising auto-encoders achieve
significantly improved performance on three challenging datasets.
| no_new_dataset | 0.947962 |
1605.09473 | Yu Wang | Yu Wang and Yang Feng and Xiyang Zhang and Richard Niemi and Jiebo Luo | Will Sanders Supporters Jump Ship for Trump? Fine-grained Analysis of
Twitter Followers | Election-series, 4 pages, 6 figures, under review for CIKM 2016.
arXiv admin note: substantial text overlap with arXiv:1605.05401 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the likelihood of Bernie Sanders supporters voting
for Donald Trump instead of Hillary Clinton. Building from a unique time-series
dataset of the three candidates' Twitter followers, which we make public here,
we first study the proportion of Sanders followers who simultaneously follow
Trump (but not Clinton) and how this evolves over time. Then we train a
convolutional neural network to classify the gender of Sanders followers, and
study whether men are more likely to jump ship for Trump than women. Our study
shows that between March and May an increasing proportion of Sanders followers
are following Trump (but not Clinton). The proportion of Sanders followers who
follow Clinton but not Trump has actually decreased. Equally important, our
study suggests that the jumping ship behavior will be affected by gender and
that men are more likely to switch to Trump than women.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 02:51:15 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Wang",
"Yu",
""
],
[
"Feng",
"Yang",
""
],
[
"Zhang",
"Xiyang",
""
],
[
"Niemi",
"Richard",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: Will Sanders Supporters Jump Ship for Trump? Fine-grained Analysis of
Twitter Followers
ABSTRACT: In this paper, we study the likelihood of Bernie Sanders supporters voting
for Donald Trump instead of Hillary Clinton. Building from a unique time-series
dataset of the three candidates' Twitter followers, which we make public here,
we first study the proportion of Sanders followers who simultaneously follow
Trump (but not Clinton) and how this evolves over time. Then we train a
convolutional neural network to classify the gender of Sanders followers, and
study whether men are more likely to jump ship for Trump than women. Our study
shows that between March and May an increasing proportion of Sanders followers
are following Trump (but not Clinton). The proportion of Sanders followers who
follow Clinton but not Trump has actually decreased. Equally important, our
study suggests that the jumping ship behavior will be affected by gender and
that men are more likely to switch to Trump than women.
| new_dataset | 0.952131 |
1605.09477 | Yin Zheng | Yin Zheng, Bangsheng Tang, Wenkui Ding, Hanning Zhou | A Neural Autoregressive Approach to Collaborative Filtering | Accepted by ICML2016 | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes CF-NADE, a neural autoregressive architecture for
collaborative filtering (CF) tasks, which is inspired by the Restricted
Boltzmann Machine (RBM) based CF model and the Neural Autoregressive
Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF
tasks. Then we propose to improve the model by sharing parameters between
different ratings. A factored version of CF-NADE is also proposed for better
scalability. Furthermore, we take the ordinal nature of the preferences into
consideration and propose an ordinal cost to optimize CF-NADE, which shows
superior performance. Finally, CF-NADE can be extended to a deep model, with
only moderately increased computational complexity. Experimental results show
that CF-NADE with a single hidden layer beats all previous state-of-the-art
methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more
hidden layers can further improve the performance.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 03:07:06 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Zheng",
"Yin",
""
],
[
"Tang",
"Bangsheng",
""
],
[
"Ding",
"Wenkui",
""
],
[
"Zhou",
"Hanning",
""
]
] | TITLE: A Neural Autoregressive Approach to Collaborative Filtering
ABSTRACT: This paper proposes CF-NADE, a neural autoregressive architecture for
collaborative filtering (CF) tasks, which is inspired by the Restricted
Boltzmann Machine (RBM) based CF model and the Neural Autoregressive
Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF
tasks. Then we propose to improve the model by sharing parameters between
different ratings. A factored version of CF-NADE is also proposed for better
scalability. Furthermore, we take the ordinal nature of the preferences into
consideration and propose an ordinal cost to optimize CF-NADE, which shows
superior performance. Finally, CF-NADE can be extended to a deep model, with
only moderately increased computational complexity. Experimental results show
that CF-NADE with a single hidden layer beats all previous state-of-the-art
methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more
hidden layers can further improve the performance.
| no_new_dataset | 0.947039 |
1605.09546 | Miaomiao Liu | Miaomiao Liu, Mathieu Salzmann, Xuming He | Semantic-Aware Depth Super-Resolution in Outdoor Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While depth sensors are becoming increasingly popular, their spatial
resolution often remains limited. Depth super-resolution therefore emerged as a
solution to this problem. Despite much progress, state-of-the-art techniques
suffer from two drawbacks: (i) they rely on the assumption that intensity edges
coincide with depth discontinuities, which, unfortunately, is only true in
controlled environments; and (ii) they typically exploit the availability of
high-resolution training depth maps, which can often not be acquired in
practice due to the sensors' limitations. By contrast, here, we introduce an
approach to performing depth super-resolution in more challenging conditions,
such as in outdoor scenes. To this end, we first propose to exploit semantic
information to better constrain the super-resolution process. In particular, we
design a co-sparse analysis model that learns filters from joint intensity,
depth and semantic information. Furthermore, we show how low-resolution
training depth maps can be employed in our learning strategy. We demonstrate
the benefits of our approach over state-of-the-art depth super-resolution
methods on two outdoor scene datasets.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 09:37:55 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Liu",
"Miaomiao",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"He",
"Xuming",
""
]
] | TITLE: Semantic-Aware Depth Super-Resolution in Outdoor Scenes
ABSTRACT: While depth sensors are becoming increasingly popular, their spatial
resolution often remains limited. Depth super-resolution therefore emerged as a
solution to this problem. Despite much progress, state-of-the-art techniques
suffer from two drawbacks: (i) they rely on the assumption that intensity edges
coincide with depth discontinuities, which, unfortunately, is only true in
controlled environments; and (ii) they typically exploit the availability of
high-resolution training depth maps, which can often not be acquired in
practice due to the sensors' limitations. By contrast, here, we introduce an
approach to performing depth super-resolution in more challenging conditions,
such as in outdoor scenes. To this end, we first propose to exploit semantic
information to better constrain the super-resolution process. In particular, we
design a co-sparse analysis model that learns filters from joint intensity,
depth and semantic information. Furthermore, we show how low-resolution
training depth maps can be employed in our learning strategy. We demonstrate
the benefits of our approach over state-of-the-art depth super-resolution
methods on two outdoor scene datasets.
| no_new_dataset | 0.948632 |
1605.09721 | Dimitris Papailiopoulos | Xinghao Pan, Maximilian Lam, Stephen Tu, Dimitris Papailiopoulos, Ce
Zhang, Michael I. Jordan, Kannan Ramchandran, Chris Re, Benjamin Recht | CYCLADES: Conflict-free Asynchronous Machine Learning | null | null | null | null | stat.ML cs.DC cs.DS cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present CYCLADES, a general framework for parallelizing stochastic
optimization algorithms in a shared memory setting. CYCLADES is asynchronous
during shared model updates, and requires no memory locking mechanisms, similar
to HOGWILD!-type algorithms. Unlike HOGWILD!, CYCLADES introduces no conflicts
during the parallel execution, and offers a black-box analysis for provable
speedups across a large family of algorithms. Due to its inherent conflict-free
nature and cache locality, our multi-core implementation of CYCLADES
consistently outperforms HOGWILD!-type algorithms on sufficiently sparse
datasets, leading to up to 40% speedup gains compared to the HOGWILD!
implementation of SGD, and up to 5x gains over asynchronous implementations of
variance reduction algorithms.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 17:15:01 GMT"
}
] | 2016-06-01T00:00:00 | [
[
"Pan",
"Xinghao",
""
],
[
"Lam",
"Maximilian",
""
],
[
"Tu",
"Stephen",
""
],
[
"Papailiopoulos",
"Dimitris",
""
],
[
"Zhang",
"Ce",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Ramchandran",
"Kannan",
""
],
[
"Re",
"Chris",
""
],
[
"Recht",
"Benjamin",
""
]
] | TITLE: CYCLADES: Conflict-free Asynchronous Machine Learning
ABSTRACT: We present CYCLADES, a general framework for parallelizing stochastic
optimization algorithms in a shared memory setting. CYCLADES is asynchronous
during shared model updates, and requires no memory locking mechanisms, similar
to HOGWILD!-type algorithms. Unlike HOGWILD!, CYCLADES introduces no conflicts
during the parallel execution, and offers a black-box analysis for provable
speedups across a large family of algorithms. Due to its inherent conflict-free
nature and cache locality, our multi-core implementation of CYCLADES
consistently outperforms HOGWILD!-type algorithms on sufficiently sparse
datasets, leading to up to 40% speedup gains compared to the HOGWILD!
implementation of SGD, and up to 5x gains over asynchronous implementations of
variance reduction algorithms.
| no_new_dataset | 0.942188 |
1509.01618 | Chengtao Li | Chengtao Li, Stefanie Jegelka and Suvrit Sra | Efficient Sampling for k-Determinantal Point Processes | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determinantal Point Processes (DPPs) are elegant probabilistic models of
repulsion and diversity over discrete sets of items. But their applicability to
large sets is hindered by expensive cubic-complexity matrix operations for
basic tasks such as sampling. In light of this, we propose a new method for
approximate sampling from discrete $k$-DPPs. Our method takes advantage of the
diversity property of subsets sampled from a DPP, and proceeds in two stages:
first it constructs coresets for the ground set of items; thereafter, it
efficiently samples subsets based on the constructed coresets. As opposed to
previous approaches, our algorithm aims to minimize the total variation
distance to the original distribution. Experiments on both synthetic and real
datasets indicate that our sampling algorithm works efficiently on large data
sets, and yields more accurate samples than previous approaches.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2015 21:38:17 GMT"
},
{
"version": "v2",
"created": "Sat, 28 May 2016 00:37:56 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Li",
"Chengtao",
""
],
[
"Jegelka",
"Stefanie",
""
],
[
"Sra",
"Suvrit",
""
]
] | TITLE: Efficient Sampling for k-Determinantal Point Processes
ABSTRACT: Determinantal Point Processes (DPPs) are elegant probabilistic models of
repulsion and diversity over discrete sets of items. But their applicability to
large sets is hindered by expensive cubic-complexity matrix operations for
basic tasks such as sampling. In light of this, we propose a new method for
approximate sampling from discrete $k$-DPPs. Our method takes advantage of the
diversity property of subsets sampled from a DPP, and proceeds in two stages:
first it constructs coresets for the ground set of items; thereafter, it
efficiently samples subsets based on the constructed coresets. As opposed to
previous approaches, our algorithm aims to minimize the total variation
distance to the original distribution. Experiments on both synthetic and real
datasets indicate that our sampling algorithm works efficiently on large data
sets, and yields more accurate samples than previous approaches.
| no_new_dataset | 0.949342 |
1511.00352 | Abhinav Maurya | Abhinav Maurya | Spatial Semantic Scan: Jointly Detecting Subtle Events and their Spatial
Footprint | 26 pages | null | null | null | cs.LG cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many methods have been proposed for detecting emerging events in text streams
using topic modeling. However, these methods have shortcomings that make them
unsuitable for rapid detection of locally emerging events on massive text
streams. We describe Spatially Compact Semantic Scan (SCSS) that has been
developed specifically to overcome the shortcomings of current methods in
detecting new spatially compact events in text streams. SCSS employs
alternating optimization between using semantic scan to estimate contrastive
foreground topics in documents, and discovering spatial neighborhoods with high
occurrence of documents containing the foreground topics. We evaluate our
method on Emergency Department chief complaints dataset (ED dataset) to verify
the effectiveness of our method in detecting real-world disease outbreaks from
free-text ED chief complaint data.
| [
{
"version": "v1",
"created": "Mon, 2 Nov 2015 01:45:41 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 03:01:41 GMT"
},
{
"version": "v3",
"created": "Sat, 28 May 2016 18:59:48 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Maurya",
"Abhinav",
""
]
] | TITLE: Spatial Semantic Scan: Jointly Detecting Subtle Events and their Spatial
Footprint
ABSTRACT: Many methods have been proposed for detecting emerging events in text streams
using topic modeling. However, these methods have shortcomings that make them
unsuitable for rapid detection of locally emerging events on massive text
streams. We describe Spatially Compact Semantic Scan (SCSS) that has been
developed specifically to overcome the shortcomings of current methods in
detecting new spatially compact events in text streams. SCSS employs
alternating optimization between using semantic scan to estimate contrastive
foreground topics in documents, and discovering spatial neighborhoods with high
occurrence of documents containing the foreground topics. We evaluate our
method on Emergency Department chief complaints dataset (ED dataset) to verify
the effectiveness of our method in detecting real-world disease outbreaks from
free-text ED chief complaint data.
| no_new_dataset | 0.902395 |
1512.01110 | Yang Song | Yang Song, Jun Zhu | Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization | Accepted to AAAI 2016 | null | null | null | cs.NA cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian matrix completion has been studied based on a low-rank matrix
factorization formulation with promising results. However, little work has been
done on Bayesian matrix completion based on the more direct spectral
regularization formulation. We fill this gap by presenting a novel Bayesian
matrix completion method based on spectral regularization. In order to
circumvent the difficulties of dealing with the orthonormality constraints of
singular vectors, we derive a new equivalent form with relaxed constraints,
which then leads us to design an adaptive version of spectral regularization
feasible for Bayesian inference. Our Bayesian method requires no parameter
tuning and can infer the number of latent factors automatically. Experiments on
synthetic and real datasets demonstrate encouraging results on rank recovery
and collaborative filtering, with notably good results for very sparse
matrices.
| [
{
"version": "v1",
"created": "Thu, 3 Dec 2015 15:16:19 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Dec 2015 02:51:22 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Song",
"Yang",
""
],
[
"Zhu",
"Jun",
""
]
] | TITLE: Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization
ABSTRACT: Bayesian matrix completion has been studied based on a low-rank matrix
factorization formulation with promising results. However, little work has been
done on Bayesian matrix completion based on the more direct spectral
regularization formulation. We fill this gap by presenting a novel Bayesian
matrix completion method based on spectral regularization. In order to
circumvent the difficulties of dealing with the orthonormality constraints of
singular vectors, we derive a new equivalent form with relaxed constraints,
which then leads us to design an adaptive version of spectral regularization
feasible for Bayesian inference. Our Bayesian method requires no parameter
tuning and can infer the number of latent factors automatically. Experiments on
synthetic and real datasets demonstrate encouraging results on rank recovery
and collaborative filtering, with notably good results for very sparse
matrices.
| no_new_dataset | 0.947575 |
1602.07416 | Chongxuan Li | Chongxuan Li, Jun Zhu and Bo Zhang | Learning to Generate with Memory | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memory units have been widely used to enrich the capabilities of deep
networks on capturing long-term dependencies in reasoning and prediction tasks,
but little investigation exists on deep generative models (DGMs) which are good
at inferring high-level invariant representations from unlabeled data. This
paper presents a deep generative model with a possibly large external memory
and an attention mechanism to capture the local detail information that is
often lost in the bottom-up abstraction process in representation learning. By
adopting a smooth attention model, the whole network is trained end-to-end by
optimizing a variational bound of data likelihood via auto-encoding variational
Bayesian methods, where an asymmetric recognition network is learnt jointly to
infer high-level invariant representations. The asymmetric architecture can
reduce the competition between bottom-up invariant feature extraction and
top-down generation of instance details. Our experiments on several datasets
demonstrate that memory can significantly boost the performance of DGMs and
even achieve state-of-the-art results on various tasks, including density
estimation, image generation, and missing value imputation.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 06:57:14 GMT"
},
{
"version": "v2",
"created": "Sat, 28 May 2016 03:41:27 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Li",
"Chongxuan",
""
],
[
"Zhu",
"Jun",
""
],
[
"Zhang",
"Bo",
""
]
] | TITLE: Learning to Generate with Memory
ABSTRACT: Memory units have been widely used to enrich the capabilities of deep
networks on capturing long-term dependencies in reasoning and prediction tasks,
but little investigation exists on deep generative models (DGMs) which are good
at inferring high-level invariant representations from unlabeled data. This
paper presents a deep generative model with a possibly large external memory
and an attention mechanism to capture the local detail information that is
often lost in the bottom-up abstraction process in representation learning. By
adopting a smooth attention model, the whole network is trained end-to-end by
optimizing a variational bound of data likelihood via auto-encoding variational
Bayesian methods, where an asymmetric recognition network is learnt jointly to
infer high-level invariant representations. The asymmetric architecture can
reduce the competition between bottom-up invariant feature extraction and
top-down generation of instance details. Our experiments on several datasets
demonstrate that memory can significantly boost the performance of DGMs and
even achieve state-of-the-art results on various tasks, including density
estimation, image generation, and missing value imputation.
| no_new_dataset | 0.947186 |
1603.00550 | Soravit Changpinyo | Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha | Synthesized Classifiers for Zero-Shot Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given semantic descriptions of object classes, zero-shot learning aims to
accurately recognize objects of the unseen classes, from which no examples are
available at the training stage, by associating them to the seen classes, from
which labeled examples are provided. We propose to tackle this problem from the
perspective of manifold learning. Our main idea is to align the semantic space
that is derived from external information to the model space that concerns
itself with recognizing visual features. To this end, we introduce a set of
"phantom" object classes whose coordinates live in both the semantic space and
the model space. Serving as bases in a dictionary, they can be optimized from
labeled data such that the synthesized real object classifiers achieve optimal
discriminative performance. We demonstrate superior accuracy of our approach
over the state of the art on four benchmark datasets for zero-shot learning,
including the full ImageNet Fall 2011 dataset with more than 20,000 unseen
classes.
| [
{
"version": "v1",
"created": "Wed, 2 Mar 2016 01:59:22 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 18:49:13 GMT"
},
{
"version": "v3",
"created": "Fri, 27 May 2016 21:48:48 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Changpinyo",
"Soravit",
""
],
[
"Chao",
"Wei-Lun",
""
],
[
"Gong",
"Boqing",
""
],
[
"Sha",
"Fei",
""
]
] | TITLE: Synthesized Classifiers for Zero-Shot Learning
ABSTRACT: Given semantic descriptions of object classes, zero-shot learning aims to
accurately recognize objects of the unseen classes, from which no examples are
available at the training stage, by associating them to the seen classes, from
which labeled examples are provided. We propose to tackle this problem from the
perspective of manifold learning. Our main idea is to align the semantic space
that is derived from external information to the model space that concerns
itself with recognizing visual features. To this end, we introduce a set of
"phantom" object classes whose coordinates live in both the semantic space and
the model space. Serving as bases in a dictionary, they can be optimized from
labeled data such that the synthesized real object classifiers achieve optimal
discriminative performance. We demonstrate superior accuracy of our approach
over the state of the art on four benchmark datasets for zero-shot learning,
including the full ImageNet Fall 2011 dataset with more than 20,000 unseen
classes.
| no_new_dataset | 0.930774 |
1605.08361 | Daniel Soudry | Daniel Soudry, Yair Carmon | No bad local minima: Data independent training error guarantees for
multilayer neural networks | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use smoothed analysis techniques to provide guarantees on the training
loss of Multilayer Neural Networks (MNNs) at differentiable local minima.
Specifically, we examine MNNs with piecewise linear activation functions,
quadratic loss and a single output, under mild over-parametrization. We prove
that for a MNN with one hidden layer, the training error is zero at every
differentiable local minimum, for almost every dataset and dropout-like noise
realization. We then extend these results to the case of more than one hidden
layer. Our theoretical guarantees assume essentially nothing on the training
data, and are verified numerically. These results suggest why the highly
non-convex loss of such MNNs can be easily optimized using local updates (e.g.,
stochastic gradient descent), as observed empirically.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 16:51:05 GMT"
},
{
"version": "v2",
"created": "Mon, 30 May 2016 04:33:39 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Soudry",
"Daniel",
""
],
[
"Carmon",
"Yair",
""
]
] | TITLE: No bad local minima: Data independent training error guarantees for
multilayer neural networks
ABSTRACT: We use smoothed analysis techniques to provide guarantees on the training
loss of Multilayer Neural Networks (MNNs) at differentiable local minima.
Specifically, we examine MNNs with piecewise linear activation functions,
quadratic loss and a single output, under mild over-parametrization. We prove
that for a MNN with one hidden layer, the training error is zero at every
differentiable local minimum, for almost every dataset and dropout-like noise
realization. We then extend these results to the case of more than one hidden
layer. Our theoretical guarantees assume essentially nothing on the training
data, and are verified numerically. These results suggest why the highly
non-convex loss of such MNNs can be easily optimized using local updates (e.g.,
stochastic gradient descent), as observed empirically.
| no_new_dataset | 0.947624 |
1605.08846 | Meg Young | Meg Young | A Human-Centered Approach to Data Privacy : Political Economy, Power,
and Collective Data Subjects | This is a workshop paper accepted to the Human-Centered Data Science
Workshop at the Computer Supported Collaborative Work Conference in 2016 | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Researchers find weaknesses in current strategies for protecting privacy in
large datasets. Many anonymized datasets are reidentifiable, and norms for
offering data subjects notice and consent overemphasize individual
responsibility. Based on fieldwork with data managers in the City of Seattle, I
identify ways that these conventional approaches break down in practice.
Drawing on work from theorists in sociocultural anthropology, I propose that a
Human Centered Data Science move beyond concepts like dataset identifiability
and sensitivity toward a broader ontology of who is implicated by a dataset,
and new ways of anticipating how data can be combined and used.
| [
{
"version": "v1",
"created": "Sat, 28 May 2016 04:57:13 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Young",
"Meg",
""
]
] | TITLE: A Human-Centered Approach to Data Privacy : Political Economy, Power,
and Collective Data Subjects
ABSTRACT: Researchers find weaknesses in current strategies for protecting privacy in
large datasets. Many anonymized datasets are reidentifiable, and norms for
offering data subjects notice and consent overemphasize individual
responsibility. Based on fieldwork with data managers in the City of Seattle, I
identify ways that these conventional approaches break down in practice.
Drawing on work from theorists in sociocultural anthropology, I propose that a
Human Centered Data Science move beyond concepts like dataset identifiability
and sensitivity toward a broader ontology of who is implicated by a dataset,
and new ways of anticipating how data can be combined and used.
| no_new_dataset | 0.948632 |
1605.08912 | Rushil Anirudh | Rushil Anirudh, Vinay Venkataraman, Karthikeyan Natesan Ramamurthy,
Pavan Turaga | A Riemannian Framework for Statistical Analysis of Topological
Persistence Diagrams | Accepted at DiffCVML 2016 (CVPR 2016 Workshops) | null | null | null | math.AT cs.CG cs.CV math.DG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topological data analysis is becoming a popular way to study high dimensional
feature spaces without any contextual clues or assumptions. This paper concerns
itself with one popular topological feature, which is the number of
$d$-dimensional holes in the dataset, also known as the Betti-$d$ number. The
persistence of the Betti numbers over various scales is encoded into a
persistence diagram (PD), which indicates the birth and death times of these
holes as scale varies. A common way to compare PDs is by a point-to-point
matching, which is given by the $n$-Wasserstein metric. However, a big drawback
of this approach is the need to solve correspondence between points before
computing the distance; for $n$ points, the complexity grows according to
$\mathcal{O}(n^3)$. Instead, we propose to use an entirely new framework
built on Riemannian geometry, that models PDs as 2D probability density
functions that are represented in the square-root framework on a Hilbert
Sphere. The resulting space is much more intuitive with closed form expressions
for common operations. The distance metric is 1) correspondence-free and also
2) independent of the number of points in the dataset. The complexity of
computing distance between PDs now grows according to $\mathcal{O}(K^2)$, for a
$K \times K$ discretization of $[0,1]^2$. This also enables the use of existing
machinery in differential geometry towards statistical analysis of PDs such as
computing the mean, geodesics, classification etc. We report competitive
results with the Wasserstein metric, at a much lower computational load,
indicating the favorable properties of the proposed approach.
| [
{
"version": "v1",
"created": "Sat, 28 May 2016 16:55:40 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Anirudh",
"Rushil",
""
],
[
"Venkataraman",
"Vinay",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Turaga",
"Pavan",
""
]
] | TITLE: A Riemannian Framework for Statistical Analysis of Topological
Persistence Diagrams
ABSTRACT: Topological data analysis is becoming a popular way to study high dimensional
feature spaces without any contextual clues or assumptions. This paper concerns
itself with one popular topological feature, which is the number of
$d$-dimensional holes in the dataset, also known as the Betti-$d$ number. The
persistence of the Betti numbers over various scales is encoded into a
persistence diagram (PD), which indicates the birth and death times of these
holes as scale varies. A common way to compare PDs is by a point-to-point
matching, which is given by the $n$-Wasserstein metric. However, a big drawback
of this approach is the need to solve correspondence between points before
computing the distance; for $n$ points, the complexity grows according to
$\mathcal{O}(n^3)$. Instead, we propose to use an entirely new framework
built on Riemannian geometry, that models PDs as 2D probability density
functions that are represented in the square-root framework on a Hilbert
Sphere. The resulting space is much more intuitive with closed form expressions
for common operations. The distance metric is 1) correspondence-free and also
2) independent of the number of points in the dataset. The complexity of
computing distance between PDs now grows according to $\mathcal{O}(K^2)$, for a
$K \times K$ discretization of $[0,1]^2$. This also enables the use of existing
machinery in differential geometry towards statistical analysis of PDs such as
computing the mean, geodesics, classification etc. We report competitive
results with the Wasserstein metric, at a much lower computational load,
indicating the favorable properties of the proposed approach.
| no_new_dataset | 0.951188 |
1605.08961 | Anastasios Kyrillidis | Megasthenis Asteris, Anastasios Kyrillidis, Oluwasanmi Koyejo, Russell
Poldrack | A simple and provable algorithm for sparse diagonal CCA | To appear at ICML 2016, 14 pages, 4 figures | null | null | null | stat.ML cs.DS cs.IT math.IT math.OC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given two sets of variables, derived from a common set of samples, sparse
Canonical Correlation Analysis (CCA) seeks linear combinations of a small
number of variables in each set, such that the induced canonical variables are
maximally correlated. Sparse CCA is NP-hard.
We propose a novel combinatorial algorithm for sparse diagonal CCA, i.e.,
sparse CCA under the additional assumption that variables within each set are
standardized and uncorrelated. Our algorithm operates on a low rank
approximation of the input data and its computational complexity scales
linearly with the number of input variables. It is simple to implement, and
parallelizable. In contrast to most existing approaches, our algorithm
offers precise control over the sparsity of the extracted canonical vectors,
and comes with theoretical data-dependent global approximation guarantees, that
hinge on the spectrum of the input data. Finally, it can be straightforwardly
adapted to other constrained variants of CCA enforcing structure beyond
sparsity.
We empirically evaluate the proposed scheme and apply it on a real
neuroimaging dataset to investigate associations between brain activity and
behavior measurements.
| [
{
"version": "v1",
"created": "Sun, 29 May 2016 03:56:23 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Asteris",
"Megasthenis",
""
],
[
"Kyrillidis",
"Anastasios",
""
],
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Poldrack",
"Russell",
""
]
] | TITLE: A simple and provable algorithm for sparse diagonal CCA
ABSTRACT: Given two sets of variables, derived from a common set of samples, sparse
Canonical Correlation Analysis (CCA) seeks linear combinations of a small
number of variables in each set, such that the induced canonical variables are
maximally correlated. Sparse CCA is NP-hard.
We propose a novel combinatorial algorithm for sparse diagonal CCA, i.e.,
sparse CCA under the additional assumption that variables within each set are
standardized and uncorrelated. Our algorithm operates on a low rank
approximation of the input data and its computational complexity scales
linearly with the number of input variables. It is simple to implement, and
parallelizable. In contrast to most existing approaches, our algorithm
offers precise control over the sparsity of the extracted canonical vectors,
and comes with theoretical data-dependent global approximation guarantees, that
hinge on the spectrum of the input data. Finally, it can be straightforwardly
adapted to other constrained variants of CCA enforcing structure beyond
sparsity.
We empirically evaluate the proposed scheme and apply it on a real
neuroimaging dataset to investigate associations between brain activity and
behavior measurements.
| no_new_dataset | 0.943867 |
1605.09062 | S Shankar | Yoad Lewenberg, Yoram Bachrach, Sukrit Shankar, Antonio Criminisi | Predicting Personal Traits from Facial Images using Convolutional Neural
Networks Augmented with Facial Landmark Information | 7 pages, 5 figures, IJCAI 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the task of predicting various traits of a person given an image
of their face. We estimate both objective traits, such as gender, ethnicity and
hair-color; as well as subjective traits, such as the emotion a person
expresses or whether he is humorous or attractive. For sizeable
experimentation, we contribute a new Face Attributes Dataset (FAD), having
roughly 200,000 attribute labels for the above traits, for over 10,000 facial
images. Due to the recent surge of research on Deep Convolutional Neural
Networks (CNNs), we begin by using a CNN architecture for estimating facial
attributes and show that they indeed provide an impressive baseline
performance. To further improve performance, we propose a novel approach that
incorporates facial landmark information for input images as an additional
channel, helping the CNN learn better attribute-specific features so that the
landmarks across various training images hold correspondence. We empirically
analyse the performance of our method, showing consistent improvement over the
baseline across traits.
| [
{
"version": "v1",
"created": "Sun, 29 May 2016 21:07:10 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Lewenberg",
"Yoad",
""
],
[
"Bachrach",
"Yoram",
""
],
[
"Shankar",
"Sukrit",
""
],
[
"Criminisi",
"Antonio",
""
]
] | TITLE: Predicting Personal Traits from Facial Images using Convolutional Neural
Networks Augmented with Facial Landmark Information
ABSTRACT: We consider the task of predicting various traits of a person given an image
of their face. We estimate both objective traits, such as gender, ethnicity and
hair-color; as well as subjective traits, such as the emotion a person
expresses or whether he is humorous or attractive. For sizeable
experimentation, we contribute a new Face Attributes Dataset (FAD), having
roughly 200,000 attribute labels for the above traits, for over 10,000 facial
images. Due to the recent surge of research on Deep Convolutional Neural
Networks (CNNs), we begin by using a CNN architecture for estimating facial
attributes and show that they indeed provide an impressive baseline
performance. To further improve performance, we propose a novel approach that
incorporates facial landmark information for input images as an additional
channel, helping the CNN learn better attribute-specific features so that the
landmarks across various training images hold correspondence. We empirically
analyse the performance of our method, showing consistent improvement over the
baseline across traits.
| new_dataset | 0.954308 |
1605.09114 | Miguel \'A. Carreira-Perpi\~n\'an | Miguel \'A. Carreira-Perpi\~n\'an and Mehdi Alizadeh | ParMAC: distributed optimisation of nested functions, with application
to learning binary autoencoders | 40 pages, 13 figures. The abstract appearing here is slightly shorter
than the one in the PDF file because of the arXiv's limitation of the
abstract field to 1920 characters | null | null | null | cs.LG cs.DC cs.NE math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many powerful machine learning models are based on the composition of
multiple processing layers, such as deep nets, which gives rise to nonconvex
objective functions. A general, recent approach to optimise such "nested"
functions is the method of auxiliary coordinates (MAC). MAC introduces an
auxiliary coordinate for each data point in order to decouple the nested model
into independent submodels. This decomposes the optimisation into steps that
alternate between training single layers and updating the coordinates. It has
the advantage that it reuses existing single-layer algorithms, introduces
parallelism, and does not need to use chain-rule gradients, so it works with
nondifferentiable layers. With large-scale problems, or when distributing the
computation is necessary for faster training, the dataset may not fit in a
single machine. It is then essential to limit the amount of communication
between machines so it does not obliterate the benefit of parallelism. We
describe a general way to achieve this, ParMAC. ParMAC works on a cluster of
processing machines with a circular topology and alternates two steps until
convergence: one step trains the submodels in parallel using stochastic
updates, and the other trains the coordinates in parallel. Only submodel
parameters, no data or coordinates, are ever communicated between machines.
ParMAC exhibits high parallelism, low communication overhead, and facilitates
data shuffling, load balancing, fault tolerance and streaming data processing.
We study the convergence of ParMAC and propose a theoretical model of its
runtime and parallel speedup. We develop ParMAC to learn binary autoencoders
for fast, approximate image retrieval. We implement it in MPI in a distributed
system and demonstrate nearly perfect speedups in a 128-processor cluster with
a training set of 100 million high-dimensional points.
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 06:31:14 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Carreira-Perpiñán",
"Miguel Á.",
""
],
[
"Alizadeh",
"Mehdi",
""
]
] | TITLE: ParMAC: distributed optimisation of nested functions, with application
to learning binary autoencoders
ABSTRACT: Many powerful machine learning models are based on the composition of
multiple processing layers, such as deep nets, which gives rise to nonconvex
objective functions. A general, recent approach to optimise such "nested"
functions is the method of auxiliary coordinates (MAC). MAC introduces an
auxiliary coordinate for each data point in order to decouple the nested model
into independent submodels. This decomposes the optimisation into steps that
alternate between training single layers and updating the coordinates. It has
the advantage that it reuses existing single-layer algorithms, introduces
parallelism, and does not need to use chain-rule gradients, so it works with
nondifferentiable layers. With large-scale problems, or when distributing the
computation is necessary for faster training, the dataset may not fit in a
single machine. It is then essential to limit the amount of communication
between machines so it does not obliterate the benefit of parallelism. We
describe a general way to achieve this, ParMAC. ParMAC works on a cluster of
processing machines with a circular topology and alternates two steps until
convergence: one step trains the submodels in parallel using stochastic
updates, and the other trains the coordinates in parallel. Only submodel
parameters, no data or coordinates, are ever communicated between machines.
ParMAC exhibits high parallelism, low communication overhead, and facilitates
data shuffling, load balancing, fault tolerance and streaming data processing.
We study the convergence of ParMAC and propose a theoretical model of its
runtime and parallel speedup. We develop ParMAC to learn binary autoencoders
for fast, approximate image retrieval. We implement it in MPI in a distributed
system and demonstrate nearly perfect speedups in a 128-processor cluster with
a training set of 100 million high-dimensional points.
| no_new_dataset | 0.942771 |
1605.09211 | Brendan Jou | Brendan Jou and Shih-Fu Chang | Going Deeper for Multilingual Visual Sentiment Detection | technical report, 7 pages | null | null | null | cs.MM cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This technical report details several improvements to the visual concept
detector banks built on images from the Multilingual Visual Sentiment Ontology
(MVSO). The detector banks are trained to detect a total of 9,918
sentiment-biased visual concepts from six major languages: English, Spanish,
Italian, French, German and Chinese. In the original MVSO release,
adjective-noun pair (ANP) detectors were trained for the six languages using an
AlexNet-styled architecture by fine-tuning from DeepSentiBank. Here, through a
more extensive set of experiments, parameter tuning, and training runs, we
detail and release higher accuracy models for detecting ANPs across six
languages from the same image pool and setting as in the original release using
a more modern architecture, GoogLeNet, providing comparable or better
performance with reduced network parameter cost.
In addition, since the image pool in MVSO can be corrupted by user noise from
social interactions, we partitioned out a sub-corpus of MVSO images based on
tag-restricted queries for higher fidelity labels. We show that as a result of
these higher fidelity labels, higher performing AlexNet-styled ANP detectors
can be trained using the tag-restricted image subset as compared to the models
in the full corpus. We release all these newly trained models for public research
use along with the list of tag-restricted images from the MVSO dataset.
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 12:57:44 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Jou",
"Brendan",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: Going Deeper for Multilingual Visual Sentiment Detection
ABSTRACT: This technical report details several improvements to the visual concept
detector banks built on images from the Multilingual Visual Sentiment Ontology
(MVSO). The detector banks are trained to detect a total of 9,918
sentiment-biased visual concepts from six major languages: English, Spanish,
Italian, French, German and Chinese. In the original MVSO release,
adjective-noun pair (ANP) detectors were trained for the six languages using an
AlexNet-styled architecture by fine-tuning from DeepSentiBank. Here, through a
more extensive set of experiments, parameter tuning, and training runs, we
detail and release higher accuracy models for detecting ANPs across six
languages from the same image pool and setting as in the original release using
a more modern architecture, GoogLeNet, providing comparable or better
performance with reduced network parameter cost.
In addition, since the image pool in MVSO can be corrupted by user noise from
social interactions, we partitioned out a sub-corpus of MVSO images based on
tag-restricted queries for higher fidelity labels. We show that as a result of
these higher fidelity labels, higher performing AlexNet-styled ANP detectors
can be trained using the tag-restricted image subset as compared to the models
in the full corpus. We release all these newly trained models for public research
use along with the list of tag-restricted images from the MVSO dataset.
| no_new_dataset | 0.947186 |
1605.09299 | Eirikur Agustsson | Eirikur Agustsson, Radu Timofte and Luc Van Gool | k2-means for fast and accurate large scale clustering | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose k^2-means, a new clustering method which efficiently copes with
large numbers of clusters and achieves low energy solutions. k^2-means builds
upon the standard k-means (Lloyd's algorithm) and combines a new strategy to
accelerate the convergence with a new low time complexity divisive
initialization. The accelerated convergence is achieved through only looking at
k_n nearest clusters and using triangle inequality bounds in the assignment
step while the divisive initialization employs an optimal 2-clustering along a
direction. The worst-case time complexity per iteration of our k^2-means is
O(nk_nd+k^2d), where d is the dimension of the n data points and k is the
number of clusters, and usually k_n << k << n. Compared to k-means' O(nkd)
complexity, our k^2-means complexity is significantly lower, at the expense of
slightly increasing the memory complexity by O(nk_n+k^2). In our extensive
experiments k^2-means is order(s) of magnitude faster than standard methods in
computing accurate clusterings on several standard datasets and settings with
hundreds of clusters and high dimensional data. Moreover, the proposed divisive
initialization generally leads to clustering energies comparable to those
achieved with the standard k-means++ initialization, while being significantly
faster.
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 16:17:45 GMT"
}
] | 2016-05-31T00:00:00 | [
[
"Agustsson",
"Eirikur",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: k2-means for fast and accurate large scale clustering
ABSTRACT: We propose k^2-means, a new clustering method which efficiently copes with
large numbers of clusters and achieves low energy solutions. k^2-means builds
upon the standard k-means (Lloyd's algorithm) and combines a new strategy to
accelerate the convergence with a new low time complexity divisive
initialization. The accelerated convergence is achieved through only looking at
k_n nearest clusters and using triangle inequality bounds in the assignment
step while the divisive initialization employs an optimal 2-clustering along a
direction. The worst-case time complexity per iteration of our k^2-means is
O(nk_nd+k^2d), where d is the dimension of the n data points and k is the
number of clusters, and usually k_n << k << n. Compared to k-means' O(nkd)
complexity, our k^2-means complexity is significantly lower, at the expense of
slightly increasing the memory complexity by O(nk_n+k^2). In our extensive
experiments k^2-means is order(s) of magnitude faster than standard methods in
computing accurate clusterings on several standard datasets and settings with
hundreds of clusters and high dimensional data. Moreover, the proposed divisive
initialization generally leads to clustering energies comparable to those
achieved with the standard k-means++ initialization, while being significantly
faster.
| no_new_dataset | 0.953057 |
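
A minimal sketch of the restricted assignment step behind the complexity claim in the record above is given below: each point is reassigned only among the k_n centroids nearest to its current centroid, which is where the O(n·k_n·d + k²·d) per-iteration cost comes from. The triangle-inequality pruning and the divisive initialization from the paper are omitted, and all sizes are illustrative.

```python
# k^2-means-style Lloyd iteration with nearest-cluster restriction.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, k_n = 2000, 16, 50, 5
X = rng.normal(size=(n, d))

centroids = X[rng.choice(n, size=k, replace=False)].copy()
assign = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

for _ in range(10):
    # O(k^2 d): pairwise centroid distances -> k_n nearest clusters each
    # (row j starts with j itself, so the current cluster is a candidate).
    cdist = ((centroids[:, None, :] - centroids[None]) ** 2).sum(-1)
    neighbors = np.argsort(cdist, axis=1)[:, :k_n]

    # O(n k_n d): each point only checks its centroid's neighbor list.
    cand = neighbors[assign]                          # (n, k_n)
    dists = ((X[:, None, :] - centroids[cand]) ** 2).sum(-1)
    assign = cand[np.arange(n), np.argmin(dists, axis=1)]

    # Standard centroid update.
    for j in range(k):
        pts = X[assign == j]
        if len(pts):
            centroids[j] = pts.mean(axis=0)

print("clustering energy:", ((X - centroids[assign]) ** 2).sum())
```
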
1503.00949 | Ramazan Gokberk Cinbis | Ramazan Gokberk Cinbis, Jakob Verbeek, Cordelia Schmid | Weakly Supervised Object Localization with Multi-fold Multiple Instance
Learning | To appear in IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI) | null | 10.1109/TPAMI.2016.2535231 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object category localization is a challenging problem in computer vision.
Standard supervised training requires bounding box annotations of object
instances. This time-consuming annotation process is sidestepped in weakly
supervised learning. In this case, the supervised information is restricted to
binary labels that indicate the absence/presence of object instances in the
image, without their locations. We follow a multiple-instance learning approach
that iteratively trains the detector and infers the object locations in the
positive training images. Our main contribution is a multi-fold multiple
instance learning procedure, which prevents training from prematurely locking
onto erroneous object locations. This procedure is particularly important when
using high-dimensional representations, such as Fisher vectors and
convolutional neural network features. We also propose a window refinement
method, which improves the localization accuracy by incorporating an objectness
prior. We present a detailed experimental evaluation using the PASCAL VOC 2007
dataset, which verifies the effectiveness of our approach.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 14:06:02 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Sep 2015 09:58:39 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Feb 2016 20:26:43 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Cinbis",
"Ramazan Gokberk",
""
],
[
"Verbeek",
"Jakob",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: Weakly Supervised Object Localization with Multi-fold Multiple Instance
Learning
ABSTRACT: Object category localization is a challenging problem in computer vision.
Standard supervised training requires bounding box annotations of object
instances. This time-consuming annotation process is sidestepped in weakly
supervised learning. In this case, the supervised information is restricted to
binary labels that indicate the absence/presence of object instances in the
image, without their locations. We follow a multiple-instance learning approach
that iteratively trains the detector and infers the object locations in the
positive training images. Our main contribution is a multi-fold multiple
instance learning procedure, which prevents training from prematurely locking
onto erroneous object locations. This procedure is particularly important when
using high-dimensional representations, such as Fisher vectors and
convolutional neural network features. We also propose a window refinement
method, which improves the localization accuracy by incorporating an objectness
prior. We present a detailed experimental evaluation using the PASCAL VOC 2007
dataset, which verifies the effectiveness of our approach.
| no_new_dataset | 0.948632 |
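
The multi-fold relocalization loop described in the record above can be sketched as follows: positive training images are split into folds, and the window selected in each fold is re-estimated by a detector trained only on the other folds, so the detector cannot lock onto its own earlier localization mistakes. Feature dimensions, window counts and the linear scorer below are illustrative stand-ins for the paper's Fisher-vector/CNN features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pos, n_win, d, n_folds = 60, 20, 32, 3

# Each positive image has n_win candidate windows; window 0 is the
# "true" object (mean shifted by w_obj), the rest are background-like.
w_obj = rng.normal(size=d)
pos_windows = rng.normal(size=(n_pos, n_win, d))
pos_windows[:, 0] += w_obj
neg_feats = rng.normal(size=(200, d))         # windows from negative images

selected = rng.integers(n_win, size=n_pos)    # initial (noisy) localizations
folds = np.arange(n_pos) % n_folds

for _ in range(5):                            # relocalization rounds
    new_selected = selected.copy()
    for f in range(n_folds):
        tr = np.flatnonzero(folds != f)
        ho = np.flatnonzero(folds == f)
        X = np.vstack([pos_windows[tr, selected[tr]], neg_feats])
        y = np.r_[np.ones(len(tr)), np.zeros(len(neg_feats))]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        # Relocalize the held-out fold with a detector it never trained.
        scores = clf.decision_function(
            pos_windows[ho].reshape(-1, d)).reshape(len(ho), n_win)
        new_selected[ho] = scores.argmax(axis=1)
    selected = new_selected

print("fraction of correctly localized windows:", (selected == 0).mean())
```
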
1506.02159 | Bamdev Mishra | Hiroyuki Kasai and Bamdev Mishra | Riemannian preconditioning for tensor completion | Supplementary material included in the paper. An extension of the
paper is in arXiv:1605.08257 | null | null | null | cs.NA cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel Riemannian preconditioning approach for the tensor
completion problem with rank constraint. A Riemannian metric or inner product
is proposed that exploits the least-squares structure of the cost function and
takes into account the structured symmetry in Tucker decomposition. The
specific metric allows to use the versatile framework of Riemannian
optimization on quotient manifolds to develop a preconditioned nonlinear
conjugate gradient algorithm for the problem. To this end, concrete matrix
representations of various optimization-related ingredients are listed.
Numerical comparisons suggest that our proposed algorithm robustly outperforms
state-of-the-art algorithms across different problem instances encompassing
various synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Sat, 6 Jun 2015 14:52:13 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 17:28:32 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Kasai",
"Hiroyuki",
""
],
[
"Mishra",
"Bamdev",
""
]
] | TITLE: Riemannian preconditioning for tensor completion
ABSTRACT: We propose a novel Riemannian preconditioning approach for the tensor
completion problem with rank constraint. A Riemannian metric or inner product
is proposed that exploits the least-squares structure of the cost function and
takes into account the structured symmetry in Tucker decomposition. The
specific metric allows to use the versatile framework of Riemannian
optimization on quotient manifolds to develop a preconditioned nonlinear
conjugate gradient algorithm for the problem. To this end, concrete matrix
representations of various optimization-related ingredients are listed.
Numerical comparisons suggest that our proposed algorithm robustly outperforms
state-of-the-art algorithms across different problem instances encompassing
various synthetic and real-world datasets.
| no_new_dataset | 0.944074 |
1506.03805 | Balaji Lakshminarayanan | Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh | Mondrian Forests for Large-Scale Regression when Uncertainty Matters | Proceedings of the 19th International Conference on Artificial
Intelligence and Statistics (AISTATS) 2016, Cadiz, Spain. JMLR: W&CP volume
51 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world regression problems demand a measure of the uncertainty
associated with each prediction. Standard decision forests deliver efficient
state-of-the-art predictive performance, but high-quality uncertainty estimates
are lacking. Gaussian processes (GPs) deliver uncertainty estimates, but
scaling GPs to large-scale data sets comes at the cost of approximating the
uncertainty estimates. We extend Mondrian forests, first proposed by
Lakshminarayanan et al. (2014) for classification problems, to the large-scale
non-parametric regression setting. Using a novel hierarchical Gaussian prior
that dovetails with the Mondrian forest framework, we obtain principled
uncertainty estimates, while still retaining the computational advantages of
decision forests. Through a combination of illustrative examples, real-world
large-scale datasets, and Bayesian optimization benchmarks, we demonstrate that
Mondrian forests outperform approximate GPs on large-scale regression tasks and
deliver better-calibrated uncertainty assessments than decision-forest-based
methods.
| [
{
"version": "v1",
"created": "Thu, 11 Jun 2015 19:55:02 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2015 18:10:07 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2016 11:43:13 GMT"
},
{
"version": "v4",
"created": "Fri, 27 May 2016 11:15:55 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Lakshminarayanan",
"Balaji",
""
],
[
"Roy",
"Daniel M.",
""
],
[
"Teh",
"Yee Whye",
""
]
] | TITLE: Mondrian Forests for Large-Scale Regression when Uncertainty Matters
ABSTRACT: Many real-world regression problems demand a measure of the uncertainty
associated with each prediction. Standard decision forests deliver efficient
state-of-the-art predictive performance, but high-quality uncertainty estimates
are lacking. Gaussian processes (GPs) deliver uncertainty estimates, but
scaling GPs to large-scale data sets comes at the cost of approximating the
uncertainty estimates. We extend Mondrian forests, first proposed by
Lakshminarayanan et al. (2014) for classification problems, to the large-scale
non-parametric regression setting. Using a novel hierarchical Gaussian prior
that dovetails with the Mondrian forest framework, we obtain principled
uncertainty estimates, while still retaining the computational advantages of
decision forests. Through a combination of illustrative examples, real-world
large-scale datasets, and Bayesian optimization benchmarks, we demonstrate that
Mondrian forests outperform approximate GPs on large-scale regression tasks and
deliver better-calibrated uncertainty assessments than decision-forest-based
methods.
| no_new_dataset | 0.948585 |
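
As a generic illustration of how a forest of probabilistic regressors can expose predictive uncertainty (this is not the paper's hierarchical Gaussian prior), per-tree Gaussian predictions can be treated as a uniform mixture and combined with the law of total variance; the numbers below are made up.

```python
import numpy as np

mus = np.array([1.9, 2.1, 2.4])          # per-tree predictive means
variances = np.array([0.10, 0.15, 0.08])  # per-tree predictive variances

mu = mus.mean()
var = variances.mean() + ((mus - mu) ** 2).mean()  # within- + between-tree
print(f"forest prediction: {mu:.3f} +/- {np.sqrt(var):.3f}")
```
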
1510.07338 | Alex Kantchelian | Brad Miller, Alex Kantchelian, Michael Carl Tschantz, Sadia Afroz,
Rekha Bachwani, Riyaz Faizullabhoy, Ling Huang, Vaishaal Shankar, Tony Wu,
George Yiu, Anthony D. Joseph, J. D. Tygar | Reviewer Integration and Performance Measurement for Malware Detection | 20 pages, 11 figures, accepted at the 13th Conference on Detection
of Intrusions and Malware & Vulnerability Assessment (DIMVA 2016) | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present and evaluate a large-scale malware detection system integrating
machine learning with expert reviewers, treating reviewers as a limited
labeling resource. We demonstrate that even in small numbers, reviewers can
vastly improve the system's ability to keep pace with evolving threats. We
conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years
and containing 1.1 million binaries with 778GB of raw feature data. Without
reviewer assistance, we achieve 72% detection at a 0.5% false positive rate,
performing comparably to the best vendors on VirusTotal. Given a budget of 80
accurate reviews daily, we improve detection to 89% and are able to detect 42%
of malicious binaries undetected upon initial submission to VirusTotal.
Additionally, we identify a previously unnoticed temporal inconsistency in the
labeling of training datasets. We compare the impact of training labels
obtained at the same time training data is first seen with training labels
obtained months later. We find that using training labels obtained well after
samples appear, and thus unavailable in practice for current training data,
inflates measured detection by almost 20 percentage points. We release our
cluster-based implementation, as well as a list of all hashes in our evaluation
and 3% of our entire dataset.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2015 00:40:43 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 01:43:10 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Miller",
"Brad",
""
],
[
"Kantchelian",
"Alex",
""
],
[
"Tschantz",
"Michael Carl",
""
],
[
"Afroz",
"Sadia",
""
],
[
"Bachwani",
"Rekha",
""
],
[
"Faizullabhoy",
"Riyaz",
""
],
[
"Huang",
"Ling",
""
],
[
"Shankar",
"Vaishaal",
""
],
[
"Wu",
"Tony",
""
],
[
"Yiu",
"George",
""
],
[
"Joseph",
"Anthony D.",
""
],
[
"Tygar",
"J. D.",
""
]
] | TITLE: Reviewer Integration and Performance Measurement for Malware Detection
ABSTRACT: We present and evaluate a large-scale malware detection system integrating
machine learning with expert reviewers, treating reviewers as a limited
labeling resource. We demonstrate that even in small numbers, reviewers can
vastly improve the system's ability to keep pace with evolving threats. We
conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years
and containing 1.1 million binaries with 778GB of raw feature data. Without
reviewer assistance, we achieve 72% detection at a 0.5% false positive rate,
performing comparably to the best vendors on VirusTotal. Given a budget of 80
accurate reviews daily, we improve detection to 89% and are able to detect 42%
of malicious binaries undetected upon initial submission to VirusTotal.
Additionally, we identify a previously unnoticed temporal inconsistency in the
labeling of training datasets. We compare the impact of training labels
obtained at the same time training data is first seen with training labels
obtained months later. We find that using training labels obtained well after
samples appear, and thus unavailable in practice for current training data,
inflates measured detection by almost 20 percentage points. We release our
cluster-based implementation, as well as a list of all hashes in our evaluation
and 3% of our entire dataset.
| new_dataset | 0.585268 |
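
One simple way to spend a fixed daily reviewer budget in a detector pipeline like the one above is sketched below: send the B binaries whose detector scores fall closest to the decision threshold for accurate manual labels, and auto-label the rest. The scoring model, threshold and budget are illustrative, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)       # today's detector scores, one per binary
threshold, budget = 0.5, 80

uncertainty = -np.abs(scores - threshold)        # closest to threshold first
review_idx = np.argsort(uncertainty)[-budget:]   # indices sent to reviewers

auto = np.ones(len(scores), dtype=bool)
auto[review_idx] = False
print(f"{len(review_idx)} sent to reviewers, {auto.sum()} auto-labeled "
      f"({(scores[auto] >= threshold).sum()} malicious, "
      f"{(scores[auto] < threshold).sum()} benign)")
```
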
1602.06042 | Nikhil Rao | Prateek Jain, Nikhil Rao, Inderjit Dhillon | Structured Sparse Regression via Greedy Hard-Thresholding | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several learning applications require solving high-dimensional regression
problems where the relevant features belong to a small number of (overlapping)
groups. For very large datasets and under standard sparsity constraints, hard
thresholding methods have proven to be extremely efficient, but such methods
require NP hard projections when dealing with overlapping groups. In this
paper, we show that such NP-hard projections can not only be avoided by
appealing to submodular optimization, but such methods come with strong
theoretical guarantees even in the presence of poorly conditioned data (i.e.
say when two features have correlation $\geq 0.99$), which existing analyses
cannot handle. These methods exhibit an interesting computation-accuracy
trade-off and can be extended to significantly harder problems such as sparse
overlapping groups. Experiments on both real and synthetic data validate our
claims and demonstrate that the proposed methods are orders of magnitude faster
than other greedy and convex relaxation techniques for learning with
group-structured sparsity.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2016 04:28:50 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 04:47:38 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Jain",
"Prateek",
""
],
[
"Rao",
"Nikhil",
""
],
[
"Dhillon",
"Inderjit",
""
]
] | TITLE: Structured Sparse Regression via Greedy Hard-Thresholding
ABSTRACT: Several learning applications require solving high-dimensional regression
problems where the relevant features belong to a small number of (overlapping)
groups. For very large datasets and under standard sparsity constraints, hard
thresholding methods have proven to be extremely efficient, but such methods
require NP-hard projections when dealing with overlapping groups. In this
paper, we show that such NP-hard projections can not only be avoided by
appealing to submodular optimization, but such methods come with strong
theoretical guarantees even in the presence of poorly conditioned data (i.e.
say when two features have correlation $\geq 0.99$), which existing analyses
cannot handle. These methods exhibit an interesting computation-accuracy
trade-off and can be extended to significantly harder problems such as sparse
overlapping groups. Experiments on both real and synthetic data validate our
claims and demonstrate that the proposed methods are orders of magnitude faster
than other greedy and convex relaxation techniques for learning with
group-structured sparsity.
| no_new_dataset | 0.948632 |
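
A minimal iterative hard-thresholding (IHT) sketch for plain (non-group) sparse regression is shown below: a gradient step followed by projection onto the set of s-sparse vectors. The paper's submodular projection for overlapping group sparsity is substantially more involved; this only illustrates the hard-thresholding template it builds on, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 200, 500, 10
X = rng.normal(size=(n, d)) / np.sqrt(n)       # ~unit-norm columns
w_true = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
w_true[support] = rng.choice([-1.0, 1.0], size=s) * (1 + rng.uniform(size=s))
y = X @ w_true

w = np.zeros(d)
for _ in range(100):
    w = w + X.T @ (y - X @ w)                  # gradient step (step size 1)
    keep = np.argsort(np.abs(w))[-s:]          # hard-threshold to s entries
    mask = np.zeros(d, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0

print("relative recovery error:",
      np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```
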
1604.08859 | Alexandre de Br\'ebisson | Alexandre de Br\'ebisson, Pascal Vincent | The Z-loss: a shift and scale invariant classification loss belonging to
the Spherical Family | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite being the standard loss function to train multi-class neural
networks, the log-softmax has two potential limitations. First, it involves
computations that scale linearly with the number of output classes, which can
restrict the size of problems we are able to tackle with current hardware.
Second, it remains unclear how close it matches the task loss such as the top-k
error rate or other non-differentiable evaluation metrics which we aim to
optimize ultimately. In this paper, we introduce an alternative classification
loss function, the Z-loss, which is designed to address these two issues.
Unlike the log-softmax, it has the desirable property of belonging to the
spherical loss family (Vincent et al., 2015), a class of loss functions for
which training can be performed very efficiently with a complexity independent
of the number of output classes. We show experimentally that it significantly
outperforms the other spherical loss functions previously investigated.
Furthermore, we show on a word language modeling task that it also outperforms
the log-softmax with respect to certain ranking scores, such as top-k scores,
suggesting that the Z-loss has the flexibility to better match the task loss.
These qualities thus make the Z-loss an appealing candidate for efficiently
training very large output networks such as word-language models or other extreme
classification problems. On the One Billion Word (Chelba et al., 2014) dataset,
we are able to train a model with the Z-loss 40 times faster than the
log-softmax and more than 4 times faster than the hierarchical softmax.
| [
{
"version": "v1",
"created": "Fri, 29 Apr 2016 14:53:00 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 15:17:34 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"de Brébisson",
"Alexandre",
""
],
[
"Vincent",
"Pascal",
""
]
] | TITLE: The Z-loss: a shift and scale invariant classification loss belonging to
the Spherical Family
ABSTRACT: Despite being the standard loss function to train multi-class neural
networks, the log-softmax has two potential limitations. First, it involves
computations that scale linearly with the number of output classes, which can
restrict the size of problems we are able to tackle with current hardware.
Second, it remains unclear how closely it matches the task loss, such as the
top-k error rate or other non-differentiable evaluation metrics, which we
ultimately aim to optimize. In this paper, we introduce an alternative classification
loss function, the Z-loss, which is designed to address these two issues.
Unlike the log-softmax, it has the desirable property of belonging to the
spherical loss family (Vincent et al., 2015), a class of loss functions for
which training can be performed very efficiently with a complexity independent
of the number of output classes. We show experimentally that it significantly
outperforms the other spherical loss functions previously investigated.
Furthermore, we show on a word language modeling task that it also outperforms
the log-softmax with respect to certain ranking scores, such as top-k scores,
suggesting that the Z-loss has the flexibility to better match the task loss.
These qualities thus make the Z-loss an appealing candidate for efficiently
training very large output networks such as word-language models or other extreme
classification problems. On the One Billion Word (Chelba et al., 2014) dataset,
we are able to train a model with the Z-loss 40 times faster than the
log-softmax and more than 4 times faster than the hierarchical softmax.
| no_new_dataset | 0.944995 |
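
The record above contrasts the Z-loss with the log-softmax baseline; the sketch below shows that baseline in a numerically stable form and makes its linear-in-classes cost concrete, since the normalizer sums over every class score. The Z-loss itself is defined in the paper and is not reproduced here; the function name and scores are illustrative.

```python
import numpy as np

def log_softmax_nll(scores, target):
    """Negative log-likelihood under softmax; O(K) in the class count K."""
    m = scores.max()                       # shift for numerical stability
    log_z = m + np.log(np.exp(scores - m).sum())
    return log_z - scores[target]

scores = np.array([2.0, -1.0, 0.5, 3.0])   # one score per class
print(log_softmax_nll(scores, target=3))
```
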
1605.08464 | Vivek Sharma | Vivek Sharma and Sule Yildirim-Yayilgan and Luc Van Gool | Low-Cost Scene Modeling using a Density Function Improves Segmentation
Performance | accepted for publication at 25th IEEE International Symposium on
Robot and Human Interactive Communication (RO-MAN), 2016 | null | null | null | cs.CV cs.AI cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | We propose a low-cost and effective way to combine free simulation software
and free CAD models for modeling human-object interaction in order to improve
human & object segmentation. It is intended for research scenarios related to
safe human-robot collaboration (SHRC) and interaction (SHRI) in the industrial
domain. The task of human and object modeling has been used for detecting
activity, and for inferring and predicting actions; differently from those works,
we do human and object modeling in order to learn interactions in RGB-D data
for improving segmentation. For this purpose, we define a novel density
function to model a three dimensional (3D) scene in a virtual environment
(VREP). This density function takes into account various possible
configurations of human-object and object-object relationships and interactions
governed by their affordances. Using this function, we synthesize a large,
realistic and highly varied synthetic RGB-D dataset that we use for training.
We train a random forest classifier, and the pixelwise predictions obtained are
integrated as a unary term in a pairwise conditional random field (CRF). Our
evaluation shows that modeling these interactions improves segmentation
performance by ~7\% in mean average precision and recall over state-of-the-art
methods that ignore these interactions in real-world data. Our approach is
computationally efficient, robust and can run real-time on consumer hardware.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 22:34:37 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Sharma",
"Vivek",
""
],
[
"Yildirim-Yayilgan",
"Sule",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Low-Cost Scene Modeling using a Density Function Improves Segmentation
Performance
ABSTRACT: We propose a low-cost and effective way to combine free simulation software
and free CAD models for modeling human-object interaction in order to improve
human & object segmentation. It is intended for research scenarios related to
safe human-robot collaboration (SHRC) and interaction (SHRI) in the industrial
domain. The task of human and object modeling has been used for detecting
activity, and for inferring and predicting actions; differently from those works,
we do human and object modeling in order to learn interactions in RGB-D data
for improving segmentation. For this purpose, we define a novel density
function to model a three dimensional (3D) scene in a virtual environment
(VREP). This density function takes into account various possible
configurations of human-object and object-object relationships and interactions
governed by their affordances. Using this function, we synthesize a large,
realistic and highly varied synthetic RGB-D dataset that we use for training.
We train a random forest classifier, and the pixelwise predictions obtained are
integrated as a unary term in a pairwise conditional random field (CRF). Our
evaluation shows that modeling these interactions improves segmentation
performance by ~7\% in mean average precision and recall over state-of-the-art
methods that ignore these interactions in real-world data. Our approach is
computationally efficient, robust and can run real-time on consumer hardware.
| new_dataset | 0.808521 |
1605.08680 | Ognjen Arandjelovi\'c PhD | Duc-Son Pham, Ognjen Arandjelovic, Svetha Venkatesh | Achieving stable subspace clustering by post-processing generic
clustering results | International Joint Conference on Neural Networks, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an effective subspace selection scheme as a post-processing step
to improve results obtained by sparse subspace clustering (SSC). Our method
starts by the computation of stable subspaces using a novel random sampling
scheme. Thus constructed preliminary subspaces are used to identify the
initially incorrectly clustered data points and then to reassign them to more
suitable clusters based on their goodness-of-fit to the preliminary model. To
improve the robustness of the algorithm, we use a dominant nearest subspace
classification scheme that controls the level of sensitivity against
reassignment. We demonstrate that our algorithm is convergent and superior to
the direct application of a generic alternative such as principal component
analysis. On several popular datasets for motion segmentation and face
clustering, pervasively used in the sparse subspace clustering literature, the
proposed method is shown to greatly reduce the incidence of clustering errors
while introducing negligible disturbance to the data points already correctly
clustered.
| [
{
"version": "v1",
"created": "Fri, 27 May 2016 15:15:04 GMT"
}
] | 2016-05-30T00:00:00 | [
[
"Pham",
"Duc-Son",
""
],
[
"Arandjelovic",
"Ognjen",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Achieving stable subspace clustering by post-processing generic
clustering results
ABSTRACT: We propose an effective subspace selection scheme as a post-processing step
to improve results obtained by sparse subspace clustering (SSC). Our method
starts by the computation of stable subspaces using a novel random sampling
scheme. Thus constructed preliminary subspaces are used to identify the
initially incorrectly clustered data points and then to reassign them to more
suitable clusters based on their goodness-of-fit to the preliminary model. To
improve the robustness of the algorithm, we use a dominant nearest subspace
classification scheme that controls the level of sensitivity against
reassignment. We demonstrate that our algorithm is convergent and superior to
the direct application of a generic alternative such as principal component
analysis. On several popular datasets for motion segmentation and face
clustering, pervasively used in the sparse subspace clustering literature, the
proposed method is shown to greatly reduce the incidence of clustering errors
while introducing negligible disturbance to the data points already correctly
clustered.
| no_new_dataset | 0.949435 |
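
The reassignment idea in the record above can be sketched as follows: fit a low-dimensional PCA basis to each preliminary cluster, then move each point to the cluster whose subspace reconstructs it with the smallest residual. The paper's stable-subspace sampling and dominance-controlled reassignment are omitted, and the synthetic data below is illustrative.

```python
import numpy as np

def fit_basis(points, r):
    """Orthonormal basis of the best rank-r subspace through the mean."""
    mu = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - mu, full_matrices=False)
    return mu, Vt[:r].T                       # (d,), (d, r)

def residual(x, mu, B):
    z = x - mu
    return np.linalg.norm(z - B @ (B.T @ z))  # distance to affine subspace

rng = np.random.default_rng(0)
d, r = 10, 2
# Two noisy rank-2 subspaces, plus a few initially mislabeled points.
X1 = rng.normal(size=(100, r)) @ rng.normal(size=(r, d))
X2 = rng.normal(size=(100, r)) @ rng.normal(size=(r, d))
X = np.vstack([X1, X2]) + 0.01 * rng.normal(size=(200, d))
truth = np.r_[np.zeros(100, int), np.ones(100, int)]
labels = truth.copy()
labels[:5] = 1                                # inject clustering errors

for _ in range(3):
    models = [fit_basis(X[labels == c], r) for c in (0, 1)]
    labels = np.array([np.argmin([residual(x, *m) for m in models])
                       for x in X])

print("errors remaining:", int((labels != truth).sum()))
```
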
1602.02373 | Rie Johnson | Rie Johnson, Tong Zhang | Supervised and Semi-Supervised Text Categorization using LSTM for Region
Embeddings | null | null | null | null | stat.ML cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One-hot CNN (convolutional neural network) has been shown to be effective for
text categorization (Johnson & Zhang, 2015). We view it as a special case of a
general framework which jointly trains a linear model with a non-linear feature
generator consisting of `text region embedding + pooling'. Under this
framework, we explore a more sophisticated region embedding method using Long
Short-Term Memory (LSTM). LSTM can embed text regions of variable (and possibly
large) sizes, whereas the region size needs to be fixed in a CNN. We seek
effective and efficient use of LSTM for this purpose in the supervised and
semi-supervised settings. The best results were obtained by combining region
embeddings in the form of LSTM and convolution layers trained on unlabeled
data. The results indicate that on this task, embeddings of text regions, which
can convey complex concepts, are more useful than embeddings of single words in
isolation. We report performances exceeding the previous best results on four
benchmark datasets.
| [
{
"version": "v1",
"created": "Sun, 7 Feb 2016 14:05:58 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2016 15:26:34 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Johnson",
"Rie",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Supervised and Semi-Supervised Text Categorization using LSTM for Region
Embeddings
ABSTRACT: One-hot CNN (convolutional neural network) has been shown to be effective for
text categorization (Johnson & Zhang, 2015). We view it as a special case of a
general framework which jointly trains a linear model with a non-linear feature
generator consisting of `text region embedding + pooling'. Under this
framework, we explore a more sophisticated region embedding method using Long
Short-Term Memory (LSTM). LSTM can embed text regions of variable (and possibly
large) sizes, whereas the region size needs to be fixed in a CNN. We seek
effective and efficient use of LSTM for this purpose in the supervised and
semi-supervised settings. The best results were obtained by combining region
embeddings in the form of LSTM and convolution layers trained on unlabeled
data. The results indicate that on this task, embeddings of text regions, which
can convey complex concepts, are more useful than embeddings of single words in
isolation. We report performances exceeding the previous best results on four
benchmark datasets.
| no_new_dataset | 0.947381 |
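
The `text region embedding + pooling' template named in the record above can be sketched in a few lines: embed each sliding window of words with some region embedder (here a plain mean of word vectors stands in for the paper's LSTM and CNN embedders), then max-pool region embeddings into a fixed-size document feature for a linear classifier. All sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, region = 1000, 64, 5
E = rng.normal(size=(vocab, dim))             # word embedding table

def doc_feature(word_ids):
    """Mean-embed each region of `region` words, then max-pool over regions."""
    regions = [E[word_ids[i:i + region]].mean(axis=0)
               for i in range(len(word_ids) - region + 1)]
    return np.max(regions, axis=0)

doc = rng.integers(vocab, size=40)
print(doc_feature(doc).shape)                 # (64,) regardless of doc length
```
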
1602.02660 | Sander Dieleman | Sander Dieleman, Jeffrey De Fauw, Koray Kavukcuoglu | Exploiting Cyclic Symmetry in Convolutional Neural Networks | 10 pages, 6 figures, accepted for publication at ICML 2016 | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many classes of images exhibit rotational symmetry. Convolutional neural
networks are sometimes trained using data augmentation to exploit this, but
they are still required to learn the rotation equivariance properties from the
data. Encoding these properties into the network architecture, as we are
already used to doing for translation equivariance by using convolutional
layers, could result in a more efficient use of the parameter budget by
relieving the model from learning them. We introduce four operations which can
be inserted into neural network models as layers, and which can be combined to
make these models partially equivariant to rotations. They also enable
parameter sharing across different orientations. We evaluate the effect of
these architectural modifications on three datasets which exhibit rotational
symmetry and demonstrate improved performance with smaller models.
| [
{
"version": "v1",
"created": "Mon, 8 Feb 2016 17:37:16 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2016 11:47:18 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Dieleman",
"Sander",
""
],
[
"De Fauw",
"Jeffrey",
""
],
[
"Kavukcuoglu",
"Koray",
""
]
] | TITLE: Exploiting Cyclic Symmetry in Convolutional Neural Networks
ABSTRACT: Many classes of images exhibit rotational symmetry. Convolutional neural
networks are sometimes trained using data augmentation to exploit this, but
they are still required to learn the rotation equivariance properties from the
data. Encoding these properties into the network architecture, as we are
already used to doing for translation equivariance by using convolutional
layers, could result in a more efficient use of the parameter budget by
relieving the model from learning them. We introduce four operations which can
be inserted into neural network models as layers, and which can be combined to
make these models partially equivariant to rotations. They also enable
parameter sharing across different orientations. We evaluate the effect of
these architectural modifications on three datasets which exhibit rotational
symmetry and demonstrate improved performance with smaller models.
| no_new_dataset | 0.95018 |
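
Two of the four cyclic operations the record above refers to can be sketched as follows: "cyclic slicing" stacks the four 90-degree rotations of each input along the batch axis, and "cyclic pooling" combines activations back over those four orientations, so layers placed between the two share parameters across orientations. This NumPy NHWC illustration is a simplification; in a real network the pooling is applied once spatial structure has been collapsed, or together with inverse rotations.

```python
import numpy as np

def cyclic_slice(x):
    """(B, H, W, C) -> (4B, H, W, C): batch of 4 rotated copies."""
    return np.concatenate([np.rot90(x, k, axes=(1, 2)) for k in range(4)])

def cyclic_pool(x):
    """(4B, ...) -> (B, ...): average activations over the 4 orientations."""
    b = x.shape[0] // 4
    return x.reshape(4, b, *x.shape[1:]).mean(axis=0)

x = np.arange(2 * 4 * 4 * 1, dtype=float).reshape(2, 4, 4, 1)
feats = cyclic_slice(x)            # rotation-shared layers would go here
print(cyclic_pool(feats).shape)    # (2, 4, 4, 1)
```
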
1605.05579 | Nauman Shahid | Nauman Shahid, Nathanael Perraudin, Pierre Vandergheynst | Low-Rank Matrices on Graphs: Generalized Recovery & Applications | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real world datasets subsume a linear or non-linear low-rank structure in
a very low-dimensional space. Unfortunately, one often has very little or no
information about the geometry of the space, resulting in a highly
under-determined recovery problem. Under certain circumstances,
state-of-the-art algorithms provide an exact recovery for linear low-rank
structures, but at the expense of highly unscalable algorithms which use the nuclear
norm. However, the case of non-linear structures remains unresolved. We revisit
the problem of low-rank recovery from a totally different perspective,
involving graphs which encode pairwise similarity between the data samples and
features. Surprisingly, our analysis confirms that it is possible to recover
many approximate linear and non-linear low-rank structures with recovery
guarantees with a set of highly scalable and efficient algorithms. We refer to such
data matrices as \textit{Low-Rank matrices on graphs} and show that many real
world datasets satisfy this assumption approximately due to underlying
stationarity. Our detailed theoretical and experimental analysis unveils the
power of the simple, yet very novel recovery framework \textit{Fast Robust PCA
on Graphs}.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 13:50:04 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2016 07:37:35 GMT"
},
{
"version": "v3",
"created": "Wed, 25 May 2016 20:50:42 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Shahid",
"Nauman",
""
],
[
"Perraudin",
"Nathanael",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Low-Rank Matrices on Graphs: Generalized Recovery & Applications
ABSTRACT: Many real world datasets subsume a linear or non-linear low-rank structure in
a very low-dimensional space. Unfortunately, one often has very little or no
information about the geometry of the space, resulting in a highly
under-determined recovery problem. Under certain circumstances,
state-of-the-art algorithms provide an exact recovery for linear low-rank
structures, but at the expense of highly unscalable algorithms which use the nuclear
norm. However, the case of non-linear structures remains unresolved. We revisit
the problem of low-rank recovery from a totally different perspective,
involving graphs which encode pairwise similarity between the data samples and
features. Surprisingly, our analysis confirms that it is possible to recover
many approximate linear and non-linear low-rank structures with recovery
guarantees with a set of highly scalable and efficient algorithms. We refer to such
data matrices as \textit{Low-Rank matrices on graphs} and show that many real
world datasets satisfy this assumption approximately due to underlying
stationarity. Our detailed theoretical and experimental analysis unveils the
power of the simple, yet very novel recovery framework \textit{Fast Robust PCA
on Graphs}.
| no_new_dataset | 0.941868 |
1605.08068 | Alireza Shafaei | Alireza Shafaei, James J. Little | Real-Time Human Motion Capture with Multiple Depth Cameras | Accepted to computer robot vision 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Commonly used human motion capture systems require intrusive attachment of
markers that are visually tracked with multiple cameras. In this work we
present an efficient and inexpensive solution to markerless motion capture
using only a few Kinect sensors. Unlike the previous work on 3D pose estimation
using a single depth camera, we relax constraints on the camera location and do
not assume a co-operative user. We apply recent image segmentation techniques
to depth images and use curriculum learning to train our system on purely
synthetic data. Our method accurately localizes body parts without requiring an
explicit shape model. The body joint locations are then recovered by combining
evidence from multiple views in real-time. We also introduce a dataset of ~6
million synthetic depth frames for pose estimation from multiple cameras and
exceed state-of-the-art results on the Berkeley MHAD dataset.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 20:52:28 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Shafaei",
"Alireza",
""
],
[
"Little",
"James J.",
""
]
] | TITLE: Real-Time Human Motion Capture with Multiple Depth Cameras
ABSTRACT: Commonly used human motion capture systems require intrusive attachment of
markers that are visually tracked with multiple cameras. In this work we
present an efficient and inexpensive solution to markerless motion capture
using only a few Kinect sensors. Unlike the previous work on 3D pose estimation
using a single depth camera, we relax constraints on the camera location and do
not assume a co-operative user. We apply recent image segmentation techniques
to depth images and use curriculum learning to train our system on purely
synthetic data. Our method accurately localizes body parts without requiring an
explicit shape model. The body joint locations are then recovered by combining
evidence from multiple views in real-time. We also introduce a dataset of ~6
million synthetic depth frames for pose estimation from multiple cameras and
exceed state-of-the-art results on the Berkeley MHAD dataset.
| new_dataset | 0.954351 |
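
The multi-view fusion step described in the record above can be sketched as mapping per-camera 3D joint estimates into a common world frame with each camera's known extrinsics and combining them as a confidence-weighted average; the camera poses, confidences and the single joint below are made up.

```python
import numpy as np

def to_world(p_cam, R, t):
    """Map a point from camera coordinates into the world frame."""
    return R @ p_cam + t

# (point in camera frame, rotation R, translation t, confidence)
views = [
    (np.array([0.10, 0.90, 1.00]), np.eye(3), np.array([0.9, 0.1, 0.0]), 0.8),
    (np.array([1.05, 0.95, 1.10]), np.eye(3), np.zeros(3), 0.5),
]

num, den = np.zeros(3), 0.0
for p_cam, R, t, conf in views:
    num += conf * to_world(p_cam, R, t)
    den += conf
print("fused joint (world frame):", num / den)
```
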
1605.08125 | Waqas Sultani Mr | Waqas Sultani and Mubarak Shah | Automatic Action Annotation in Weakly Labeled Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Manual spatio-temporal annotation of human action in videos is laborious,
requires several annotators and contains human biases. In this paper, we
present a weakly supervised approach to automatically obtain spatio-temporal
annotations of an actor in action videos. We first obtain a large number of
action proposals in each video. To capture a few of the most representative
action proposals in each video and avoid processing thousands of them, we rank
them using optical flow and saliency in a 3D-MRF-based framework and select a
few proposals using a MAP-based proposal subset selection method. We demonstrate that
this ranking preserves the high quality action proposals. Several such
proposals are generated for each video of the same action. Our next challenge
is to iteratively select one proposal from each video so that all proposals are
globally consistent. We formulate this as Generalized Maximum Clique Graph
problem using shape, global and fine grained similarity of proposals across the
videos. The output of our method is the most action-representative proposals
from each video. Our method can also annotate multiple instances of the same
action in a video. We have validated our approach on three challenging action
datasets: UCF Sport, sub-JHMDB and THUMOS'13 and have obtained promising
results compared to several baseline methods. Moreover, on UCF Sports, we
demonstrate that action classifiers trained on these automatically obtained
spatio-temporal annotations have comparable performance to the classifiers
trained on ground truth annotation.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 02:22:57 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Sultani",
"Waqas",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Automatic Action Annotation in Weakly Labeled Videos
ABSTRACT: Manual spatio-temporal annotation of human action in videos is laborious,
requires several annotators and contains human biases. In this paper, we
present a weakly supervised approach to automatically obtain spatio-temporal
annotations of an actor in action videos. We first obtain a large number of
action proposals in each video. To capture a few of the most representative
action proposals in each video and avoid processing thousands of them, we rank
them using optical flow and saliency in a 3D-MRF-based framework and select a
few proposals using a MAP-based proposal subset selection method. We demonstrate that
this ranking preserves the high quality action proposals. Several such
proposals are generated for each video of the same action. Our next challenge
is to iteratively select one proposal from each video so that all proposals are
globally consistent. We formulate this as Generalized Maximum Clique Graph
problem using shape, global and fine grained similarity of proposals across the
videos. The output of our method is the most action-representative proposals
from each video. Our method can also annotate multiple instances of the same
action in a video. We have validated our approach on three challenging action
datasets: UCF Sport, sub-JHMDB and THUMOS'13 and have obtained promising
results compared to several baseline methods. Moreover, on UCF Sports, we
demonstrate that action classifiers trained on these automatically obtained
spatio-temporal annotations have comparable performance to the classifiers
trained on ground truth annotation.
| no_new_dataset | 0.949482 |
1605.08257 | Hiroyuki Kasai | Hiroyuki Kasai and Bamdev Mishra | Low-rank tensor completion: a Riemannian manifold preconditioning
approach | The 33rd International Conference on Machine Learning (ICML 2016).
arXiv admin note: substantial text overlap with arXiv:1506.02159 | null | null | null | cs.LG cs.NA math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel Riemannian manifold preconditioning approach for the
tensor completion problem with rank constraint. A novel Riemannian metric or
inner product is proposed that exploits the least-squares structure of the cost
function and takes into account the structured symmetry that exists in Tucker
decomposition. The specific metric allows to use the versatile framework of
Riemannian optimization on quotient manifolds to develop preconditioned
nonlinear conjugate gradient and stochastic gradient descent algorithms for
batch and online setups, respectively. Concrete matrix representations of
various optimization-related ingredients are listed. Numerical comparisons
suggest that our proposed algorithms robustly outperform state-of-the-art
algorithms across different synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 12:55:02 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Kasai",
"Hiroyuki",
""
],
[
"Mishra",
"Bamdev",
""
]
] | TITLE: Low-rank tensor completion: a Riemannian manifold preconditioning
approach
ABSTRACT: We propose a novel Riemannian manifold preconditioning approach for the
tensor completion problem with rank constraint. A novel Riemannian metric or
inner product is proposed that exploits the least-squares structure of the cost
function and takes into account the structured symmetry that exists in Tucker
decomposition. The specific metric allows to use the versatile framework of
Riemannian optimization on quotient manifolds to develop preconditioned
nonlinear conjugate gradient and stochastic gradient descent algorithms for
batch and online setups, respectively. Concrete matrix representations of
various optimization-related ingredients are listed. Numerical comparisons
suggest that our proposed algorithms robustly outperform state-of-the-art
algorithms across different synthetic and real-world datasets.
| no_new_dataset | 0.945197 |
1605.08323 | Marius Leordeanu | Dragos Costea and Marius Leordeanu | Aerial image geolocalization from recognition and matching of roads and
intersections | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aerial image analysis at a semantic level is important in many applications
with strong potential impact in industry and consumer use, such as automated
mapping, urban planning, real estate and environment monitoring, or disaster
relief. The problem is enjoying a great interest in computer vision and remote
sensing, due to increased computer power and improvement in automated image
understanding algorithms. In this paper we address the task of automatic
geolocalization of aerial images from recognition and matching of roads and
intersections. Our proposed method is a novel contribution in the literature
that could enable many applications of aerial image analysis when GPS data is
not available. We offer a complete pipeline for geolocalization, from the
detection of roads and intersections, to the identification of the enclosing
geographic region by matching detected intersections to previously learned
manually labeled ones, followed by accurate geometric alignment between the
detected roads and the manually labeled maps. We test on a novel dataset with
aerial images of two European cities and use the publicly available
OpenStreetMap project for collecting ground truth roads annotations. We show in
extensive experiments that our approach produces highly accurate localizations
in the challenging case when we train on images from one city and test on the
other, and the quality of the aerial images is relatively poor. We also show
that the alignment between detected roads and pre-stored manual annotations
can be effectively used for improving the quality of the road detection
results.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 15:11:09 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Costea",
"Dragos",
""
],
[
"Leordeanu",
"Marius",
""
]
] | TITLE: Aerial image geolocalization from recognition and matching of roads and
intersections
ABSTRACT: Aerial image analysis at a semantic level is important in many applications
with strong potential impact in industry and consumer use, such as automated
mapping, urban planning, real estate and environment monitoring, or disaster
relief. The problem is enjoying a great interest in computer vision and remote
sensing, due to increased computer power and improvement in automated image
understanding algorithms. In this paper we address the task of automatic
geolocalization of aerial images from recognition and matching of roads and
intersections. Our proposed method is a novel contribution in the literature
that could enable many applications of aerial image analysis when GPS data is
not available. We offer a complete pipeline for geolocalization, from the
detection of roads and intersections, to the identification of the enclosing
geographic region by matching detected intersections to previously learned
manually labeled ones, followed by accurate geometric alignment between the
detected roads and the manually labeled maps. We test on a novel dataset with
aerial images of two European cities and use the publicly available
OpenStreetMap project for collecting ground truth roads annotations. We show in
extensive experiments that our approach produces highly accurate localizations
in the challenging case when we train on images from one city and test on the
other, and the quality of the aerial images is relatively poor. We also show
that the alignment between detected roads and pre-stored manual annotations
can be effectively used for improving the quality of the road detection
results.
| new_dataset | 0.963022 |
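
The final geometric alignment stage in the record above can be sketched as follows: given putative matches between detected intersections and map intersections, estimate a 2D similarity transform (scale s, rotation R, translation t) by orthogonal Procrustes, so detected roads can be registered onto the labeled map. The point sets below are synthetic stand-ins, and the reflection handling is the usual Kabsch sign fix.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares s, R, t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
map_pts = rng.uniform(0, 100, size=(12, 2))  # labeled map intersections
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
det_pts = ((map_pts - 50) @ R_true.T) / 1.7 + [3.0, -2.0]  # detections

s, R, t = similarity_transform(det_pts, map_pts)
aligned = s * det_pts @ R.T + t
print("mean alignment error:",
      np.linalg.norm(aligned - map_pts, axis=1).mean())
```
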
1605.08350 | Tizita Nesibu Shewaye Mrs | Tizita Nesibu Shewaye and Alhayat Ali Mekonnen | Benign-Malignant Lung Nodule Classification with Geometric and
Appearance Histogram Features | 5 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lung cancer accounts for the highest number of cancer deaths globally. Early
diagnosis of lung nodules is very important to reduce the mortality rate of
patients by improving the diagnosis and treatment of lung cancer. This work
proposes an automated system to classify lung nodules as malignant and benign
in CT images. It presents extensive experimental results using a combination of
geometric and histogram lung nodule image features and different linear and
non-linear discriminant classifiers. The proposed approach is experimentally
validated on the LIDC-IDRI public lung cancer screening thoracic computed
tomography (CT) dataset containing nodule level diagnostic data. The obtained
results are very encouraging correctly classifying 82% of malignant and 93% of
benign nodules on unseen test data at best.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 16:06:58 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Shewaye",
"Tizita Nesibu",
""
],
[
"Mekonnen",
"Alhayat Ali",
""
]
] | TITLE: Benign-Malignant Lung Nodule Classification with Geometric and
Appearance Histogram Features
ABSTRACT: Lung cancer accounts for the highest number of cancer deaths globally. Early
diagnosis of lung nodules is very important to reduce the mortality rate of
patients by improving the diagnosis and treatment of lung cancer. This work
proposes an automated system to classify lung nodules as malignant and benign
in CT images. It presents extensive experimental results using a combination of
geometric and histogram lung nodule image features and different linear and
non-linear discriminant classifiers. The proposed approach is experimentally
validated on the LIDC-IDRI public lung cancer screening thoracic computed
tomography (CT) dataset containing nodule level diagnostic data. The obtained
results are very encouraging, correctly classifying 82% of malignant and 93% of
benign nodules on unseen test data at best.
| new_dataset | 0.705024 |
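
The feature-plus-linear-discriminant pipeline described in the record above can be sketched as concatenating simple geometric descriptors with an intensity histogram per nodule and training scikit-learn's LDA; the nodule volumes, labels and feature choices below are synthetic placeholders, not the paper's descriptors.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def nodule_features(vol, mask):
    """Geometric (size, fill ratio) plus intensity-histogram features."""
    voxels = vol[mask]
    volume = mask.sum()
    extent = volume / np.prod(vol.shape)
    hist, _ = np.histogram(voxels, bins=16, range=(0, 1), density=True)
    return np.r_[volume, extent, hist]

X, y = [], []
for label in (0, 1):                          # 0: benign, 1: malignant
    for _ in range(40):
        vol = np.clip(rng.uniform(size=(16, 16, 16)) + 0.2 * label, 0, 1)
        mask = rng.uniform(size=vol.shape) < (0.3 + 0.2 * label)
        X.append(nodule_features(vol, mask))
        y.append(label)

clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```
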
1605.08359 | Edward Johns | Edward Johns and Stefan Leutenegger and Andrew J. Davison | Pairwise Decomposition of Image Sequences for Active Multi-View
Recognition | CVPR 2016 (oral) | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A multi-view image sequence provides a much richer capacity for object
recognition than from a single image. However, most existing solutions to
multi-view recognition typically adopt hand-crafted, model-based geometric
methods, which do not readily embrace recent trends in deep learning. We
propose to bring Convolutional Neural Networks to generic multi-view
recognition, by decomposing an image sequence into a set of image pairs,
classifying each pair independently, and then learning an object classifier by
weighting the contribution of each pair. This allows for recognition over
arbitrary camera trajectories, without requiring explicit training over the
potentially infinite number of camera paths and lengths. Building these
pairwise relationships then naturally extends to the next-best-view problem in
an active recognition framework. To achieve this, we train a second
Convolutional Neural Network to map directly from an observed image to the next
viewpoint. Finally, we incorporate this into a trajectory optimisation task,
whereby the best recognition confidence is sought for a given trajectory
length. We present state-of-the-art results in both guided and unguided
multi-view recognition on the ModelNet dataset, and show how our method can be
used with depth images, greyscale images, or both.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 16:44:19 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Johns",
"Edward",
""
],
[
"Leutenegger",
"Stefan",
""
],
[
"Davison",
"Andrew J.",
""
]
] | TITLE: Pairwise Decomposition of Image Sequences for Active Multi-View
Recognition
ABSTRACT: A multi-view image sequence provides a much richer capacity for object
recognition than from a single image. However, most existing solutions to
multi-view recognition typically adopt hand-crafted, model-based geometric
methods, which do not readily embrace recent trends in deep learning. We
propose to bring Convolutional Neural Networks to generic multi-view
recognition, by decomposing an image sequence into a set of image pairs,
classifying each pair independently, and then learning an object classifier by
weighting the contribution of each pair. This allows for recognition over
arbitrary camera trajectories, without requiring explicit training over the
potentially infinite number of camera paths and lengths. Building these
pairwise relationships then naturally extends to the next-best-view problem in
an active recognition framework. To achieve this, we train a second
Convolutional Neural Network to map directly from an observed image to the next
viewpoint. Finally, we incorporate this into a trajectory optimisation task,
whereby the best recognition confidence is sought for a given trajectory
length. We present state-of-the-art results in both guided and unguided
multi-view recognition on the ModelNet dataset, and show how our method can be
used with depth images, greyscale images, or both.
| no_new_dataset | 0.945045 |
1605.08396 | Simon Durand | S. Durand, J. P. Bello, B. David and G. Richard | Robust Downbeat Tracking Using an Ensemble of Convolutional Networks | null | null | null | null | cs.SD cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel state of the art system for automatic
downbeat tracking from music signals. The audio signal is first segmented into
frames which are synchronized at the tatum level of the music. We then extract
different kinds of features based on harmony, melody, rhythm and bass content to
feed convolutional neural networks that are adapted to take advantage of each
feature's characteristics. This ensemble of neural networks is combined to obtain
one downbeat likelihood per tatum. The downbeat sequence is finally decoded
with a flexible and efficient temporal model which takes advantage of the
metrical continuity of a song. We then perform an evaluation of our system on a
large base of 9 datasets, compare its performance to 4 other published
algorithms and obtain a significant increase of 16.8 percentage points over
the second-best system, at an altogether moderate cost in testing and training.
The influence of each step of the method is studied to show its strengths and
shortcomings.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 18:27:56 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Durand",
"S.",
""
],
[
"Bello",
"J. P.",
""
],
[
"David",
"B.",
""
],
[
"Richard",
"G.",
""
]
] | TITLE: Robust Downbeat Tracking Using an Ensemble of Convolutional Networks
ABSTRACT: In this paper, we present a novel state-of-the-art system for automatic
downbeat tracking from music signals. The audio signal is first segmented into
frames which are synchronized at the tatum level of the music. We then extract
different kinds of features based on harmony, melody, rhythm and bass content to
feed convolutional neural networks that are adapted to take advantage of each
feature's characteristics. This ensemble of neural networks is combined to obtain
one downbeat likelihood per tatum. The downbeat sequence is finally decoded
with a flexible and efficient temporal model which takes advantage of the
metrical continuity of a song. We then perform an evaluation of our system on a
large base of 9 datasets, compare its performance to 4 other published
algorithms and obtain a significant increase of 16.8 percentage points over
the second-best system, at an altogether moderate cost in testing and training.
The influence of each step of the method is studied to show its strengths and
shortcomings.
| no_new_dataset | 0.947235 |
1605.08401 | Jameson Merkow | Jameson Merkow and David Kriegman and Alison Marsden and Zhuowen Tu | Dense Volume-to-Volume Vascular Boundary Detection | Accepted to MICCAI2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a novel 3D-Convolutional Neural Network (CNN)
architecture called I2I-3D that predicts boundary location in volumetric data.
Our fine-to-fine, deeply supervised framework addresses three critical issues
in 3D boundary detection: (1) efficient, holistic, end-to-end volumetric label
training and prediction (2) precise voxel-level prediction to capture fine
scale structures prevalent in medical data and (3) directed multi-scale,
multi-level feature learning. We evaluate our approach on a dataset consisting
of 93 medical image volumes with a wide variety of anatomical regions and
vascular structures. In the process, we also introduce HED-3D, a 3D extension
of the state-of-the-art 2D edge detector (HED). We show that our deep learning
approach outperforms the current state-of-the-art in 3D vascular boundary
detection (structured forests 3D) by a large margin, as well as HED applied to
slices, and HED-3D while successfully localizing fine structures. With our
approach, boundary detection takes about one minute on a typical 512x512x512
volume.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 18:40:31 GMT"
}
] | 2016-05-27T00:00:00 | [
[
"Merkow",
"Jameson",
""
],
[
"Kriegman",
"David",
""
],
[
"Marsden",
"Alison",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Dense Volume-to-Volume Vascular Boundary Detection
ABSTRACT: In this work, we present a novel 3D-Convolutional Neural Network (CNN)
architecture called I2I-3D that predicts boundary location in volumetric data.
Our fine-to-fine, deeply supervised framework addresses three critical issues
in 3D boundary detection: (1) efficient, holistic, end-to-end volumetric label
training and prediction (2) precise voxel-level prediction to capture fine
scale structures prevalent in medical data and (3) directed multi-scale,
multi-level feature learning. We evaluate our approach on a dataset consisting
of 93 medical image volumes with a wide variety of anatomical regions and
vascular structures. In the process, we also introduce HED-3D, a 3D extension
of the state-of-the-art 2D edge detector (HED). We show that our deep learning
approach outperforms the current state-of-the-art in 3D vascular boundary
detection (structured forests 3D) by a large margin, as well as HED applied to
slices, and HED-3D while successfully localizing fine structures. With our
approach, boundary detection takes about one minute on a typical 512x512x512
volume.
| no_new_dataset | 0.945651 |
1511.05644 | Alireza Makhzani | Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow,
Brendan Frey | Adversarial Autoencoders | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose the "adversarial autoencoder" (AAE), which is a
probabilistic autoencoder that uses the recently proposed generative
adversarial networks (GAN) to perform variational inference by matching the
aggregated posterior of the hidden code vector of the autoencoder with an
arbitrary prior distribution. Matching the aggregated posterior to the prior
ensures that generating from any part of prior space results in meaningful
samples. As a result, the decoder of the adversarial autoencoder learns a deep
generative model that maps the imposed prior to the data distribution. We show
how the adversarial autoencoder can be used in applications such as
semi-supervised classification, disentangling style and content of images,
unsupervised clustering, dimensionality reduction and data visualization. We
performed experiments on MNIST, Street View House Numbers and Toronto Face
datasets and show that adversarial autoencoders achieve competitive results in
generative modeling and semi-supervised classification tasks.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 02:32:39 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 00:17:45 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Makhzani",
"Alireza",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Jaitly",
"Navdeep",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Frey",
"Brendan",
""
]
] | TITLE: Adversarial Autoencoders
ABSTRACT: In this paper, we propose the "adversarial autoencoder" (AAE), which is a
probabilistic autoencoder that uses the recently proposed generative
adversarial networks (GAN) to perform variational inference by matching the
aggregated posterior of the hidden code vector of the autoencoder with an
arbitrary prior distribution. Matching the aggregated posterior to the prior
ensures that generating from any part of prior space results in meaningful
samples. As a result, the decoder of the adversarial autoencoder learns a deep
generative model that maps the imposed prior to the data distribution. We show
how the adversarial autoencoder can be used in applications such as
semi-supervised classification, disentangling style and content of images,
unsupervised clustering, dimensionality reduction and data visualization. We
performed experiments on MNIST, Street View House Numbers and Toronto Face
datasets and show that adversarial autoencoders achieve competitive results in
generative modeling and semi-supervised classification tasks.
| no_new_dataset | 0.94743 |
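A minimal sketch of the adversarial autoencoder training loop described in the record above, assuming PyTorch; the layer sizes, learning rates, and the random stand-in minibatch are illustrative, not the paper's settings. Each minibatch runs a reconstruction phase, a discriminator phase that separates prior samples from encoder codes, and a generator phase that pushes the aggregated posterior toward the prior:

import torch
import torch.nn as nn

x_dim, z_dim = 784, 8
enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
disc = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.rand(32, x_dim)            # stand-in minibatch

# 1) reconstruction phase
recon_loss = ((dec(enc(x)) - x) ** 2).mean()
opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

# 2) regularization phase: discriminator separates prior samples from codes
z_real = torch.randn(32, z_dim)      # samples from the imposed prior
z_fake = enc(x).detach()
d_loss = bce(disc(z_real), torch.ones(32, 1)) + bce(disc(z_fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 3) encoder (as generator) tries to fool the discriminator
g_loss = bce(disc(enc(x)), torch.ones(32, 1))
opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()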
1602.00287 | Kirthevasan Kandasamy | Kirthevasan Kandasamy, Yaoliang Yu | Additive Approximations in High Dimensional Nonparametric Regression via
the SALSA | International Conference on Machine Learning (ICML) 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High dimensional nonparametric regression is an inherently difficult problem
with known lower bounds depending exponentially in dimension. A popular
strategy to alleviate this curse of dimensionality has been to use additive
models of \emph{first order}, which model the regression function as a sum of
independent functions on each dimension. Though useful in controlling the
variance of the estimate, such models are often too restrictive in practical
settings. Between non-additive models, which often have large variance, and
first-order additive models, which have large bias, there has been little work to
exploit the trade-off in the middle via additive models of intermediate order.
In this work, we propose SALSA, which bridges this gap by allowing interactions
between variables, but controls model capacity by limiting the order of
interactions. SALSA minimises the residual sum of squares with squared RKHS
norm penalties. Algorithmically, it can be viewed as Kernel Ridge Regression
with an additive kernel. When the regression function is additive, the excess
risk is only polynomial in dimension. Using the Girard-Newton formulae, we
efficiently sum over a combinatorial number of terms in the additive expansion.
Via a comparison on $15$ real datasets, we show that our method is competitive
against $21$ other alternatives.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 17:32:51 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Mar 2016 23:11:13 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2016 23:15:24 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Kandasamy",
"Kirthevasan",
""
],
[
"Yu",
"Yaoliang",
""
]
] | TITLE: Additive Approximations in High Dimensional Nonparametric Regression via
the SALSA
ABSTRACT: High dimensional nonparametric regression is an inherently difficult problem
with known lower bounds depending exponentially in dimension. A popular
strategy to alleviate this curse of dimensionality has been to use additive
models of \emph{first order}, which model the regression function as a sum of
independent functions on each dimension. Though useful in controlling the
variance of the estimate, such models are often too restrictive in practical
settings. Between non-additive models, which often have large variance, and
first-order additive models, which have large bias, there has been little work to
exploit the trade-off in the middle via additive models of intermediate order.
In this work, we propose SALSA, which bridges this gap by allowing interactions
between variables, but controls model capacity by limiting the order of
interactions. SALSA minimises the residual sum of squares with squared RKHS
norm penalties. Algorithmically, it can be viewed as Kernel Ridge Regression
with an additive kernel. When the regression function is additive, the excess
risk is only polynomial in dimension. Using the Girard-Newton formulae, we
efficiently sum over a combinatorial number of terms in the additive expansion.
Via a comparison on $15$ real datasets, we show that our method is competitive
against $21$ other alternatives.
| no_new_dataset | 0.941761 |
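The Girard-Newton construction in the SALSA record above can be made concrete: per-dimension RBF base kernels are combined into elementary symmetric polynomials up to the chosen interaction order, and the resulting additive kernel is used in kernel ridge regression. A minimal NumPy sketch, with illustrative parameter values:

import numpy as np

def additive_kernel(X1, X2, order=2, gamma=1.0):
    n1, n2 = X1.shape[0], X2.shape[0]
    # base kernels k_d for each dimension d: shape (d, n1, n2)
    diff = X1[:, None, :] - X2[None, :, :]
    base = np.exp(-gamma * diff ** 2).transpose(2, 0, 1)
    # power sums p_t = sum_d base_d^t, elementary symmetric polys via Newton-Girard
    e = [np.ones((n1, n2))]                      # e_0 = 1
    p = [None] + [np.sum(base ** t, axis=0) for t in range(1, order + 1)]
    for m in range(1, order + 1):
        e_m = sum((-1) ** (t - 1) * e[m - t] * p[t] for t in range(1, m + 1)) / m
        e.append(e_m)
    return sum(e[1:])                            # interactions up to `order`

def krr_fit_predict(Xtr, ytr, Xte, lam=1e-2, order=2):
    K = additive_kernel(Xtr, Xtr, order)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return additive_kernel(Xte, Xtr, order) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # second-order interactions
print(krr_fit_predict(X[:80], y[:80], X[80:]).shape)   # (20,)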
1603.04186 | Amir Rosenfeld | Amir Rosenfeld, Shimon Ullman | Visual Concept Recognition and Localization via Iterative Introspection | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks have been shown to develop internal
representations, which correspond closely to semantically meaningful objects
and parts, although trained solely on class labels. Class Activation Mapping
(CAM) is a recent method that makes it possible to easily highlight the image
regions contributing to a network's classification decision. We build upon
these two developments to enable a network to re-examine informative image
regions, which we term introspection. We propose a weakly-supervised iterative
scheme, which shifts its center of attention to increasingly discriminative
regions as it progresses, by alternating stages of classification and
introspection. We evaluate our method and show its effectiveness over a range
of several datasets, where we obtain competitive or state-of-the-art results:
on Stanford-40 Actions, we set a new state-of the art of 81.74%. On
FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements
over baselines, some of which include significantly more supervision.
| [
{
"version": "v1",
"created": "Mon, 14 Mar 2016 10:18:03 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 13:27:37 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Rosenfeld",
"Amir",
""
],
[
"Ullman",
"Shimon",
""
]
] | TITLE: Visual Concept Recognition and Localization via Iterative Introspection
ABSTRACT: Convolutional neural networks have been shown to develop internal
representations, which correspond closely to semantically meaningful objects
and parts, although trained solely on class labels. Class Activation Mapping
(CAM) is a recent method that makes it possible to easily highlight the image
regions contributing to a network's classification decision. We build upon
these two developments to enable a network to re-examine informative image
regions, which we term introspection. We propose a weakly-supervised iterative
scheme, which shifts its center of attention to increasingly discriminative
regions as it progresses, by alternating stages of classification and
introspection. We evaluate our method and show its effectiveness over a range
of datasets, where we obtain competitive or state-of-the-art results:
on Stanford-40 Actions, we set a new state-of-the-art of 81.74%. On
FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements
over baselines, some of which include significantly more supervision.
| no_new_dataset | 0.94743 |
1605.07512 | Xiang Sun | Xiang Sun and Nirwan Ansari | Green Cloudlet Network: A Distributed Green Mobile Cloud Network | accepted for publication in IEEE Network on March 29, 2016 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article introduces a Green Cloudlet Network (GCN) architecture in the
context of mobile cloud computing. The proposed architecture is aimed at
providing seamless and low End-to-End (E2E) delay between a User Equipment (UE)
and its Avatar (its software clone) in the cloudlets to facilitate the
application workloads offloading process. Furthermore, a Software Defined
Networking (SDN) based core network is introduced in the GCN architecture by
replacing the traditional Evolved Packet Core (EPC) in the LTE network in order
to provide efficient communication connections between different endpoints.
Cloudlet Network File System (CNFS) is designed based on the proposed
architecture in order to protect Avatars' dataset against hardware failure and
improve the Avatars' performance in terms of data access latency. Moreover,
green energy supplement is proposed in the architecture in order to reduce the
extra Operational Expenditure (OPEX) and CO2 footprint incurred by running the
distributed cloudlets. Owing to the temporal and spatial dynamics of both the
green energy generation and energy demands of Green Cloudlet Systems (GCSs),
designing an optimal green energy management strategy based on the
characteristics of the green energy generation and the energy demands of eNBs
and cloudlets to minimize the on-grid energy consumption is critical to the
cloudlet provider.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 15:51:27 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 02:50:17 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Sun",
"Xiang",
""
],
[
"Ansari",
"Nirwan",
""
]
] | TITLE: Green Cloudlet Network: A Distributed Green Mobile Cloud Network
ABSTRACT: This article introduces a Green Cloudlet Network (GCN) architecture in the
context of mobile cloud computing. The proposed architecture is aimed at
providing seamless and low End-to-End (E2E) delay between a User Equipment (UE)
and its Avatar (its software clone) in the cloudlets to facilitate the
application workloads offloading process. Furthermore, a Software Defined
Networking (SDN) based core network is introduced in the GCN architecture by
replacing the traditional Evolved Packet Core (EPC) in the LTE network in order
to provide efficient communication connections between different endpoints.
Cloudlet Network File System (CNFS) is designed based on the proposed
architecture in order to protect Avatars' dataset against hardware failure and
improve the Avatars' performance in terms of data access latency. Moreover,
green energy supplement is proposed in the architecture in order to reduce the
extra Operational Expenditure (OPEX) and CO2 footprint incurred by running the
distributed cloudlets. Owing to the temporal and spatial dynamics of both the
green energy generation and energy demands of Green Cloudlet Systems (GCSs),
designing an optimal green energy management strategy based on the
characteristics of the green energy generation and the energy demands of eNBs
and cloudlets to minimize the on-grid energy consumption is critical to the
cloudlet provider.
| no_new_dataset | 0.948298 |
1605.07659 | Aryan Mokhtari | Aryan Mokhtari and Alejandro Ribeiro | Adaptive Newton Method for Empirical Risk Minimization to Statistical
Accuracy | null | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider empirical risk minimization for large-scale datasets. We
introduce Ada Newton as an adaptive algorithm that uses Newton's method with
adaptive sample sizes. The main idea of Ada Newton is to increase the size of
the training set by a factor larger than one in a way that the minimization
variable for the current training set is in the local neighborhood of the
optimal argument of the next training set. This allows us to exploit the quadratic
convergence property of Newton's method and reach the statistical accuracy of
each training set with only one iteration of Newton's method. We show
theoretically and empirically that Ada Newton can double the size of the
training set in each iteration to achieve the statistical accuracy of the full
training set with about two passes over the dataset.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 21:02:50 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Mokhtari",
"Aryan",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] | TITLE: Adaptive Newton Method for Empirical Risk Minimization to Statistical
Accuracy
ABSTRACT: We consider empirical risk minimization for large-scale datasets. We
introduce Ada Newton as an adaptive algorithm that uses Newton's method with
adaptive sample sizes. The main idea of Ada Newton is to increase the size of
the training set by a factor larger than one in a way that the minimization
variable for the current training set is in the local neighborhood of the
optimal argument of the next training set. This allows us to exploit the quadratic
convergence property of Newton's method and reach the statistical accuracy of
each training set with only one iteration of Newton's method. We show
theoretically and empirically that Ada Newton can double the size of the
training set in each iteration to achieve the statistical accuracy of the full
training set with about two passes over the dataset.
| no_new_dataset | 0.949576 |
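A toy sketch of the adaptive sample-size idea in the Ada Newton record above, for l2-regularized logistic regression: one Newton step per stage, doubling the training set each time. The sizes, the regularization schedule (lam = 1/n), and the synthetic data are illustrative assumptions:

import numpy as np

def newton_step(w, X, y, lam):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / len(y) + lam * w
    W = p * (1 - p)
    H = (X * W[:, None]).T @ X / len(y) + lam * np.eye(X.shape[1])
    return w - np.linalg.solve(H, grad)

rng = np.random.default_rng(0)
N, d = 16384, 10
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = (rng.random(N) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(d)
n = 128                                   # initial training-set size
while n <= N:
    w = newton_step(w, X[:n], y[:n], lam=1.0 / n)   # one Newton step per stage
    n *= 2                                # double the sample size
print("final weights computed on", N, "samples")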
1605.07819 | Markus Kammerstetter | Markus Kammerstetter, Markus Muellner, Daniel Burian, Christian Kudera
and Wolfgang Kastner | Efficient High-Speed WPA2 Brute Force Attacks using Scalable Low-Cost
FPGA Clustering [Extended Version] | Keywords: FPGA, WPA2, Security, Brute Force, Attacks Conference on
Cryptographic Hardware and Embedded Systems 2016 (CHES 2016), August 17-19,
2016, Santa Barbara, CA, USA | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | WPA2-Personal is widely used to protect Wi-Fi networks against illicit
access. While attackers typically use GPUs to speed up the discovery of weak
network passwords, attacking random passwords is considered to quickly become
infeasible with increasing password length. Professional attackers may thus
turn to commercial high-end FPGA-based cluster solutions to significantly
increase the speed of those attacks. Well-known manufacturers such as Elcomsoft
have succeeded in creating the world's fastest commercial FPGA-based WPA2 password
recovery system, but since they rely on high-performance FPGAs the costs of
these systems are well beyond the reach of amateurs. In this paper, we present
a highly optimized low-cost FPGA cluster-based WPA2-Personal password recovery
system that can not only achieve similar performance at a cost affordable by
amateurs, but in comparison our implementation would also be more than 5 times
as fast on the original hardware. Since the currently fastest system is not
only significantly slower but proprietary as well, we believe that we are the
first to present the internals of a highly optimized and fully pipelined FPGA
WPA2 password recovery system. In addition, we evaluated our approach with
respect to performance and power usage and compare it to GPU-based systems. To
assess the real-world impact of our system, we utilized the well-known Wigle
Wi-Fi network dataset to conduct a case study within the country and its border
regions. Our results indicate that our system could be used to break into each
of more than 160,000 existing Wi-Fi networks requiring 3 days per network on
our low-cost FPGA cluster in the worst case.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 10:41:23 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Kammerstetter",
"Markus",
""
],
[
"Muellner",
"Markus",
""
],
[
"Burian",
"Daniel",
""
],
[
"Kudera",
"Christian",
""
],
[
"Kastner",
"Wolfgang",
""
]
] | TITLE: Efficient High-Speed WPA2 Brute Force Attacks using Scalable Low-Cost
FPGA Clustering [Extended Version]
ABSTRACT: WPA2-Personal is widely used to protect Wi-Fi networks against illicit
access. While attackers typically use GPUs to speed up the discovery of weak
network passwords, attacking random passwords is considered to quickly become
infeasible with increasing password length. Professional attackers may thus
turn to commercial high-end FPGA-based cluster solutions to significantly
increase the speed of those attacks. Well-known manufacturers such as Elcomsoft
have succeeded in creating the world's fastest commercial FPGA-based WPA2 password
recovery system, but since they rely on high-performance FPGAs the costs of
these systems are well beyond the reach of amateurs. In this paper, we present
a highly optimized low-cost FPGA cluster-based WPA2-Personal password recovery
system that can not only achieve similar performance at a cost affordable by
amateurs, but in comparison our implementation would also be more than 5 times
as fast on the original hardware. Since the currently fastest system is not
only significantly slower but proprietary as well, we believe that we are the
first to present the internals of a highly optimized and fully pipelined FPGA
WPA2 password recovery system. In addition, we evaluated our approach with
respect to performance and power usage and compare it to GPU-based systems. To
assess the real-world impact of our system, we utilized the well-known Wigle
Wi-Fi network dataset to conduct a case study within the country and its border
regions. Our results indicate that our system could be used to break into each
of more than 160,000 existing Wi-Fi networks requiring 3 days per network on
our low-cost FPGA cluster in the worst case.
| no_new_dataset | 0.943191 |
1605.07843 | Yichun Yin | Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, Ming Zhou | Unsupervised Word and Dependency Path Embeddings for Aspect Term
Extraction | IJCAI 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a novel approach to aspect term extraction based on
unsupervised learning of distributed representations of words and dependency
paths. The basic idea is to connect two words (w1 and w2) with the dependency
path (r) between them in the embedding space. Specifically, our method
optimizes the objective w1 + r = w2 in the low-dimensional space, where the
multi-hop dependency paths are treated as a sequence of grammatical relations
and modeled by a recurrent neural network. Then, we design the embedding
features that consider linear context and dependency context information, for
the conditional random field (CRF) based aspect term extraction. Experimental
results on the SemEval datasets show that, (1) with only embedding features, we
can achieve state-of-the-art results; (2) our embedding method which
incorporates the syntactic information among words yields better performance
than other representative ones in aspect term extraction.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 12:01:46 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Yin",
"Yichun",
""
],
[
"Wei",
"Furu",
""
],
[
"Dong",
"Li",
""
],
[
"Xu",
"Kaimeng",
""
],
[
"Zhang",
"Ming",
""
],
[
"Zhou",
"Ming",
""
]
] | TITLE: Unsupervised Word and Dependency Path Embeddings for Aspect Term
Extraction
ABSTRACT: In this paper, we develop a novel approach to aspect term extraction based on
unsupervised learning of distributed representations of words and dependency
paths. The basic idea is to connect two words (w1 and w2) with the dependency
path (r) between them in the embedding space. Specifically, our method
optimizes the objective w1 + r = w2 in the low-dimensional space, where the
multi-hop dependency paths are treated as a sequence of grammatical relations
and modeled by a recurrent neural network. Then, we design the embedding
features that consider linear context and dependency context information for
the conditional random field (CRF) based aspect term extraction. Experimental
results on the SemEval datasets show that, (1) with only embedding features, we
can achieve state-of-the-art results; (2) our embedding method which
incorporates the syntactic information among words yields better performance
than other representative ones in aspect term extraction.
| no_new_dataset | 0.9463 |
1605.07960 | Aijun Bai | Aijun Bai | Multi-Object Tracking and Identification over Sets | Draft version | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability for an autonomous agent or robot to track and identify
potentially multiple objects in a dynamic environment is essential for many
applications, such as automated surveillance, traffic monitoring, human-robot
interaction, etc. The main challenge is due to the noisy and incomplete
perception, including inevitable false negative and false positive errors from a
low-level detector. In this paper, we propose a novel multi-object tracking and
identification over sets approach to address this challenge. We define joint
states and observations both as finite sets, and develop motion and observation
functions accordingly. The object identification problem is then formulated and
solved by using expectation-maximization methods. The set formulation enables
us to avoid directly performing observation-to-object association. We
empirically confirm that the overall algorithm outperforms the state-of-the-art
in a popular PETS dataset.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 16:40:05 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Bai",
"Aijun",
""
]
] | TITLE: Multi-Object Tracking and Identification over Sets
ABSTRACT: The ability for an autonomous agent or robot to track and identify
potentially multiple objects in a dynamic environment is essential for many
applications, such as automated surveillance, traffic monitoring, human-robot
interaction, etc. The main challenge is due to the noisy and incomplete
perception, including inevitable false negative and false positive errors from a
low-level detector. In this paper, we propose a novel multi-object tracking and
identification over sets approach to address this challenge. We define joint
states and observations both as finite sets, and develop motion and observation
functions accordingly. The object identification problem is then formulated and
solved by using expectation-maximization methods. The set formulation enables
us to avoid directly performing observation-to-object association. We
empirically confirm that the overall algorithm outperforms the state-of-the-art
in a popular PETS dataset.
| no_new_dataset | 0.947672 |
1605.07991 | Jialei Wang | Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang | Efficient Distributed Learning with Sparsity | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel, efficient approach for distributed sparse learning in
high-dimensions, where observations are randomly partitioned across machines.
Computationally, at each round our method only requires the master machine to
solve a shifted ell_1 regularized M-estimation problem, and other workers to
compute the gradient. With respect to communication, the proposed approach
provably matches the estimation error bound of centralized methods within
constant rounds of communications (ignoring logarithmic factors). We conduct
extensive experiments on both simulated and real world datasets, and
demonstrate encouraging performances on high-dimensional regression and
classification tasks.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 18:15:43 GMT"
}
] | 2016-05-26T00:00:00 | [
[
"Wang",
"Jialei",
""
],
[
"Kolar",
"Mladen",
""
],
[
"Srebro",
"Nathan",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Efficient Distributed Learning with Sparsity
ABSTRACT: We propose a novel, efficient approach for distributed sparse learning in
high-dimensions, where observations are randomly partitioned across machines.
Computationally, at each round our method only requires the master machine to
solve a shifted ell_1 regularized M-estimation problem, and other workers to
compute the gradient. With respect to communication, the proposed approach
provably matches the estimation error bound of centralized methods within
constant rounds of communications (ignoring logarithmic factors). We conduct
extensive experiments on both simulated and real world datasets, and
demonstrate encouraging performances on high-dimensional regression and
classification tasks.
| no_new_dataset | 0.949435 |
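A simplified stand-in for the communication pattern in the record above: workers compute gradients on their local partitions and a master applies an l1 (soft-thresholding) proximal update. Note that the paper's master step solves a shifted ell_1-regularized subproblem rather than this plain proximal-gradient update; the sketch only illustrates the per-round structure, with illustrative sizes:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
d, m, n_per = 50, 4, 200
w_true = np.zeros(d); w_true[:5] = 1.0
shards = []
for _ in range(m):                               # data partitioned across m workers
    X = rng.normal(size=(n_per, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_per)
    shards.append((X, y))

w, step, lam = np.zeros(d), 0.01, 0.05
for _ in range(200):                             # one communication round per iteration
    grads = [X.T @ (X @ w - y) / n_per for X, y in shards]   # workers
    g = sum(grads) / m                                       # aggregated at master
    w = soft_threshold(w - step * g, step * lam)             # master's prox step
print("nonzeros recovered:", np.flatnonzero(np.abs(w) > 0.1))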
1507.08173 | Nauman Shahid | Nauman Shahid, Nathanael Perraudin, Vassilis Kalofolias, Gilles Puy,
Pierre Vandergheynst | Fast Robust PCA on Graphs | null | null | 10.1109/JSTSP.2016.2555239 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining useful clusters from high dimensional data has received significant
attention from the computer vision and pattern recognition community in
recent years. Linear and non-linear dimensionality reduction has played an
important role in overcoming the curse of dimensionality. However, such
methods are often accompanied by three different problems: high computational
complexity (usually associated with the nuclear norm minimization),
non-convexity (for matrix factorization methods) and susceptibility to gross
corruptions in the data. In this paper we propose a principal component
analysis (PCA) based solution that overcomes these three issues and
approximates a low-rank recovery method for high dimensional datasets. We
target the low-rank recovery by enforcing two types of graph smoothness
assumptions, one on the data samples and the other on the features by designing
a convex optimization problem. The resulting algorithm is fast, efficient and
scalable for huge datasets with O(nlog(n)) computational complexity in the
number of data samples. It is also robust to gross corruptions in the dataset
as well as to the model parameters. Clustering experiments on 7 benchmark
datasets with different types of corruptions and background separation
experiments on 3 video datasets show that our proposed model outperforms 10
state-of-the-art dimensionality reduction models. Our theoretical analysis
proves that the proposed model is able to recover approximate low-rank
representations with a bounded error for clusterable data.
| [
{
"version": "v1",
"created": "Wed, 29 Jul 2015 14:53:33 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jan 2016 20:29:57 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Shahid",
"Nauman",
""
],
[
"Perraudin",
"Nathanael",
""
],
[
"Kalofolias",
"Vassilis",
""
],
[
"Puy",
"Gilles",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Fast Robust PCA on Graphs
ABSTRACT: Mining useful clusters from high dimensional data has received significant
attention from the computer vision and pattern recognition community in
recent years. Linear and non-linear dimensionality reduction has played an
important role in overcoming the curse of dimensionality. However, such
methods are often accompanied by three different problems: high computational
complexity (usually associated with the nuclear norm minimization),
non-convexity (for matrix factorization methods) and susceptibility to gross
corruptions in the data. In this paper we propose a principal component
analysis (PCA) based solution that overcomes these three issues and
approximates a low-rank recovery method for high dimensional datasets. We
target the low-rank recovery by enforcing two types of graph smoothness
assumptions, one on the data samples and the other on the features by designing
a convex optimization problem. The resulting algorithm is fast, efficient and
scalable for huge datasets with O(nlog(n)) computational complexity in the
number of data samples. It is also robust to gross corruptions in the dataset
as well as to the model parameters. Clustering experiments on 7 benchmark
datasets with different types of corruptions and background separation
experiments on 3 video datasets show that our proposed model outperforms 10
state-of-the-art dimensionality reduction models. Our theoretical analysis
proves that the proposed model is able to recover approximate low-rank
representations with a bounded error for clusterable data.
| no_new_dataset | 0.945248 |
1511.07053 | Francesco Visin | Francesco Visin, Marco Ciccone, Adriana Romero, Kyle Kastner,
Kyunghyun Cho, Yoshua Bengio, Matteo Matteucci, Aaron Courville | ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation | In CVPR Deep Vision Workshop, 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a structured prediction architecture, which exploits the local
generic features extracted by Convolutional Neural Networks and the capacity of
Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed
architecture, called ReSeg, is based on the recently introduced ReNet model for
image classification. We modify and extend it to perform the more challenging
task of semantic segmentation. Each ReNet layer is composed of four RNNs that
sweep the image horizontally and vertically in both directions, encoding
patches or activations, and providing relevant global information. Moreover,
ReNet layers are stacked on top of pre-trained convolutional layers, benefiting
from generic local features. Upsampling layers follow ReNet layers to recover
the original image resolution in the final predictions. The proposed ReSeg
architecture is efficient, flexible and suitable for a variety of semantic
segmentation tasks. We evaluate ReSeg on several widely-used semantic
segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid; achieving
state-of-the-art performance. Results show that ReSeg can act as a suitable
architecture for semantic segmentation tasks, and may have further applications
in other structured prediction problems. The source code and model
hyperparameters are available on https://github.com/fvisin/reseg.
| [
{
"version": "v1",
"created": "Sun, 22 Nov 2015 19:25:27 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jan 2016 14:41:56 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2016 15:55:41 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Visin",
"Francesco",
""
],
[
"Ciccone",
"Marco",
""
],
[
"Romero",
"Adriana",
""
],
[
"Kastner",
"Kyle",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Matteucci",
"Matteo",
""
],
[
"Courville",
"Aaron",
""
]
] | TITLE: ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation
ABSTRACT: We propose a structured prediction architecture, which exploits the local
generic features extracted by Convolutional Neural Networks and the capacity of
Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed
architecture, called ReSeg, is based on the recently introduced ReNet model for
image classification. We modify and extend it to perform the more challenging
task of semantic segmentation. Each ReNet layer is composed of four RNNs that
sweep the image horizontally and vertically in both directions, encoding
patches or activations, and providing relevant global information. Moreover,
ReNet layers are stacked on top of pre-trained convolutional layers, benefiting
from generic local features. Upsampling layers follow ReNet layers to recover
the original image resolution in the final predictions. The proposed ReSeg
architecture is efficient, flexible and suitable for a variety of semantic
segmentation tasks. We evaluate ReSeg on several widely-used semantic
segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid; achieving
state-of-the-art performance. Results show that ReSeg can act as a suitable
architecture for semantic segmentation tasks, and may have further applications
in other structured prediction problems. The source code and model
hyperparameters are available on https://github.com/fvisin/reseg.
| no_new_dataset | 0.950824 |
1602.01301 | Julien Flamant | Julien Flamant, Nicolas Le Bihan, Andrew V. Martin, Jonathan H. Manton | Expansion-maximization-compression algorithm with spherical harmonics
for single particle imaging with X-ray lasers | null | null | 10.1103/PhysRevE.93.053302 | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 3D single particle imaging with X-ray free-electron lasers, particle
orientation is not recorded during measurement but is instead recovered as a
necessary step in the reconstruction of a 3D image from the diffraction data.
Here we use harmonic analysis on the sphere to cleanly separate the angular
and radial degrees of freedom of this problem, providing new opportunities to
efficiently use data and computational resources. We develop the
Expansion-Maximization-Compression algorithm into a shell-by-shell approach and
implement an angular bandwidth limit that can be gradually raised during the
reconstruction. We study the minimum number of patterns and minimum rotation
sampling required for a desired angular and radial resolution. These extensions
provide new avenues to improve computational efficiency and speed of
convergence, which are critically important considering the very large datasets
expected from experiment.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 14:02:11 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2016 13:39:51 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Flamant",
"Julien",
""
],
[
"Bihan",
"Nicolas Le",
""
],
[
"Martin",
"Andrew V.",
""
],
[
"Manton",
"Jonathan H.",
""
]
] | TITLE: Expansion-maximization-compression algorithm with spherical harmonics
for single particle imaging with X-ray lasers
ABSTRACT: In 3D single particle imaging with X-ray free-electron lasers, particle
orientation is not recorded during measurement but is instead recovered as a
necessary step in the reconstruction of a 3D image from the diffraction data.
Here we use harmonic analysis on the sphere to cleanly separate the angular
and radial degrees of freedom of this problem, providing new opportunities to
efficiently use data and computational resources. We develop the
Expansion-Maximization-Compression algorithm into a shell-by-shell approach and
implement an angular bandwidth limit that can be gradually raised during the
reconstruction. We study the minimum number of patterns and minimum rotation
sampling required for a desired angular and radial resolution. These extensions
provide new avenues to improve computational efficiency and speed of
convergence, which are critically important considering the very large datasets
expected from experiment.
| no_new_dataset | 0.951684 |
1605.05422 | Shinji Ito | Shinji Ito and Ryohei Fujimaki | Optimization Beyond Prediction: Prescriptive Price Optimization | null | null | null | null | math.OC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses a novel data science problem, prescriptive price
optimization, which derives the optimal price strategy to maximize future
profit/revenue on the basis of massive predictive formulas produced by machine
learning. The prescriptive price optimization first builds sales forecast
formulas of multiple products, on the basis of historical data, which reveal
complex relationships between sales and prices, such as price elasticity of
demand and cannibalization. Then, it constructs a mathematical optimization
problem on the basis of those predictive formulas. We show that the
optimization problem can be formulated as an instance of binary quadratic
programming (BQP). Although BQP problems are NP-hard in general and
computationally intractable, we propose a fast approximation algorithm using a
semi-definite programming (SDP) relaxation, which is closely related to the
Goemans-Williamson's Max-Cut approximation. Our experiments on simulation and
real retail datasets show that our prescriptive price optimization
simultaneously derives the optimal prices of tens/hundreds products with
practical computational time, potentially improving the gross profit of those
products by 8.2%.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 02:46:14 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2016 06:38:18 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Ito",
"Shinji",
""
],
[
"Fujimaki",
"Ryohei",
""
]
] | TITLE: Optimization Beyond Prediction: Prescriptive Price Optimization
ABSTRACT: This paper addresses a novel data science problem, prescriptive price
optimization, which derives the optimal price strategy to maximize future
profit/revenue on the basis of massive predictive formulas produced by machine
learning. The prescriptive price optimization first builds sales forecast
formulas of multiple products, on the basis of historical data, which reveal
complex relationships between sales and prices, such as price elasticity of
demand and cannibalization. Then, it constructs a mathematical optimization
problem on the basis of those predictive formulas. We show that the
optimization problem can be formulated as an instance of binary quadratic
programming (BQP). Although BQP problems are NP-hard in general and
computationally intractable, we propose a fast approximation algorithm using a
semi-definite programming (SDP) relaxation, which is closely related to the
Goemans-Williamson's Max-Cut approximation. Our experiments on simulation and
real retail datasets show that our prescriptive price optimization
simultaneously derives the optimal prices of tens/hundreds products with
practical computational time, potentially improving the gross profit of those
products by 8.2%.
| no_new_dataset | 0.942454 |
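A toy instance of the prescriptive price optimization in the record above: linear demand forecasts with cross-price terms, each product's price chosen from a small candidate set, and the induced binary quadratic program solved here by brute force (the paper's SDP relaxation is what makes large instances tractable). All coefficients are illustrative:

import itertools
import numpy as np

a = np.array([100.0, 80.0])              # base demand per product
B = np.array([[-8.0, 2.0],               # own- and cross-price elasticities
              [3.0, -6.0]])
cost = np.array([2.0, 3.0])
candidates = [np.array([4.0, 5.0, 6.0]), np.array([5.0, 6.0, 7.0])]

best = None
for choice in itertools.product(*[range(len(c)) for c in candidates]):
    p = np.array([candidates[i][k] for i, k in enumerate(choice)])
    q = a + B @ p                        # forecast sales at prices p
    profit = float((p - cost) @ q)       # objective is quadratic in p
    if best is None or profit > best[0]:
        best = (profit, p)
print("best prices:", best[1], "profit:", round(best[0], 2))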
1605.07363 | Apratim Bhattacharyya | Apratim Bhattacharyya, Mateusz Malinowski, Mario Fritz | Spatio-Temporal Image Boundary Extrapolation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boundary prediction in images as well as video has been a very active topic
of research and organizing visual information into boundaries and segments is
believed to be a corner stone of visual perception. While prior work has
focused on predicting boundaries for observed frames, our work aims at
predicting boundaries of future unobserved frames. This requires our model to
learn about the fate of boundaries and extrapolate motion patterns. We
experiment on an established real-world video segmentation dataset, which provides
a testbed for this new task. We show for the first time spatio-temporal
boundary extrapolation in this challenging scenario. Furthermore, we show
long-term prediction of boundaries in situations where the motion is governed
by the laws of physics. We successfully predict boundaries in a billiard
scenario without any assumptions of a strong parametric model or any object
notion. We argue that our model has, with minimalistic model assumptions, derived
a notion of 'intuitive physics' that can be applied to novel scenes.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 10:22:33 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Bhattacharyya",
"Apratim",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: Spatio-Temporal Image Boundary Extrapolation
ABSTRACT: Boundary prediction in images as well as video has been a very active topic
of research and organizing visual information into boundaries and segments is
believed to be a cornerstone of visual perception. While prior work has
focused on predicting boundaries for observed frames, our work aims at
predicting boundaries of future unobserved frames. This requires our model to
learn about the fate of boundaries and extrapolate motion patterns. We
experiment on an established real-world video segmentation dataset, which provides
a testbed for this new task. We show for the first time spatio-temporal
boundary extrapolation in this challenging scenario. Furthermore, we show
long-term prediction of boundaries in situations where the motion is governed
by the laws of physics. We successfully predict boundaries in a billiard
scenario without any assumptions of a strong parametric model or any object
notion. We argue that our model has, with minimalistic model assumptions, derived
a notion of 'intuitive physics' that can be applied to novel scenes.
| no_new_dataset | 0.939582 |
1605.07369 | Ganesh Sundaramoorthi | Dong Lao and Ganesh Sundaramoorthi | Quickest Moving Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general framework and method for simultaneous detection and
segmentation of an object in a video that moves (or comes into view of the
camera) at some unknown time in the video. The method is an online approach
based on motion segmentation, and it operates under dynamic backgrounds caused
by a moving camera or moving nuisances. The goal of the method is to detect and
segment the object as soon as it moves. Due to stochastic variability in the
video and unreliability of the motion signal, several frames are needed to
reliably detect the object. The method is designed to detect and segment with
minimum delay subject to a constraint on the false alarm rate. The method is
derived as a problem of Quickest Change Detection. Experiments on a dataset
show the effectiveness of our method in minimizing detection delay subject to
false alarm constraints.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 10:40:13 GMT"
}
] | 2016-05-25T00:00:00 | [
[
"Lao",
"Dong",
""
],
[
"Sundaramoorthi",
"Ganesh",
""
]
] | TITLE: Quickest Moving Object Detection
ABSTRACT: We present a general framework and method for simultaneous detection and
segmentation of an object in a video that moves (or comes into view of the
camera) at some unknown time in the video. The method is an online approach
based on motion segmentation, and it operates under dynamic backgrounds caused
by a moving camera or moving nuisances. The goal of the method is to detect and
segment the object as soon as it moves. Due to stochastic variability in the
video and unreliability of the motion signal, several frames are needed to
reliably detect the object. The method is designed to detect and segment with
minimum delay subject to a constraint on the false alarm rate. The method is
derived as a problem of Quickest Change Detection. Experiments on a dataset
show the effectiveness of our method in minimizing detection delay subject to
false alarm constraints.
| no_new_dataset | 0.950869 |
1405.5919 | Szymon Grabowski | Szymon Grabowski, Marcin Raniszewski | Two simple full-text indexes based on the suffix array | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose two suffix array inspired full-text indexes. One, called SA-hash,
augments the suffix array with a hash table to speed up pattern searches due to
significantly narrowed search interval before the binary search phase. The
other, called FBCSA, is a compact data structure, similar to M{\"a}kinen's
compact suffix array, but working on fixed sized blocks. Experiments on the
Pizza~\&~Chili 200\,MB datasets show that SA-hash is about 2--3 times faster in
pattern searches (counts) than the standard suffix array, for the price of
requiring $0.2n-1.1n$ bytes of extra space, where $n$ is the text length, and
setting a minimum pattern length. FBCSA is relatively fast in single cell
accesses (a few times faster than related indexes at about the same or better
compression), but not competitive if many consecutive cells are to be
extracted. Still, for the task of extracting, e.g., 10 successive cells its
time-space relation remains attractive.
| [
{
"version": "v1",
"created": "Thu, 22 May 2014 21:55:00 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 17:04:14 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Grabowski",
"Szymon",
""
],
[
"Raniszewski",
"Marcin",
""
]
] | TITLE: Two simple full-text indexes based on the suffix array
ABSTRACT: We propose two suffix array inspired full-text indexes. One, called SA-hash,
augments the suffix array with a hash table to speed up pattern searches due to
significantly narrowed search interval before the binary search phase. The
other, called FBCSA, is a compact data structure, similar to M{\"a}kinen's
compact suffix array, but working on fixed sized blocks. Experiments on the
Pizza~\&~Chili 200\,MB datasets show that SA-hash is about 2--3 times faster in
pattern searches (counts) than the standard suffix array, for the price of
requiring $0.2n-1.1n$ bytes of extra space, where $n$ is the text length, and
setting a minimum pattern length. FBCSA is relatively fast in single cell
accesses (a few times faster than related indexes at about the same or better
compression), but not competitive if many consecutive cells are to be
extracted. Still, for the task of extracting, e.g., 10 successive cells its
time-space relation remains attractive.
| no_new_dataset | 0.947672 |
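A small Python stand-in for the SA-hash idea in the record above: the suffix array is augmented with a hash table mapping each length-k prefix to its (contiguous, since the array is sorted) rank interval, so the binary search runs only inside that interval. The minimum-pattern-length constraint mentioned in the abstract shows up as the |pattern| >= k check; the text and k are toy values:

def build(text, k=2):
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    buckets = {}
    for rank, pos in enumerate(sa):
        key = text[pos:pos + k]
        if key not in buckets:
            buckets[key] = [rank, rank + 1]
        else:
            buckets[key][1] = rank + 1        # interval is half-open [lo, hi)
    return sa, buckets

def count(text, sa, buckets, k, pat):
    if len(pat) < k or pat[:k] not in buckets:   # SA-hash needs |pat| >= k
        return 0
    lo, hi = buckets[pat[:k]]
    def search(le):                            # binary search inside the bucket
        a, b = lo, hi
        while a < b:
            mid = (a + b) // 2
            s = text[sa[mid]:sa[mid] + len(pat)]
            if s < pat or (le and s == pat):
                a = mid + 1
            else:
                b = mid
        return a
    return search(True) - search(False)

text = "abracadabra"
sa, buckets = build(text, k=2)
print(count(text, sa, buckets, 2, "abra"))     # -> 2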
1511.07838 | Amjad Almahairi | Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo
Larochelle, Aaron Courville | Dynamic Capacity Networks | ICML 2016 | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Dynamic Capacity Network (DCN), a neural network that can
adaptively assign its capacity across different portions of the input data.
This is achieved by combining modules of two types: low-capacity sub-networks
and high-capacity sub-networks. The low-capacity sub-networks are applied
across most of the input, but also provide a guide to select a few portions of
the input on which to apply the high-capacity sub-networks. The selection is
made using a novel gradient-based attention mechanism, that efficiently
identifies input regions for which the DCN's output is most sensitive and to
which we should devote more capacity. We focus our empirical evaluation on the
Cluttered MNIST and SVHN image datasets. Our findings indicate that DCNs are
able to drastically reduce the number of computations, compared to traditional
convolutional neural networks, while maintaining similar or even better
performance.
| [
{
"version": "v1",
"created": "Tue, 24 Nov 2015 19:30:19 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Nov 2015 19:17:53 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Dec 2015 16:13:21 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Jan 2016 22:44:43 GMT"
},
{
"version": "v5",
"created": "Tue, 9 Feb 2016 16:49:55 GMT"
},
{
"version": "v6",
"created": "Wed, 6 Apr 2016 19:48:32 GMT"
},
{
"version": "v7",
"created": "Sun, 22 May 2016 20:58:11 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Almahairi",
"Amjad",
""
],
[
"Ballas",
"Nicolas",
""
],
[
"Cooijmans",
"Tim",
""
],
[
"Zheng",
"Yin",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] | TITLE: Dynamic Capacity Networks
ABSTRACT: We introduce the Dynamic Capacity Network (DCN), a neural network that can
adaptively assign its capacity across different portions of the input data.
This is achieved by combining modules of two types: low-capacity sub-networks
and high-capacity sub-networks. The low-capacity sub-networks are applied
across most of the input, but also provide a guide to select a few portions of
the input on which to apply the high-capacity sub-networks. The selection is
made using a novel gradient-based attention mechanism, that efficiently
identifies input regions for which the DCN's output is most sensitive and to
which we should devote more capacity. We focus our empirical evaluation on the
Cluttered MNIST and SVHN image datasets. Our findings indicate that DCNs are
able to drastically reduce the number of computations, compared to traditional
convolutional neural networks, while maintaining similar or even better
performance.
| no_new_dataset | 0.947039 |
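A sketch of the gradient-based attention described in the record above, assuming PyTorch: the low-capacity network's top-class score is differentiated with respect to the input, and the most sensitive patches are selected for the high-capacity sub-network. The network, shapes, and patch grid are illustrative stand-ins:

import torch
import torch.nn as nn

coarse = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

x = torch.rand(1, 1, 28, 28, requires_grad=True)
logits = coarse(x)
# saliency: gradient of the top-class score w.r.t. every input pixel
logits[0, logits.argmax()].backward()
saliency = x.grad.abs()[0, 0]

# score non-overlapping 7x7 patches by summed saliency, keep the top few
patch = saliency.reshape(4, 7, 4, 7).sum(dim=(1, 3))     # 4x4 grid of patches
topk = torch.topk(patch.flatten(), k=3).indices
rows, cols = topk // 4, topk % 4
print("apply high-capacity net to patches at:", list(zip(rows.tolist(), cols.tolist())))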
1601.05506 | Kyle Kloster | Biaobin Jiang, Kyle Kloster, David F. Gleich, Michael Gribskov | AptRank: An Adaptive PageRank Model for Protein Function Prediction on
Bi-relational Graphs | 20 pages, code available at this url
https://github.rcac.purdue.edu/mgribsko/aptrank | null | null | null | q-bio.MN cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based network models are widely used for protein function
prediction using protein network data and have been shown to outperform
neighborhood- and module-based methods. Recent studies have shown that
integrating the hierarchical structure of the Gene Ontology (GO) data
dramatically improves prediction accuracy. However, previous methods usually
either used the GO hierarchy to refine the prediction results of multiple
classifiers, or flattened the hierarchy into a function-function similarity
kernel. No study has taken the GO hierarchy into account together with the
protein network as a two-layer network model.
We first construct a Bi-relational graph (Birg) model comprised of both
protein-protein association and function-function hierarchical networks. We
then propose two diffusion-based methods, BirgRank and AptRank, both of which
use PageRank to diffuse information on this two-layer graph model. BirgRank is
an application of traditional PageRank with fixed decay parameters. In
contrast, AptRank uses an adaptive mechanism to improve the performance of
BirgRank. We evaluate both methods in predicting protein function on yeast,
fly, and human datasets, and compare with four previous methods: GeneMANIA,
TMC, ProteinRank and clusDCA. We design three validation strategies: missing
function prediction, de novo function prediction, and guided function
prediction to comprehensively evaluate all six methods. We find that both
BirgRank and AptRank outperform the others, especially in missing function
prediction when using only 10% of the data for training.
AptRank combines protein-protein associations and the GO function-function
hierarchy into a two-layer network model without flattening the hierarchy into
a similarity kernel. Introducing an adaptive mechanism to the traditional,
fixed-parameter model of PageRank greatly improves the accuracy of protein
function prediction.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2016 04:22:57 GMT"
},
{
"version": "v2",
"created": "Sun, 22 May 2016 06:04:09 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Jiang",
"Biaobin",
""
],
[
"Kloster",
"Kyle",
""
],
[
"Gleich",
"David F.",
""
],
[
"Gribskov",
"Michael",
""
]
] | TITLE: AptRank: An Adaptive PageRank Model for Protein Function Prediction on
Bi-relational Graphs
ABSTRACT: Diffusion-based network models are widely used for protein function
prediction using protein network data and have been shown to outperform
neighborhood- and module-based methods. Recent studies have shown that
integrating the hierarchical structure of the Gene Ontology (GO) data
dramatically improves prediction accuracy. However, previous methods usually
either used the GO hierarchy to refine the prediction results of multiple
classifiers, or flattened the hierarchy into a function-function similarity
kernel. No study has taken the GO hierarchy into account together with the
protein network as a two-layer network model.
We first construct a Bi-relational graph (Birg) model comprised of both
protein-protein association and function-function hierarchical networks. We
then propose two diffusion-based methods, BirgRank and AptRank, both of which
use PageRank to diffuse information on this two-layer graph model. BirgRank is
an application of traditional PageRank with fixed decay parameters. In
contrast, AptRank uses an adaptive mechanism to improve the performance of
BirgRank. We evaluate both methods in predicting protein function on yeast,
fly, and human datasets, and compare with four previous methods: GeneMANIA,
TMC, ProteinRank and clusDCA. We design three validation strategies: missing
function prediction, de novo function prediction, and guided function
prediction to comprehensively evaluate all six methods. We find that both
BirgRank and AptRank outperform the others, especially in missing function
prediction when using only 10% of the data for training.
AptRank combines protein-protein associations and the GO function-function
hierarchy into a two-layer network model without flattening the hierarchy into
a similarity kernel. Introducing an adaptive mechanism to the traditional,
fixed-parameter model of PageRank greatly improves the accuracy of protein
function prediction.
| no_new_dataset | 0.953449 |
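The diffusion at the heart of BirgRank is, in essence, personalized PageRank run by power iteration. Below is a minimal Python sketch of that primitive on a toy graph; the adjacency matrix, restart vector, and decay value 0.85 are illustrative assumptions, not the paper's two-layer Birg construction.

    import numpy as np

    def personalized_pagerank(A, restart, alpha=0.85, tol=1e-10, max_iter=1000):
        """Power iteration for personalized PageRank.
        A: adjacency matrix (A[i, j] = 1 if edge i -> j); restart: preference vector."""
        out_deg = A.sum(axis=1)
        out_deg[out_deg == 0] = 1.0              # guard against sink nodes
        P = (A / out_deg[:, None]).T             # column-stochastic transition matrix
        r = restart / restart.sum()
        x = r.copy()
        for _ in range(max_iter):
            x_next = alpha * (P @ x) + (1 - alpha) * r
            if np.abs(x_next - x).sum() < tol:   # L1 convergence check
                break
            x = x_next
        return x

    # Toy 4-node graph; personalize on node 0 (e.g., a query protein).
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(personalized_pagerank(A, np.array([1.0, 0.0, 0.0, 0.0])))

In BirgRank terms, one would replace A with the block matrix stacking the protein-protein and function-function networks plus the annotation links between them; AptRank's contribution is making the diffusion weights adaptive rather than fixed.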
1602.01959 | Lu Lu | Lu Lu, Xuanhua Shi, Yongluan Zhou, Xiong Zhang, Hai Jin, Cheng Pei,
Ligang He, Yuanzhen Geng | Lifetime-Based Memory Management for Distributed Data Processing Systems | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In-memory caching of intermediate data and eager combining of data in shuffle
buffers have been shown to be very effective in minimizing the re-computation
and I/O cost in distributed data processing systems like Spark and Flink.
However, it has also been widely reported that these techniques would create a
large number of long-lived data objects in the heap, which may quickly
saturate the garbage collector, especially when handling a large dataset, and
hence would limit the scalability of the system. To eliminate this problem, we
propose a lifetime-based memory management framework, which, by automatically
analyzing the user-defined functions and data types, obtains the expected
lifetime of the data objects, and then allocates and releases memory space
accordingly to minimize the garbage collection overhead. In particular, we
present Deca, a concrete implementation of our proposal on top of Spark, which
transparently decomposes and groups objects with similar lifetimes into byte
arrays and releases their space altogether when their lifetimes come to an end.
An extensive experimental study using both synthetic and real datasets shows
that, compared to Spark, Deca is able to 1) reduce the garbage collection
time by up to 99.9%, 2) achieve up to a 22.7x speedup in execution time in
cases without data spilling and a 41.6x speedup in cases with data spilling,
and 3) consume up to 46.6% less memory.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 09:13:00 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2016 15:55:24 GMT"
},
{
"version": "v3",
"created": "Sun, 22 May 2016 16:33:39 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Lu",
"Lu",
""
],
[
"Shi",
"Xuanhua",
""
],
[
"Zhou",
"Yongluan",
""
],
[
"Zhang",
"Xiong",
""
],
[
"Jin",
"Hai",
""
],
[
"Pei",
"Cheng",
""
],
[
"He",
"Ligang",
""
],
[
"Geng",
"Yuanzhen",
""
]
] | TITLE: Lifetime-Based Memory Management for Distributed Data Processing Systems
ABSTRACT: In-memory caching of intermediate data and eager combining of data in shuffle
buffers have been shown to be very effective in minimizing the re-computation
and I/O cost in distributed data processing systems like Spark and Flink.
However, it has also been widely reported that these techniques would create a
large number of long-lived data objects in the heap, which may quickly
saturate the garbage collector, especially when handling a large dataset, and
hence would limit the scalability of the system. To eliminate this problem, we
propose a lifetime-based memory management framework, which, by automatically
analyzing the user-defined functions and data types, obtains the expected
lifetime of the data objects, and then allocates and releases memory space
accordingly to minimize the garbage collection overhead. In particular, we
present Deca, a concrete implementation of our proposal on top of Spark, which
transparently decomposes and groups objects with similar lifetimes into byte
arrays and releases their space altogether when their lifetimes come to an end.
An extensive experimental study using both synthetic and real datasets shows
that, compared to Spark, Deca is able to 1) reduce the garbage collection
time by up to 99.9%, 2) achieve up to a 22.7x speedup in execution time in
cases without data spilling and a 41.6x speedup in cases with data spilling,
and 3) consume up to 46.6% less memory.
| no_new_dataset | 0.946597 |
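Deca's lifetime-based trick — packing many objects that expire together into a few flat byte arrays, so the garbage collector sees one object instead of millions — can be illustrated in Python with the struct module. A sketch of the idea only: Deca itself rewrites JVM object layouts inside Spark, and the (int64, float64) record format here is an invented example.

    import struct

    FMT = "<qd"                          # assumed record layout: int64 key, float64 value
    SIZE = struct.calcsize(FMT)

    class PackedBuffer:
        """Fixed-width records in one bytearray: the whole group is
        allocated, scanned, and released as a single unit."""
        def __init__(self, capacity):
            self.buf = bytearray(capacity * SIZE)
            self.n = 0

        def append(self, key, value):
            struct.pack_into(FMT, self.buf, self.n * SIZE, key, value)
            self.n += 1

        def get(self, i):
            return struct.unpack_from(FMT, self.buf, i * SIZE)

    buf = PackedBuffer(1000)
    for k in range(1000):
        buf.append(k, k * 0.5)
    print(buf.get(42))                   # (42, 21.0)

When the buffer's lifetime ends (say, at the end of a shuffle), dropping the single bytearray reclaims all records at once, which is exactly the pressure Deca removes from the collector.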
1604.06984 | Nan Zhu | Nan Zhu, Wenbo He, Xue Liu, Yu Hua | PFO: A Parallel Friendly High Performance System for Online Query and
Update of Nearest Neighbors | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nearest Neighbor(s) search is a fundamental computational primitive for
tackling massive datasets. Locality Sensitive Hashing (LSH) has been a bracing
tool for Nearest Neighbor(s) search in high dimensional spaces. However,
traditional LSH systems cannot be applied in online big data systems to handle
a large volume of query/update requests, because most of the systems optimize
the query efficiency with the assumption of infrequent updates and missing the
parallel-friendly design. As a result, the state-of-the-art LSH systems cannot
adapt the system response to the user behavior interactively.
In this paper, we propose a new LSH system called PFO. It handles
query/update requests in RAM and scales the system capacity by using flash
memory. To achieve high streaming data throughput, PFO adopts a
parallel-friendly indexing structure while preserving the distance between data
points. Further, it accommodates inbound data in real-time and dispatches
update requests intelligently to eliminate cross-thread synchronization.
We carried out extensive evaluations with large synthetic and standard
benchmark datasets. Results demonstrate that PFO delivers shorter latency and
offers scalable capacity compared with existing LSH systems. PFO sustains
higher throughput than the state-of-the-art LSH indexing structure when
handling online query/update requests for nearest neighbors. Meanwhile, PFO
returns neighbors of much better quality, making it well suited to online big
data applications, e.g., streaming recommendation systems and interactive
machine learning systems.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2016 05:08:27 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 23:16:52 GMT"
},
{
"version": "v3",
"created": "Sun, 22 May 2016 21:20:32 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Zhu",
"Nan",
""
],
[
"He",
"Wenbo",
""
],
[
"Liu",
"Xue",
""
],
[
"Hua",
"Yu",
""
]
] | TITLE: PFO: A Parallel Friendly High Performance System for Online Query and
Update of Nearest Neighbors
ABSTRACT: Nearest Neighbor(s) search is a fundamental computational primitive for
tackling massive datasets. Locality Sensitive Hashing (LSH) has been a bracing
tool for Nearest Neighbor(s) search in high dimensional spaces. However,
traditional LSH systems cannot be applied in online big data systems to handle
a large volume of query/update requests, because most of the systems optimize
the query efficiency with the assumption of infrequent updates and missing the
parallel-friendly design. As a result, the state-of-the-art LSH systems cannot
adapt the system response to the user behavior interactively.
In this paper, we propose a new LSH system called PFO. It handles
query/update requests in RAM and scales the system capacity by using flash
memory. To achieve high streaming data throughput, PFO adopts a
parallel-friendly indexing structure while preserving the distance between data
points. Further, it accommodates inbound data in real-time and dispatches
update requests intelligently to eliminate cross-thread synchronization.
We carried out extensive evaluations with large synthetic and standard
benchmark datasets. Results demonstrate that PFO delivers shorter latency and
offers scalable capacity compared with existing LSH systems. PFO sustains
higher throughput than the state-of-the-art LSH indexing structure when
handling online query/update requests for nearest neighbors. Meanwhile, PFO
returns neighbors of much better quality, making it well suited to online big
data applications, e.g., streaming recommendation systems and interactive
machine learning systems.
| no_new_dataset | 0.945045 |
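One standard way to realize the LSH indexing PFO builds on is signed random projection: points that fall on the same side of a set of random hyperplanes share a bucket. The sketch below is a single-table, in-memory toy under assumed parameters (64 dimensions, 16 hash bits); PFO's parallel-friendly, flash-backed structure and its update dispatching are much more elaborate.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    dim, n_bits = 64, 16
    planes = rng.standard_normal((n_bits, dim))      # random hyperplanes

    def signature(x):
        # One bit per hyperplane: which side of the plane x falls on.
        return ((planes @ x) > 0).tobytes()

    index = defaultdict(list)
    data = rng.standard_normal((10000, dim))
    for i, x in enumerate(data):                     # inserts (online updates work the same way)
        index[signature(x)].append(i)

    query = data[0] + 0.01 * rng.standard_normal(dim)
    candidates = index[signature(query)]             # only same-bucket points are scanned
    if candidates:
        best = min(candidates, key=lambda i: np.linalg.norm(data[i] - query))
        print(best)                                  # almost surely 0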
1605.06217 | Xiao Liu | Xiao Liu, Jiang Wang, Shilei Wen, Errui Ding, Yuanqing Lin | Localizing by Describing: Attribute-Guided Attention Localization for
Fine-Grained Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key challenge in fine-grained recognition is how to find and represent
discriminative local regions. Recent attention models are capable of learning
discriminative region localizers only from category labels with reinforcement
learning. However, not utilizing any explicit part information, they are not
able to accurately find multiple distinctive regions. In this work, we
introduce an attribute-guided attention localization scheme where the local
region localizers are learned under the guidance of part attribute
descriptions. By designing a novel reward strategy, we are able to learn to
locate regions that are spatially and semantically distinctive with a
reinforcement learning algorithm. The attribute labeling requirement of the
scheme is more amenable than the accurate part location annotation required by
traditional part-based fine-grained recognition methods. Experimental results
on the CUB-200-2011 dataset demonstrate the superiority of the proposed scheme
on both fine-grained recognition and attribute recognition.
| [
{
"version": "v1",
"created": "Fri, 20 May 2016 05:54:54 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 03:37:54 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Liu",
"Xiao",
""
],
[
"Wang",
"Jiang",
""
],
[
"Wen",
"Shilei",
""
],
[
"Ding",
"Errui",
""
],
[
"Lin",
"Yuanqing",
""
]
] | TITLE: Localizing by Describing: Attribute-Guided Attention Localization for
Fine-Grained Recognition
ABSTRACT: A key challenge in fine-grained recognition is how to find and represent
discriminative local regions. Recent attention models are capable of learning
discriminative region localizers only from category labels with reinforcement
learning. However, not utilizing any explicit part information, they are not
able to accurately find multiple distinctive regions. In this work, we
introduce an attribute-guided attention localization scheme where the local
region localizers are learned under the guidance of part attribute
descriptions. By designing a novel reward strategy, we are able to learn to
locate regions that are spatially and semantically distinctive with a
reinforcement learning algorithm. The attribute labeling requirement of the
scheme is more amenable than the accurate part location annotation required by
traditional part-based fine-grained recognition methods. Experimental results
on the CUB-200-2011 dataset demonstrate the superiority of the proposed scheme
on both fine-grained recognition and attribute recognition.
| no_new_dataset | 0.947575 |
1605.06597 | Shu Zhang | Shu Zhang, Qi Zhu, Amit Roy-Chowdhury | Adaptive Algorithm and Platform Selection for Visual Detection and
Tracking | 10 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer vision algorithms are known to be extremely sensitive to the
environmental conditions in which the data is captured, e.g., lighting
conditions and target density. Tuning of parameters or choosing a completely
new algorithm is often needed to achieve a certain performance level,
especially when computational resources are limited. In this paper,
we focus on this problem and propose a framework to adaptively select the
"best" algorithm-parameter combination and the computation platform under
performance and cost constraints at design time, and adapt the algorithms at
runtime based on real-time inputs. This necessitates developing a mechanism to
switch between different algorithms as the nature of the input video changes.
Our proposed algorithm calculates a similarity function between a test video
scenario and each training scenario, where the similarity calculation is based
on learning a manifold of image features that is shared by both the training
and test datasets. Similarity between training and test dataset indicates the
same algorithm can be applied to both of them and achieve similar performance.
We design a cost function with this similarity measure to find the most similar
training scenario to the test data. The "best" algorithm under a given platform
is obtained by selecting the algorithm with a specific parameter combination
that performs the best on the corresponding training data. The proposed
framework can be used first offline to choose the platform based on performance
and cost constraints, and then online whereby the "best" algorithm is selected
for each new incoming video segment for a given platform. In the experiments,
we apply our algorithm to the problems of pedestrian detection and tracking. We
show how to adaptively select platforms and algorithm-parameter combinations.
Our results provide optimal performance on 3 publicly available datasets.
| [
{
"version": "v1",
"created": "Sat, 21 May 2016 06:58:02 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Zhang",
"Shu",
""
],
[
"Zhu",
"Qi",
""
],
[
"Roy-Chowdhury",
"Amit",
""
]
] | TITLE: Adaptive Algorithm and Platform Selection for Visual Detection and
Tracking
ABSTRACT: Computer vision algorithms are known to be extremely sensitive to the
environmental conditions in which the data is captured, e.g., lighting
conditions and target density. Tuning of parameters or choosing a completely
new algorithm is often needed to achieve a certain performance level,
especially when computational resources are limited. In this paper,
we focus on this problem and propose a framework to adaptively select the
"best" algorithm-parameter combination and the computation platform under
performance and cost constraints at design time, and adapt the algorithms at
runtime based on real-time inputs. This necessitates developing a mechanism to
switch between different algorithms as the nature of the input video changes.
Our proposed algorithm calculates a similarity function between a test video
scenario and each training scenario, where the similarity calculation is based
on learning a manifold of image features that is shared by both the training
and test datasets. Similarity between training and test dataset indicates the
same algorithm can be applied to both of them and achieve similar performance.
We design a cost function with this similarity measure to find the most similar
training scenario to the test data. The "best" algorithm under a given platform
is obtained by selecting the algorithm with a specific parameter combination
that performs the best on the corresponding training data. The proposed
framework can be used first offline to choose the platform based on performance
and cost constraints, and then online whereby the "best" algorithm is selected
for each new incoming video segment for a given platform. In the experiments,
we apply our algorithm to the problems of pedestrian detection and tracking. We
show how to adaptively select platforms and algorithm-parameter combinations.
Our results provide optimal performance on 3 publicly available datasets.
| no_new_dataset | 0.949389 |
1605.06695 | Xingchao Peng | Xingchao Peng, Judy Hoffman, Stella X. Yu, Kate Saenko | Fine-to-coarse Knowledge Transfer For Low-Res Image Classification | 5 pages, accepted by ICIP 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the difficult problem of distinguishing fine-grained object
categories in low resolution images. We propose a simple and effective deep
learning approach that transfers fine-grained knowledge gained from high
resolution training data to the coarse low-resolution test scenario. Such
fine-to-coarse knowledge transfer has many real world applications, such as
identifying objects in surveillance photos or satellite images where the image
resolution at the test time is very low but plenty of high resolution photos of
similar objects are available. Our extensive experiments on two standard
benchmark datasets containing fine-grained car models and bird species
demonstrate that our approach can effectively transfer fine-detail knowledge to
coarse-detail imagery.
| [
{
"version": "v1",
"created": "Sat, 21 May 2016 20:08:53 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Peng",
"Xingchao",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Yu",
"Stella X.",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Fine-to-coarse Knowledge Transfer For Low-Res Image Classification
ABSTRACT: We address the difficult problem of distinguishing fine-grained object
categories in low resolution images. Wepropose a simple an effective deep
learning approach that transfers fine-grained knowledge gained from high
resolution training data to the coarse low-resolution test scenario. Such
fine-to-coarse knowledge transfer has many real world applications, such as
identifying objects in surveillance photos or satellite images where the image
resolution at the test time is very low but plenty of high resolution photos of
similar objects are available. Our extensive experiments on two standard
benchmark datasets containing fine-grained car models and bird species
demonstrate that our approach can effectively transfer fine-detail knowledge to
coarse-detail imagery.
| no_new_dataset | 0.95297 |
1605.06820 | Hamid Tizhoosh | Fares Al-Qunaieer, Hamid R. Tizhoosh, Shahryar Rahnamayan | Automated Resolution Selection for Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well-known in image processing that computational cost increases
rapidly with the number and dimensions of the images to be processed. Several
fields, such as medical imaging, routinely use numerous very large images,
which might also be 3D and/or captured at several frequency bands, all adding
to the computational expense. Multiresolution analysis is a method of
increasing the efficiency of the segmentation process. One multiresolution
approach is the coarse-to-fine segmentation strategy, whereby the segmentation
starts at a coarse resolution and is then fine-tuned during subsequent steps.
The starting resolution for segmentation is generally selected arbitrarily with
no clear selection criteria. The research reported in this paper showed that
starting from different resolutions for image segmentation results in different
accuracies and computational times, even for images of the same category
(depicting similar scenes or objects). An automated method for resolution
selection for an input image would thus be beneficial. This paper introduces a
framework for the automated selection of the best resolution for image
segmentation. We propose a measure for defining the best resolution based on
user/system criteria, offering a trade-off between accuracy and computation
time. A learning approach is then introduced for the selection of the
resolution, whereby extracted image features are mapped to the previously
determined best resolution. In the learning process, class (i.e., resolution)
distribution is generally imbalanced, making effective learning from the data
difficult. Experiments conducted with three datasets using two different
segmentation algorithms show that the resolutions selected through learning
enable much faster segmentation than the original ones, while retaining at
least the original accuracy.
| [
{
"version": "v1",
"created": "Sun, 22 May 2016 17:09:29 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Al-Qunaieer",
"Fares",
""
],
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Rahnamayan",
"Shahryar",
""
]
] | TITLE: Automated Resolution Selection for Image Segmentation
ABSTRACT: It is well-known in image processing that computational cost increases
rapidly with the number and dimensions of the images to be processed. Several
fields, such as medical imaging, routinely use numerous very large images,
which might also be 3D and/or captured at several frequency bands, all adding
to the computational expense. Multiresolution analysis is a method of
increasing the efficiency of the segmentation process. One multiresolution
approach is the coarse-to-fine segmentation strategy, whereby the segmentation
starts at a coarse resolution and is then fine-tuned during subsequent steps.
The starting resolution for segmentation is generally selected arbitrarily with
no clear selection criteria. The research reported in this paper showed that
starting from different resolutions for image segmentation results in different
accuracies and computational times, even for images of the same category
(depicting similar scenes or objects). An automated method for resolution
selection for an input image would thus be beneficial. This paper introduces a
framework for the automated selection of the best resolution for image
segmentation. We propose a measure for defining the best resolution based on
user/system criteria, offering a trade-off between accuracy and computation
time. A learning approach is then introduced for the selection of the
resolution, whereby extracted image features are mapped to the previously
determined best resolution. In the learning process, class (i.e., resolution)
distribution is generally imbalanced, making effective learning from the data
difficult. Experiments conducted with three datasets using two different
segmentation algorithms show that the resolutions selected through learning
enable much faster segmentation than the original ones, while retaining at
least the original accuracy.
| no_new_dataset | 0.952882 |
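The learned resolution selector described above boils down to a supervised mapping from cheap image features to the offline-determined best starting resolution. A schematic 1-nearest-neighbour version on invented feature vectors follows; the paper's actual features, candidate resolutions, and class-imbalance handling are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    # Invented training set: one 8-d feature vector per image, plus the
    # resolution level (0 = coarsest) found best for it offline.
    train_feats = rng.standard_normal((200, 8))
    best_res = rng.integers(0, 4, size=200)

    def select_resolution(query_feats):
        # 1-NN rule: reuse the best resolution of the most similar training image.
        d = np.linalg.norm(train_feats - query_feats, axis=1)
        return int(best_res[np.argmin(d)])

    print(select_resolution(rng.standard_normal(8)))

Any classifier can replace the 1-NN rule; the point is only that the selector must be cheap relative to the segmentation it is steering.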
1605.06885 | Chunhua Shen | Zifeng Wu, Chunhua Shen, Anton van den Hengel | Bridging Category-level and Instance-level Semantic Image Segmentation | 14 pages. arXiv admin note: substantial text overlap with
arXiv:1604.04339 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach to instance-level image segmentation that is built on
top of category-level segmentation. Specifically, for each pixel in a semantic
category mask, its corresponding instance bounding box is predicted using a
deep fully convolutional regression network. Thus it follows a different
pipeline to the popular detect-then-segment approaches that first predict
instances' bounding boxes, which are the current state-of-the-art in instance
segmentation. We show that, by leveraging the strength of our state-of-the-art
semantic segmentation models, the proposed method can achieve comparable or
even better results to detect-then-segment approaches. We make the following
contributions. (i) First, we propose a simple yet effective approach to
semantic instance segmentation. (ii) Second, we propose an online bootstrapping
method during training, which is critically important for achieving good
performance for both semantic category segmentation and instance-level
segmentation. (iii) As the performance of semantic category segmentation has a
significant impact on the instance-level segmentation, which is the second step
of our approach, we train fully convolutional residual networks to achieve the
best semantic category segmentation accuracy. On the PASCAL VOC 2012 dataset,
we obtain the currently best mean intersection-over-union score of 79.1%. (iv)
We also achieve state-of-the-art results for instance-level segmentation.
| [
{
"version": "v1",
"created": "Mon, 23 May 2016 03:43:00 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Wu",
"Zifeng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Bridging Category-level and Instance-level Semantic Image Segmentation
ABSTRACT: We propose an approach to instance-level image segmentation that is built on
top of category-level segmentation. Specifically, for each pixel in a semantic
category mask, its corresponding instance bounding box is predicted using a
deep fully convolutional regression network. Thus it follows a different
pipeline to the popular detect-then-segment approaches that first predict
instances' bounding boxes, which are the current state-of-the-art in instance
segmentation. We show that, by leveraging the strength of our state-of-the-art
semantic segmentation models, the proposed method can achieve comparable or
even better results to detect-then-segment approaches. We make the following
contributions. (i) First, we propose a simple yet effective approach to
semantic instance segmentation. (ii) Second, we propose an online bootstrapping
method during training, which is critically important for achieving good
performance for both semantic category segmentation and instance-level
segmentation. (iii) As the performance of semantic category segmentation has a
significant impact on the instance-level segmentation, which is the second step
of our approach, we train fully convolutional residual networks to achieve the
best semantic category segmentation accuracy. On the PASCAL VOC 2012 dataset,
we obtain the currently best mean intersection-over-union score of 79.1%. (iv)
We also achieve state-of-the-art results for instance-level segmentation.
| no_new_dataset | 0.950365 |
1605.07104 | Ran Tao | Ran Tao, Arnold W.M. Smeulders, Shih-Fu Chang | Generic Instance Search and Re-identification from One Example via
Attributes and Categories | This technical report is an extended version of our previous
conference paper 'Attributes and Categories for Generic Instance Search from
One Example' (CVPR 2015) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims for generic instance search from one example where the
instance can be an arbitrary object like shoes, not just near-planar and
one-sided instances like buildings and logos. First, we evaluate
state-of-the-art instance search methods on this problem. We observe that what
works for buildings loses its generality on shoes. Second, we propose to use
automatically learned category-specific attributes to address the large
appearance variations present in generic instance search. Searching among
instances from the same category as the query, the category-specific attributes
outperform existing approaches by a large margin on shoes and cars and perform
on par with the state-of-the-art on buildings. Third, we treat person
re-identification as a special case of generic instance search. On the popular
VIPeR dataset, we reach state-of-the-art performance with the same method.
Fourth, we extend our method to search objects without restriction to the
specifically known category. We show that the combination of category-level
information and the category-specific attributes is superior to the alternative
method combining category-level information with low-level features such as
Fisher vector.
| [
{
"version": "v1",
"created": "Mon, 23 May 2016 17:25:40 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Tao",
"Ran",
""
],
[
"Smeulders",
"Arnold W. M.",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: Generic Instance Search and Re-identification from One Example via
Attributes and Categories
ABSTRACT: This paper aims for generic instance search from one example where the
instance can be an arbitrary object like shoes, not just near-planar and
one-sided instances like buildings and logos. First, we evaluate
state-of-the-art instance search methods on this problem. We observe that what
works for buildings loses its generality on shoes. Second, we propose to use
automatically learned category-specific attributes to address the large
appearance variations present in generic instance search. Searching among
instances from the same category as the query, the category-specific attributes
outperform existing approaches by a large margin on shoes and cars and perform
on par with the state-of-the-art on buildings. Third, we treat person
re-identification as a special case of generic instance search. On the popular
VIPeR dataset, we reach state-of-the-art performance with the same method.
Fourth, we extend our method to search objects without restriction to the
specifically known category. We show that the combination of category-level
information and the category-specific attributes is superior to the alternative
method combining category-level information with low-level features such as
Fisher vector.
| no_new_dataset | 0.951188 |
1605.07154 | Behnam Neyshabur | Behnam Neyshabur, Yuhuai Wu, Ruslan Salakhutdinov, Nathan Srebro | Path-Normalized Optimization of Recurrent Neural Networks with ReLU
Activations | 15 pages | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the parameter-space geometry of recurrent neural networks
(RNNs), and develop an adaptation of the path-SGD optimization method, attuned to
this geometry, that can learn plain RNNs with ReLU activations. On several
datasets that require capturing long-term dependency structure, we show that
path-SGD can significantly improve trainability of ReLU RNNs compared to RNNs
trained with SGD, even with various recently suggested initialization schemes.
| [
{
"version": "v1",
"created": "Mon, 23 May 2016 19:40:50 GMT"
}
] | 2016-05-24T00:00:00 | [
[
"Neyshabur",
"Behnam",
""
],
[
"Wu",
"Yuhuai",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Srebro",
"Nathan",
""
]
] | TITLE: Path-Normalized Optimization of Recurrent Neural Networks with ReLU
Activations
ABSTRACT: We investigate the parameter-space geometry of recurrent neural networks
(RNNs), and develop an adaptation of the path-SGD optimization method, attuned to
this geometry, that can learn plain RNNs with ReLU activations. On several
datasets that require capturing long-term dependency structure, we show that
path-SGD can significantly improve trainability of ReLU RNNs compared to RNNs
trained with SGD, even with various recently suggested initialization schemes.
| no_new_dataset | 0.949809 |
1505.03540 | Mohammad Havaei | Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron
Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, Hugo Larochelle | Brain Tumor Segmentation with Deep Neural Networks | null | null | 10.1016/j.media.2016.05.004 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a fully automatic brain tumor segmentation method
based on Deep Neural Networks (DNNs). The proposed networks are tailored to
glioblastomas (both low and high grade) pictured in MR images. By their very
nature, these tumors can appear anywhere in the brain and have almost any kind
of shape, size, and contrast. These reasons motivate our exploration of a
machine learning solution that exploits a flexible, high capacity DNN while
being extremely efficient. Here, we give a description of different model
choices that we've found to be necessary for obtaining competitive performance.
We explore in particular different architectures based on Convolutional Neural
Networks (CNN), i.e. DNNs specifically adapted to image data.
We present a novel CNN architecture which differs from those traditionally
used in computer vision. Our CNN exploits both local features as well as more
global contextual features simultaneously. Also, different from most
traditional uses of CNNs, our networks use a final layer that is a
convolutional implementation of a fully connected layer, which allows a
40-fold speedup. We also describe a 2-phase training procedure that allows us to
tackle difficulties related to the imbalance of tumor labels. Finally, we
explore a cascade architecture in which the output of a basic CNN is treated as
an additional source of information for a subsequent CNN. Results reported on
the 2013 BRATS test dataset reveal that our architecture improves over the
currently published state-of-the-art while being over 30 times faster.
| [
{
"version": "v1",
"created": "Wed, 13 May 2015 20:06:21 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Oct 2015 17:37:02 GMT"
},
{
"version": "v3",
"created": "Fri, 20 May 2016 06:30:23 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Havaei",
"Mohammad",
""
],
[
"Davy",
"Axel",
""
],
[
"Warde-Farley",
"David",
""
],
[
"Biard",
"Antoine",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Pal",
"Chris",
""
],
[
"Jodoin",
"Pierre-Marc",
""
],
[
"Larochelle",
"Hugo",
""
]
] | TITLE: Brain Tumor Segmentation with Deep Neural Networks
ABSTRACT: In this paper, we present a fully automatic brain tumor segmentation method
based on Deep Neural Networks (DNNs). The proposed networks are tailored to
glioblastomas (both low and high grade) pictured in MR images. By their very
nature, these tumors can appear anywhere in the brain and have almost any kind
of shape, size, and contrast. These reasons motivate our exploration of a
machine learning solution that exploits a flexible, high capacity DNN while
being extremely efficient. Here, we give a description of different model
choices that we've found to be necessary for obtaining competitive performance.
We explore in particular different architectures based on Convolutional Neural
Networks (CNN), i.e. DNNs specifically adapted to image data.
We present a novel CNN architecture which differs from those traditionally
used in computer vision. Our CNN exploits both local features as well as more
global contextual features simultaneously. Also, different from most
traditional uses of CNNs, our networks use a final layer that is a
convolutional implementation of a fully connected layer, which allows a
40-fold speedup. We also describe a 2-phase training procedure that allows us to
tackle difficulties related to the imbalance of tumor labels. Finally, we
explore a cascade architecture in which the output of a basic CNN is treated as
an additional source of information for a subsequent CNN. Results reported on
the 2013 BRATS test dataset reveal that our architecture improves over the
currently published state-of-the-art while being over 30 times faster.
| no_new_dataset | 0.947672 |
1602.02261 | Rodrigo Nogueira | Rodrigo Nogueira and Kyunghyun Cho | End-to-End Goal-Driven Web Navigation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a goal-driven web navigation as a benchmark task for evaluating an
agent with abilities to understand natural language and plan on partially
observed environments. In this challenging task, an agent navigates through a
website, which is represented as a graph consisting of web pages as nodes and
hyperlinks as directed edges, to find a web page in which a query appears. The
agent is required to have sophisticated high-level reasoning based on natural
languages and efficient sequential decision-making capability to succeed. We
release a software tool, called WebNav, that automatically transforms a website
into this goal-driven web navigation task, and as an example, we make WikiNav,
a dataset constructed from the English Wikipedia. We extensively evaluate
different variants of neural net based artificial agents on WikiNav and observe
that the proposed goal-driven web navigation well reflects the advances in
models, making it a suitable benchmark for evaluating future progress.
Furthermore, we extend the WikiNav with question-answer pairs from Jeopardy!
and test the proposed agent based on recurrent neural networks against strong
inverted index based search engines. The artificial agents trained on WikiNav
outperform the engine-based approaches, demonstrating the capability of the
proposed goal-driven navigation as a good proxy for measuring the progress in
real-world tasks such as focused crawling and question-answering.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 14:53:02 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 16:26:58 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Nogueira",
"Rodrigo",
""
],
[
"Cho",
"Kyunghyun",
""
]
] | TITLE: End-to-End Goal-Driven Web Navigation
ABSTRACT: We propose a goal-driven web navigation as a benchmark task for evaluating an
agent with abilities to understand natural language and plan on partially
observed environments. In this challenging task, an agent navigates through a
website, which is represented as a graph consisting of web pages as nodes and
hyperlinks as directed edges, to find a web page in which a query appears. The
agent is required to have sophisticated high-level reasoning based on natural
languages and efficient sequential decision-making capability to succeed. We
release a software tool, called WebNav, that automatically transforms a website
into this goal-driven web navigation task, and as an example, we make WikiNav,
a dataset constructed from the English Wikipedia. We extensively evaluate
different variants of neural net based artificial agents on WikiNav and observe
that the proposed goal-driven web navigation well reflects the advances in
models, making it a suitable benchmark for evaluating future progress.
Furthermore, we extend the WikiNav with question-answer pairs from Jeopardy!
and test the proposed agent based on recurrent neural networks against strong
inverted index based search engines. The artificial agents trained on WikiNav
outperform the engine-based approaches, demonstrating the capability of the
proposed goal-driven navigation as a good proxy for measuring the progress in
real-world tasks such as focused crawling and question-answering.
| new_dataset | 0.976535 |
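A useful mental model for the WikiNav task is a greedy lexical agent: at each page, follow the outgoing link whose text shares the most words with the query, stopping when the query is covered by the current page. A toy sketch with an invented four-page graph; it is a weak baseline, not the paper's recurrent agent.

    def navigate(graph, texts, start, query, max_steps=10):
        """Greedy baseline for goal-driven navigation on a page graph."""
        q = set(query.lower().split())
        page = start
        for _ in range(max_steps):
            if q <= set(texts[page].lower().split()):
                return page                            # query found on this page
            links = graph.get(page, [])
            if not links:
                break
            # Follow the link whose page text overlaps the query the most.
            page = max(links, key=lambda p: len(q & set(texts[p].lower().split())))
        return None

    graph = {"home": ["cats", "dogs"], "cats": ["siamese"], "dogs": []}
    texts = {"home": "animal portal", "cats": "cats overview",
             "dogs": "dogs overview", "siamese": "siamese cats from thailand"}
    print(navigate(graph, texts, "home", "siamese cats"))   # -> "siamese"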
1605.05573 | Xipeng Qiu | Pengfei Liu, Xipeng Qiu, Xuanjing Huang | Modelling Interaction of Sentence Pair with coupled-LSTMs | Submitted to IJCAI 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been rising interest in modelling the interactions of two
sentences with deep neural networks. However, most of the existing methods
encode two sequences with separate encoders, in which a sentence is encoded
with little or no information from the other sentence. In this paper, we
propose a deep architecture to model the strong interaction of sentence pair
with two coupled-LSTMs. Specifically, we introduce two coupled ways to model
the interdependences of two LSTMs, coupling the local contextualized
interactions of two sentences. We then aggregate these interactions and use a
dynamic pooling to select the most informative features. Experiments on two
very large datasets demonstrate the efficacy of our proposed architecture and
its superiority to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 13:33:21 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 01:28:43 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Liu",
"Pengfei",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: Modelling Interaction of Sentence Pair with coupled-LSTMs
ABSTRACT: Recently, there has been rising interest in modelling the interactions of two
sentences with deep neural networks. However, most of the existing methods
encode two sequences with separate encoders, in which a sentence is encoded
with little or no information from the other sentence. In this paper, we
propose a deep architecture to model the strong interaction of sentence pair
with two coupled-LSTMs. Specifically, we introduce two coupled ways to model
the interdependences of two LSTMs, coupling the local contextualized
interactions of two sentences. We then aggregate these interactions and use a
dynamic pooling to select the most informative features. Experiments on two
very large datasets demonstrate the efficacy of our proposed architecture and
its superiority to state-of-the-art methods.
| no_new_dataset | 0.9455 |
1605.06143 | Philip Derbeko | Philip Derbeko, Shlomi Dolev, Ehud Gudes, Jeffrey D. Ullman | Efficient and Private Approximations of Distributed Databases
Calculations | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, an increasing amount of data is collected in different and
often non-cooperating databases. The problem of privacy-preserving
distributed calculations over separate databases and, related to it, the
issue of private data release have been intensively investigated. However,
despite considerable progress, computational complexity, due to the
increasing size of data, remains a limiting factor in real-world
deployments, especially in the case of privacy-preserving computations.
In this paper, we present a general method for trading off performance
against accuracy in distributed calculations by performing data sampling.
Sampling has been a topic of extensive research that recently received a
boost of interest.
We provide a sampling method targeted at separate, non-collaborating,
vertically partitioned datasets. The method is exemplified and tested on
approximating the intersection set, both without and with a
privacy-preserving mechanism. An analysis of the error bound as a function
of the sample size is discussed, and a heuristic algorithm is suggested to
further improve the performance. The algorithms were implemented and experimental results confirm
the validity of the approach.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 21:11:01 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Derbeko",
"Philip",
""
],
[
"Dolev",
"Shlomi",
""
],
[
"Gudes",
"Ehud",
""
],
[
"Ullman",
"Jeffrey D.",
""
]
] | TITLE: Efficient and Private Approximations of Distributed Databases
Calculations
ABSTRACT: In recent years, an increasing amount of data is collected in different and
often non-cooperating databases. The problem of privacy-preserving
distributed calculations over separate databases and, related to it, the
issue of private data release have been intensively investigated. However,
despite considerable progress, computational complexity, due to the
increasing size of data, remains a limiting factor in real-world
deployments, especially in the case of privacy-preserving computations.
In this paper, we present a general method for trading off performance
against accuracy in distributed calculations by performing data sampling.
Sampling has been a topic of extensive research that recently received a
boost of interest.
We provide a sampling method targeted at separate, non-collaborating,
vertically partitioned datasets. The method is exemplified and tested on
approximating the intersection set, both without and with a
privacy-preserving mechanism. An analysis of the error bound as a function
of the sample size is discussed, and a heuristic algorithm is suggested to
further improve the performance. The algorithms were implemented and experimental results confirm
the validity of the approach.
| no_new_dataset | 0.943971 |
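The flavor of the sampled intersection approximation can be shown with coordinated hash-threshold sampling: each party independently keeps the keys whose hash falls below p, and because both use the same hash, the intersection of the samples is exactly a p-sample of the true intersection. A minimal sketch without the paper's privacy mechanism; the hash-threshold scheme is one standard choice, not necessarily theirs.

    import hashlib

    def sample(keys, p):
        # Keep a key iff its hash, mapped into [0, 1), is below p.
        kept = set()
        for k in keys:
            h = int(hashlib.sha256(str(k).encode()).hexdigest(), 16)
            if h / 2.0**256 < p:
                kept.add(k)
        return kept

    A = set(range(0, 60000))
    B = set(range(40000, 100000))        # true intersection size: 20000
    p = 0.05
    estimate = len(sample(A, p) & sample(B, p)) / p
    print(estimate)                      # close to 20000 in expectation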
1605.06167 | Shahin Mohammadi | Abram Magner, Shahin Mohammadi, Ananth Grama | Combining Density and Overlap (CoDO): A New Method for Assessing the
Significance of Overlap Among Subgraphs | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms for detecting clusters (including overlapping clusters) in graphs
have received significant attention in the research community. A closely
related and important aspect of the problem -- quantifying the statistical
significance of cluster overlap -- remains relatively unexplored. This paper
presents the first theoretical and practical results on quantifying
statistically significant interactions between clusters in networks. Such
problems commonly arise in diverse applications, ranging from social network
analysis to systems biology. The paper addresses the problem of quantifying the
statistical significance of the observed overlap of the two clusters in an
Erd\H{o}s-R\'enyi graph model. The analytical framework presented in the paper
assigns a $p$-value to overlapping subgraphs by combining information about
both the sizes of the subgraphs and their edge densities in comparison to the
corresponding values for their overlapping component. This $p$-value is
demonstrated to have excellent discrimination properties in real applications
and is shown to be robust across broad parameter ranges.
Our results are comprehensively validated on synthetic, social, and
biological networks. We show that our framework: (i) derives insight from both
the density and the size of overlap among communities (circles/pathways), (ii)
consistently outperforms state-of-the-art methods over all tested datasets, and
(iii) when compared to other measures, has much broader application scope. In
the context of social networks, we identify highly interdependent (social)
circles and show that our predictions are highly co-enriched with known user
features. In networks of biomolecular interactions, we show that our method
identifies novel cross-talk between pathways, sheds light on their mechanisms
of interaction, and provides new opportunities for investigations of
biomolecular interactions.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 22:24:26 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Magner",
"Abram",
""
],
[
"Mohammadi",
"Shahin",
""
],
[
"Grama",
"Ananth",
""
]
] | TITLE: Combining Density and Overlap (CoDO): A New Method for Assessing the
Significance of Overlap Among Subgraphs
ABSTRACT: Algorithms for detecting clusters (including overlapping clusters) in graphs
have received significant attention in the research community. A closely
related and important aspect of the problem -- quantifying the statistical
significance of cluster overlap -- remains relatively unexplored. This paper
presents the first theoretical and practical results on quantifying
statistically significant interactions between clusters in networks. Such
problems commonly arise in diverse applications, ranging from social network
analysis to systems biology. The paper addresses the problem of quantifying the
statistical significance of the observed overlap of the two clusters in an
Erd\H{o}s-R\'enyi graph model. The analytical framework presented in the paper
assigns a $p$-value to overlapping subgraphs by combining information about
both the sizes of the subgraphs and their edge densities in comparison to the
corresponding values for their overlapping component. This $p$-value is
demonstrated to have excellent discrimination properties in real applications
and is shown to be robust across broad parameter ranges.
Our results are comprehensively validated on synthetic, social, and
biological networks. We show that our framework: (i) derives insight from both
the density and the size of overlap among communities (circles/pathways), (ii)
consistently outperforms state-of-the-art methods over all tested datasets, and
(iii) when compared to other measures, has much broader application scope. In
the context of social networks, we identify highly interdependent (social)
circles and show that our predictions are highly co-enriched with known user
features. In networks of biomolecular interactions, we show that our method
identifies novel cross-talk between pathways, sheds light on their mechanisms
of interaction, and provides new opportunities for investigations of
biomolecular interactions.
| no_new_dataset | 0.95096 |
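For contrast with CoDO, the classical size-only baseline is a hypergeometric tail test: how surprising is the observed overlap if one community were a uniformly random subset of the nodes? The sketch below implements that baseline with scipy; CoDO's statistic additionally folds in the edge densities of the two subgraphs and their overlap, which this code does not attempt.

    from scipy.stats import hypergeom

    def overlap_pvalue(n_nodes, size_a, size_b, overlap):
        """P[overlap >= observed] under the null that community B is a
        uniformly random size_b subset of n_nodes nodes, size_a of which
        belong to community A."""
        return hypergeom.sf(overlap - 1, n_nodes, size_a, size_b)

    # Two communities of 50 and 60 nodes in a 1000-node graph sharing 20 nodes.
    print(overlap_pvalue(1000, 50, 60, 20))   # tiny p-value: significant overlap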
1605.06177 | David Hall | David Hall and Pietro Perona | Fine-Grained Classification of Pedestrians in Video: Benchmark and State
of the Art | CVPR 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A video dataset that is designed to study fine-grained categorisation of
pedestrians is introduced. Pedestrians were recorded "in-the-wild" from a
moving vehicle. Annotations include bounding boxes, tracks, 14 keypoints with
occlusion information and the fine-grained categories of age (5 classes), sex
(2 classes), weight (3 classes) and clothing style (4 classes). There are a
total of 27,454 bounding box and pose labels across 4222 tracks. This dataset
is designed to train and test algorithms for fine-grained categorisation of
people; it is also useful for benchmarking tracking, detection and pose
estimation of pedestrians. State-of-the-art algorithms for fine-grained
classification and pose estimation were tested using the dataset and the
results are reported as a useful performance baseline.
| [
{
"version": "v1",
"created": "Fri, 20 May 2016 00:03:42 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Hall",
"David",
""
],
[
"Perona",
"Pietro",
""
]
] | TITLE: Fine-Grained Classification of Pedestrians in Video: Benchmark and State
of the Art
ABSTRACT: A video dataset that is designed to study fine-grained categorisation of
pedestrians is introduced. Pedestrians were recorded "in-the-wild" from a
moving vehicle. Annotations include bounding boxes, tracks, 14 keypoints with
occlusion information and the fine-grained categories of age (5 classes), sex
(2 classes), weight (3 classes) and clothing style (4 classes). There are a
total of 27,454 bounding box and pose labels across 4222 tracks. This dataset
is designed to train and test algorithms for fine-grained categorisation of
people; it is also useful for benchmarking tracking, detection and pose
estimation of pedestrians. State-of-the-art algorithms for fine-grained
classification and pose estimation were tested using the dataset and the
results are reported as a useful performance baseline.
| new_dataset | 0.957636 |
1605.06457 | Adrien Gaidon | Adrien Gaidon, Qiao Wang, Yohann Cabon, Eleonora Vig | Virtual Worlds as Proxy for Multi-Object Tracking Analysis | CVPR 2016, Virtual KITTI dataset download at
http://www.xrce.xerox.com/Research-Development/Computer-Vision/Proxy-Virtual-Worlds | null | null | null | cs.CV cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern computer vision algorithms typically require expensive data
acquisition and accurate manual labeling. In this work, we instead leverage the
recent progress in computer graphics to generate fully labeled, dynamic, and
photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual
world cloning method, and validate our approach by building and publicly
releasing a new video dataset, called Virtual KITTI (see
http://www.xrce.xerox.com/Research-Development/Computer-Vision/Proxy-Virtual-Worlds),
automatically labeled with accurate ground truth for object detection,
tracking, scene and instance segmentation, depth, and optical flow. We provide
quantitative experimental evidence suggesting that (i) modern deep learning
algorithms pre-trained on real data behave similarly in real and virtual
worlds, and (ii) pre-training on virtual data improves performance. As the gap
between real and virtual worlds is small, virtual worlds enable measuring the
impact of various weather and imaging conditions on recognition performance,
all other things being equal. We show these factors may affect drastically
otherwise high-performing deep models for tracking.
| [
{
"version": "v1",
"created": "Fri, 20 May 2016 18:03:07 GMT"
}
] | 2016-05-23T00:00:00 | [
[
"Gaidon",
"Adrien",
""
],
[
"Wang",
"Qiao",
""
],
[
"Cabon",
"Yohann",
""
],
[
"Vig",
"Eleonora",
""
]
] | TITLE: Virtual Worlds as Proxy for Multi-Object Tracking Analysis
ABSTRACT: Modern computer vision algorithms typically require expensive data
acquisition and accurate manual labeling. In this work, we instead leverage the
recent progress in computer graphics to generate fully labeled, dynamic, and
photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual
world cloning method, and validate our approach by building and publicly
releasing a new video dataset, called Virtual KITTI (see
http://www.xrce.xerox.com/Research-Development/Computer-Vision/Proxy-Virtual-Worlds),
automatically labeled with accurate ground truth for object detection,
tracking, scene and instance segmentation, depth, and optical flow. We provide
quantitative experimental evidence suggesting that (i) modern deep learning
algorithms pre-trained on real data behave similarly in real and virtual
worlds, and (ii) pre-training on virtual data improves performance. As the gap
between real and virtual worlds is small, virtual worlds enable measuring the
impact of various weather and imaging conditions on recognition performance,
all other things being equal. We show these factors may affect drastically
otherwise high-performing deep models for tracking.
| new_dataset | 0.956594 |
1311.3646 | Urs Niesen | Ramtin Pedarsani, Mohammad Ali Maddah-Ali, Urs Niesen | Online Coded Caching | 15 pages | IEEE/ACM Transactions on Networking, vol. 24, pp. 836 - 845, April
2016 | null | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a basic content distribution scenario consisting of a single
origin server connected through a shared bottleneck link to a number of users
each equipped with a cache of finite memory. The users issue a sequence of
content requests from a set of popular files, and the goal is to operate the
caches as well as the server such that these requests are satisfied with the
minimum number of bits sent over the shared link. Assuming a basic Markov model
for renewing the set of popular files, we characterize approximately the
optimal long-term average rate of the shared link. We further prove that the
optimal online scheme has approximately the same performance as the optimal
offline scheme, in which the cache contents can be updated based on the entire
set of popular files before each new request. To support these theoretical
results, we propose an online coded caching scheme termed coded least-recently
sent (LRS) and simulate it for a demand time series derived from the dataset
made available by Netflix for the Netflix Prize. For this time series, we show
that the proposed coded LRS algorithm significantly outperforms the popular
least-recently used (LRU) caching algorithm.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2013 20:25:32 GMT"
}
] | 2016-05-20T00:00:00 | [
[
"Pedarsani",
"Ramtin",
""
],
[
"Maddah-Ali",
"Mohammad Ali",
""
],
[
"Niesen",
"Urs",
""
]
] | TITLE: Online Coded Caching
ABSTRACT: We consider a basic content distribution scenario consisting of a single
origin server connected through a shared bottleneck link to a number of users
each equipped with a cache of finite memory. The users issue a sequence of
content requests from a set of popular files, and the goal is to operate the
caches as well as the server such that these requests are satisfied with the
minimum number of bits sent over the shared link. Assuming a basic Markov model
for renewing the set of popular files, we characterize approximately the
optimal long-term average rate of the shared link. We further prove that the
optimal online scheme has approximately the same performance as the optimal
offline scheme, in which the cache contents can be updated based on the entire
set of popular files before each new request. To support these theoretical
results, we propose an online coded caching scheme termed coded least-recently
sent (LRS) and simulate it for a demand time series derived from the dataset
made available by Netflix for the Netflix Prize. For this time series, we show
that the proposed coded LRS algorithm significantly outperforms the popular
least-recently used (LRU) caching algorithm.
| no_new_dataset | 0.943556 |
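The LRU baseline against which coded LRS is compared has a two-line core. A minimal single-cache sketch using OrderedDict; the coded LRS scheme, which additionally codes transmissions across users' caches, is not reproduced here.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def request(self, file_id):
            if file_id in self.store:
                self.store.move_to_end(file_id)    # mark as most recently used
                return "hit"
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)     # evict the least recently used
            self.store[file_id] = True
            return "miss"

    cache = LRUCache(capacity=3)
    for f in [1, 2, 3, 1, 4, 2]:
        print(f, cache.request(f))                 # request 4 evicts 2, so the last request misses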
1510.01344 | Mohammad Havaei | Mohammad Havaei, Hugo Larochelle, Philippe Poulin, Pierre-Marc Jodoin | Within-Brain Classification for Brain Tumor Segmentation | null | null | 10.1007/s11548-015-1311-1 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: In this paper, we investigate a framework for interactive brain
tumor segmentation which, at its core, treats the problem of interactive brain
tumor segmentation as a machine learning problem.
Methods: This method has an advantage over typical machine learning methods
for this task where generalization is made across brains. The problem with
these methods is that they need to deal with intensity bias correction and
other MRI-specific noise. In this paper, we avoid these issues by approaching
the problem as one of within brain generalization. Specifically, we propose a
semi-automatic method that segments a brain tumor by training and generalizing
within that brain only, based on some minimum user interaction.
Conclusion: We investigate how adding spatial feature coordinates (i.e. $i$,
$j$, $k$) to the intensity features can significantly improve the performance
of different classification methods such as SVM, kNN and random forests. This
would only be possible within an interactive framework. We also investigate the
use of a more appropriate kernel and the adaptation of hyper-parameters
specifically for each brain.
Results: As a result of these experiments, we obtain an interactive method
whose results reported on the MICCAI-BRATS 2013 dataset are the second most
accurate compared to published methods, while using significantly less memory
and processing power than most state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2015 20:32:04 GMT"
}
] | 2016-05-20T00:00:00 | [
[
"Havaei",
"Mohammad",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Poulin",
"Philippe",
""
],
[
"Jodoin",
"Pierre-Marc",
""
]
] | TITLE: Within-Brain Classification for Brain Tumor Segmentation
ABSTRACT: Purpose: In this paper, we investigate a framework for interactive brain
tumor segmentation which, at its core, treats the problem of interactive brain
tumor segmentation as a machine learning problem.
Methods: This method has an advantage over typical machine learning methods
for this task, where generalization is made across brains. The problem with
these methods is that they need to deal with intensity bias correction and
other MRI-specific noise. In this paper, we avoid these issues by approaching
the problem as one of within brain generalization. Specifically, we propose a
semi-automatic method that segments a brain tumor by training and generalizing
within that brain only, based on some minimum user interaction.
Conclusion: We investigate how adding spatial feature coordinates (i.e. $i$,
$j$, $k$) to the intensity features can significantly improve the performance
of different classification methods such as SVM, kNN and random forests. This
would only be possible within an interactive framework. We also investigate the
use of a more appropriate kernel and the adaptation of hyper-parameters
specifically for each brain.
Results: As a result of these experiments, we obtain an interactive method
whose results reported on the MICCAI-BRATS 2013 dataset are the second most
accurate compared to published methods, while using significantly less memory
and processing power than most state-of-the-art methods.
| no_new_dataset | 0.948106 |
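A minimal sketch of the feature construction this record describes: augmenting per-voxel intensities with spatial coordinates $(i, j, k)$ before fitting a standard classifier such as kNN. The synthetic volume, toy labeling rule, and kNN settings are assumptions for illustration, not the authors' experimental setup.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic single-brain volume: one intensity value per voxel.
rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))

# Spatial coordinates (i, j, k) for every voxel, flattened to feature rows.
i, j, k = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
features = np.column_stack([volume.ravel(), i.ravel(), j.ravel(), k.ravel()])
# In practice, intensity and coordinates would be rescaled to comparable ranges.

# Placeholder for the user interaction: a few labeled voxels (1 = tumor).
labeled_idx = rng.choice(features.shape[0], size=200, replace=False)
labels = (features[labeled_idx, 0] > 0.7).astype(int)  # toy labeling rule

# Train within this brain only, then predict every remaining voxel.
clf = KNeighborsClassifier(n_neighbors=5).fit(features[labeled_idx], labels)
segmentation = clf.predict(features).reshape(volume.shape)
print("predicted tumor voxels:", int(segmentation.sum()))
```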
1605.04719 | Nir Rosenfeld | Nir Rosenfeld and Amir Globerson | Optimal Tagging with Markov Chain Optimization | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many information systems use tags and keywords to describe and annotate
content. These allow for efficient organization and categorization of items, as
well as facilitate relevant search queries. As such, the selected set of tags
for an item can have a considerable effect on the volume of traffic that
eventually reaches an item. In settings where tags are chosen by an item's
creator, who in turn is interested in maximizing traffic, a principled approach
for choosing tags can prove valuable. In this paper we introduce the problem of
optimal tagging, where the task is to choose a subset of tags for a new item
such that the probability of a browsing user reaching that item is maximized.
We formulate the problem by modeling traffic using a Markov chain, and asking
how transitions in this chain should be modified to maximize traffic into a
certain state of interest. The resulting optimization problem involves
maximizing a certain function over subsets, under a cardinality constraint. We
show that the optimization problem is NP-hard, but nonetheless admits a
(1-1/e)-approximation via a simple greedy algorithm. Furthermore, the structure
of the problem allows for an efficient implementation of the greedy step. To
demonstrate the effectiveness of our method, we perform experiments on three
tagging datasets, and show that the greedy algorithm outperforms other
baselines.
| [
{
"version": "v1",
"created": "Mon, 16 May 2016 10:30:05 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2016 07:16:21 GMT"
},
{
"version": "v3",
"created": "Thu, 19 May 2016 15:11:59 GMT"
}
] | 2016-05-20T00:00:00 | [
[
"Rosenfeld",
"Nir",
""
],
[
"Globerson",
"Amir",
""
]
] | TITLE: Optimal Tagging with Markov Chain Optimization
ABSTRACT: Many information systems use tags and keywords to describe and annotate
content. These allow for efficient organization and categorization of items, as
well as facilitate relevant search queries. As such, the selected set of tags
for an item can have a considerable effect on the volume of traffic that
eventually reaches an item. In settings where tags are chosen by an item's
creator, who in turn is interested in maximizing traffic, a principled approach
for choosing tags can prove valuable. In this paper we introduce the problem of
optimal tagging, where the task is to choose a subset of tags for a new item
such that the probability of a browsing user reaching that item is maximized.
We formulate the problem by modeling traffic using a Markov chain, and asking
how transitions in this chain should be modified to maximize traffic into a
certain state of interest. The resulting optimization problem involves
maximizing a certain function over subsets, under a cardinality constraint. We
show that the optimization problem is NP-hard, but nonetheless admits a
(1-1/e)-approximation via a simple greedy algorithm. Furthermore, the structure
of the problem allows for an efficient implementation of the greedy step. To
demonstrate the effectiveness of our method, we perform experiments on three
tagging datasets, and show that the greedy algorithm outperforms other
baselines.
| no_new_dataset | 0.942981 |
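A minimal sketch of the greedy scheme this record refers to: repeatedly add the tag with the largest marginal gain until the cardinality budget is exhausted, which yields the (1-1/e) guarantee for monotone submodular objectives. The weighted-coverage objective below is a stand-in assumption, not the paper's Markov-chain traffic objective.

```python
def greedy_select(candidates, objective, budget):
    """Greedily pick up to `budget` items maximizing a set function; this
    achieves a (1 - 1/e) guarantee when `objective` is monotone submodular."""
    chosen = set()
    for _ in range(budget):
        remaining = candidates - chosen
        if not remaining:
            break
        gains = {c: objective(chosen | {c}) - objective(chosen) for c in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        chosen.add(best)
    return chosen

# Stand-in objective: each tag reaches a set of users; value = users covered.
coverage = {"rock": {1, 2, 3}, "live": {3, 4}, "indie": {5}, "2016": {1, 2}}

def covered(tags):
    return len(set().union(*(coverage[t] for t in tags)))

print(greedy_select(set(coverage), covered, budget=2))  # e.g. {'rock', 'live'}
```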
1605.05847 | Erman Ayday | Erin Avllazagaj and Erman Ayday and A. Ercument Cicek | Privacy-Related Consequences of Turkish Citizen Database Leak | 12 pages, 5 figures | null | null | null | cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personal data is collected and stored more than ever by the governments and
companies in the digital age. Even though the data is only released after
anonymization, deanonymization is possible by joining different datasets. This
puts the privacy of individuals in jeopardy. Furthermore, data leaks can unveil
personal identifiers of individuals when security is breached. Processing the
leaked dataset can provide even more information than is visible to the naked
eye. In this work, we report the results of our analyses on the recent "Turkish
citizen database leak", which revealed the national identifier numbers of close
to fifty million voters, along with personal information such as date of birth,
birth place, and full address. We show that with automated processing of the
data, one can uniquely identify (i) mother's maiden name of individuals and
(ii) landline numbers, for a significant portion of people. This is a serious
privacy and security threat because (i) identity theft risk is now higher, and
(ii) scammers are able to access more information about individuals. The sole
goal of this work is to point out the security risks and to suggest
stricter measures to related companies and agencies to protect the security and
privacy of individuals.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 08:36:55 GMT"
}
] | 2016-05-20T00:00:00 | [
[
"Avllazagaj",
"Erin",
""
],
[
"Ayday",
"Erman",
""
],
[
"Cicek",
"A. Ercument",
""
]
] | TITLE: Privacy-Related Consequences of Turkish Citizen Database Leak
ABSTRACT: Personal data is collected and stored more than ever by the governments and
companies in the digital age. Even though the data is only released after
anonymization, deanonymization is possible by joining different datasets. This
puts the privacy of individuals in jeopardy. Furthermore, data leaks can unveil
personal identifiers of individuals when security is breached. Processing the
leaked dataset can provide even more information than is visible to the naked
eye. In this work, we report the results of our analyses on the recent "Turkish
citizen database leak", which revealed the national identifier numbers of close
to fifty million voters, along with personal information such as date of birth,
birth place, and full address. We show that with automated processing of the
data, one can uniquely identify (i) mother's maiden name of individuals and
(ii) landline numbers, for a significant portion of people. This is a serious
privacy and security threat because (i) identity theft risk is now higher, and
(ii) scammers are able to access more information about individuals. The sole
goal of this work is to point out the security risks and to suggest
stricter measures to related companies and agencies to protect the security and
privacy of individuals.
| no_new_dataset | 0.926901 |
1605.05912 | Kele Xu | Aurore Jaumard-Hakoun, Kele Xu, Pierre Roussel-Ragot, Gérard
Dreyfus, Bruce Denby | Tongue contour extraction from ultrasound images based on deep neural
network | 5 pages, 3 figures, published in The International Congress of
Phonetic Sciences, 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying tongue motion during speech using ultrasound is a standard
procedure, but automatic ultrasound image labelling remains a challenge, as
standard tongue shape extraction methods typically require human intervention.
This article presents a method based on deep neural networks to automatically
extract tongue contour from ultrasound images on a speech dataset. We use a
deep autoencoder trained to learn the relationship between an image and its
related contour, so that the model is able to automatically reconstruct
contours from the ultrasound image alone. In this paper, we use an automatic
labelling algorithm instead of time-consuming hand-labelling during the
training process, and estimate the performance of both automatic labelling and
contour extraction as compared to hand-labelling. Observed results show quality
scores comparable to the state of the art.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 12:20:40 GMT"
}
] | 2016-05-20T00:00:00 | [
[
"Jaumard-Hakoun",
"Aurore",
""
],
[
"Xu",
"Kele",
""
],
[
"Roussel-Ragot",
"Pierre",
""
],
[
"Dreyfus",
"Gérard",
""
],
[
"Denby",
"Bruce",
""
]
] | TITLE: Tongue contour extraction from ultrasound images based on deep neural
network
ABSTRACT: Studying tongue motion during speech using ultrasound is a standard
procedure, but automatic ultrasound image labelling remains a challenge, as
standard tongue shape extraction methods typically require human intervention.
This article presents a method based on deep neural networks to automatically
extract tongue contour from ultrasound images on a speech dataset. We use a
deep autoencoder trained to learn the relationship between an image and its
related contour, so that the model is able to automatically reconstruct
contours from the ultrasound image alone. In this paper, we use an automatic
labelling algorithm instead of time-consuming hand-labelling during the
training process, and estimate the performance of both automatic labelling and
contour extraction as compared to hand-labelling. Observed results show quality
scores comparable to the state of the art.
| no_new_dataset | 0.954942 |
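A minimal sketch of the image-to-contour regression this record describes: an autoencoder-style network trained to map an ultrasound frame to contour coordinates. The image size, contour length, layer widths, and synthetic data are illustrative assumptions, not the authors' architecture (PyTorch is assumed).

```python
import torch
import torch.nn as nn

# Toy data: 64x64 ultrasound frames paired with 32 (x, y) contour points,
# all values normalized to [0, 1].
frames = torch.rand(128, 64 * 64)
contours = torch.rand(128, 32 * 2)

# Encoder compresses the frame to a low-dimensional code; the decoder
# regresses the contour coordinates from that code.
model = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.ReLU(),   # encoder
    nn.Linear(256, 64), nn.ReLU(),        # bottleneck code
    nn.Linear(64, 32 * 2), nn.Sigmoid(),  # decoder -> normalized coordinates
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(5):  # a few steps on toy data; real training runs much longer
    optimizer.zero_grad()
    loss = loss_fn(model(frames), contours)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```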