Dataset schema: id (string, 9-16 chars), submitter (string, 3-64 chars, nullable), authors (string, 5-6.63k chars), title (string, 7-245 chars), comments (string, 1-482 chars, nullable), journal-ref (string, 4-382 chars, nullable), doi (string, 9-151 chars, nullable), report-no (string, 984 classes), categories (string, 5-108 chars), license (string, 9 classes), abstract (string, 83-3.41k chars), versions (list, 1-20 items), update_date (timestamp[s], 2007-05-23 to 2025-04-11), authors_parsed (sequence, 1-427 items), prompt (string, 166-3.49k chars), label (string, 2 classes), prob (float64, 0.5-0.98).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1606.05286 | Julien Perez | Julien Perez | Spectral decomposition method of dialog state tracking via collective
matrix factorization | 13 pages, 3 figures, 1 Table. arXiv admin note: substantial text
overlap with arXiv:1606.04052 | Dialogue & Discourse 7(3) (2016) | 10.5087/dad.2016.304 | null | cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of dialog management is commonly decomposed into two sequential
subtasks: dialog state tracking and dialog policy learning. In an end-to-end
dialog system, the aim of dialog state tracking is to accurately estimate the
true dialog state from noisy observations produced by the speech recognition
and the natural language understanding modules. The state tracking task is
primarily meant to support a dialog policy. From a probabilistic perspective,
this is achieved by maintaining a posterior distribution over hidden dialog
states composed of a set of context dependent variables. Once a dialog policy
is learned, it strives to select an optimal dialog act given the estimated
dialog state and a defined reward function. This paper introduces a novel
method of dialog state tracking based on a bilinear algebraic decomposition
model that provides an efficient inference schema through collective matrix
factorization. We evaluate the proposed approach on the second Dialog State
Tracking Challenge (DSTC-2) dataset and we show that the proposed tracker gives
encouraging results compared to the state-of-the-art trackers that participated
in this standard benchmark. Finally, we show that the prediction schema is
computationally efficient in comparison to the previous approaches.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 17:31:13 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Perez",
"Julien",
""
]
] | TITLE: Spectral decomposition method of dialog state tracking via collective
matrix factorization
ABSTRACT: The task of dialog management is commonly decomposed into two sequential
subtasks: dialog state tracking and dialog policy learning. In an end-to-end
dialog system, the aim of dialog state tracking is to accurately estimate the
true dialog state from noisy observations produced by the speech recognition
and the natural language understanding modules. The state tracking task is
primarily meant to support a dialog policy. From a probabilistic perspective,
this is achieved by maintaining a posterior distribution over hidden dialog
states composed of a set of context dependent variables. Once a dialog policy
is learned, it strives to select an optimal dialog act given the estimated
dialog state and a defined reward function. This paper introduces a novel
method of dialog state tracking based on a bilinear algebraic decomposition
model that provides an efficient inference schema through collective matrix
factorization. We evaluate the proposed approach on the second Dialog State
Tracking Challenge (DSTC-2) dataset and we show that the proposed tracker gives
encouraging results compared to the state-of-the-art trackers that participated
in this standard benchmark. Finally, we show that the prediction schema is
computationally efficient in comparison to the previous approaches.
| no_new_dataset | 0.944382 |
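The record above describes dialog state tracking via collective matrix factorization. As a rough illustration of the underlying idea (not the paper's bilinear model), the sketch below jointly factorizes two observation matrices that share a row-entity factor using alternating least squares; all array names, shapes, and hyperparameters are assumptions.

```python
import numpy as np

def collective_mf(M1, M2, rank=10, lam=0.1, iters=50):
    """Jointly factorize M1 ~= U @ V.T and M2 ~= U @ W.T with a shared U,
    via ridge-regularized alternating least squares. The shared factor U
    is what makes the factorization "collective"."""
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(M1.shape[0], rank))
    V = rng.normal(scale=0.1, size=(M1.shape[1], rank))
    W = rng.normal(scale=0.1, size=(M2.shape[1], rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        # U aggregates evidence from both matrices.
        U = np.linalg.solve(V.T @ V + W.T @ W + I, (M1 @ V + M2 @ W).T).T
        V = np.linalg.solve(U.T @ U + I, (M1.T @ U).T).T
        W = np.linalg.solve(U.T @ U + I, (M2.T @ U).T).T
    return U, V, W  # e.g. reconstruct estimates from U @ V.T
```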
1606.05310 | Mark Marsden | M. Marsden, K. McGuinness, S. Little, N. E. O'Connor | Holistic Features For Real-Time Crowd Behaviour Anomaly Detection | 4 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new approach to crowd behaviour anomaly detection that
uses a set of efficiently computed, easily interpretable, scene-level holistic
features. This low-dimensional descriptor combines two features from the
literature: crowd collectiveness [1] and crowd conflict [2], with two newly
developed crowd features: mean motion speed and a new formulation of crowd
density. Two different anomaly detection approaches are investigated using
these features. When only normal training data is available we use a Gaussian
Mixture Model (GMM) for outlier detection. When both normal and abnormal
training data is available we use a Support Vector Machine (SVM) for binary
classification. We evaluate on two crowd behaviour anomaly detection datasets,
achieving both state-of-the-art classification performance on the violent-flows
dataset [3] as well as better than real-time processing performance (40 frames
per second).
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 18:37:25 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Marsden",
"M.",
""
],
[
"McGuinness",
"K.",
""
],
[
"Little",
"S.",
""
],
[
"O'Connor",
"N. E.",
""
]
] | TITLE: Holistic Features For Real-Time Crowd Behaviour Anomaly Detection
ABSTRACT: This paper presents a new approach to crowd behaviour anomaly detection that
uses a set of efficiently computed, easily interpretable, scene-level holistic
features. This low-dimensional descriptor combines two features from the
literature: crowd collectiveness [1] and crowd conflict [2], with two newly
developed crowd features: mean motion speed and a new formulation of crowd
density. Two different anomaly detection approaches are investigated using
these features. When only normal training data is available we use a Gaussian
Mixture Model (GMM) for outlier detection. When both normal and abnormal
training data is available we use a Support Vector Machine (SVM) for binary
classification. We evaluate on two crowd behaviour anomaly detection datasets,
achieving both state-of-the-art classification performance on the violent-flows
dataset [3] as well as better than real-time processing performance (40 frames
per second).
| no_new_dataset | 0.950915 |
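For the normal-data-only setting described in the record above, a minimal sketch of GMM-based outlier detection over the four holistic crowd features follows; the file names, component count, and percentile threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Rows: [collectiveness, conflict, mean_motion_speed, crowd_density] per frame.
X_normal = np.load("normal_crowd_features.npy")  # hypothetical feature file

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(X_normal)

# Frames whose log-likelihood falls below a low percentile of the normal
# training scores are flagged as anomalous.
threshold = np.percentile(gmm.score_samples(X_normal), 1)
X_test = np.load("test_crowd_features.npy")      # hypothetical feature file
is_anomaly = gmm.score_samples(X_test) < threshold
```

When labeled abnormal clips are also available, the same features can instead feed a binary SVM classifier, as the abstract notes.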
1606.05325 | Yubin Park | Yubin Park and Joyce Ho and Joydeep Ghosh | ACDC: $\alpha$-Carving Decision Chain for Risk Stratification | presented at 2016 ICML Workshop on Human Interpretability in Machine
Learning (WHI 2016), New York, NY | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In many healthcare settings, intuitive decision rules for risk stratification
can help effective hospital resource allocation. This paper introduces a novel
variant of decision tree algorithms that produces a chain of decisions, not a
general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC),
sequentially carves out "pure" subsets of the majority class examples. The
resulting chain of decision rules yields a pure subset of the minority class
examples. Our approach is particularly effective in exploring large and
class-imbalanced health datasets. Moreover, ACDC provides an interactive
interpretation in conjunction with visual performance metrics such as Receiver
Operating Characteristics curve and Lift chart.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 19:36:51 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Park",
"Yubin",
""
],
[
"Ho",
"Joyce",
""
],
[
"Ghosh",
"Joydeep",
""
]
] | TITLE: ACDC: $\alpha$-Carving Decision Chain for Risk Stratification
ABSTRACT: In many healthcare settings, intuitive decision rules for risk stratification
can help effective hospital resource allocation. This paper introduces a novel
variant of decision tree algorithms that produces a chain of decisions, not a
general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC),
sequentially carves out "pure" subsets of the majority class examples. The
resulting chain of decision rules yields a pure subset of the minority class
examples. Our approach is particularly effective in exploring large and
class-imbalanced health datasets. Moreover, ACDC provides an interactive
interpretation in conjunction with visual performance metrics such as Receiver
Operating Characteristics curve and Lift chart.
| no_new_dataset | 0.951323 |
1511.06676 | James Charles | James Charles and Tomas Pfister and Derek Magee and David Hogg and
Andrew Zisserman | Personalizing Human Video Pose Estimation | CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a personalized ConvNet pose estimator that automatically adapts
itself to the uniqueness of a person's appearance to improve pose estimation in
long videos. We make the following contributions: (i) we show that given a few
high-precision pose annotations, e.g. from a generic ConvNet pose estimator,
additional annotations can be generated throughout the video using a
combination of image-based matching for temporally distant frames, and dense
optical flow for temporally local frames; (ii) we develop an occlusion aware
self-evaluation model that is able to automatically select the high-quality and
reject the erroneous additional annotations; and (iii) we demonstrate that
these high-quality annotations can be used to fine-tune a ConvNet pose
estimator and thereby personalize it to lock on to key discriminative features
of the person's appearance. The outcome is a substantial improvement in the
pose estimates for the target video using the personalized ConvNet compared to
the original generic ConvNet. Our method outperforms the state of the art
(including top ConvNet methods) by a large margin on two standard benchmarks,
as well as on a new challenging YouTube video dataset. Furthermore, we show
that training from the automatically generated annotations can be used to
improve the performance of a generic ConvNet on other benchmarks.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 16:34:42 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2016 11:05:05 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Charles",
"James",
""
],
[
"Pfister",
"Tomas",
""
],
[
"Magee",
"Derek",
""
],
[
"Hogg",
"David",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: Personalizing Human Video Pose Estimation
ABSTRACT: We propose a personalized ConvNet pose estimator that automatically adapts
itself to the uniqueness of a person's appearance to improve pose estimation in
long videos. We make the following contributions: (i) we show that given a few
high-precision pose annotations, e.g. from a generic ConvNet pose estimator,
additional annotations can be generated throughout the video using a
combination of image-based matching for temporally distant frames, and dense
optical flow for temporally local frames; (ii) we develop an occlusion aware
self-evaluation model that is able to automatically select the high-quality and
reject the erroneous additional annotations; and (iii) we demonstrate that
these high-quality annotations can be used to fine-tune a ConvNet pose
estimator and thereby personalize it to lock on to key discriminative features
of the person's appearance. The outcome is a substantial improvement in the
pose estimates for the target video using the personalized ConvNet compared to
the original generic ConvNet. Our method outperforms the state of the art
(including top ConvNet methods) by a large margin on two standard benchmarks,
as well as on a new challenging YouTube video dataset. Furthermore, we show
that training from the automatically generated annotations can be used to
improve the performance of a generic ConvNet on other benchmarks.
| new_dataset | 0.927298 |
1601.01356 | Makbule Gulcin Ozsoy | Makbule Gulcin Ozsoy | From Word Embeddings to Item Recommendation | null | null | null | null | cs.LG cs.CL cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social network platforms can use the data produced by their users to serve
them better. One of the services these platforms provide is recommendation
service. Recommendation systems can predict the future preferences of users
using their past preferences. In the recommendation systems literature there
are various techniques, such as neighborhood based methods, machine-learning
based methods and matrix-factorization based methods. In this work, a set of
well-known methods from the natural language processing domain, namely
Word2Vec, is applied to the recommendation systems domain. Unlike previous works that use
Word2Vec for recommendation, this work uses non-textual features, the
check-ins, and it recommends venues to visit/check-in to the target users. For
the experiments, a Foursquare check-in dataset is used. The results show that
use of continuous vector space representations of items modeled by techniques
of Word2Vec is promising for making recommendations.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2016 00:09:37 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Mar 2016 16:09:10 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jun 2016 08:07:36 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Ozsoy",
"Makbule Gulcin",
""
]
] | TITLE: From Word Embeddings to Item Recommendation
ABSTRACT: Social network platforms can use the data produced by their users to serve
them better. One of the services these platforms provide is recommendation
service. Recommendation systems can predict the future preferences of users
using their past preferences. In the recommendation systems literature there
are various techniques, such as neighborhood based methods, machine-learning
based methods and matrix-factorization based methods. In this work, a set of
well-known methods from the natural language processing domain, namely
Word2Vec, is applied to the recommendation systems domain. Unlike previous works that use
Word2Vec for recommendation, this work uses non-textual features, the
check-ins, and it recommends venues to visit/check-in to the target users. For
the experiments, a Foursquare check-in dataset is used. The results show that
use of continuous vector space representations of items modeled by techniques
of Word2Vec is promising for making recommendations.
| no_new_dataset | 0.939025 |
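The check-in-as-sentence idea in the record above maps directly onto off-the-shelf Word2Vec tooling. Below is a hedged sketch using gensim with toy venue sequences standing in for the Foursquare data; all IDs and hyperparameters are invented for illustration.

```python
from gensim.models import Word2Vec

# Each user's check-in history is treated as a "sentence" of venue IDs.
checkin_sequences = [
    ["venue_12", "venue_7", "venue_93", "venue_7"],
    ["venue_7", "venue_44", "venue_12"],
    ["venue_93", "venue_44", "venue_51"],
]

model = Word2Vec(checkin_sequences, vector_size=32, window=3,
                 min_count=1, sg=1, epochs=50, seed=0)

# Recommend venues whose embeddings are close to one the user already visits.
print(model.wv.most_similar("venue_7", topn=3))
```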
1606.04586 | Mehdi Sajjadi | Mehdi Sajjadi, Mehran Javanmardi, Tolga Tasdizen | Regularization With Stochastic Transformations and Perturbations for
Deep Semi-Supervised Learning | 9 pages, 2 figures, 5 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective convolutional neural networks are trained on large sets of labeled
data. However, creating large labeled datasets is a very costly and
time-consuming task. Semi-supervised learning uses unlabeled data to train a
model with higher accuracy when there is a limited set of labeled data
available. In this paper, we consider the problem of semi-supervised learning
with convolutional neural networks. Techniques such as randomized data
augmentation, dropout and random max-pooling provide better generalization and
stability for classifiers that are trained using gradient descent. Multiple
passes of an individual sample through the network might lead to different
predictions due to the non-deterministic behavior of these techniques. We
propose an unsupervised loss function that takes advantage of the stochastic
nature of these methods and minimizes the difference between the predictions of
multiple passes of a training sample through the network. We evaluate the
proposed method on several benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 22:30:08 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Sajjadi",
"Mehdi",
""
],
[
"Javanmardi",
"Mehran",
""
],
[
"Tasdizen",
"Tolga",
""
]
] | TITLE: Regularization With Stochastic Transformations and Perturbations for
Deep Semi-Supervised Learning
ABSTRACT: Effective convolutional neural networks are trained on large sets of labeled
data. However, creating large labeled datasets is a very costly and
time-consuming task. Semi-supervised learning uses unlabeled data to train a
model with higher accuracy when there is a limited set of labeled data
available. In this paper, we consider the problem of semi-supervised learning
with convolutional neural networks. Techniques such as randomized data
augmentation, dropout and random max-pooling provide better generalization and
stability for classifiers that are trained using gradient descent. Multiple
passes of an individual sample through the network might lead to different
predictions due to the non-deterministic behavior of these techniques. We
propose an unsupervised loss function that takes advantage of the stochastic
nature of these methods and minimizes the difference between the predictions of
multiple passes of a training sample through the network. We evaluate the
proposed method on several benchmark datasets.
| no_new_dataset | 0.948394 |
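The unsupervised loss in the record above exploits prediction disagreement between stochastic passes. A minimal PyTorch sketch of one training step follows; the model, batch variables, and weighting term `lam` are assumptions, and the paper's exact formulation may differ.

```python
import torch.nn.functional as F

def semi_supervised_step(model, x_labeled, y_labeled, x_unlabeled, lam=1.0):
    """Supervised cross-entropy plus a consistency term penalizing the
    difference between two stochastic forward passes on unlabeled data."""
    model.train()  # keep dropout / stochastic layers active
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)
    p1 = F.softmax(model(x_unlabeled), dim=1)  # first stochastic pass
    p2 = F.softmax(model(x_unlabeled), dim=1)  # second pass differs (dropout,
    cons_loss = F.mse_loss(p1, p2)             # random augmentation, ...)
    return sup_loss + lam * cons_loss
```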
1606.04597 | Yang Liu | Chunyang Liu, Yang Liu, Huanbo Luan, Maosong Sun and Heng Yu | Agreement-based Learning of Parallel Lexicons and Phrases from
Non-Parallel Corpora | Accepted for publication in the Proceedings of ACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an agreement-based approach to learning parallel lexicons and
phrases from non-parallel corpora. The basic idea is to encourage two
asymmetric latent-variable translation models (i.e., source-to-target and
target-to-source) to agree on identifying latent phrase and word alignments.
The agreement is defined at both word and phrase levels. We develop a Viterbi
EM algorithm for jointly training the two unidirectional models efficiently.
Experiments on the Chinese-English dataset show that agreement-based learning
significantly improves both alignment and translation performance.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 00:28:51 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Liu",
"Chunyang",
""
],
[
"Liu",
"Yang",
""
],
[
"Luan",
"Huanbo",
""
],
[
"Sun",
"Maosong",
""
],
[
"Yu",
"Heng",
""
]
] | TITLE: Agreement-based Learning of Parallel Lexicons and Phrases from
Non-Parallel Corpora
ABSTRACT: We introduce an agreement-based approach to learning parallel lexicons and
phrases from non-parallel corpora. The basic idea is to encourage two
asymmetric latent-variable translation models (i.e., source-to-target and
target-to-source) to agree on identifying latent phrase and word alignments.
The agreement is defined at both word and phrase levels. We develop a Viterbi
EM algorithm for jointly training the two unidirectional models efficiently.
Experiments on the Chinese-English dataset show that agreement-based learning
significantly improves both alignment and translation performance.
| no_new_dataset | 0.952794 |
1606.04616 | Zheng Zhang | Zheng Zhang, Yong Xu, Cheng-Lin Liu | Natural Scene Character Recognition Using Robust PCA and Sparse
Representation | The 12th IAPR International Workshop on Document Analysis Systems
(DAS); The natural scene character image features used in this paper have
been released at
http://www.yongxu.org/Natural%20Scene%20Character%20Recognition%20Datasets.html | null | 10.1109/DAS.2016.32 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural scene character recognition is challenging due to the cluttered
background, which is hard to separate from text. In this paper, we propose a
novel method for robust scene character recognition. Specifically, we first use
robust principal component analysis (PCA) to denoise character images by
recovering the missing low-rank component and filtering out the sparse noise
term, and then use a simple Histogram of Oriented Gradients (HOG) to perform
image feature extraction, and finally, use a sparse representation based
classifier for recognition. In experiments on four public datasets, namely the
Chars74K dataset, ICDAR 2003 robust reading dataset, Street View Text (SVT)
dataset and IIIT5K-word dataset, our method was demonstrated to be competitive
with the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 01:58:06 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Zhang",
"Zheng",
""
],
[
"Xu",
"Yong",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: Natural Scene Character Recognition Using Robust PCA and Sparse
Representation
ABSTRACT: Natural scene character recognition is challenging due to the cluttered
background, which is hard to separate from text. In this paper, we propose a
novel method for robust scene character recognition. Specifically, we first use
robust principal component analysis (PCA) to denoise character images by
recovering the missing low-rank component and filtering out the sparse noise
term, and then use a simple Histogram of Oriented Gradients (HOG) to perform
image feature extraction, and finally, use a sparse representation based
classifier for recognition. In experiments on four public datasets, namely the
Chars74K dataset, ICDAR 2003 robust reading dataset, Street View Text (SVT)
dataset and IIIT5K-word dataset, our method was demonstrated to be competitive
with the state-of-the-art methods.
| no_new_dataset | 0.949153 |
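The pipeline in the record above (denoise, HOG features, sparse-representation classifier) can be sketched with standard libraries. The robust-PCA denoising step is omitted here, and the dictionary and label arrays are placeholders; this is an illustrative rendering, not the paper's implementation.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import OrthogonalMatchingPursuit

def hog_features(images):
    # images: same-size 2-D grayscale character crops (assumed pre-denoised).
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def src_predict(D, labels, y, n_nonzero=10):
    """Sparse-representation classification: code the test feature y over
    the dictionary D (columns = l2-normalized training features; labels is
    an array of class ids aligned with D's columns), then pick the class
    whose atoms reconstruct y with the smallest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, y)
    return min(np.unique(labels),
               key=lambda c: np.linalg.norm(y - D[:, labels == c]
                                            @ omp.coef_[labels == c]))
```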
1606.04640 | Tom Kenter | Tom Kenter, Alexey Borisov, Maarten de Rijke | Siamese CBOW: Optimizing Word Embeddings for Sentence Representations | Accepted as full paper at ACL 2016, Berlin. 11 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural
network for efficient estimation of high-quality sentence embeddings. Averaging
the embeddings of words in a sentence has proven to be a surprisingly
successful and efficient way of obtaining sentence embeddings. However, word
embeddings trained with the methods currently available are not optimized for
the task of sentence representation, and, thus, likely to be suboptimal.
Siamese CBOW handles this problem by training word embeddings directly for the
purpose of being averaged. The underlying neural network learns word embeddings
by predicting, from a sentence representation, its surrounding sentences. We
show the robustness of the Siamese CBOW model by evaluating it on 20 datasets
stemming from a wide variety of sources.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 04:47:43 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Kenter",
"Tom",
""
],
[
"Borisov",
"Alexey",
""
],
[
"de Rijke",
"Maarten",
""
]
] | TITLE: Siamese CBOW: Optimizing Word Embeddings for Sentence Representations
ABSTRACT: We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural
network for efficient estimation of high-quality sentence embeddings. Averaging
the embeddings of words in a sentence has proven to be a surprisingly
successful and efficient way of obtaining sentence embeddings. However, word
embeddings trained with the methods currently available are not optimized for
the task of sentence representation, and, thus, likely to be suboptimal.
Siamese CBOW handles this problem by training word embeddings directly for the
purpose of being averaged. The underlying neural network learns word embeddings
by predicting, from a sentence representation, its surrounding sentences. We
show the robustness of the Siamese CBOW model by evaluating it on 20 datasets
stemming from a wide variety of sources.
| no_new_dataset | 0.946001 |
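A compact PyTorch sketch of the Siamese CBOW training idea described above: sentence embeddings are word-embedding averages, trained so a sentence scores higher (by cosine similarity) against its true neighbours than against sampled negatives. This simplifies the paper's objective to a single-positive cross-entropy; tensor shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCBOW(nn.Module):
    def __init__(self, vocab_size, dim=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def sent_embed(self, word_ids):          # (batch, seq) -> (batch, dim)
        return self.emb(word_ids).mean(dim=1)

    def loss(self, center, candidates, target):
        # candidates: (batch, n_cand, seq) holding neighbouring sentences
        # plus randomly sampled negatives; target indexes a true neighbour.
        c = self.sent_embed(center)
        cand = torch.stack([self.sent_embed(candidates[:, i])
                            for i in range(candidates.size(1))], dim=1)
        sims = F.cosine_similarity(c.unsqueeze(1), cand, dim=2)
        return F.cross_entropy(sims, target)
```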
1606.04746 | Vincenzo Gulisano | Vincenzo Gulisano, Yiannis Nikolakopoulos, Daniel Cederman, Marina
Papatriantafilou and Philippas Tsigas | Efficient data streaming multiway aggregation through concurrent
algorithmic designs and new abstract data types | null | null | null | null | cs.DS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data streaming relies on continuous queries to process unbounded streams of
data in a real-time fashion. It is commonly demanding in computation capacity,
given that the relevant applications involve very large volumes of data. Data
structures act as articulation points and maintain the state of data streaming
operators, potentially supporting high parallelism and balancing the work
between them. Prompted by this fact, in this work we study and analyze
parallelization needs of these articulation points, focusing on the problem of
streaming multiway aggregation, where large data volumes are received from
multiple input streams. The analysis of the parallelization needs, as well as
of the use and limitations of existing aggregate designs and their data
structures, leads us to identify needs for proper shared objects that can
achieve low-latency and high throughput multiway aggregation. We present the
requirements of such objects as abstract data types and we provide efficient
lock-free linearizable algorithmic implementations of them, along with new
multiway aggregate algorithmic designs that leverage them, supporting both
deterministic order-sensitive and order-insensitive aggregate functions.
Furthermore, we point out future directions that open through these
contributions. The paper includes an extensive experimental study, based on a
variety of aggregation continuous queries on two large datasets extracted from
SoundCloud, a music social network, and from a Smart Grid network. In all the
experiments, the proposed data structures and the enhanced aggregate operators
improved the processing performance significantly, up to one order of
magnitude, in terms of both throughput and latency, over the commonly-used
techniques based on queues.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 13:01:38 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Gulisano",
"Vincenzo",
""
],
[
"Nikolakopoulos",
"Yiannis",
""
],
[
"Cederman",
"Daniel",
""
],
[
"Papatriantafilou",
"Marina",
""
],
[
"Tsigas",
"Philippas",
""
]
] | TITLE: Efficient data streaming multiway aggregation through concurrent
algorithmic designs and new abstract data types
ABSTRACT: Data streaming relies on continuous queries to process unbounded streams of
data in a real-time fashion. It is commonly demanding in computation capacity,
given that the relevant applications involve very large volumes of data. Data
structures act as articulation points and maintain the state of data streaming
operators, potentially supporting high parallelism and balancing the work
between them. Prompted by this fact, in this work we study and analyze
parallelization needs of these articulation points, focusing on the problem of
streaming multiway aggregation, where large data volumes are received from
multiple input streams. The analysis of the parallelization needs, as well as
of the use and limitations of existing aggregate designs and their data
structures, leads us to identify needs for proper shared objects that can
achieve low-latency and high throughput multiway aggregation. We present the
requirements of such objects as abstract data types and we provide efficient
lock-free linearizable algorithmic implementations of them, along with new
multiway aggregate algorithmic designs that leverage them, supporting both
deterministic order-sensitive and order-insensitive aggregate functions.
Furthermore, we point out future directions that open through these
contributions. The paper includes an extensive experimental study, based on a
variety of aggregation continuous queries on two large datasets extracted from
SoundCloud, a music social network, and from a Smart Grid network. In all the
experiments, the proposed data structures and the enhanced aggregate operators
improved the processing performance significantly, up to one order of
magnitude, in terms of both throughput and latency, over the commonly-used
techniques based on queues.
| no_new_dataset | 0.9434 |
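The paper above is about concurrent, lock-free designs, which a few lines of Python cannot do justice to; the sketch below only illustrates the semantics of deterministic, order-sensitive multiway aggregation (merge several time-ordered streams, then aggregate per tumbling window) under that caveat.

```python
import heapq
from collections import defaultdict

def multiway_aggregate(streams, window, agg=sum):
    """streams: iterables of (timestamp, value) pairs, each sorted by time.
    Deterministically merges them by timestamp, then aggregates values per
    tumbling window of length `window`."""
    merged = heapq.merge(*streams, key=lambda pair: pair[0])
    buckets = defaultdict(list)
    for ts, value in merged:
        buckets[ts // window].append(value)
    return {w * window: agg(vals) for w, vals in sorted(buckets.items())}

s1 = [(1, 2.0), (12, 1.0)]
s2 = [(3, 4.0), (11, 0.5)]
s3 = [(5, 1.5)]
print(multiway_aggregate([s1, s2, s3], window=10))  # {0: 7.5, 10: 1.5}
```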
1606.04853 | Patrick Flynn | Kevin W. Bowyer and Patrick J. Flynn | The ND-IRIS-0405 Iris Image Dataset | 13 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Computer Vision Research Lab at the University of Notre Dame began
collecting iris images in the spring semester of 2004. The initial data
collections used an LG 2200 iris imaging system for image acquisition. Image
datasets acquired in 2004-2005 at Notre Dame with this LG 2200 have been used
in the ICE 2005 and ICE 2006 iris biometric evaluations. The ICE 2005 iris
image dataset has been distributed to over 100 research groups around the
world. The purpose of this document is to describe the content of the
ND-IRIS-0405 iris image dataset. This dataset is a superset of the iris image
datasets used in ICE 2005 and ICE 2006. The ND 2004-2005 iris image dataset
contains 64,980 images corresponding to 356 unique subjects, and 712 unique
irises. The age range of the subjects is 18 to 75 years old. 158 of the
subjects are female, and 198 are male. 250 of the subjects are Caucasian, 82
are Asian, and 24 are other ethnicities.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 16:40:51 GMT"
}
] | 2016-06-16T00:00:00 | [
[
"Bowyer",
"Kevin W.",
""
],
[
"Flynn",
"Patrick J.",
""
]
] | TITLE: The ND-IRIS-0405 Iris Image Dataset
ABSTRACT: The Computer Vision Research Lab at the University of Notre Dame began
collecting iris images in the spring semester of 2004. The initial data
collections used an LG 2200 iris imaging system for image acquisition. Image
datasets acquired in 2004-2005 at Notre Dame with this LG 2200 have been used
in the ICE 2005 and ICE 2006 iris biometric evaluations. The ICE 2005 iris
image dataset has been distributed to over 100 research groups around the
world. The purpose of this document is to describe the content of the
ND-IRIS-0405 iris image dataset. This dataset is a superset of the iris image
datasets used in ICE 2005 and ICE 2006. The ND 2004-2005 iris image dataset
contains 64,980 images corresponding to 356 unique subjects, and 712 unique
irises. The age range of the subjects is 18 to 75 years old. 158 of the
subjects are female, and 198 are male. 250 of the subjects are Caucasian, 82
are Asian, and 24 are other ethnicities.
| new_dataset | 0.939025 |
1507.02081 | Michael Neunert | Michael Neunert, Michael Bloesch, Jonas Buchli | An Open Source, Fiducial Based, Visual-Inertial Motion Capture System | To appear in The International Conference on Information Fusion
(FUSION) 2016 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many robotic tasks rely on the accurate localization of moving objects within
a given workspace. This information about the objects' poses and velocities is
used for control, motion planning, navigation, interaction with the environment
or verification. Often motion capture systems are used to obtain such a state
estimate. However, these systems are often costly, limited in workspace size
and not suitable for outdoor usage. Therefore, we propose a lightweight and
easy to use, visual-inertial Simultaneous Localization and Mapping approach
that leverages cost-efficient, paper-printable artificial landmarks, so-called
fiducials. Results show that by fusing visual and inertial data, the system
provides accurate estimates and is robust against fast motions and changing
lighting conditions. Tight integration of the estimation of sensor and fiducial
pose as well as extrinsics ensures accuracy, map consistency and avoids the
requirement for precalibration. By providing an open source implementation and
various datasets, partially with ground truth information, we enable community
members to run, test, modify and extend the system either using these datasets
or directly running the system on their own robotic setups.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 09:38:13 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2016 20:02:20 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Neunert",
"Michael",
""
],
[
"Bloesch",
"Michael",
""
],
[
"Buchli",
"Jonas",
""
]
] | TITLE: An Open Source, Fiducial Based, Visual-Inertial Motion Capture System
ABSTRACT: Many robotic tasks rely on the accurate localization of moving objects within
a given workspace. This information about the objects' poses and velocities is
used for control, motion planning, navigation, interaction with the environment
or verification. Often motion capture systems are used to obtain such a state
estimate. However, these systems are often costly, limited in workspace size
and not suitable for outdoor usage. Therefore, we propose a lightweight and
easy to use, visual-inertial Simultaneous Localization and Mapping approach
that leverages cost-efficient, paper-printable artificial landmarks, so-called
fiducials. Results show that by fusing visual and inertial data, the system
provides accurate estimates and is robust against fast motions and changing
lighting conditions. Tight integration of the estimation of sensor and fiducial
pose as well as extrinsics ensures accuracy, map consistency and avoids the
requirement for precalibration. By providing an open source implementation and
various datasets, partially with ground truth information, we enable community
members to run, test, modify and extend the system either using these datasets
or directly running the system on their own robotic setups.
| no_new_dataset | 0.947914 |
1603.01006 | Manuel Marin-Jimenez | F.M. Castro and M.J. Marin-Jimenez and N. Guil and N. Perez de la
Blanca | Automatic learning of gait signatures for people identification | Proof of concept paper. Technical report on the use of ConvNets (CNN)
for gait recognition. Data and code:
http://www.uco.es/~in1majim/research/cnngaitof.html | null | null | 2016-03 | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work targets people identification in video based on the way they walk
(i.e. gait). While classical methods typically derive gait signatures from
sequences of binary silhouettes, in this work we explore the use of
convolutional neural networks (CNN) for learning high-level descriptors from
low-level motion features (i.e. optical flow components). We carry out a
thorough experimental evaluation of the proposed CNN architecture on the
challenging TUM-GAID dataset. The experimental results indicate that using
spatio-temporal cuboids of optical flow as input data for CNN allows to obtain
state-of-the-art results on the gait task with an image resolution eight times
lower than the previously reported results (i.e. 80x60 pixels).
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 08:07:14 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2016 16:07:07 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Castro",
"F. M.",
""
],
[
"Marin-Jimenez",
"M. J.",
""
],
[
"Guil",
"N.",
""
],
[
"de la Blanca",
"N. Perez",
""
]
] | TITLE: Automatic learning of gait signatures for people identification
ABSTRACT: This work targets people identification in video based on the way they walk
(i.e. gait). While classical methods typically derive gait signatures from
sequences of binary silhouettes, in this work we explore the use of
convolutional neural networks (CNN) for learning high-level descriptors from
low-level motion features (i.e. optical flow components). We carry out a
thorough experimental evaluation of the proposed CNN architecture on the
challenging TUM-GAID dataset. The experimental results indicate that using
spatio-temporal cuboids of optical flow as input data for CNN allows to obtain
state-of-the-art results on the gait task with an image resolution eight times
lower than the previously reported results (i.e. 80x60 pixels).
| no_new_dataset | 0.956553 |
1606.04275 | Michiel Stock | Michiel Stock and Tapio Pahikkala and Antti Airola and Bernard De
Baets and Willem Waegeman | Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact
Two-Step Method | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise learning or dyadic prediction concerns the prediction of properties
for pairs of objects. It can be seen as an umbrella covering various machine
learning problems such as matrix completion, collaborative filtering,
multi-task learning, transfer learning, network prediction and zero-shot
learning. In this work we analyze kernel-based methods for pairwise learning,
with a particular focus on a recently-suggested two-step method. We show that
this method offers an appealing alternative for commonly-applied
Kronecker-based methods that model dyads by means of pairwise feature
representations and pairwise kernels. In a series of theoretical results, we
establish correspondences between the two types of methods in terms of linear
algebra and spectral filtering, and we analyze their statistical consistency.
In addition, the two-step method allows us to establish novel algorithmic
shortcuts for efficient training and validation on very large datasets. Putting
those properties together, we believe that this simple, yet powerful method can
become a standard tool for many problems. Extensive experimental results for a
range of practical settings are reported.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 09:38:18 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Stock",
"Michiel",
""
],
[
"Pahikkala",
"Tapio",
""
],
[
"Airola",
"Antti",
""
],
[
"De Baets",
"Bernard",
""
],
[
"Waegeman",
"Willem",
""
]
] | TITLE: Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact
Two-Step Method
ABSTRACT: Pairwise learning or dyadic prediction concerns the prediction of properties
for pairs of objects. It can be seen as an umbrella covering various machine
learning problems such as matrix completion, collaborative filtering,
multi-task learning, transfer learning, network prediction and zero-shot
learning. In this work we analyze kernel-based methods for pairwise learning,
with a particular focus on a recently-suggested two-step method. We show that
this method offers an appealing alternative for commonly-applied
Kronecker-based methods that model dyads by means of pairwise feature
representations and pairwise kernels. In a series of theoretical results, we
establish correspondences between the two types of methods in terms of linear
algebra and spectral filtering, and we analyze their statistical consistency.
In addition, the two-step method allows us to establish novel algorithmic
shortcuts for efficient training and validation on very large datasets. Putting
those properties together, we believe that this simple, yet powerful method can
become a standard tool for many problems. Extensive experimental results for a
range of practical settings are reported.
| no_new_dataset | 0.942135 |
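The two-step method discussed above admits a compact closed form: kernel ridge regression over row objects, then again over column objects. Below is a numpy sketch of in-sample prediction under that reading; the kernel matrices and regularizers are assumptions, and extending to unseen rows or columns uses their test kernels in place of Ku and Kv.

```python
import numpy as np

def two_step_krr(Ku, Kv, Y, lam_u=1.0, lam_v=1.0):
    """Two-step kernel ridge regression for pairwise (dyadic) data.
    Ku: (n, n) kernel over row objects; Kv: (m, m) kernel over column
    objects; Y: (n, m) observed pairwise labels."""
    n, m = Y.shape
    A = np.linalg.solve(Ku + lam_u * np.eye(n), Y)    # step 1: rows
    P = Ku @ A                                        # step-1 predictions
    C = np.linalg.solve(Kv + lam_v * np.eye(m), P.T)  # step 2: columns
    return (Kv @ C).T                                 # (n, m) predictions
```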
1606.04335 | Maria Kalantzi | Maria Kalantzi | LLFR: A Lanczos-Based Latent Factor Recommender for Big Data Scenarios | 65 pages, MSc Thesis (in Greek) | null | null | null | stat.ML cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this master's thesis is to study and develop a new algorithmic
framework for Collaborative Filtering to produce recommendations in the top-N
recommendation problem. Thus, we propose Lanczos Latent Factor Recommender
(LLFR), a novel "big data friendly" collaborative filtering algorithm for top-N
recommendation. Using a computationally efficient Lanczos-based procedure, LLFR
builds a low dimensional item similarity model, that can be readily exploited
to produce personalized ranking vectors over the item space. A number of
experiments on real datasets indicate that LLFR outperforms other
state-of-the-art top-N recommendation methods from a computational as well as a
qualitative perspective. Our experimental results also show that its relative
performance gains, compared to competing methods, increase as the data get
sparser, as in the Cold Start Problem. More specifically, this is true both
when the sparsity is generalized - as in the New Community Problem, a very
common problem faced by real recommender systems in their beginning stages,
when there is not sufficient number of ratings for the collaborative filtering
algorithms to uncover similarities between items or users - and in the very
interesting case where the sparsity is localized in a small fraction of the
dataset - as in the New Users Problem, where new users are introduced to the
system, they have not rated many items and thus, the CF algorithm cannot make
reliable personalized recommendations yet.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 13:04:57 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Kalantzi",
"Maria",
""
]
] | TITLE: LLFR: A Lanczos-Based Latent Factor Recommender for Big Data Scenarios
ABSTRACT: The purpose of this master's thesis is to study and develop a new algorithmic
framework for Collaborative Filtering to produce recommendations in the top-N
recommendation problem. Thus, we propose Lanczos Latent Factor Recommender
(LLFR), a novel "big data friendly" collaborative filtering algorithm for top-N
recommendation. Using a computationally efficient Lanczos-based procedure, LLFR
builds a low dimensional item similarity model, that can be readily exploited
to produce personalized ranking vectors over the item space. A number of
experiments on real datasets indicate that LLFR outperforms other
state-of-the-art top-N recommendation methods from a computational as well as a
qualitative perspective. Our experimental results also show that its relative
performance gains, compared to competing methods, increase as the data get
sparser, as in the Cold Start Problem. More specifically, this is true both
when the sparsity is generalized - as in the New Community Problem, a very
common problem faced by real recommender systems in their beginning stages,
when there is not sufficient number of ratings for the collaborative filtering
algorithms to uncover similarities between items or users - and in the very
interesting case where the sparsity is localized in a small fraction of the
dataset - as in the New Users Problem, where new users are introduced to the
system, they have not rated many items and thus, the CF algorithm cannot make
reliable personalized recommendations yet.
| no_new_dataset | 0.946843 |
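While LLFR's exact procedure is in the thesis above, its main ingredients (a Lanczos-type low-rank factorization feeding an item-similarity model and per-user ranking vectors) can be approximated with SciPy, whose svds routine uses a Lanczos-style ARPACK iteration. The toy matrix and rank below are assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds  # ARPACK: Lanczos-type iteration

# Hypothetical user-item feedback matrix (rows: users, columns: items).
R = csr_matrix(np.array([[1, 0, 1, 0],
                         [0, 1, 1, 0],
                         [1, 1, 0, 1]], dtype=float))

U, s, Vt = svds(R, k=2)                 # low-dimensional item factors in Vt
item_sim = Vt.T @ Vt                    # item-item similarity model
scores = np.asarray(R @ item_sim)       # personalized ranking scores per user
scores[R.nonzero()] = -np.inf           # suppress already-seen items
top_n = np.argsort(-scores, axis=1)[:, :2]
```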
1606.04429 | Arkaitz Zubiaga | Alberto P. Garc\'ia-Plaza and V\'ictor Fresno and Raquel Mart\'inez
and Arkaitz Zubiaga | Using Fuzzy Logic to Leverage HTML Markup for Web Page Representation | This is the accepted version of an article accepted for publication
in IEEE Transactions on Fuzzy Systems | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The selection of a suitable document representation approach plays a crucial
role in the performance of a document clustering task. Being able to pick out
representative words within a document can lead to substantial improvements in
document clustering. In the case of web documents, the HTML markup that defines
the layout of the content provides additional structural information that can
be further exploited to identify representative words. In this paper we
introduce a fuzzy term weighting approach that makes the most of the HTML
structure for document clustering. We set forth and build on the hypothesis
that a good representation can take advantage of how humans skim through
documents to extract the most representative words. The authors of web pages
make use of HTML tags to convey the most important message of a web page
through page elements that attract the readers' attention, such as page titles
or emphasized elements. We define a set of criteria to exploit the information
provided by these page elements, and introduce a fuzzy combination of these
criteria that we evaluate within the context of a web page clustering task. Our
proposed approach, called Abstract Fuzzy Combination of Criteria (AFCC), can
adapt to datasets whose features are distributed differently, achieving good
results compared to other similar fuzzy logic based approaches and TF-IDF
across different datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 15:44:52 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"García-Plaza",
"Alberto P.",
""
],
[
"Fresno",
"Víctor",
""
],
[
"Martínez",
"Raquel",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
] | TITLE: Using Fuzzy Logic to Leverage HTML Markup for Web Page Representation
ABSTRACT: The selection of a suitable document representation approach plays a crucial
role in the performance of a document clustering task. Being able to pick out
representative words within a document can lead to substantial improvements in
document clustering. In the case of web documents, the HTML markup that defines
the layout of the content provides additional structural information that can
be further exploited to identify representative words. In this paper we
introduce a fuzzy term weighting approach that makes the most of the HTML
structure for document clustering. We set forth and build on the hypothesis
that a good representation can take advantage of how humans skim through
documents to extract the most representative words. The authors of web pages
make use of HTML tags to convey the most important message of a web page
through page elements that attract the readers' attention, such as page titles
or emphasized elements. We define a set of criteria to exploit the information
provided by these page elements, and introduce a fuzzy combination of these
criteria that we evaluate within the context of a web page clustering task. Our
proposed approach, called Abstract Fuzzy Combination of Criteria (AFCC), can
adapt to datasets whose features are distributed differently, achieving good
results compared to other similar fuzzy logic based approaches and TF-IDF
across different datasets.
| no_new_dataset | 0.951684 |
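As a loose illustration of tag-aware term weighting (not the paper's AFCC membership functions or fuzzy combination), the sketch below boosts terms that appear under attention-drawing HTML elements; the tag weights are invented placeholders.

```python
from collections import Counter
from bs4 import BeautifulSoup

# Invented criterion weights; the paper combines its criteria fuzzily.
TAG_WEIGHTS = {"title": 3.0, "h1": 2.5, "h2": 2.0, "em": 1.5, "strong": 1.5}

def weighted_terms(html):
    soup = BeautifulSoup(html, "html.parser")
    weights = Counter()
    for text in soup.find_all(string=True):
        ancestors = {p.name for p in text.parents if p.name}
        if {"script", "style"} & ancestors:
            continue  # skip non-visible text
        w = max((TAG_WEIGHTS.get(t, 1.0) for t in ancestors), default=1.0)
        for term in text.lower().split():
            weights[term] += w
    return weights  # term -> structural importance score
```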
1606.04446 | Spyros Gidaris | Spyros Gidaris and Nikos Komodakis | Attend Refine Repeat: Active Box Proposal Generation via In-Out
Localization | Technical report. Code as well as box proposals computed for several
datasets are available at: https://github.com/gidariss/AttractioNet | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of computing category agnostic bounding box proposals is utilized
as a core component in many computer vision tasks and thus has lately attracted
a lot of attention. In this work we propose a new approach to tackle this
problem that is based on an active strategy for generating box proposals that
starts from a set of seed boxes, which are uniformly distributed on the image,
and then progressively moves its attention on the promising image areas where
it is more likely to discover well localized bounding box proposals. We call
our approach AttractioNet and a core component of it is a CNN-based category
agnostic object location refinement module that is capable of yielding accurate
and robust bounding box predictions regardless of the object category.
We extensively evaluate our AttractioNet approach on several image datasets
(i.e. COCO, PASCAL, ImageNet detection and NYU-Depth V2 datasets) reporting on
all of them state-of-the-art results that surpass the previous work in the
field by a significant margin and also providing strong empirical evidence that
our approach is capable to generalize to unseen categories. Furthermore, we
evaluate our AttractioNet proposals in the context of the object detection task
using a VGG16-Net based detector and the achieved detection performance on COCO
manages to significantly surpass all other VGG16-Net based detectors while even
being competitive with a heavily tuned ResNet-101 based detector. Code as well
as box proposals computed for several datasets are available at:
https://github.com/gidariss/AttractioNet.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 16:35:08 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Gidaris",
"Spyros",
""
],
[
"Komodakis",
"Nikos",
""
]
] | TITLE: Attend Refine Repeat: Active Box Proposal Generation via In-Out
Localization
ABSTRACT: The problem of computing category agnostic bounding box proposals is utilized
as a core component in many computer vision tasks and thus has lately attracted
a lot of attention. In this work we propose a new approach to tackle this
problem that is based on an active strategy for generating box proposals that
starts from a set of seed boxes, which are uniformly distributed on the image,
and then progressively moves its attention on the promising image areas where
it is more likely to discover well localized bounding box proposals. We call
our approach AttractioNet and a core component of it is a CNN-based category
agnostic object location refinement module that is capable of yielding accurate
and robust bounding box predictions regardless of the object category.
We extensively evaluate our AttractioNet approach on several image datasets
(i.e. COCO, PASCAL, ImageNet detection and NYU-Depth V2 datasets) reporting on
all of them state-of-the-art results that surpass the previous work in the
field by a significant margin and also providing strong empirical evidence that
our approach is capable to generalize to unseen categories. Furthermore, we
evaluate our AttractioNet proposals in the context of the object detection task
using a VGG16-Net based detector and the achieved detection performance on COCO
manages to significantly surpass all other VGG16-Net based detectors while even
being competitive with a heavily tuned ResNet-101 based detector. Code as well
as box proposals computed for several datasets are available at:
https://github.com/gidariss/AttractioNet.
| no_new_dataset | 0.949902 |
1606.04450 | Massimo Camplani | Massimo Camplani, Adeline Paiement, Majid Mirmehdi, Dima Damen, Sion
Hannuna, Tilo Burghardt, Lili Tao | Multiple Human Tracking in RGB-D Data: A Survey | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple human tracking (MHT) is a fundamental task in many computer vision
applications. Appearance-based approaches, primarily formulated on RGB data,
are constrained and affected by problems arising from occlusions and/or
illumination variations. In recent years, the arrival of cheap RGB-Depth
(RGB-D) devices has led to many new approaches to MHT, and many of these
integrate color and depth cues to improve each and every stage of the process.
In this survey, we present the common processing pipeline of these methods and
review their methodology based (a) on how they implement this pipeline and (b)
on what role depth plays within each stage of it. We identify and introduce
existing, publicly available, benchmark datasets and software resources that
fuse color and depth data for MHT. Finally, we present a brief comparative
evaluation of the performance of those works that have applied their methods to
these datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 16:41:55 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Camplani",
"Massimo",
""
],
[
"Paiement",
"Adeline",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Damen",
"Dima",
""
],
[
"Hannuna",
"Sion",
""
],
[
"Burghardt",
"Tilo",
""
],
[
"Tao",
"Lili",
""
]
] | TITLE: Multiple Human Tracking in RGB-D Data: A Survey
ABSTRACT: Multiple human tracking (MHT) is a fundamental task in many computer vision
applications. Appearance-based approaches, primarily formulated on RGB data,
are constrained and affected by problems arising from occlusions and/or
illumination variations. In recent years, the arrival of cheap RGB-Depth
(RGB-D) devices has led to many new approaches to MHT, and many of these
integrate color and depth cues to improve each and every stage of the process.
In this survey, we present the common processing pipeline of these methods and
review their methodology based (a) on how they implement this pipeline and (b)
on what role depth plays within each stage of it. We identify and introduce
existing, publicly available, benchmark datasets and software resources that
fuse color and depth data for MHT. Finally, we present a brief comparative
evaluation of the performance of those works that have applied their methods to
these datasets.
| new_dataset | 0.526868 |
1606.04456 | Alina S\^irbu | Alina S\^irbu and Ozalp Babaoglu | Towards Operator-less Data Centers Through Data-Driven, Predictive,
Proactive Autonomics | null | Cluster Computing, Volume 19, Issue 2, pp 865-878, 2016 | 10.1007/s10586-016-0564-y | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continued reliance on human operators for managing data centers is a major
impediment for them from ever reaching extreme dimensions. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using live data,
opens one possible path towards limiting the role of operators in data centers.
In this paper, we present a data-science study of a public Google dataset
collected in a 12K-node cluster with the goal of building and evaluating
predictive models for node failures. Our results support the practicality of a
data-driven approach by showing the effectiveness of predictive models based on
data found in typical data center logs. We use BigQuery, the big data SQL
platform from the Google Cloud suite, to process massive amounts of data and
generate a rich feature set characterizing node state over time. We describe
how an ensemble classifier can be built out of many Random Forest classifiers
each trained on these features, to predict if nodes will fail in a future
24-hour window. Our evaluation reveals that if we limit false positive rates to
5%, we can achieve true positive rates between 27% and 88% with precision
varying between 50% and 72%. This level of performance allows us to recover a
large fraction of jobs' executions (by redirecting them to other nodes when a
failure of the present node is predicted) that would otherwise have been wasted
due to failures. [...]
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 16:55:01 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Sîrbu",
"Alina",
""
],
[
"Babaoglu",
"Ozalp",
""
]
] | TITLE: Towards Operator-less Data Centers Through Data-Driven, Predictive,
Proactive Autonomics
ABSTRACT: Continued reliance on human operators for managing data centers is a major
impediment to their ever reaching extreme dimensions. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using live data,
opens one possible path towards limiting the role of operators in data centers.
In this paper, we present a data-science study of a public Google dataset
collected in a 12K-node cluster with the goal of building and evaluating
predictive models for node failures. Our results support the practicality of a
data-driven approach by showing the effectiveness of predictive models based on
data found in typical data center logs. We use BigQuery, the big data SQL
platform from the Google Cloud suite, to process massive amounts of data and
generate a rich feature set characterizing node state over time. We describe
how an ensemble classifier can be built out of many Random Forest classifiers
each trained on these features, to predict if nodes will fail in a future
24-hour window. Our evaluation reveals that if we limit false positive rates to
5%, we can achieve true positive rates between 27% and 88% with precision
varying between 50% and 72%. This level of performance allows us to recover a
large fraction of jobs' executions (by redirecting them to other nodes when a
failure of the present node is predicted) that would otherwise have been wasted
due to failures. [...]
| no_new_dataset | 0.950227 |
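A hedged sketch of the failure-prediction setup described above: an ensemble of Random Forests whose averaged probabilities are thresholded to respect a 5% false positive budget. The file names and ensemble size are placeholders; the paper's BigQuery feature pipeline is assumed to have produced the arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

# Hypothetical arrays: node features from logs, labels = fails within 24h.
X_train, y_train = np.load("node_feats.npy"), np.load("node_labels.npy")
X_val, y_val = np.load("node_feats_val.npy"), np.load("node_labels_val.npy")

# Ensemble of forests, each trained on a bootstrap resample.
forests = [RandomForestClassifier(n_estimators=100, random_state=i)
           .fit(*resample(X_train, y_train, random_state=i))
           for i in range(5)]
proba = np.mean([f.predict_proba(X_val)[:, 1] for f in forests], axis=0)

# Choose the threshold that caps the false positive rate near 5%.
threshold = np.quantile(proba[y_val == 0], 0.95)
predict_fail = proba >= threshold
```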
1606.04506 | Yamuna Prasad | Yamuna Prasad, Dinesh Khandelwal, K. K. Biswas | Max-Margin Feature Selection | submitted to PR Letters | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many machine learning applications such as in vision, biology and social
networking deal with data in high dimensions. Feature selection is typically
employed to select a subset of features which improves generalization
accuracy as well as reduces the computational cost of learning the model. One
of the criteria used for feature selection is to jointly minimize the
redundancy and maximize the relevance of the selected features. In this
paper, we formulate the task of feature selection as a one-class SVM problem in
a space where features correspond to the data points and instances correspond
to the dimensions. The goal is to look for a representative subset of the
features (support vectors) which describes the boundary for the region where
the set of the features (data points) exists. This leads to a joint
optimization of relevance and redundancy in a principled max-margin framework.
Additionally, our formulation enables us to leverage existing techniques for
optimizing the SVM objective resulting in highly computationally efficient
solutions for the task of feature selection. Specifically, we employ the dual
coordinate descent algorithm (Hsieh et al., 2008), originally proposed for
SVMs, for our formulation. We use a sparse representation to deal with data in
very high dimensions. Experiments on seven publicly available benchmark
datasets from a variety of domains show that our approach results in orders of
magnitude faster solutions even while retaining the same level of accuracy
compared to the state of the art feature selection techniques.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 19:05:01 GMT"
}
] | 2016-06-15T00:00:00 | [
[
"Prasad",
"Yamuna",
""
],
[
"Khandelwal",
"Dinesh",
""
],
[
"Biswas",
"K. K.",
""
]
] | TITLE: Max-Margin Feature Selection
ABSTRACT: Many machine learning applications such as in vision, biology and social
networking deal with data in high dimensions. Feature selection is typically
employed to select a subset of features which improves generalization
accuracy as well as reduces the computational cost of learning the model. One
of the criteria used for feature selection is to jointly minimize the
redundancy and maximize the relevance of the selected features. In this
paper, we formulate the task of feature selection as a one class SVM problem in
a space where features correspond to the data points and instances correspond
to the dimensions. The goal is to look for a representative subset of the
features (support vectors) which describes the boundary for the region where
the set of the features (data points) exists. This leads to a joint
optimization of relevance and redundancy in a principled max-margin framework.
Additionally, our formulation enables us to leverage existing techniques for
optimizing the SVM objective resulting in highly computationally efficient
solutions for the task of feature selection. Specifically, we employ the dual
coordinate descent algorithm (Hsieh et al., 2008), originally proposed for
SVMs, for our formulation. We use a sparse representation to deal with data in
very high dimensions. Experiments on seven publicly available benchmark
datasets from a variety of domains show that our approach results in orders of
magnitude faster solutions even while retaining the same level of accuracy
compared to the state of the art feature selection techniques.
| no_new_dataset | 0.949902 |
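A hedged toy rendering of the reformulation in this record: transpose the data so each feature becomes a point, fit a one-class SVM, and keep the features that end up as support vectors. The kernel choice, `nu` value, and random data are assumptions; the paper itself uses a dual coordinate descent solver rather than this off-the-shelf call.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))        # 200 instances, 50 features

# Features as data points: one row per feature, one column per instance.
F = X.T

ocsvm = OneClassSVM(kernel="linear", nu=0.2)  # nu bounds the support fraction
ocsvm.fit(F)

selected = ocsvm.support_             # indices of support-vector features
print("selected features:", selected)
```

Here `nu` plays the role of a budget: it upper-bounds the fraction of features retained as support vectors.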
1502.05840 | Junchi Yan | Junchi Yan, Minsu Cho, Hongyuan Zha, Xiaokang Yang, Stephen Chu | A General Multi-Graph Matching Approach via Graduated
Consistency-regularized Boosting | null | IEEE Transactions on Pattern Analysis and Machine Intelligence
38(6) 2016, pages 1228-1242 | 10.1109/TPAMI.2015.2477832 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of matching $N$ weighted graphs referring to
an identical object or category. More specifically, it establishes the common
node correspondences among the graphs. This multi-graph matching problem involves two
ingredients affecting the overall accuracy: i) the local pairwise matching
affinity score among graphs; ii) the global matching consistency that measures
the uniqueness of the pairwise matching results by different chaining orders.
Previous studies typically either enforce the matching consistency constraints
in the beginning of iterative optimization, which may propagate matching error
both over iterations and across graph pairs; or separate affinity optimizing
and consistency regularization in two steps. This paper is motivated by the
observation that matching consistency can serve as a regularizer in the
affinity objective function when the function is biased due to noises or
inappropriate modeling. We propose multi-graph matching methods to incorporate
the two aspects by boosting the affinity score, meanwhile gradually infusing
the consistency as a regularizer. Furthermore, we propose a node-wise
consistency/affinity-driven mechanism to elicit the common inlier nodes out of
the irrelevant outliers. Extensive results on both synthetic and public image
datasets demonstrate the competency of the proposed algorithms.
| [
{
"version": "v1",
"created": "Fri, 20 Feb 2015 11:45:25 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Yan",
"Junchi",
""
],
[
"Cho",
"Minsu",
""
],
[
"Zha",
"Hongyuan",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Chu",
"Stephen",
""
]
] | TITLE: A General Multi-Graph Matching Approach via Graduated
Consistency-regularized Boosting
ABSTRACT: This paper addresses the problem of matching $N$ weighted graphs referring to
an identical object or category. More specifically, it establishes the common
node correspondences among the graphs. This multi-graph matching problem involves two
ingredients affecting the overall accuracy: i) the local pairwise matching
affinity score among graphs; ii) the global matching consistency that measures
the uniqueness of the pairwise matching results by different chaining orders.
Previous studies typically either enforce the matching consistency constraints
in the beginning of iterative optimization, which may propagate matching error
both over iterations and across graph pairs; or separate affinity optimizing
and consistency regularization in two steps. This paper is motivated by the
observation that matching consistency can serve as a regularizer in the
affinity objective function when the function is biased due to noises or
inappropriate modeling. We propose multi-graph matching methods to incorporate
the two aspects by boosting the affinity score, meanwhile gradually infusing
the consistency as a regularizer. Furthermore, we propose a node-wise
consistency/affinity-driven mechanism to elicit the common inlier nodes out of
the irrelevant outliers. Extensive results on both synthetic and public image
datasets demonstrate the competency of the proposed algorithms.
| no_new_dataset | 0.947721 |
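A small illustration of the matching-consistency notion this record builds on: for permutation matrices P_ij matching graph i to graph j, chaining through a third graph k (P_ik P_kj) should agree with the direct match P_ij. The agreement score below is illustrative, not the paper's regularizer; all matrices are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

def random_perm(n):
    """A random n x n permutation matrix."""
    return np.eye(n)[rng.permutation(n)]

P_ik, P_kj = random_perm(n), random_perm(n)
P_ij = P_ik @ P_kj                    # perfectly consistent by construction

def consistency(P_ij, P_ik, P_kj):
    """Fraction of nodes whose chained match agrees with the direct match."""
    return np.trace(P_ij.T @ (P_ik @ P_kj)) / P_ij.shape[0]

print(consistency(P_ij, P_ik, P_kj))            # 1.0
print(consistency(random_perm(n), P_ik, P_kj))  # typically < 1
```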
1507.00677 | Takeru Miyato | Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii | Distributional Smoothing with Virtual Adversarial Training | Under review as a conference paper at ICLR 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose local distributional smoothness (LDS), a new notion of smoothness
for statistical models that can be used as a regularization term to promote the
smoothness of the model distribution. We call the LDS-based regularization
virtual adversarial training (VAT). The LDS of a model at an input datapoint is
defined as the KL-divergence based robustness of the model distribution against
local perturbation around the datapoint. VAT resembles adversarial training,
but distinguishes itself in that it determines the adversarial direction from
the model distribution alone without using the label information, making it
applicable to semi-supervised learning. The computational cost for VAT is
relatively low. For neural networks, the approximate gradient of the LDS can be
computed with no more than three pairs of forward and back propagations. When
we applied our technique to supervised and semi-supervised learning for the
MNIST dataset, it outperformed all the training methods other than the current
state of the art method, which is based on a highly advanced generative model.
We also applied our method to SVHN and NORB, and confirmed our method's
superior performance over the current state of the art semi-supervised method
applied to these datasets.
| [
{
"version": "v1",
"created": "Thu, 2 Jul 2015 18:01:23 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Aug 2015 19:59:36 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Aug 2015 09:19:40 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Sep 2015 12:20:05 GMT"
},
{
"version": "v5",
"created": "Thu, 19 Nov 2015 18:47:51 GMT"
},
{
"version": "v6",
"created": "Wed, 25 Nov 2015 13:31:07 GMT"
},
{
"version": "v7",
"created": "Sat, 9 Jan 2016 23:53:05 GMT"
},
{
"version": "v8",
"created": "Mon, 29 Feb 2016 15:39:55 GMT"
},
{
"version": "v9",
"created": "Sat, 11 Jun 2016 18:22:33 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Miyato",
"Takeru",
""
],
[
"Maeda",
"Shin-ichi",
""
],
[
"Koyama",
"Masanori",
""
],
[
"Nakae",
"Ken",
""
],
[
"Ishii",
"Shin",
""
]
] | TITLE: Distributional Smoothing with Virtual Adversarial Training
ABSTRACT: We propose local distributional smoothness (LDS), a new notion of smoothness
for statistical models that can be used as a regularization term to promote the
smoothness of the model distribution. We call the LDS-based regularization
virtual adversarial training (VAT). The LDS of a model at an input datapoint is
defined as the KL-divergence based robustness of the model distribution against
local perturbation around the datapoint. VAT resembles adversarial training,
but distinguishes itself in that it determines the adversarial direction from
the model distribution alone without using the label information, making it
applicable to semi-supervised learning. The computational cost for VAT is
relatively low. For neural networks, the approximate gradient of the LDS can be
computed with no more than three pairs of forward and back propagations. When
we applied our technique to supervised and semi-supervised learning for the
MNIST dataset, it outperformed all the training methods other than the current
state of the art method, which is based on a highly advanced generative model.
We also applied our method to SVHN and NORB, and confirmed our method's
superior performance over the current state of the art semi-supervised method
applied to these datasets.
| no_new_dataset | 0.948728 |
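A rough sketch of the label-free adversarial direction this record describes, assuming PyTorch; the model, sizes, and hyperparameters are stand-ins. One power-iteration step follows the gradient of the KL divergence between predictions at x and at a slightly perturbed x + xi*d, without using any labels.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))
x = torch.randn(16, 10)
xi, eps = 1e-6, 1.0                           # illustrative constants

with torch.no_grad():
    p = F.softmax(model(x), dim=1)            # current model distribution

d = torch.randn_like(x)
d = d / d.norm(dim=1, keepdim=True)           # random unit direction
d.requires_grad_(True)

# KL(p(y|x) || p(y|x + xi*d)); its gradient w.r.t. d aligns d with the
# most sensitive (virtual adversarial) direction.
kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p, reduction="batchmean")
kl.backward()

with torch.no_grad():
    r_adv = eps * d.grad / d.grad.norm(dim=1, keepdim=True)

# LDS regularizer: KL between p and the prediction at the perturbed input.
lds = F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
print(float(lds))
```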
1508.07266 | Suin Kim | Suin Kim, Sungjoon Park, Scott A. Hale, Sooyoung Kim, Jeongmin Byun
and Alice Oh | Understanding Editing Behaviors in Multilingual Wikipedia | 34 pages, 7 figures | null | 10.1371/journal.pone.0155305 | null | cs.SI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilingualism is common offline, but we have a more limited understanding
of the ways multilingualism is displayed online and the roles that
multilinguals play in the spread of content between speakers of different
languages. We take a computational approach to studying multilingualism using
one of the largest user-generated content platforms, Wikipedia. We study
multilingualism by collecting and analyzing a large dataset of the content
written by multilingual editors of the English, German, and Spanish editions of
Wikipedia. This dataset contains over two million paragraphs edited by over
15,000 multilingual users from July 8 to August 9, 2013. We analyze these
multilingual editors in terms of their engagement, interests, and language
proficiency in their primary and non-primary (secondary) languages and find
that the English edition of Wikipedia displays different dynamics from the
Spanish and German editions. Users primarily editing the Spanish and German
editions make more complex edits than users who edit these editions as a second
language. In contrast, users editing the English edition as a second language
make edits that are just as complex as the edits by users who primarily edit
the English edition. In this way, English serves a special role bringing
together content written by multilinguals from many language editions.
Nonetheless, language remains a formidable hurdle to the spread of content: we
find evidence for a complexity barrier whereby editors are less likely to edit
complex content in a second language. In addition, we find that multilinguals
are less engaged and show lower levels of language proficiency in their second
languages. We also examine the topical interests of multilingual editors and
find that there is no significant difference between primary and non-primary
editors in each language.
| [
{
"version": "v1",
"created": "Fri, 28 Aug 2015 16:21:03 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Kim",
"Suin",
""
],
[
"Park",
"Sungjoon",
""
],
[
"Hale",
"Scott A.",
""
],
[
"Kim",
"Sooyoung",
""
],
[
"Byun",
"Jeongmin",
""
],
[
"Oh",
"Alice",
""
]
] | TITLE: Understanding Editing Behaviors in Multilingual Wikipedia
ABSTRACT: Multilingualism is common offline, but we have a more limited understanding
of the ways multilingualism is displayed online and the roles that
multilinguals play in the spread of content between speakers of different
languages. We take a computational approach to studying multilingualism using
one of the largest user-generated content platforms, Wikipedia. We study
multilingualism by collecting and analyzing a large dataset of the content
written by multilingual editors of the English, German, and Spanish editions of
Wikipedia. This dataset contains over two million paragraphs edited by over
15,000 multilingual users from July 8 to August 9, 2013. We analyze these
multilingual editors in terms of their engagement, interests, and language
proficiency in their primary and non-primary (secondary) languages and find
that the English edition of Wikipedia displays different dynamics from the
Spanish and German editions. Users primarily editing the Spanish and German
editions make more complex edits than users who edit these editions as a second
language. In contrast, users editing the English edition as a second language
make edits that are just as complex as the edits by users who primarily edit
the English edition. In this way, English serves a special role bringing
together content written by multilinguals from many language editions.
Nonetheless, language remains a formidable hurdle to the spread of content: we
find evidence for a complexity barrier whereby editors are less likely to edit
complex content in a second language. In addition, we find that multilinguals
are less engaged and show lower levels of language proficiency in their second
languages. We also examine the topical interests of multilingual editors and
find that there is no significant difference between primary and non-primary
editors in each language.
| no_new_dataset | 0.909667 |
1510.03055 | Michel Galley | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | A Diversity-Promoting Objective Function for Neural Conversation Models | In. Proc of NAACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message), is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations.
| [
{
"version": "v1",
"created": "Sun, 11 Oct 2015 14:04:57 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jan 2016 06:59:19 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2016 22:03:28 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Li",
"Jiwei",
""
],
[
"Galley",
"Michel",
""
],
[
"Brockett",
"Chris",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Dolan",
"Bill",
""
]
] | TITLE: A Diversity-Promoting Objective Function for Neural Conversation Models
ABSTRACT: Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message), is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations.
| no_new_dataset | 0.946001 |
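A toy rendering of the reranking view of the MMI objective in this record: score candidate responses by log p(T|S) minus a weighted log p(T), so generic replies are penalized. The numbers below are invented for illustration; in practice they would come from the seq2seq model and a language model.

```python
# Two hypothetical candidates with made-up log-probabilities.
candidates = {
    "i don't know":        {"logp_t_given_s": -5.0, "logp_t": -3.0},
    "the red line at 8am": {"logp_t_given_s": -6.0, "logp_t": -9.0},
}
lam = 0.5  # weight on the language-model prior (an assumption)

def mmi_score(s):
    """MMI-antiLM style score: conditional likelihood minus a prior penalty."""
    return s["logp_t_given_s"] - lam * s["logp_t"]

best = max(candidates, key=lambda c: mmi_score(candidates[c]))
print(best)  # the specific reply wins despite lower conditional likelihood
```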
1602.03320 | Arlei Lopes Da Silva | Arlei Silva, Xuan-Hong Dang, Prithwish Basu, Ambuj K Singh, Ananthram
Swami | Graph Wavelets via Sparse Cuts: Extended Version | null | null | null | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling information that resides on vertices of large graphs is a key
problem in several real-life applications, ranging from social networks to the
Internet-of-things. Signal Processing on Graphs and, in particular, graph
wavelets can exploit the intrinsic smoothness of these datasets in order to
represent them in a manner that is both compact and accurate. However, how to discover
wavelet bases that capture the geometry of the data with respect to the signal
as well as the graph structure remains an open question. In this paper, we
study the problem of computing graph wavelet bases via sparse cuts in order to
produce low-dimensional encodings of data-driven bases. This problem is
connected to known hard problems in graph theory (e.g. multiway cuts) and thus
requires an efficient heuristic. We formulate the basis discovery task as a
relaxation of a vector optimization problem, which leads to an elegant solution
as a regularized eigenvalue computation. Moreover, we propose several
strategies in order to scale our algorithm to large graphs. Experimental
results show that the proposed algorithm can effectively encode both the graph
structure and signal, producing compressed and accurate representations for
vertex values in a wide range of datasets (e.g. sensor and gene networks) and
significantly outperforming the best baseline.
| [
{
"version": "v1",
"created": "Wed, 10 Feb 2016 10:34:41 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Feb 2016 04:21:13 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Feb 2016 07:08:36 GMT"
},
{
"version": "v4",
"created": "Fri, 26 Feb 2016 01:01:45 GMT"
},
{
"version": "v5",
"created": "Mon, 13 Jun 2016 02:31:07 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Silva",
"Arlei",
""
],
[
"Dang",
"Xuan-Hong",
""
],
[
"Basu",
"Prithwish",
""
],
[
"Singh",
"Ambuj K",
""
],
[
"Swami",
"Ananthram",
""
]
] | TITLE: Graph Wavelets via Sparse Cuts: Extended Version
ABSTRACT: Modeling information that resides on vertices of large graphs is a key
problem in several real-life applications, ranging from social networks to the
Internet-of-things. Signal Processing on Graphs and, in particular, graph
wavelets can exploit the intrinsic smoothness of these datasets in order to
represent them in a manner that is both compact and accurate. However, how to discover
wavelet bases that capture the geometry of the data with respect to the signal
as well as the graph structure remains an open question. In this paper, we
study the problem of computing graph wavelet bases via sparse cuts in order to
produce low-dimensional encodings of data-driven bases. This problem is
connected to known hard problems in graph theory (e.g. multiway cuts) and thus
requires an efficient heuristic. We formulate the basis discovery task as a
relaxation of a vector optimization problem, which leads to an elegant solution
as a regularized eigenvalue computation. Moreover, we propose several
strategies in order to scale our algorithm to large graphs. Experimental
results show that the proposed algorithm can effectively encode both the graph
structure and signal, producing compressed and accurate representations for
vertex values in a wide range of datasets (e.g. sensor and gene networks) and
significantly outperforming the best baseline.
| no_new_dataset | 0.948775 |
1606.01609 | Chunhua Shen | Lin Wu, Chunhua Shen, Anton van den Hengel | Deep Recurrent Convolutional Networks for Video-based Person
Re-identification: An End-to-End Approach | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an end-to-end approach to simultaneously learn
spatio-temporal features and corresponding similarity metric for video-based
person re-identification. Given the video sequence of a person, features from
each frame that are extracted from all levels of a deep convolutional network
can preserve a higher spatial resolution from which we can model finer motion
patterns. These low-level visual percepts are leveraged into a variant of
recurrent model to characterize the temporal variation between time-steps.
Features from all time-steps are then summarized using temporal pooling to
produce an overall feature representation for the complete sequence. The deep
convolutional network, recurrent layer, and the temporal pooling are jointly
trained to extract comparable hidden-unit representations from an input pair of
time series to compute their corresponding similarity value. The proposed
framework combines time series modeling and metric learning to jointly learn
relevant features and a good similarity measure between time sequences of
persons.
Experiments demonstrate that our approach achieves the state-of-the-art
performance for video-based person re-identification on iLIDS-VID and PRID
2011, the two primary public datasets for this purpose.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2016 04:29:16 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Jun 2016 10:52:09 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Wu",
"Lin",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Deep Recurrent Convolutional Networks for Video-based Person
Re-identification: An End-to-End Approach
ABSTRACT: In this paper, we present an end-to-end approach to simultaneously learn
spatio-temporal features and corresponding similarity metric for video-based
person re-identification. Given the video sequence of a person, features from
each frame that are extracted from all levels of a deep convolutional network
can preserve a higher spatial resolution from which we can model finer motion
patterns. These low-level visual percepts are leveraged into a variant of
recurrent model to characterize the temporal variation between time-steps.
Features from all time-steps are then summarized using temporal pooling to
produce an overall feature representation for the complete sequence. The deep
convolutional network, recurrent layer, and the temporal pooling are jointly
trained to extract comparable hidden-unit representations from an input pair of
time series to compute their corresponding similarity value. The proposed
framework combines time series modeling and metric learning to jointly learn
relevant features and a good similarity measure between time sequences of
persons.
Experiments demonstrate that our approach achieves the state-of-the-art
performance for video-based person re-identification on iLIDS-VID and PRID
2011, the two primary public datasets for this purpose.
| no_new_dataset | 0.949995 |
1606.02617 | Aleksander Lodwich | Aleksander Lodwich, Faisal Shafait and Thomas Breuel | Efficient Estimation of k for the Nearest Neighbors Class of Methods | Technical Report, 16p, alternative source:
http://lodwich.net/Science.html | null | 10.13140/RG.2.1.5045.4649 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k Nearest Neighbors (kNN) method has received much attention in the past
decades, where some theoretical bounds on its performance were identified and
where practical optimizations were proposed for making it work fairly well in
high dimensional spaces and on large datasets. From countless experiments of
the past it became widely accepted that the value of k has a significant impact
on the performance of this method. However, the efficient optimization of this
parameter has not received as much attention in the literature. Today, the most
common approach is to cross-validate or bootstrap this value for all values in
question. This approach forces distances to be recomputed many times, even if
efficient methods are used. Hence, estimating the optimal k can become
expensive even on modern systems. Frequently, this circumstance leads to a
sparse manual search of k. In this paper we want to point out that a systematic
and thorough estimation of the parameter k can be performed efficiently. The
discussed approach relies on large matrices, but we want to argue that, in
practice, a higher space complexity is often much less of a problem than
repetitive distance computations.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 16:11:53 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2016 11:34:59 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Lodwich",
"Aleksander",
""
],
[
"Shafait",
"Faisal",
""
],
[
"Breuel",
"Thomas",
""
]
] | TITLE: Efficient Estimation of k for the Nearest Neighbors Class of Methods
ABSTRACT: The k Nearest Neighbors (kNN) method has received much attention in the past
decades, where some theoretical bounds on its performance were identified and
where practical optimizations were proposed for making it work fairly well in
high dimensional spaces and on large datasets. From countless experiments of
the past it became widely accepted that the value of k has a significant impact
on the performance of this method. However, the efficient optimization of this
parameter has not received as much attention in the literature. Today, the most
common approach is to cross-validate or bootstrap this value for all values in
question. This approach forces distances to be recomputed many times, even if
efficient methods are used. Hence, estimating the optimal k can become
expensive even on modern systems. Frequently, this circumstance leads to a
sparse manual search of k. In this paper we want to point out that a systematic
and thorough estimation of the parameter k can be performed efficiently. The
discussed approach relies on large matrices, but we want to argue that, in
practice, a higher space complexity is often much less of a problem than
repetitive distance computations.
| no_new_dataset | 0.951278 |
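A sketch of the memory-for-time trade-off this record argues for: compute the pairwise distance matrix and the neighbor ordering once, then read off the leave-one-out accuracy of every candidate k from cumulative neighbor votes, instead of re-running kNN per k. Data and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)

# Full distance matrix: the space-for-speed trade-off discussed above.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)                  # exclude the point itself
order = np.argsort(D, axis=1)                # neighbors sorted once

neighbor_labels = y[order]                   # labels, nearest first
votes = np.cumsum(neighbor_labels, axis=1)   # running count of class-1 votes

for k in (1, 5, 15, 45):
    pred = (votes[:, k - 1] * 2 > k).astype(int)   # majority of first k
    print(f"k={k:3d}  LOO accuracy={np.mean(pred == y):.3f}")
```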
1606.03473 | Huaizu Jiang | Huaizu Jiang and Erik Learned-Miller | Face Detection with the Faster R-CNN | technical report | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Faster R-CNN has recently demonstrated impressive results on various
object detection benchmarks. By training a Faster R-CNN model on the large
scale WIDER face dataset, we report state-of-the-art results on two widely used
face detection benchmarks, FDDB and the recently released IJB-A.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 20:34:39 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Jiang",
"Huaizu",
""
],
[
"Learned-Miller",
"Erik",
""
]
] | TITLE: Face Detection with the Faster R-CNN
ABSTRACT: The Faster R-CNN has recently demonstrated impressive results on various
object detection benchmarks. By training a Faster R-CNN model on the large
scale WIDER face dataset, we report state-of-the-art results on two widely used
face detection benchmarks, FDDB and the recently released IJB-A.
| no_new_dataset | 0.954308 |
1606.03475 | Franck Dernoncourt | Franck Dernoncourt, Ji Young Lee, Ozlem Uzuner, Peter Szolovits | De-identification of Patient Notes with Recurrent Neural Networks | null | null | null | null | cs.CL cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Patient notes in electronic health records (EHRs) may contain
critical information for medical investigations. However, the vast majority of
medical investigators can only access de-identified notes, in order to protect
the confidentiality of patients. In the United States, the Health Insurance
Portability and Accountability Act (HIPAA) defines 18 types of protected health
information (PHI) that need to be removed to de-identify patient notes. Manual
de-identification is impractical given the size of EHR databases, the limited
number of researchers with access to the non-de-identified notes, and the
frequent mistakes of human annotators. A reliable automated de-identification
system would consequently be of high value.
Materials and Methods: We introduce the first de-identification system based
on artificial neural networks (ANNs), which requires no handcrafted features or
rules, unlike existing systems. We compare the performance of the system with
state-of-the-art systems on two datasets: the i2b2 2014 de-identification
challenge dataset, which is the largest publicly available de-identification
dataset, and the MIMIC de-identification dataset, which we assembled and is
twice as large as the i2b2 2014 dataset.
Results: Our ANN model outperforms the state-of-the-art systems. It yields an
F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision
of 97.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with
a recall of 99.25 and a precision of 99.06.
Conclusion: Our findings support the use of ANNs for de-identification of
patient notes, as they show better performance than previously published
systems while requiring no feature engineering.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 20:45:30 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Dernoncourt",
"Franck",
""
],
[
"Lee",
"Ji Young",
""
],
[
"Uzuner",
"Ozlem",
""
],
[
"Szolovits",
"Peter",
""
]
] | TITLE: De-identification of Patient Notes with Recurrent Neural Networks
ABSTRACT: Objective: Patient notes in electronic health records (EHRs) may contain
critical information for medical investigations. However, the vast majority of
medical investigators can only access de-identified notes, in order to protect
the confidentiality of patients. In the United States, the Health Insurance
Portability and Accountability Act (HIPAA) defines 18 types of protected health
information (PHI) that need to be removed to de-identify patient notes. Manual
de-identification is impractical given the size of EHR databases, the limited
number of researchers with access to the non-de-identified notes, and the
frequent mistakes of human annotators. A reliable automated de-identification
system would consequently be of high value.
Materials and Methods: We introduce the first de-identification system based
on artificial neural networks (ANNs), which requires no handcrafted features or
rules, unlike existing systems. We compare the performance of the system with
state-of-the-art systems on two datasets: the i2b2 2014 de-identification
challenge dataset, which is the largest publicly available de-identification
dataset, and the MIMIC de-identification dataset, which we assembled and is
twice as large as the i2b2 2014 dataset.
Results: Our ANN model outperforms the state-of-the-art systems. It yields an
F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision
of 97.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with
a recall of 99.25 and a precision of 99.06.
Conclusion: Our findings support the use of ANNs for de-identification of
patient notes, as they show better performance than previously published
systems while requiring no feature engineering.
| no_new_dataset | 0.544315 |
1606.03601 | Mohamed Aly | Mohamed Aly, Guangming Zang, Wolfgang Heidrich, Peter Wonka | TRex: A Tomography Reconstruction Proximal Framework for Robust Sparse
View X-Ray Applications | null | null | null | null | math.OC cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present TRex, a flexible and robust Tomographic Reconstruction framework
using proximal algorithms. We provide an overview and perform an experimental
comparison of well-known iterative reconstruction methods in terms of
reconstruction quality in sparse view situations. We then derive the proximal
operators for the four best methods. We show the flexibility of our framework
by deriving solvers for two noise models: Gaussian and Poisson; and by plugging
in three powerful regularizers. We compare our framework to state of the art
methods, and show superior quality on both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Sat, 11 Jun 2016 14:19:28 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Aly",
"Mohamed",
""
],
[
"Zang",
"Guangming",
""
],
[
"Heidrich",
"Wolfgang",
""
],
[
"Wonka",
"Peter",
""
]
] | TITLE: TRex: A Tomography Reconstruction Proximal Framework for Robust Sparse
View X-Ray Applications
ABSTRACT: We present TRex, a flexible and robust Tomographic Reconstruction framework
using proximal algorithms. We provide an overview and perform an experimental
comparison of well-known iterative reconstruction methods in terms of
reconstruction quality in sparse view situations. We then derive the proximal
operators for the four best methods. We show the flexibility of our framework
by deriving solvers for two noise models: Gaussian and Poisson; and by plugging
in three powerful regularizers. We compare our framework to state of the art
methods, and show superior quality on both synthetic and real datasets.
| no_new_dataset | 0.948155 |
1606.03622 | Robin Jia | Robin Jia and Percy Liang | Data Recombination for Neural Semantic Parsing | ACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision.
| [
{
"version": "v1",
"created": "Sat, 11 Jun 2016 20:34:09 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Jia",
"Robin",
""
],
[
"Liang",
"Percy",
""
]
] | TITLE: Data Recombination for Neural Semantic Parsing
ABSTRACT: Modeling crisp logical regularities is crucial in semantic parsing, making it
difficult for neural models with no task-specific prior knowledge to achieve
good results. In this paper, we introduce data recombination, a novel framework
for injecting such prior knowledge into a model. From the training data, we
induce a high-precision synchronous context-free grammar, which captures
important conditional independence properties commonly found in semantic
parsing. We then train a sequence-to-sequence recurrent network (RNN) model
with a novel attention-based copying mechanism on datapoints sampled from this
grammar, thereby teaching the model about these structural properties. Data
recombination improves the accuracy of our RNN model on three semantic parsing
datasets, leading to new state-of-the-art performance on the standard GeoQuery
dataset for models with comparable supervision.
| no_new_dataset | 0.950595 |
1606.03628 | Jiaping Zhao | Jiaping Zhao, Zerong Xi and Laurent Itti | metricDTW: local distance metric learning in Dynamic Time Warping | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to learn multiple local Mahalanobis distance metrics to perform
k-nearest neighbor (kNN) classification of temporal sequences. Temporal
sequences are first aligned by dynamic time warping (DTW); given the alignment
path, similarity between two sequences is measured by the DTW distance, which
is computed as the accumulated distance between matched temporal point pairs
along the alignment path. Traditionally, Euclidean metric is used for distance
computation between matched pairs, which ignores the data regularities and
might not be optimal for applications at hand. Here we propose to learn
multiple Mahalanobis metrics, such that DTW distance becomes the sum of
Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN)
framework to our case, and formulate multiple metric learning as a linear
programming problem. Extensive sequence classification results show that our
proposed multiple metrics learning approach is effective, insensitive to the
preceding alignment quality, and reaches state-of-the-art performance on
UCR time series datasets.
| [
{
"version": "v1",
"created": "Sat, 11 Jun 2016 21:14:08 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Zhao",
"Jiaping",
""
],
[
"Xi",
"Zerong",
""
],
[
"Itti",
"Laurent",
""
]
] | TITLE: metricDTW: local distance metric learning in Dynamic Time Warping
ABSTRACT: We propose to learn multiple local Mahalanobis distance metrics to perform
k-nearest neighbor (kNN) classification of temporal sequences. Temporal
sequences are first aligned by dynamic time warping (DTW); given the alignment
path, similarity between two sequences is measured by the DTW distance, which
is computed as the accumulated distance between matched temporal point pairs
along the alignment path. Traditionally, Euclidean metric is used for distance
computation between matched pairs, which ignores the data regularities and
might not be optimal for applications at hand. Here we propose to learn
multiple Mahalanobis metrics, such that DTW distance becomes the sum of
Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN)
framework to our case, and formulate multiple metric learning as a linear
programming problem. Extensive sequence classification results show that our
proposed multiple metrics learning approach is effective, insensitive to the
preceding alignment quality, and reaches state-of-the-art performance on
UCR time series datasets.
| no_new_dataset | 0.948394 |
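A minimal DTW sketch matching the setup in this record: the accumulated cost sums a local Mahalanobis distance over matched temporal point pairs. Using a single global matrix M below is a deliberate simplification of the paper's multiple learned local metrics; the sequences are random.

```python
import numpy as np

def dtw(a, b, M):
    """DTW distance between sequences a, b with d(x,y) = sqrt((x-y)^T M (x-y))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = a[i - 1] - b[j - 1]
            cost = np.sqrt(diff @ M @ diff)          # local Mahalanobis cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(4)
a, b = rng.normal(size=(20, 3)), rng.normal(size=(25, 3))
M = np.eye(3)          # Euclidean special case; a learned PSD M would
print(dtw(a, b, M))    # reweight feature dimensions instead
```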
1606.03657 | Xi Chen | Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever,
Pieter Abbeel | InfoGAN: Interpretable Representation Learning by Information Maximizing
Generative Adversarial Nets | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes InfoGAN, an information-theoretic extension to the
Generative Adversarial Network that is able to learn disentangled
representations in a completely unsupervised manner. InfoGAN is a generative
adversarial network that also maximizes the mutual information between a small
subset of the latent variables and the observation. We derive a lower bound to
the mutual information objective that can be optimized efficiently, and show
that our training procedure can be interpreted as a variation of the Wake-Sleep
algorithm. Specifically, InfoGAN successfully disentangles writing styles from
digit shapes on the MNIST dataset, pose from lighting of 3D rendered images,
and background digits from the central digit on the SVHN dataset. It also
discovers visual concepts that include hair styles, presence/absence of
eyeglasses, and emotions on the CelebA face dataset. Experiments show that
InfoGAN learns interpretable representations that are competitive with
representations learned by existing fully supervised methods.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 02:14:31 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Chen",
"Xi",
""
],
[
"Duan",
"Yan",
""
],
[
"Houthooft",
"Rein",
""
],
[
"Schulman",
"John",
""
],
[
"Sutskever",
"Ilya",
""
],
[
"Abbeel",
"Pieter",
""
]
] | TITLE: InfoGAN: Interpretable Representation Learning by Information Maximizing
Generative Adversarial Nets
ABSTRACT: This paper describes InfoGAN, an information-theoretic extension to the
Generative Adversarial Network that is able to learn disentangled
representations in a completely unsupervised manner. InfoGAN is a generative
adversarial network that also maximizes the mutual information between a small
subset of the latent variables and the observation. We derive a lower bound to
the mutual information objective that can be optimized efficiently, and show
that our training procedure can be interpreted as a variation of the Wake-Sleep
algorithm. Specifically, InfoGAN successfully disentangles writing styles from
digit shapes on the MNIST dataset, pose from lighting of 3D rendered images,
and background digits from the central digit on the SVHN dataset. It also
discovers visual concepts that include hair styles, presence/absence of
eyeglasses, and emotions on the CelebA face dataset. Experiments show that
InfoGAN learns interpretable representations that are competitive with
representations learned by existing fully supervised methods.
| no_new_dataset | 0.94474 |
1606.03672 | Ashkan Esmaeili | Ashkan Esmaeili and Farokh Marvasti | Comparison of Several Sparse Recovery Methods for Low Rank Matrices with
Random Samples | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will investigate the efficacy of IMAT (Iterative Method of
Adaptive Thresholding) in recovering the sparse signal (parameters) for linear
models with missing data. Sparse recovery arises in compressed sensing and
machine learning problems and has various applications necessitating viable
reconstruction methods specifically when we work with big data. This paper will
focus on comparing the power of IMAT in reconstruction of the desired sparse
signal with LASSO. Additionally, we will assume the model has random missing
information. Missing data has been recently of interest in big data and machine
learning problems since they appear in many cases including but not limited to
medical imaging datasets, hospital datasets, and massive MIMO. The dominance of
IMAT over the well-known LASSO will be examined in different
scenarios. Simulations and numerical results are also provided to verify the
arguments.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 07:05:22 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Esmaeili",
"Ashkan",
""
],
[
"Marvasti",
"Farokh",
""
]
] | TITLE: Comparison of Several Sparse Recovery Methods for Low Rank Matrices with
Random Samples
ABSTRACT: In this paper, we will investigate the efficacy of IMAT (Iterative Method of
Adaptive Thresholding) in recovering the sparse signal (parameters) for linear
models with missing data. Sparse recovery arises in compressed sensing and
machine learning problems and has various applications necessitating viable
reconstruction methods specifically when we work with big data. This paper will
focus on comparing the power of IMAT in reconstruction of the desired sparse
signal with LASSO. Additionally, we will assume the model has random missing
information. Missing data has been recently of interest in big data and machine
learning problems since they appear in many cases including but not limited to
medical imaging datasets, hospital datasets, and massive MIMO. The dominance of
IMAT over the well-known LASSO will be examined in different
scenarios. Simulations and numerical results are also provided to verify the
arguments.
| no_new_dataset | 0.949482 |
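A hedged sketch of an IMAT-style loop for the setting in this record: alternate a gradient step toward the available random samples with a hard threshold whose level decays over iterations. The step size, threshold schedule, and constants are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 100, 60, 5                        # signal dim, samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))                 # random sampling operator
y = A @ x_true                              # the available random samples

lam = 1.0 / np.linalg.norm(A, 2) ** 2       # step size chosen for stability
beta = np.max(np.abs(lam * A.T @ y))        # initial threshold level
x = np.zeros(n)
for it in range(300):
    x = x + lam * A.T @ (y - A @ x)         # gradient step toward the data
    tau = beta * np.exp(-0.05 * it)         # adaptively decaying threshold
    x = np.where(np.abs(x) >= tau, x, 0.0)  # hard thresholding

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```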
1606.03719 | Roberto Capobianco | Roberto Capobianco, Jacopo Serafin, Johann Dichtl, Giorgio Grisetti,
Luca Iocchi and Daniele Nardi | A Proposal for Semantic Map Representation and Evaluation | null | null | 10.1109/ECMR.2015.7324198 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic mapping is the incremental process of "mapping" relevant information
of the world (i.e., spatial information, temporal events, agents and actions)
to a formal description supported by a reasoning engine. Current research
focuses on learning the semantics of environments based on their spatial
location, geometry and appearance. Many methods to tackle this problem have
been proposed, but the lack of a uniform representation, as well as standard
benchmarking suites, prevents their direct comparison. In this paper, we
propose a standardization in the representation of semantic maps, by defining
an easily extensible formalism to be used on top of metric maps of the
environments. Based on this, we describe the procedure to build a dataset
(based on real sensor data) for benchmarking semantic mapping techniques, also
hypothesizing some possible evaluation metrics. Nevertheless, by providing a
tool for the construction of a semantic map ground truth, we aim to encourage
the scientific community to contribute data for populating the
dataset.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 14:43:07 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Capobianco",
"Roberto",
""
],
[
"Serafin",
"Jacopo",
""
],
[
"Dichtl",
"Johann",
""
],
[
"Grisetti",
"Giorgio",
""
],
[
"Iocchi",
"Luca",
""
],
[
"Nardi",
"Daniele",
""
]
] | TITLE: A Proposal for Semantic Map Representation and Evaluation
ABSTRACT: Semantic mapping is the incremental process of "mapping" relevant information
of the world (i.e., spatial information, temporal events, agents and actions)
to a formal description supported by a reasoning engine. Current research
focuses on learning the semantics of environments based on their spatial
location, geometry and appearance. Many methods to tackle this problem have
been proposed, but the lack of a uniform representation, as well as standard
benchmarking suites, prevents their direct comparison. In this paper, we
propose a standardization in the representation of semantic maps, by defining
an easily extensible formalism to be used on top of metric maps of the
environments. Based on this, we describe the procedure to build a dataset
(based on real sensor data) for benchmarking semantic mapping techniques, also
hypothesizing some possible evaluation metrics. Nevertheless, by providing a
tool for the construction of a semantic map ground truth, we aim to encourage
the scientific community to contribute data for populating the
dataset.
| new_dataset | 0.533228 |
1606.03774 | Chenxia Wu | Chenxia Wu, Jiemi Zhang, Ashutosh Saxena, Silvio Savarese | Human Centred Object Co-Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-segmentation is the automatic extraction of the common semantic regions
given a set of images. Different from previous approaches mainly based on
object visuals, in this paper, we propose a human centred object
co-segmentation approach, which uses the human as another strong source of evidence. In
order to discover the rich internal structure of the objects reflecting their
human-object interactions and visual similarities, we propose an unsupervised
fully connected CRF auto-encoder incorporating the rich object features and a
novel human-object interaction representation. We propose an efficient learning
and inference algorithm to allow the full connectivity of the CRF with the
auto-encoder, that establishes pairwise relations on all pairs of the object
proposals in the dataset. Moreover, the auto-encoder learns the parameters from
the data itself rather than supervised learning or manually assigned parameters
in the conventional CRF. In the extensive experiments on four datasets, we show
that our approach is able to extract the common objects more accurately than
the state-of-the-art co-segmentation algorithms.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 22:36:53 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Wu",
"Chenxia",
""
],
[
"Zhang",
"Jiemi",
""
],
[
"Saxena",
"Ashutosh",
""
],
[
"Savarese",
"Silvio",
""
]
] | TITLE: Human Centred Object Co-Segmentation
ABSTRACT: Co-segmentation is the automatic extraction of the common semantic regions
given a set of images. Different from previous approaches mainly based on
object visuals, in this paper, we propose a human centred object
co-segmentation approach, which uses the human as another strong source of evidence. In
order to discover the rich internal structure of the objects reflecting their
human-object interactions and visual similarities, we propose an unsupervised
fully connected CRF auto-encoder incorporating the rich object features and a
novel human-object interaction representation. We propose an efficient learning
and inference algorithm to allow the full connectivity of the CRF with the
auto-encoder, that establishes pairwise relations on all pairs of the object
proposals in the dataset. Moreover, the auto-encoder learns the parameters from
the data itself rather than supervised learning or manually assigned parameters
in the conventional CRF. In the extensive experiments on four datasets, we show
that our approach is able to extract the common objects more accurately than
the state-of-the-art co-segmentation algorithms.
| no_new_dataset | 0.947721 |
1606.03784 | Guido Zarrella | Guido Zarrella and Amy Marsh | MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection | International Workshop on Semantic Evaluation 2016 | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance
in Tweets. This effort achieved the top score in Task A on supervised stance
detection, producing an average F1 score of 67.8 when assessing whether a tweet
author was in favor or against a topic. We employed a recurrent neural network
initialized with features learned via distant supervision on two large
unlabeled datasets. We trained embeddings of words and phrases with the
word2vec skip-gram method, then used those features to learn sentence
representations via a hashtag prediction auxiliary task. These sentence vectors
were then fine-tuned for stance detection on several hundred labeled examples.
The result was a high performing system that used transfer learning to maximize
the value of the available training data.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 00:12:49 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Zarrella",
"Guido",
""
],
[
"Marsh",
"Amy",
""
]
] | TITLE: MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection
ABSTRACT: We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance
in Tweets. This effort achieved the top score in Task A on supervised stance
detection, producing an average F1 score of 67.8 when assessing whether a tweet
author was in favor or against a topic. We employed a recurrent neural network
initialized with features learned via distant supervision on two large
unlabeled datasets. We trained embeddings of words and phrases with the
word2vec skip-gram method, then used those features to learn sentence
representations via a hashtag prediction auxiliary task. These sentence vectors
were then fine-tuned for stance detection on several hundred labeled examples.
The result was a high performing system that used transfer learning to maximize
the value of the available training data.
| no_new_dataset | 0.946547 |
1606.03816 | Mehrdad Farajtabar | Mehrdad Farajtabar, Xiaojing Ye, Sahar Harati, Le Song, Hongyuan Zha | Multistage Campaigning in Social Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of how to optimize multi-stage campaigning over
social networks. The dynamic programming framework is employed to balance the
high present reward and large penalty on low future outcome in the presence of
extensive uncertainties. In particular, we establish theoretical foundations of
optimal campaigning over social networks where the user activities are modeled
as a multivariate Hawkes process, and we derive a time dependent linear
relation between the intensity of exogenous events and several commonly used
objective functions of campaigning. We further develop a convex dynamic
programming framework for determining the optimal intervention policy that
prescribes the required level of external drive at each stage for the desired
campaigning result. Experiments on both synthetic data and the real-world
MemeTracker dataset show that our algorithm can steer the user activities for
optimal campaigning much more accurately than baselines.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 05:29:49 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Farajtabar",
"Mehrdad",
""
],
[
"Ye",
"Xiaojing",
""
],
[
"Harati",
"Sahar",
""
],
[
"Song",
"Le",
""
],
[
"Zha",
"Hongyuan",
""
]
] | TITLE: Multistage Campaigning in Social Networks
ABSTRACT: We consider the problem of how to optimize multi-stage campaigning over
social networks. The dynamic programming framework is employed to balance the
high present reward against the large penalty on low future outcomes in the presence of
extensive uncertainties. In particular, we establish theoretical foundations of
optimal campaigning over social networks where the user activities are modeled
as a multivariate Hawkes process, and we derive a time dependent linear
relation between the intensity of exogenous events and several commonly used
objective functions of campaigning. We further develop a convex dynamic
programming framework for determining the optimal intervention policy that
prescribes the required level of external drive at each stage for the desired
campaigning result. Experiments on both synthetic data and the real-world
MemeTracker dataset show that our algorithm can steer the user activities for
optimal campaigning much more accurately than baselines.
| no_new_dataset | 0.945197 |
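A toy computation of the multivariate Hawkes intensity underlying this record: a baseline (exogenous, campaign-controllable) rate plus exponentially fading excitation from past events. The parameters, kernel, and event history below are invented for illustration.

```python
import numpy as np

mu = np.array([0.1, 0.2])                 # exogenous (campaign-driven) rates
A = np.array([[0.3, 0.1],
              [0.2, 0.4]])                # user-to-user influence weights
w = 1.0                                   # decay rate of the kernel
events = [(0.5, 0), (1.2, 1), (2.0, 0)]   # (time, user) history

def intensity(t):
    """lambda_i(t) = mu_i + sum over past events of A[i, u] * exp(-w (t - t_u))."""
    lam = mu.copy()
    for t_j, u_j in events:
        if t_j < t:
            lam += A[:, u_j] * np.exp(-w * (t - t_j))
    return lam

print(intensity(2.5))
```

Steering the exogenous term mu over stages is exactly the control knob the record's dynamic programming formulation optimizes.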
1606.03838 | Boyue Wang | Boyue Wang and Yongli Hu and Junbin Gao and Yanfeng Sun and Baocai Yin | Laplacian LRR on Product Grassmann Manifolds for Human Activity
Clustering in Multi-Camera Video Surveillance | 14 pages, submitting to IEEE TCSVT with minor revision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-camera video surveillance, it is challenging to represent videos
from different cameras properly and fuse them efficiently for specific
applications such as human activity recognition and clustering. In this paper,
a novel representation for multi-camera video data, namely the Product
Grassmann Manifold (PGM), is proposed to model video sequences as points on the
Grassmann manifold and integrate them as a whole in the product manifold form.
Additionally, with a new geometry metric on the product manifold, the
conventional Low Rank Representation (LRR) model is extended onto PGM and the
new LRR model can be used for clustering non-linear data, such as multi-camera
video data. To evaluate the proposed method, a number of clustering experiments
are conducted on several multi-camera video datasets of human activity,
including Dongzhimen Transport Hub Crowd action dataset, ACT 42 Human action
dataset and SKIG action dataset. The experiment results show that the proposed
method outperforms many state-of-the-art clustering methods.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 07:09:39 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Wang",
"Boyue",
""
],
[
"Hu",
"Yongli",
""
],
[
"Gao",
"Junbin",
""
],
[
"Sun",
"Yanfeng",
""
],
[
"Yin",
"Baocai",
""
]
] | TITLE: Laplacian LRR on Product Grassmann Manifolds for Human Activity
Clustering in Multi-Camera Video Surveillance
ABSTRACT: In multi-camera video surveillance, it is challenging to represent videos
from different cameras properly and fuse them efficiently for specific
applications such as human activity recognition and clustering. In this paper,
a novel representation for multi-camera video data, namely the Product
Grassmann Manifold (PGM), is proposed to model video sequences as points on the
Grassmann manifold and integrate them as a whole in the product manifold form.
Additionally, with a new geometry metric on the product manifold, the
conventional Low Rank Representation (LRR) model is extended onto PGM and the
new LRR model can be used for clustering non-linear data, such as multi-camera
video data. To evaluate the proposed method, a number of clustering experiments
are conducted on several multi-camera video datasets of human activity,
including Dongzhimen Transport Hub Crowd action dataset, ACT 42 Human action
dataset and SKIG action dataset. The experiment results show that the proposed
method outperforms many state-of-the-art clustering methods.
| no_new_dataset | 0.948394 |
1606.03989 | Marco Winkler | Marco Winkler | On the Role of Triadic Substructures in Complex Networks | 195 pages, dissertation | null | null | null | cs.SI cond-mat.stat-mech physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the course of the growth of the Internet and due to increasing
availability of data, over the last two decades, the field of network science
has established itself as a research area in its own right. With quantitative scientists
from computer science, mathematics, and physics working on datasets from
biology, economics, sociology, political sciences, and many others, network
science serves as a paradigm for interdisciplinary research. One of the major
goals in network science is to unravel the relationship between topological
graph structure and a network's function. As evidence suggests, systems from
the same fields, i.e. with similar function, tend to exhibit similar structure.
However, it remains unclear whether a similar graph structure automatically
implies a similar function. This dissertation aims at helping to bridge this
gap, while particularly focusing on the role of triadic structures. After a
general introduction to the main concepts of network science, existing work
devoted to the relevance of triadic substructures is reviewed. A major
challenge in modeling such structure is the fact that not all three-node
subgraphs can be specified independently of each other, as pairs of nodes may
participate in multiple triadic subgraphs. In order to overcome this obstacle,
a novel class of generative network models based on pair-disjoint triadic
building blocks is suggested. It is further investigated whether triad motifs -
subgraph patterns which appear significantly more frequently than expected at
random - are distributed homogeneously or heterogeneously over graphs.
Finally, the influence of triadic substructure on the evolution of dynamical
processes acting on their nodes is studied. It is observed that certain motifs
impose clear signatures on the systems' dynamics, even when embedded in a
larger network structure.
| [
{
"version": "v1",
"created": "Thu, 30 Jul 2015 13:56:48 GMT"
}
] | 2016-06-14T00:00:00 | [
[
"Winkler",
"Marco",
""
]
] | TITLE: On the Role of Triadic Substructures in Complex Networks
ABSTRACT: In the course of the growth of the Internet and due to increasing
availability of data, over the last two decades, the field of network science
has established itself as its own area of research. With quantitative scientists
from computer science, mathematics, and physics working on datasets from
biology, economics, sociology, political sciences, and many others, network
science serves as a paradigm for interdisciplinary research. One of the major
goals in network science is to unravel the relationship between topological
graph structure and a network's function. As evidence suggests, systems from
the same fields, i.e. with similar function, tend to exhibit similar structure.
However, it remains unclear whether a similar graph structure automatically
implies a similar function. This dissertation aims at helping to bridge this
gap, while particularly focusing on the role of triadic structures. After a
general introduction to the main concepts of network science, existing work
devoted to the relevance of triadic substructures is reviewed. A major
challenge in modeling such structure is the fact that not all three-node
subgraphs can be specified independently of each other, as pairs of nodes may
participate in multiple triadic subgraphs. In order to overcome this obstacle,
a novel class of generative network models based on pair-disjoint triadic
building blocks is suggested. It is further investigated whether triad motifs -
subgraph patterns which appear significantly more frequently than expected at
random - are distributed homogeneously or heterogeneously over graphs.
Finally, the influence of triadic substructure on the evolution of dynamical
processes acting on their nodes is studied. It is observed that certain motifs
impose clear signatures on the systems' dynamics, even when embedded in a
larger network structure.
| no_new_dataset | 0.940681 |
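
The dissertation above centres on triadic substructures and triad motifs. A small illustration of the underlying counting step, using networkx's triadic census over the 16 directed three-node subgraph types (the random graph here is a stand-in; motif analysis would compare these counts to a randomized null model):

```python
import networkx as nx

g = nx.gnp_random_graph(200, 0.05, directed=True, seed=1)
census = nx.triadic_census(g)          # counts of the 16 directed triad types
for triad, count in sorted(census.items()):
    print(triad, count)
# Motif detection compares these counts against degree-preserving
# randomizations of g and flags triads with significantly inflated counts.
```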
1511.06068 | Michael Cogswell | Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, Dhruv
Batra | Reducing Overfitting in Deep Networks by Decorrelating Representations | 12 pages, 5 figures, 5 tables, Accepted to ICLR 2016, (v4 adds
acknowledgements) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One major challenge in training Deep Neural Networks is preventing
overfitting. Many techniques such as data augmentation and novel regularizers
such as Dropout have been proposed to prevent overfitting without requiring a
massive amount of training data. In this work, we propose a new regularizer
called DeCov which leads to significantly reduced overfitting (as indicated by
the difference between train and val performance), and better generalization.
Our regularizer encourages diverse or non-redundant representations in Deep
Neural Networks by minimizing the cross-covariance of hidden activations. This
simple intuition has been explored in a number of past works but surprisingly
has never been applied as a regularizer in supervised learning. Experiments
across a range of datasets and network architectures show that this loss always
reduces overfitting while almost always maintaining or increasing
generalization performance and often improving performance over Dropout.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 06:23:09 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jan 2016 21:12:29 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Feb 2016 21:23:05 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Jun 2016 10:59:37 GMT"
}
] | 2016-06-13T00:00:00 | [
[
"Cogswell",
"Michael",
""
],
[
"Ahmed",
"Faruk",
""
],
[
"Girshick",
"Ross",
""
],
[
"Zitnick",
"Larry",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: Reducing Overfitting in Deep Networks by Decorrelating Representations
ABSTRACT: One major challenge in training Deep Neural Networks is preventing
overfitting. Many techniques such as data augmentation and novel regularizers
such as Dropout have been proposed to prevent overfitting without requiring a
massive amount of training data. In this work, we propose a new regularizer
called DeCov which leads to significantly reduced overfitting (as indicated by
the difference between train and val performance), and better generalization.
Our regularizer encourages diverse or non-redundant representations in Deep
Neural Networks by minimizing the cross-covariance of hidden activations. This
simple intuition has been explored in a number of past works but surprisingly
has never been applied as a regularizer in supervised learning. Experiments
across a range of datasets and network architectures show that this loss always
reduces overfitting while almost always maintaining or increasing
generalization performance and often improving performance over Dropout.
| no_new_dataset | 0.948394 |
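
The abstract above defines DeCov as minimizing the cross-covariance of hidden activations. A numpy sketch of the penalty as stated in the paper, with the diagonal (per-unit variance) terms excluded so that only cross-unit correlations are penalized; treat the exact scaling as a sketch:

```python
import numpy as np

def decov_loss(h):
    """h: (batch, dim) hidden activations -> scalar DeCov penalty."""
    hc = h - h.mean(axis=0, keepdims=True)       # center over the batch
    c = hc.T @ hc / h.shape[0]                   # covariance of hidden units
    return 0.5 * (np.sum(c ** 2) - np.sum(np.diag(c) ** 2))

rng = np.random.default_rng(0)
print(decov_loss(rng.normal(size=(64, 128))))    # redundant features -> larger loss
```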
1601.01343 | Ikuya Yamada | Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji | Joint Learning of the Embedding of Words and Entities for Named Entity
Disambiguation | Accepted at CoNLL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Named Entity Disambiguation (NED) refers to the task of resolving multiple
named entity mentions in a document to their correct references in a knowledge
base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method
specifically designed for NED. The proposed method jointly maps words and
entities into the same continuous vector space. We extend the skip-gram model
by using two models. The KB graph model learns the relatedness of entities
using the link structure of the KB, whereas the anchor context model aims to
align vectors such that similar words and entities occur close to one another
in the vector space by leveraging KB anchors and their context words. By
combining contexts based on the proposed embedding with standard NED features,
we achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset
and 85.2% on the TAC 2010 dataset.
| [
{
"version": "v1",
"created": "Wed, 6 Jan 2016 22:19:20 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2016 07:31:47 GMT"
},
{
"version": "v3",
"created": "Sun, 1 May 2016 06:39:19 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Jun 2016 01:51:26 GMT"
}
] | 2016-06-13T00:00:00 | [
[
"Yamada",
"Ikuya",
""
],
[
"Shindo",
"Hiroyuki",
""
],
[
"Takeda",
"Hideaki",
""
],
[
"Takefuji",
"Yoshiyasu",
""
]
] | TITLE: Joint Learning of the Embedding of Words and Entities for Named Entity
Disambiguation
ABSTRACT: Named Entity Disambiguation (NED) refers to the task of resolving multiple
named entity mentions in a document to their correct references in a knowledge
base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method
specifically designed for NED. The proposed method jointly maps words and
entities into the same continuous vector space. We extend the skip-gram model
by using two models. The KB graph model learns the relatedness of entities
using the link structure of the KB, whereas the anchor context model aims to
align vectors such that similar words and entities occur close to one another
in the vector space by leveraging KB anchors and their context words. By
combining contexts based on the proposed embedding with standard NED features,
we achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset
and 85.2% on the TAC 2010 dataset.
| no_new_dataset | 0.952397 |
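
A hedged sketch of the anchor-context half of the method above: train a skip-gram model on text where anchor mentions are replaced by entity tokens, so that words and entities land in one vector space. The ENTITY/ token scheme and the tiny corpus are illustrative rather than the paper's pipeline, the KB graph model is omitted, and a gensim >= 4 API is assumed.

```python
from gensim.models import Word2Vec

sentences = [
    ["ENTITY/Barack_Obama", "was", "elected", "president"],
    ["the", "president", "visited", "ENTITY/Hawaii"],
]
# sg=1 selects the skip-gram architecture the paper extends
model = Word2Vec(sentences, vector_size=50, sg=1, window=5, min_count=1)
print(model.wv.most_similar("ENTITY/Barack_Obama", topn=2))
```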
1603.04404 | Shu Sun Ms. | Shu Sun, Theodore S. Rappaport, Timothy A. Thomas, Amitava Ghosh, Huan
C. Nguyen, Istvan Z. Kovacs, Ignacio Rodriguez, Ozge Koymen, Andrzej Partyka | Investigation of Prediction Accuracy, Sensitivity, and Parameter
Stability of Large-Scale Propagation Path Loss Models for 5G Wireless
Communications | Open access available at:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7434656 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper compares three candidate large-scale propagation path loss models
for use over the entire microwave and millimeter-wave (mmWave) radio spectrum:
the alpha-beta-gamma (ABG) model, the close-in (CI) free space reference
distance model, and the CI model with a frequency-weighted path loss exponent
(CIF). Each of these models has been recently studied for use in standards
bodies such as 3GPP, and for use in the design of fifth generation (5G)
wireless systems in urban macrocell, urban microcell, and indoor office and
shopping mall scenarios. Here we compare the accuracy and sensitivity of these
models using measured data from 30 propagation measurement datasets from 2 GHz
to 73 GHz over distances ranging from 4 m to 1238 m. A series of sensitivity
analyses of the three models show that the physically-based two-parameter CI
model and three-parameter CIF model offer computational simplicity, have very
similar goodness of fit (i.e., the shadow fading standard deviation), exhibit
more stable model parameter behavior across frequencies and distances, and
yield smaller prediction error in sensitivity testing across distances and
frequencies, when compared to the four-parameter ABG model. Results show the CI
model with a 1 m close-in reference distance is suitable for outdoor
environments, while the CIF model is more appropriate for indoor modeling. The
CI and CIF models are easily implemented in existing 3GPP models by making a
very subtle modification -- by replacing a floating non-physically based
constant with a frequency-dependent constant that represents free space path
loss in the first meter of propagation.
| [
{
"version": "v1",
"created": "Mon, 14 Mar 2016 19:22:53 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2016 17:43:24 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Apr 2016 19:25:22 GMT"
},
{
"version": "v4",
"created": "Fri, 8 Apr 2016 02:01:51 GMT"
},
{
"version": "v5",
"created": "Mon, 25 Apr 2016 13:37:05 GMT"
},
{
"version": "v6",
"created": "Tue, 7 Jun 2016 14:54:23 GMT"
},
{
"version": "v7",
"created": "Thu, 9 Jun 2016 15:28:40 GMT"
},
{
"version": "v8",
"created": "Fri, 10 Jun 2016 15:18:58 GMT"
}
] | 2016-06-13T00:00:00 | [
[
"Sun",
"Shu",
""
],
[
"Rappaport",
"Theodore S.",
""
],
[
"Thomas",
"Timothy A.",
""
],
[
"Ghosh",
"Amitava",
""
],
[
"Nguyen",
"Huan C.",
""
],
[
"Kovacs",
"Istvan Z.",
""
],
[
"Rodriguez",
"Ignacio",
""
],
[
"Koymen",
"Ozge",
""
],
[
"Partyka",
"Andrzej",
""
]
] | TITLE: Investigation of Prediction Accuracy, Sensitivity, and Parameter
Stability of Large-Scale Propagation Path Loss Models for 5G Wireless
Communications
ABSTRACT: This paper compares three candidate large-scale propagation path loss models
for use over the entire microwave and millimeter-wave (mmWave) radio spectrum:
the alpha-beta-gamma (ABG) model, the close-in (CI) free space reference
distance model, and the CI model with a frequency-weighted path loss exponent
(CIF). Each of these models has been recently studied for use in standards
bodies such as 3GPP, and for use in the design of fifth generation (5G)
wireless systems in urban macrocell, urban microcell, and indoor office and
shopping mall scenarios. Here we compare the accuracy and sensitivity of these
models using measured data from 30 propagation measurement datasets from 2 GHz
to 73 GHz over distances ranging from 4 m to 1238 m. A series of sensitivity
analyses of the three models show that the physically-based two-parameter CI
model and three-parameter CIF model offer computational simplicity, have very
similar goodness of fit (i.e., the shadow fading standard deviation), exhibit
more stable model parameter behavior across frequencies and distances, and
yield smaller prediction error in sensitivity testing across distances and
frequencies, when compared to the four-parameter ABG model. Results show the CI
model with a 1 m close-in reference distance is suitable for outdoor
environments, while the CIF model is more appropriate for indoor modeling. The
CI and CIF models are easily implemented in existing 3GPP models by making a
very subtle modification -- by replacing a floating non-physically based
constant with a frequency-dependent constant that represents free space path
loss in the first meter of propagation.
| no_new_dataset | 0.954137 |
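
The CI model above anchors path loss to free space at a 1 m reference distance. A short sketch of that model, PL(f, d) = FSPL(f, 1 m) + 10 n log10(d / 1 m) + X_sigma, where n is the path loss exponent and X_sigma the lognormal shadow fading:

```python
import numpy as np

C = 299_792_458.0                                   # speed of light, m/s

def fspl_1m_db(f_hz):
    """Free-space path loss over the first meter of propagation."""
    return 20 * np.log10(4 * np.pi * f_hz / C)

def ci_path_loss_db(f_hz, d_m, n, sigma_db=0.0, rng=None):
    shadow = rng.normal(0.0, sigma_db) if rng is not None else 0.0
    return fspl_1m_db(f_hz) + 10 * n * np.log10(d_m) + shadow

print(ci_path_loss_db(28e9, 100.0, n=2.0))          # 28 GHz, 100 m, PLE of 2
```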
1606.03237 | Ciprian Corneanu | Ciprian Corneanu, Marc Oliu, Jeffrey F. Cohn, Sergio Escalera | Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial
Expression Recognition: History, Trends, and Affect-related Applications | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial expressions are an important way through which humans interact
socially. Building a system capable of automatically recognizing facial
expressions from images and video has been an intense field of study in recent
years. Interpreting such expressions remains challenging and much research is
needed about the way they relate to human affect. This paper presents a general
overview of automatic RGB, 3D, thermal and multimodal facial expression
analysis. We define a new taxonomy for the field, encompassing all steps from
face detection to facial expression recognition, and describe and classify the
state of the art methods accordingly. We also present the important datasets
and the benchmarking of the most influential methods. We conclude with a general
discussion about trends, important questions and future lines of research.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 09:12:05 GMT"
}
] | 2016-06-13T00:00:00 | [
[
"Corneanu",
"Ciprian",
""
],
[
"Oliu",
"Marc",
""
],
[
"Cohn",
"Jeffrey F.",
""
],
[
"Escalera",
"Sergio",
""
]
] | TITLE: Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial
Expression Recognition: History, Trends, and Affect-related Applications
ABSTRACT: Facial expressions are an important way through which humans interact
socially. Building a system capable of automatically recognizing facial
expressions from images and video has been an intense field of study in recent
years. Interpreting such expressions remains challenging and much research is
needed about the way they relate to human affect. This paper presents a general
overview of automatic RGB, 3D, thermal and multimodal facial expression
analysis. We define a new taxonomy for the field, encompassing all steps from
face detection to facial expression recognition, and describe and classify the
state of the art methods accordingly. We also present the important datasets
and the benchmarking of the most influential methods. We conclude with a general
discussion about trends, important questions and future lines of research.
| no_new_dataset | 0.946843 |
1606.03335 | Roman Bartusiak | Roman Bartusiak, {\L}ukasz Augustyniak, Tomasz Kajdanowicz,
Przemys{\l}aw Kazienko, Maciej Piasecki | WordNet2Vec: Corpora Agnostic Word Vectorization Method | 29 pages, 16 figures, submitted to journal | null | null | null | cs.CL cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The complex nature of big data resources demands new methods for structuring,
especially for textual content. WordNet is a good knowledge source for
comprehensive abstraction of natural language, as good implementations of it exist
for many languages. Since WordNet embeds natural language in the form of a
complex network, a transformation mechanism WordNet2Vec is proposed in the
paper. It creates vectors for each word from WordNet. These vectors encapsulate
general position - role of a given word towards all other words in the natural
language. Any list or set of such vectors contains knowledge about the context
of its component within the whole language. Such word representation can be
easily applied to many analytic tasks like classification or clustering. The
usefulness of the WordNet2Vec method was demonstrated in sentiment analysis,
i.e. classification with transfer learning for the real Amazon opinion textual
dataset.
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2016 14:12:47 GMT"
}
] | 2016-06-13T00:00:00 | [
[
"Bartusiak",
"Roman",
""
],
[
"Augustyniak",
"Łukasz",
""
],
[
"Kajdanowicz",
"Tomasz",
""
],
[
"Kazienko",
"Przemysław",
""
],
[
"Piasecki",
"Maciej",
""
]
] | TITLE: WordNet2Vec: Corpora Agnostic Word Vectorization Method
ABSTRACT: The complex nature of big data resources demands new methods for structuring,
especially for textual content. WordNet is a good knowledge source for
comprehensive abstraction of natural language, as good implementations of it exist
for many languages. Since WordNet embeds natural language in the form of a
complex network, a transformation mechanism WordNet2Vec is proposed in the
paper. It creates vectors for each word from WordNet. These vectors encapsulate
general position - role of a given word towards all other words in the natural
language. Any list or set of such vectors contains knowledge about the context
of its component within the whole language. Such word representation can be
easily applied to many analytic tasks like classification or clustering. The
usefulness of the WordNet2Vec method was demonstrated in sentiment analysis,
i.e. classification with transfer learning for the real Amazon opinion textual
dataset.
| no_new_dataset | 0.945096 |
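
One plausible reading of the WordNet2Vec construction above, offered only as a hedged sketch (the paper's exact vectors may differ): treat WordNet as a graph and describe each word by its shortest-path distances to a set of landmark synsets; in the paper the "landmarks" are effectively all words. The landmark set here is an illustrative assumption, and nltk's WordNet data must be downloaded first.

```python
# Requires: nltk.download("wordnet")
import networkx as nx
from nltk.corpus import wordnet as wn

g = nx.Graph()
for s in wn.all_synsets("n"):                  # noun hierarchy only, for brevity
    for h in s.hypernyms():
        g.add_edge(s.name(), h.name())

landmarks = ["entity.n.01", "animal.n.01", "artifact.n.01"]
dist = {l: nx.single_source_shortest_path_length(g, l) for l in landmarks}

def wordnet_vector(synset_name):
    # -1 marks synsets unreachable from a landmark
    return [dist[l].get(synset_name, -1) for l in landmarks]

print(wordnet_vector("dog.n.01"))
```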
1506.02690 | Zhiguang Wang | Zhiguang Wang, Tim Oates, James Lo | Adaptive Normalized Risk-Averting Training For Deep Neural Networks | AAAI 2016, 0.39%~0.4% ER on MNIST with single 32-32-256-10 ConvNets,
code available at https://github.com/cauchyturing/ANRAE | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a set of new error criteria and learning approaches,
Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex
optimization problem in training deep neural networks (DNNs). Theoretically, we
demonstrate its effectiveness on global and local convexity lower-bounded by
the standard $L_p$-norm error. By analyzing the gradient on the convexity index
$\lambda$, we explain the reason why to learn $\lambda$ adaptively using
gradient descent works. In practice, we show how this method improves training
of deep neural networks to solve visual recognition tasks on the MNIST and
CIFAR-10 datasets. Without using pretraining or other tricks, we obtain results
comparable or superior to those reported in recent literature on the same tasks
using standard ConvNets + MSE/cross entropy. Performance on deep/shallow
multilayer perceptrons and Denoising Auto-encoders is also explored. ANRAT can
be combined with other quasi-Newton training methods, innovative network
variants, regularization techniques and other specific tricks in DNNs. Other
than unsupervised pretraining, it provides a new perspective to address the
non-convex optimization problem in DNNs.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 20:42:12 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Aug 2015 14:53:46 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2016 04:10:22 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Wang",
"Zhiguang",
""
],
[
"Oates",
"Tim",
""
],
[
"Lo",
"James",
""
]
] | TITLE: Adaptive Normalized Risk-Averting Training For Deep Neural Networks
ABSTRACT: This paper proposes a set of new error criteria and learning approaches,
Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex
optimization problem in training deep neural networks (DNNs). Theoretically, we
demonstrate its effectiveness on global and local convexity lower-bounded by
the standard $L_p$-norm error. By analyzing the gradient on the convexity index
$\lambda$, we explain the reason why to learn $\lambda$ adaptively using
gradient descent works. In practice, we show how this method improves training
of deep neural networks to solve visual recognition tasks on the MNIST and
CIFAR-10 datasets. Without using pretraining or other tricks, we obtain results
comparable or superior to those reported in recent literature on the same tasks
using standard ConvNets + MSE/cross entropy. Performance on deep/shallow
multilayer perceptrons and Denoising Auto-encoders is also explored. ANRAT can
be combined with other quasi-Newton training methods, innovative network
variants, regularization techniques and other specific tricks in DNNs. Other
than unsupervised pretraining, it provides a new perspective to address the
non-convex optimization problem in DNNs.
| no_new_dataset | 0.945601 |
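
A hedged sketch of the kind of normalized risk-averting error ANRAT builds on: a log-sum-exp smooth maximum over per-sample errors whose convexity index lambda interpolates between the mean error (lambda -> 0) and the worst-case error (lambda -> infinity), and can itself be updated by gradient descent. The paper's exact criterion and normalization may differ from this form.

```python
import numpy as np
from scipy.special import logsumexp

def nrae(errors, lam):
    """(1/lam) * log(mean(exp(lam * e_i))) over per-sample errors e_i
    (typically p-th powers of absolute residuals)."""
    e = np.asarray(errors)
    return (logsumexp(lam * e) - np.log(e.size)) / lam

e = np.array([0.1, 0.2, 2.0])
print(nrae(e, 1e-6), e.mean())      # ~ mean error for tiny lambda
print(nrae(e, 100.0), e.max())      # ~ max error for large lambda
```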
1603.00957 | Kun Xu | Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, Dongyan Zhao | Question Answering on Freebase via Relation Extraction and Textual
Evidence | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing knowledge-based question answering systems often rely on small
amounts of annotated training data. While shallow methods like relation extraction are
robust to data scarcity, they are less expressive than the deep meaning
representation methods like semantic parsing, thereby failing at answering
questions involving multiple constraints. Here we alleviate this problem by
empowering a relation extraction method with additional evidence from
Wikipedia. We first present a neural network based relation extractor to
retrieve the candidate answers from Freebase, and then infer over Wikipedia to
validate these answers. Experiments on the WebQuestions question answering
dataset show that our method achieves an F_1 of 53.3%, a substantial
improvement over the state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 03:22:01 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2016 11:05:53 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2016 15:12:19 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Xu",
"Kun",
""
],
[
"Reddy",
"Siva",
""
],
[
"Feng",
"Yansong",
""
],
[
"Huang",
"Songfang",
""
],
[
"Zhao",
"Dongyan",
""
]
] | TITLE: Question Answering on Freebase via Relation Extraction and Textual
Evidence
ABSTRACT: Existing knowledge-based question answering systems often rely on small
amounts of annotated training data. While shallow methods like relation extraction are
robust to data scarcity, they are less expressive than the deep meaning
representation methods like semantic parsing, thereby failing at answering
questions involving multiple constraints. Here we alleviate this problem by
empowering a relation extraction method with additional evidence from
Wikipedia. We first present a neural network based relation extractor to
retrieve the candidate answers from Freebase, and then infer over Wikipedia to
validate these answers. Experiments on the WebQuestions question answering
dataset show that our method achieves an F_1 of 53.3%, a substantial
improvement over the state-of-the-art.
| no_new_dataset | 0.950365 |
1603.06059 | Nasrin Mostafazadeh | Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell,
Xiaodong He, Lucy Vanderwende | Generating Natural Questions About an Image | Proceedings of the 54th Annual Meeting of the Association for
Computational Linguistics | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been an explosion of work in the vision & language community during
the past few years from image captioning to video transcription, and answering
questions about images. These tasks have focused on literal descriptions of the
image. To move beyond the literal, we choose to explore how questions about an
image are often directed at commonsense inference and the abstract events
evoked by objects in the image. In this paper, we introduce the novel task of
Visual Question Generation (VQG), where the system is tasked with asking a
natural and engaging question when shown an image. We provide three datasets
which cover a variety of images from object-centric to event-centric, with
considerably more abstract training data than provided to state-of-the-art
captioning systems thus far. We train and test several generative and retrieval
models to tackle the task of VQG. Evaluation results show that while such
models ask reasonable questions for a variety of images, there is still a wide
gap with human performance which motivates further work on connecting images
with commonsense knowledge and pragmatics. Our proposed task offers a new
challenge to the community which we hope furthers interest in exploring deeper
connections between vision & language.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2016 07:27:15 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2016 06:54:58 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2016 01:20:49 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Mostafazadeh",
"Nasrin",
""
],
[
"Misra",
"Ishan",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"He",
"Xiaodong",
""
],
[
"Vanderwende",
"Lucy",
""
]
] | TITLE: Generating Natural Questions About an Image
ABSTRACT: There has been an explosion of work in the vision & language community during
the past few years from image captioning to video transcription, and answering
questions about images. These tasks have focused on literal descriptions of the
image. To move beyond the literal, we choose to explore how questions about an
image are often directed at commonsense inference and the abstract events
evoked by objects in the image. In this paper, we introduce the novel task of
Visual Question Generation (VQG), where the system is tasked with asking a
natural and engaging question when shown an image. We provide three datasets
which cover a variety of images from object-centric to event-centric, with
considerably more abstract training data than provided to state-of-the-art
captioning systems thus far. We train and test several generative and retrieval
models to tackle the task of VQG. Evaluation results show that while such
models ask reasonable questions for a variety of images, there is still a wide
gap with human performance which motivates further work on connecting images
with commonsense knowledge and pragmatics. Our proposed task offers a new
challenge to the community which we hope furthers interest in exploring deeper
connections between vision & language.
| new_dataset | 0.968649 |
1604.07342 | Mahyar Najibi | Bahadir Ozdemir and Mahyar Najibi and Larry S. Davis | Supervised Incremental Hashing | 14 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an incremental strategy for learning hash functions with kernels
for large-scale image search. Our method is based on a two-stage classification
framework that treats binary codes as intermediate variables between the
feature space and the semantic space. In the first stage of classification,
binary codes are considered as class labels by a set of binary SVMs; each
corresponds to one bit. In the second stage, binary codes become the input
space of a multi-class SVM. Hash functions are learned by an efficient
algorithm where the NP-hard problem of finding optimal binary codes is solved
via cyclic coordinate descent and SVMs are trained in a parallelized
incremental manner. For modifications like adding images from a previously
unseen class, we describe an incremental procedure for effective and efficient
updates to the previous hash functions. Experiments on three large-scale image
datasets demonstrate the effectiveness of the proposed hashing method,
Supervised Incremental Hashing (SIH), over the state-of-the-art supervised
hashing methods.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 17:50:05 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2016 17:24:25 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Ozdemir",
"Bahadir",
""
],
[
"Najibi",
"Mahyar",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Supervised Incremental Hashing
ABSTRACT: We propose an incremental strategy for learning hash functions with kernels
for large-scale image search. Our method is based on a two-stage classification
framework that treats binary codes as intermediate variables between the
feature space and the semantic space. In the first stage of classification,
binary codes are considered as class labels by a set of binary SVMs; each
corresponds to one bit. In the second stage, binary codes become the input
space of a multi-class SVM. Hash functions are learned by an efficient
algorithm where the NP-hard problem of finding optimal binary codes is solved
via cyclic coordinate descent and SVMs are trained in a parallelized
incremental manner. For modifications like adding images from a previously
unseen class, we describe an incremental procedure for effective and efficient
updates to the previous hash functions. Experiments on three large-scale image
datasets demonstrate the effectiveness of the proposed hashing method,
Supervised Incremental Hashing (SIH), over the state-of-the-art supervised
hashing methods.
| no_new_dataset | 0.948106 |
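
A sketch of the two-stage structure described above, with scikit-learn standing in for the paper's parallelized incremental SVM training. The binary codes here are random stand-ins; in SIH they are the latent variables optimized by cyclic coordinate descent.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                         # image features
codes = (rng.random(size=(300, 8)) > 0.5).astype(int)  # stand-in binary codes
y = rng.integers(0, 5, size=300)                       # class labels

# stage 1: one binary SVM per bit serves as a hash function
bit_svms = [LinearSVC().fit(X, codes[:, b]) for b in range(8)]
hash_codes = np.stack([svm.predict(X) for svm in bit_svms], axis=1)

# stage 2: a multi-class SVM takes the binary codes as its input space
classifier = LinearSVC().fit(hash_codes, y)
print(classifier.predict(hash_codes[:3]))
```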
1606.01323 | Kevin Clark | Kevin Clark and Christopher D. Manning | Improving Coreference Resolution by Learning Entity-Level Distributed
Representations | Accepted for publication at the Association for Computational
Linguistics (ACL), 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A long-standing challenge in coreference resolution has been the
incorporation of entity-level information - features defined over clusters of
mentions instead of mention pairs. We present a neural network based
coreference system that produces high-dimensional vector representations for
pairs of coreference clusters. Using these representations, our system learns
when combining clusters is desirable. We train the system with a
learning-to-search algorithm that teaches it which local decisions (cluster
merges) will lead to a high-scoring final coreference partition. The system
substantially outperforms the current state-of-the-art on the English and
Chinese portions of the CoNLL 2012 Shared Task dataset despite using few
hand-engineered features.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2016 04:08:45 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2016 21:11:13 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Clark",
"Kevin",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: Improving Coreference Resolution by Learning Entity-Level Distributed
Representations
ABSTRACT: A long-standing challenge in coreference resolution has been the
incorporation of entity-level information - features defined over clusters of
mentions instead of mention pairs. We present a neural network based
coreference system that produces high-dimensional vector representations for
pairs of coreference clusters. Using these representations, our system learns
when combining clusters is desirable. We train the system with a
learning-to-search algorithm that teaches it which local decisions (cluster
merges) will lead to a high-scoring final coreference partition. The system
substantially outperforms the current state-of-the-art on the English and
Chinese portions of the CoNLL 2012 Shared Task dataset despite using few
hand-engineered features.
| no_new_dataset | 0.948965 |
1606.02785 | Lu Wang | Lu Wang and Wang Ling | Neural Network-Based Abstract Generation for Opinions and Arguments | NAACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of generating abstractive summaries for opinionated
text. We propose an attention-based neural network model that is able to absorb
information from multiple text units to construct informative, concise, and
fluent summaries. An importance-based sampling method is designed to allow the
encoder to integrate information from an important subset of input. Automatic
evaluation indicates that our system outperforms state-of-the-art abstractive
and extractive summarization systems on two newly collected datasets of movie
reviews and arguments. Our system summaries are also rated as more informative
and grammatical in human evaluation.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 00:15:23 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Wang",
"Lu",
""
],
[
"Ling",
"Wang",
""
]
] | TITLE: Neural Network-Based Abstract Generation for Opinions and Arguments
ABSTRACT: We study the problem of generating abstractive summaries for opinionated
text. We propose an attention-based neural network model that is able to absorb
information from multiple text units to construct informative, concise, and
fluent summaries. An importance-based sampling method is designed to allow the
encoder to integrate information from an important subset of input. Automatic
evaluation indicates that our system outperforms state-of-the-art abstractive
and extractive summarization systems on two newly collected datasets of movie
reviews and arguments. Our system summaries are also rated as more informative
and grammatical in human evaluation.
| new_dataset | 0.956836 |
1606.02894 | Haz{\i}m Kemal Ekenel | Mostafa Mehdipour Ghazi and Hazim Kemal Ekenel | A Comprehensive Analysis of Deep Learning Based Representation for Face
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning based approaches have been dominating the face recognition
field due to the significant performance improvement they have provided on the
challenging wild datasets. These approaches have been extensively tested on
such unconstrained datasets as Labeled Faces in the Wild and YouTube
Faces, to name a few. However, their capability to handle individual appearance
variations caused by factors such as head pose, illumination, occlusion, and
misalignment has not been thoroughly assessed till now. In this paper, we
present a comprehensive study to evaluate the performance of deep learning
based face representation under several conditions including the varying head
pose angles, upper and lower face occlusion, changing illumination of different
strengths, and misalignment due to erroneous facial feature localization. Two
successful and publicly available deep learning models, namely VGG-Face and
Lightened CNN, have been utilized to extract face representations. The obtained
results show that although deep learning provides a powerful representation for
face recognition, it can still benefit from preprocessing, for example, for
pose and illumination normalization in order to achieve better performance
under various conditions. Particularly, if these variations are not included in
the dataset used to train the deep learning model, the role of preprocessing
becomes more crucial. Experimental results also show that deep learning based
representation is robust to misalignment and can tolerate facial feature
localization errors up to 10% of the interocular distance.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 10:25:24 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Ghazi",
"Mostafa Mehdipour",
""
],
[
"Ekenel",
"Hazim Kemal",
""
]
] | TITLE: A Comprehensive Analysis of Deep Learning Based Representation for Face
Recognition
ABSTRACT: Deep learning based approaches have been dominating the face recognition
field due to the significant performance improvement they have provided on the
challenging wild datasets. These approaches have been extensively tested on
such unconstrained datasets as Labeled Faces in the Wild and YouTube
Faces, to name a few. However, their capability to handle individual appearance
variations caused by factors such as head pose, illumination, occlusion, and
misalignment has not been thoroughly assessed till now. In this paper, we
present a comprehensive study to evaluate the performance of deep learning
based face representation under several conditions including the varying head
pose angles, upper and lower face occlusion, changing illumination of different
strengths, and misalignment due to erroneous facial feature localization. Two
successful and publicly available deep learning models, namely VGG-Face and
Lightened CNN, have been utilized to extract face representations. The obtained
results show that although deep learning provides a powerful representation for
face recognition, it can still benefit from preprocessing, for example, for
pose and illumination normalization in order to achieve better performance
under various conditions. Particularly, if these variations are not included in
the dataset used to train the deep learning model, the role of preprocessing
becomes more crucial. Experimental results also show that deep learning based
representation is robust to misalignment and can tolerate facial feature
localization errors up to 10% of the interocular distance.
| no_new_dataset | 0.942242 |
1606.02909 | Haz{\i}m Kemal Ekenel | Refik Can Malli and Mehmet Aygun and Hazim Kemal Ekenel | Apparent Age Estimation Using Ensemble of Deep Learning Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of apparent age estimation. Different
from estimating the real age of individuals, in which each face image has a
single age label, in this problem, face images have multiple age labels,
corresponding to the ages perceived by the annotators, when they look at these
images. This provides an intriguing computer vision problem, since in generic
image or object classification tasks, it is typical to have a single ground
truth label per class. To account for multiple labels per image, instead of
using the average age of the annotated face image as the class label, we have
grouped the face images that are within a specified age range. Using these age
groups and their age-shifted groupings, we have trained an ensemble of deep
learning models. Before feeding an input face image to a deep learning model,
five facial landmark points are detected and used for 2-D alignment. We have
employed and fine tuned convolutional neural networks (CNNs) that are based on
VGG-16 [24] architecture and pretrained on the IMDB-WIKI dataset [22]. The
outputs of these deep learning models are then combined to produce the final
estimation. Proposed method achieves 0.3668 error in the final ChaLearn LAP
2016 challenge test set [5].
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 11:00:21 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Malli",
"Refik Can",
""
],
[
"Aygun",
"Mehmet",
""
],
[
"Ekenel",
"Hazim Kemal",
""
]
] | TITLE: Apparent Age Estimation Using Ensemble of Deep Learning Models
ABSTRACT: In this paper, we address the problem of apparent age estimation. Different
from estimating the real age of individuals, in which each face image has a
single age label, in this problem, face images have multiple age labels,
corresponding to the ages perceived by the annotators, when they look at these
images. This provides an intriguing computer vision problem, since in generic
image or object classification tasks, it is typical to have a single ground
truth label per class. To account for multiple labels per image, instead of
using the average age of the annotated face image as the class label, we have
grouped the face images that are within a specified age range. Using these age
groups and their age-shifted groupings, we have trained an ensemble of deep
learning models. Before feeding an input face image to a deep learning model,
five facial landmark points are detected and used for 2-D alignment. We have
employed and fine tuned convolutional neural networks (CNNs) that are based on
VGG-16 [24] architecture and pretrained on the IMDB-WIKI dataset [22]. The
outputs of these deep learning models are then combined to produce the final
estimation. Proposed method achieves 0.3668 error in the final ChaLearn LAP
2016 challenge test set [5].
| no_new_dataset | 0.939858 |
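
A small numpy sketch of the grouping-and-ensembling idea above: class probabilities from age-group models and their age-shifted counterparts are averaged into an expected apparent age. The 10-year group width and 5-year shift are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

group_centers = np.arange(5, 100, 10)              # e.g. 10-year-wide groups
shifted_centers = group_centers + 5                # an age-shifted grouping

def expected_age(p_groups, p_shifted):
    """Average the two models' expectations over group centers."""
    return 0.5 * (p_groups @ group_centers + p_shifted @ shifted_centers)

rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(group_centers.size))    # stand-in softmax outputs
p2 = rng.dirichlet(np.ones(shifted_centers.size))
print(expected_age(p1, p2))
```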
1606.02938 | Robert Hovden | Barnaby D.A. Levin, Elliot Padgett, Chien-Chun Chen, M.C. Scott, Rui
Xu, Wolfgang Theis, Yi Jiang, Yongsoo Yang, Colin Ophus, Haitao Zhang,
Don-Hyung Ha, Deli Wang, Yingchao Yu, Hector D. Abruna, Richard D. Robinson,
Peter Ercius, Lena F. Kourkoutis, Jianwei Miao, David A. Muller, Robert
Hovden | Nanomaterial datasets to advance tomography in scanning transmission
electron microscopy | 3 figures, 10 datasets | Scientific Data 3, Article number: 160041 (2016) | 10.1038/sdata.2016.41 | null | cond-mat.mes-hall physics.ins-det | http://creativecommons.org/licenses/by/4.0/ | Electron tomography in materials science has flourished with the demand to
characterize nanoscale materials in three dimensions (3D). Access to
experimental data is vital for developing and validating reconstruction methods
that improve resolution and reduce radiation dose requirements. This work
presents five high-quality scanning transmission electron microscope (STEM)
tomography datasets in order to address the critical need for open access data
in this field. The datasets represent the current limits of experimental
technique, are of high quality, and contain materials with structural
complexity. Included are tomographic series of a hyperbranched Co2P
nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the
complete 180{\deg} tilt range, a platinum nanoparticle and a tungsten needle
both imaged at atomic resolution by equal slope tomography, and a through-focal
tilt series of PtCu nanoparticles. A volumetric reconstruction from every
dataset is provided for comparison and development of post-processing and
visualization techniques. Researchers interested in creating novel data
processing and reconstruction algorithms will now have access to state of the
art experimental test data.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 12:59:17 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Levin",
"Barnaby D. A.",
""
],
[
"Padgett",
"Elliot",
""
],
[
"Chen",
"Chien-Chun",
""
],
[
"Scott",
"M. C.",
""
],
[
"Xu",
"Rui",
""
],
[
"Theis",
"Wolfgang",
""
],
[
"Jiang",
"Yi",
""
],
[
"Yang",
"Yongsoo",
""
],
[
"Ophus",
"Colin",
""
],
[
"Zhang",
"Haitao",
""
],
[
"Ha",
"Don-Hyung",
""
],
[
"Wang",
"Deli",
""
],
[
"Yu",
"Yingchao",
""
],
[
"Abruna",
"Hector D.",
""
],
[
"Robinson",
"Richard D.",
""
],
[
"Ercius",
"Peter",
""
],
[
"Kourkoutis",
"Lena F.",
""
],
[
"Miao",
"Jianwei",
""
],
[
"Muller",
"David A.",
""
],
[
"Hovden",
"Robert",
""
]
] | TITLE: Nanomaterial datasets to advance tomography in scanning transmission
electron microscopy
ABSTRACT: Electron tomography in materials science has flourished with the demand to
characterize nanoscale materials in three dimensions (3D). Access to
experimental data is vital for developing and validating reconstruction methods
that improve resolution and reduce radiation dose requirements. This work
presents five high-quality scanning transmission electron microscope (STEM)
tomography datasets in order to address the critical need for open access data
in this field. The datasets represent the current limits of experimental
technique, are of high quality, and contain materials with structural
complexity. Included are tomographic series of a hyperbranched Co2P
nanocrystal, platinum nanoparticles on a carbon nanofibre imaged over the
complete 180{\deg} tilt range, a platinum nanoparticle and a tungsten needle
both imaged at atomic resolution by equal slope tomography, and a through-focal
tilt series of PtCu nanoparticles. A volumetric reconstruction from every
dataset is provided for comparison and development of post-processing and
visualization techniques. Researchers interested in creating novel data
processing and reconstruction algorithms will now have access to state of the
art experimental test data.
| no_new_dataset | 0.940243 |
1606.02976 | Gayo Diallo | Khadim Dram\'e (UB), Fleur Mougin (UB), Gayo Diallo (UB) | Large scale biomedical texts classification: a kNN and an ESA-based
approaches | Journal of Biomedical Semantics, BioMed Central, 2016 | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the large and increasing volume of textual data, automated methods for
identifying significant topics to classify textual documents have received a
growing interest. While many efforts have been made in this direction, it still
remains a real challenge. Moreover, the issue is even more complex as full
texts are not always freely available. Then, using only partial information to
annotate these documents is promising but remains a very ambitious issue.
Methods: We propose two classification methods: a k-nearest neighbours
(kNN)-based approach and an explicit semantic analysis (ESA)-based approach.
Although the kNN-based approach is widely used in text classification, it needs
to be improved to perform well in this specific classification problem which
deals with partial information. Compared to existing kNN-based methods, our
method uses classical Machine Learning (ML) algorithms for ranking the labels.
Additional features are also investigated in order to improve the classifiers'
performance. In addition, the combination of several learning algorithms with
various techniques for fixing the number of relevant topics is performed. On
the other hand, ESA seems promising for this classification task as it yielded
interesting results in related issues, such as semantic relatedness computation
between texts and text classification. Unlike existing works, which use ESA for
enriching the bag-of-words approach with additional knowledge-based features,
our ESA-based method builds a standalone classifier. Furthermore, we
investigate if the results of this method could be useful as a complementary
feature of our kNN-based approach. Results: Experimental evaluations performed on
large standard annotated datasets, provided by the BioASQ organizers, show that
the kNN-based method with the Random Forest learning algorithm achieves good
performances compared with the current state-of-the-art methods, reaching a
competitive f-measure of 0.55% while the ESA-based approach surprisingly
yielded reserved results. Conclusions: We have proposed simple classification
methods suitable to annotate textual documents using only partial information.
They are therefore adequate for large multi-label classification and
particularly in the biomedical domain. Thus, our work contributes to the
extraction of relevant information from unstructured documents in order to
facilitate their automated processing. Consequently, it could be used for
various purposes, including document indexing, information retrieval, etc.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 14:32:50 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Dramé",
"Khadim",
"",
"UB"
],
[
"Mougin",
"Fleur",
"",
"UB"
],
[
"Diallo",
"Gayo",
"",
"UB"
]
] | TITLE: Large scale biomedical texts classification: a kNN and an ESA-based
approaches
ABSTRACT: With the large and increasing volume of textual data, automated methods for
identifying significant topics to classify textual documents have received a
growing interest. While many efforts have been made in this direction, it still
remains a real challenge. Moreover, the issue is even more complex as full
texts are not always freely available. Then, using only partial information to
annotate these documents is promising but remains a very ambitious issue.
Methods: We propose two classification methods: a k-nearest neighbours
(kNN)-based approach and an explicit semantic analysis (ESA)-based approach.
Although the kNN-based approach is widely used in text classification, it needs
to be improved to perform well in this specific classification problem which
deals with partial information. Compared to existing kNN-based methods, our
method uses classical Machine Learning (ML) algorithms for ranking the labels.
Additional features are also investigated in order to improve the classifiers'
performance. In addition, the combination of several learning algorithms with
various techniques for fixing the number of relevant topics is performed. On
the other hand, ESA seems promising for this classification task as it yielded
interesting results in related issues, such as semantic relatedness computation
between texts and text classification. Unlike existing works, which use ESA for
enriching the bag-of-words approach with additional knowledge-based features,
our ESA-based method builds a standalone classifier. Furthermore, we
investigate if the results of this method could be useful as a complementary
feature of our kNN-based approach. Results: Experimental evaluations performed on
large standard annotated datasets, provided by the BioASQ organizers, show that
the kNN-based method with the Random Forest learning algorithm achieves good
performances compared with the current state-of-the-art methods, reaching a
competitive f-measure of 0.55% while the ESA-based approach surprisingly
yielded reserved results. Conclusions: We have proposed simple classification
methods suitable to annotate textual documents using only partial information.
They are therefore adequate for large multi-label classification and
particularly in the biomedical domain. Thus, our work contributes to the
extraction of relevant information from unstructured documents in order to
facilitate their automated processing. Consequently, it could be used for
various purposes, including document indexing, information retrieval, etc.
| no_new_dataset | 0.942401 |
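
A compact sketch of the kNN-based annotation scheme described above, using scikit-learn: score each candidate label by its frequency among the k nearest training documents and keep the top-ranked ones. The toy corpus and labels are illustrative; the paper additionally ranks labels with learned ML models and extra features.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

docs = ["protein binding assay", "gene expression in mice", "mice protein study"]
labels = [{"Proteins"}, {"Gene Expression", "Mice"}, {"Proteins", "Mice"}]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

def annotate(text, top_n=2):
    _, idx = nn.kneighbors(vec.transform([text]))
    scores = Counter(l for i in idx[0] for l in labels[i])
    return [l for l, _ in scores.most_common(top_n)]

print(annotate("expression of a protein in mice"))
```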
1606.03044 | Bob Sturm | Bob L. Sturm | The "Horse'' Inside: Seeking Causes Behind the Behaviours of Music
Content Analysis Systems | 32 pages, 17 figures, this work was accepted for publication in a
journal special issue in Apr. 2015 | null | null | null | cs.SD cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building systems that possess the sensitivity and intelligence to identify
and describe high-level attributes in music audio signals continues to be an
elusive goal, but one that surely has broad and deep implications for a wide
variety of applications. Hundreds of papers have so far been published toward
this goal, and great progress appears to have been made. Some systems produce
remarkable accuracies at recognising high-level semantic concepts, such as
music style, genre and mood. However, it might be that these numbers do not
mean what they seem. In this paper, we take a state-of-the-art music content
analysis system and investigate what causes it to achieve exceptionally high
performance in a benchmark music audio dataset. We dissect the system to
understand its operation, determine its sensitivities and limitations, and
predict the kinds of knowledge it could and could not possess about music. We
perform a series of experiments to illuminate what the system has actually
learned to do, and to what extent it is performing the intended music listening
task. Our results demonstrate how the initial manifestation of music
intelligence in this state-of-the-art can be deceptive. Our work provides
constructive directions toward developing music content analysis systems that
can address the music information and creation needs of real-world users.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 18:10:31 GMT"
}
] | 2016-06-10T00:00:00 | [
[
"Sturm",
"Bob L.",
""
]
] | TITLE: The "Horse'' Inside: Seeking Causes Behind the Behaviours of Music
Content Analysis Systems
ABSTRACT: Building systems that possess the sensitivity and intelligence to identify
and describe high-level attributes in music audio signals continues to be an
elusive goal, but one that surely has broad and deep implications for a wide
variety of applications. Hundreds of papers have so far been published toward
this goal, and great progress appears to have been made. Some systems produce
remarkable accuracies at recognising high-level semantic concepts, such as
music style, genre and mood. However, it might be that these numbers do not
mean what they seem. In this paper, we take a state-of-the-art music content
analysis system and investigate what causes it to achieve exceptionally high
performance in a benchmark music audio dataset. We dissect the system to
understand its operation, determine its sensitivities and limitations, and
predict the kinds of knowledge it could and could not possess about music. We
perform a series of experiments to illuminate what the system has actually
learned to do, and to what extent it is performing the intended music listening
task. Our results demonstrate how the initial manifestation of music
intelligence in this state-of-the-art can be deceptive. Our work provides
constructive directions toward developing music content analysis systems that
can address the music information and creation needs of real-world users.
| no_new_dataset | 0.862988 |
1503.03701 | Alessandro Perina | Nebojsa Jojic and Alessandro Perina and Dongwoo Kim | Hierarchical learning of grids of microtopics | To Appear in Uncertainty in Artificial Intelligence - UAI 2016 | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The counting grid is a grid of microtopics, sparse word/feature
distributions. The generative model associated with the grid does not use these
microtopics individually. Rather, it groups them in overlapping rectangular
windows and uses these grouped microtopics as either mixture or admixture
components. This paper builds upon the basic counting grid model and it shows
that hierarchical reasoning helps avoid bad local minima, produces better
classification accuracy and, most interestingly, allows for extraction of large
numbers of coherent microtopics even from small datasets. We evaluate this in
terms of consistency, diversity and clarity of the indexed content, as well as
in a user study on word intrusion tasks. We demonstrate that these models work
well as a technique for embedding raw images and discuss interesting parallels
between hierarchical CG models and other deep architectures.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2015 12:59:25 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Nov 2015 16:38:24 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Nov 2015 16:46:07 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Jun 2016 15:05:38 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Jojic",
"Nebojsa",
""
],
[
"Perina",
"Alessandro",
""
],
[
"Kim",
"Dongwoo",
""
]
] | TITLE: Hierarchical learning of grids of microtopics
ABSTRACT: The counting grid is a grid of microtopics, sparse word/feature
distributions. The generative model associated with the grid does not use these
microtopics individually. Rather, it groups them in overlapping rectangular
windows and uses these grouped microtopics as either mixture or admixture
components. This paper builds upon the basic counting grid model and it shows
that hierarchical reasoning helps avoid bad local minima, produces better
classification accuracy and, most interestingly, allows for extraction of large
numbers of coherent microtopics even from small datasets. We evaluate this in
terms of consistency, diversity and clarity of the indexed content, as well as
in a user study on word intrusion tasks. We demonstrate that these models work
well as a technique for embedding raw images and discuss interesting parallels
between hierarchical CG models and other deep architectures.
| no_new_dataset | 0.954858 |
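
A numpy/scipy sketch of the counting-grid window step described above: microtopic distributions sit on a (toroidal) grid, and the generative model uses their averages over overlapping rectangular windows rather than individual microtopics. Grid, vocabulary, and window sizes here are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

E, Z, W = (16, 16), 10, (4, 4)                     # grid size, vocab, window
rng = np.random.default_rng(0)
pi = rng.random(size=E + (Z,))
pi /= pi.sum(axis=-1, keepdims=True)               # a distribution per grid cell

# average over a W window at every grid location, wrapping around the torus
h = uniform_filter(pi, size=W + (1,), mode="wrap")
print(h.shape, np.allclose(h.sum(axis=-1), 1.0))   # grouped cells stay distributions
```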
1505.06816 | I. Beltagy | I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, Raymond J.
Mooney | Representing Meaning with a Combination of Logical and Distributional
Models | Special issue of Computational Linguistics on Formal Distributional
Semantics, 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NLP tasks differ in the semantic information they require, and at this time
no single semantic representation fulfills all requirements. Logic-based
representations characterize sentence structure, but do not capture the graded
aspect of meaning. Distributional models give graded similarity ratings for
words and phrases, but do not capture sentence structure in the same detail as
logic-based approaches. So it has been argued that the two are complementary.
We adopt a hybrid approach that combines logic-based and distributional
semantics through probabilistic logic inference in Markov Logic Networks
(MLNs). In this paper, we focus on the three components of a practical system
integrating logical and distributional models: 1) Parsing and task
representation is the logic-based part where input problems are represented in
probabilistic logic. This is quite different from representing them in standard
first-order logic. 2) For knowledge base construction we form weighted
inference rules. We integrate and compare distributional information with other
sources, notably WordNet and an existing paraphrase collection. In particular,
we use our system to evaluate distributional lexical entailment approaches. We
use a variant of Robinson resolution to determine the necessary inference
rules. More sources can easily be added by mapping them to logical rules; our
system learns a resource-specific weight that corrects for scaling differences
between resources. 3) In discussing probabilistic inference, we show how to
solve the inference problems efficiently. To evaluate our approach, we use the
task of textual entailment (RTE), which can utilize the strengths of both
logic-based and distributional representations. In particular we focus on the
SICK dataset, where we achieve state-of-the-art results.
| [
{
"version": "v1",
"created": "Tue, 26 May 2015 06:19:18 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Nov 2015 03:51:26 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Feb 2016 03:46:07 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Jun 2016 13:30:01 GMT"
},
{
"version": "v5",
"created": "Wed, 8 Jun 2016 15:07:47 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Beltagy",
"I.",
""
],
[
"Roller",
"Stephen",
""
],
[
"Cheng",
"Pengxiang",
""
],
[
"Erk",
"Katrin",
""
],
[
"Mooney",
"Raymond J.",
""
]
] | TITLE: Representing Meaning with a Combination of Logical and Distributional
Models
ABSTRACT: NLP tasks differ in the semantic information they require, and at this time
no single semantic representation fulfills all requirements. Logic-based
representations characterize sentence structure, but do not capture the graded
aspect of meaning. Distributional models give graded similarity ratings for
words and phrases, but do not capture sentence structure in the same detail as
logic-based approaches. So it has been argued that the two are complementary.
We adopt a hybrid approach that combines logic-based and distributional
semantics through probabilistic logic inference in Markov Logic Networks
(MLNs). In this paper, we focus on the three components of a practical system
integrating logical and distributional models: 1) Parsing and task
representation is the logic-based part where input problems are represented in
probabilistic logic. This is quite different from representing them in standard
first-order logic. 2) For knowledge base construction we form weighted
inference rules. We integrate and compare distributional information with other
sources, notably WordNet and an existing paraphrase collection. In particular,
we use our system to evaluate distributional lexical entailment approaches. We
use a variant of Robinson resolution to determine the necessary inference
rules. More sources can easily be added by mapping them to logical rules; our
system learns a resource-specific weight that corrects for scaling differences
between resources. 3) In discussing probabilistic inference, we show how to
solve the inference problems efficiently. To evaluate our approach, we use the
task of textual entailment (RTE), which can utilize the strengths of both
logic-based and distributional representations. In particular we focus on the
SICK dataset, where we achieve state-of-the-art results.
| no_new_dataset | 0.946547 |
1512.07587 | Rajasekaran Masatran | Rajasekaran Masatran | A Latent-Variable Lattice Model | 6 pages, with 4 figures, 8 algorithms, and 1 table | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov random field (MRF) learning is intractable, and its approximation
algorithms are computationally expensive. We target a small subset of MRF that
is used frequently in computer vision. We characterize this subset with three
concepts: Lattice, Homogeneity, and Inertia; and design a non-Markov model as
an alternative. Our goal is robust learning from small datasets. Our learning
algorithm uses vector quantization and, at time complexity O(U log U) for a
dataset of U pixels, is much faster than that of general-purpose MRF.
| [
{
"version": "v1",
"created": "Wed, 23 Dec 2015 19:01:03 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jan 2016 16:57:50 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Feb 2016 08:48:46 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Mar 2016 13:07:09 GMT"
},
{
"version": "v5",
"created": "Fri, 20 May 2016 08:30:02 GMT"
},
{
"version": "v6",
"created": "Wed, 25 May 2016 09:17:23 GMT"
},
{
"version": "v7",
"created": "Wed, 8 Jun 2016 03:25:09 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Masatran",
"Rajasekaran",
""
]
] | TITLE: A Latent-Variable Lattice Model
ABSTRACT: Markov random field (MRF) learning is intractable, and its approximation
algorithms are computationally expensive. We target a small subset of MRF that
is used frequently in computer vision. We characterize this subset with three
concepts: Lattice, Homogeneity, and Inertia; and design a non-Markov model as
an alternative. Our goal is robust learning from small datasets. Our learning
algorithm uses vector quantization and, at time complexity O(U log U) for a
dataset of U pixels, is much faster than that of general-purpose MRF.
| no_new_dataset | 0.954774 |
1601.01705 | Jacob Andreas | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | Learning to Compose Neural Networks for Question Answering | null | null | null | null | cs.CL cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2016 21:21:59 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2016 18:20:37 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2016 01:44:25 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Jun 2016 23:25:51 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Andreas",
"Jacob",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Klein",
"Dan",
""
]
] | TITLE: Learning to Compose Neural Networks for Question Answering
ABSTRACT: We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains.
| no_new_dataset | 0.946794 |
1603.06075 | Kazuma Hashimoto | Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka | Tree-to-Sequence Attentional Neural Machine Translation | Accepted as a full paper at the 54th Annual Meeting of the
Association for Computational Linguistics (ACL 2016) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the existing Neural Machine Translation (NMT) models focus on the
conversion of sequential data and do not directly use syntactic information. We
propose a novel end-to-end syntactic NMT model, extending a
sequence-to-sequence model with the source-side phrase structure. Our model has
an attention mechanism that enables the decoder to generate a translated word
while softly aligning it with phrases as well as words of the source sentence.
Experimental results on the WAT'15 English-to-Japanese dataset demonstrate that
our proposed model considerably outperforms sequence-to-sequence attentional
NMT models and compares favorably with the state-of-the-art tree-to-string SMT
system.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2016 10:08:40 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2016 09:55:39 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Jun 2016 08:39:11 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Eriguchi",
"Akiko",
""
],
[
"Hashimoto",
"Kazuma",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] | TITLE: Tree-to-Sequence Attentional Neural Machine Translation
ABSTRACT: Most of the existing Neural Machine Translation (NMT) models focus on the
conversion of sequential data and do not directly use syntactic information. We
propose a novel end-to-end syntactic NMT model, extending a
sequence-to-sequence model with the source-side phrase structure. Our model has
an attention mechanism that enables the decoder to generate a translated word
while softly aligning it with phrases as well as words of the source sentence.
Experimental results on the WAT'15 English-to-Japanese dataset demonstrate that
our proposed model considerably outperforms sequence-to-sequence attentional
NMT models and compares favorably with the state-of-the-art tree-to-string SMT
system.
| no_new_dataset | 0.951729 |
1605.04278 | Yevgeni Berzak | Yevgeni Berzak, Jessica Kenney, Carolyn Spadine, Jing Xian Wang, Lucia
Lam, Keiko Sophie Mori, Sebastian Garza and Boris Katz | Universal Dependencies for Learner English | Updated parsing experiments to EWT v1.3, improved grammatical error
marking, minor revisions. To appear in ACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Treebank of Learner English (TLE), the first publicly
available syntactic treebank for English as a Second Language (ESL). The TLE
provides manually annotated POS tags and Universal Dependency (UD) trees for
5,124 sentences from the Cambridge First Certificate in English (FCE) corpus.
The UD annotations are tied to a pre-existing error annotation of the FCE,
whereby full syntactic analyses are provided for both the original and error
corrected versions of each sentence. Furthermore, we delineate ESL annotation
guidelines that allow for consistent syntactic treatment of ungrammatical
English. Finally, we benchmark POS tagging and dependency parsing performance
on the TLE dataset and measure the effect of grammatical errors on parsing
accuracy. We envision the treebank to support a wide range of linguistic and
computational research on second language acquisition as well as automatic
processing of ungrammatical language. The treebank is available at
universaldependencies.org. The annotation manual used in this project and a
graphical query engine are available at esltreebank.org.
| [
{
"version": "v1",
"created": "Fri, 13 May 2016 18:45:22 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2016 02:33:34 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Berzak",
"Yevgeni",
""
],
[
"Kenney",
"Jessica",
""
],
[
"Spadine",
"Carolyn",
""
],
[
"Wang",
"Jing Xian",
""
],
[
"Lam",
"Lucia",
""
],
[
"Mori",
"Keiko Sophie",
""
],
[
"Garza",
"Sebastian",
""
],
[
"Katz",
"Boris",
""
]
] | TITLE: Universal Dependencies for Learner English
ABSTRACT: We introduce the Treebank of Learner English (TLE), the first publicly
available syntactic treebank for English as a Second Language (ESL). The TLE
provides manually annotated POS tags and Universal Dependency (UD) trees for
5,124 sentences from the Cambridge First Certificate in English (FCE) corpus.
The UD annotations are tied to a pre-existing error annotation of the FCE,
whereby full syntactic analyses are provided for both the original and error
corrected versions of each sentence. Furthermore, we delineate ESL annotation
guidelines that allow for consistent syntactic treatment of ungrammatical
English. Finally, we benchmark POS tagging and dependency parsing performance
on the TLE dataset and measure the effect of grammatical errors on parsing
accuracy. We envision the treebank to support a wide range of linguistic and
computational research on second language acquisition as well as automatic
processing of ungrammatical language. The treebank is available at
universaldependencies.org. The annotation manual used in this project and a
graphical query engine are available at esltreebank.org.
| new_dataset | 0.939359 |
1606.02276 | Mercan Topkara | Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu,
Tao Chen, Shih-Fu Chang | Multilingual Visual Sentiment Concept Matching | null | Proceedings ICMR '16 Proceedings of the 2016 ACM on International
Conference on Multimedia Retrieval Pages 151-158 | 10.1145/2911996.2912016 | null | cs.CL cs.CV cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The impact of culture in visual emotion perception has recently captured the
attention of multimedia research. In this study, we provide powerful
computational linguistics tools to explore, retrieve and browse a dataset of
16K multilingual affective visual concepts and 7.3M Flickr images. First, we
design an effective crowdsourcing experiment to collect human judgements of
sentiment connected to the visual concepts. We then use word embeddings to
represent these concepts in a low dimensional vector space, allowing us to
expand the meaning around concepts, and thus enabling insight about
commonalities and differences among different languages. We compare a variety
of concept representations through a novel evaluation task based on the notion
of visual semantic relatedness. Based on these representations, we design
clustering schemes to group multilingual visual concepts, and evaluate them
with novel metrics based on the crowdsourced sentiment annotations as well as
visual semantic relatedness. The proposed clustering framework enables us to
analyze the full multilingual dataset in-depth and also show an application on
a facial data subset, exploring cultural insights of portrait-related
affective visual concepts.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 19:40:00 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Pappas",
"Nikolaos",
""
],
[
"Redi",
"Miriam",
""
],
[
"Topkara",
"Mercan",
""
],
[
"Jou",
"Brendan",
""
],
[
"Liu",
"Hongyi",
""
],
[
"Chen",
"Tao",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: Multilingual Visual Sentiment Concept Matching
ABSTRACT: The impact of culture in visual emotion perception has recently captured the
attention of multimedia research. In this study, we provide powerful
computational linguistics tools to explore, retrieve and browse a dataset of
16K multilingual affective visual concepts and 7.3M Flickr images. First, we
design an effective crowdsourcing experiment to collect human judgements of
sentiment connected to the visual concepts. We then use word embeddings to
represent these concepts in a low dimensional vector space, allowing us to
expand the meaning around concepts, and thus enabling insight about
commonalities and differences among different languages. We compare a variety
of concept representations through a novel evaluation task based on the notion
of visual semantic relatedness. Based on these representations, we design
clustering schemes to group multilingual visual concepts, and evaluate them
with novel metrics based on the crowdsourced sentiment annotations as well as
visual semantic relatedness. The proposed clustering framework enables us to
analyze the full multilingual dataset in-depth and also show an application on
a facial data subset, exploring cultural insights of portrait-related
affective visual concepts.
| no_new_dataset | 0.936343 |
1606.02355 | Tommaso Furlanello | Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco
S. Tjan | Active Long Term Memory Networks | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual Learning in artificial neural networks suffers from interference
and forgetting when different tasks are learned sequentially. This paper
introduces the Active Long Term Memory Networks (A-LTM), a model of sequential
multi-task deep learning that is able to maintain previously learned
association between sensory input and behavioral output while acquiring new
knowledge. A-LTM exploits the non-convex nature of deep neural networks and
actively maintains knowledge of previously learned, inactive tasks using a
distillation loss. Distortions of the learned input-output map are penalized
but hidden layers are free to traverse towards new local optima that are more
favorable for the multi-task objective. We re-frame McClelland's seminal
hippocampal theory with respect to Catastrophic Interference (CI) behavior
exhibited by modern deep architectures trained with back-propagation and
inhomogeneous sampling of latent factors across epochs. We present empirical
results of non-trivial CI during continual learning in Deep Linear Networks
trained on the same task, in Convolutional Neural Networks when the task shifts
from predicting semantic to graphical factors and during domain adaptation from
simple to complex environments. We present results of the A-LTM model's ability
to maintain viewpoint recognition learned in the highly controlled iLab-20M
dataset with 10 object categories and 88 camera viewpoints, while adapting to
the unstructured domain of ImageNet with 1,000 object categories.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 23:43:42 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Furlanello",
"Tommaso",
""
],
[
"Zhao",
"Jiaping",
""
],
[
"Saxe",
"Andrew M.",
""
],
[
"Itti",
"Laurent",
""
],
[
"Tjan",
"Bosco S.",
""
]
] | TITLE: Active Long Term Memory Networks
ABSTRACT: Continual Learning in artificial neural networks suffers from interference
and forgetting when different tasks are learned sequentially. This paper
introduces the Active Long Term Memory Networks (A-LTM), a model of sequential
multi-task deep learning that is able to maintain previously learned
association between sensory input and behavioral output while acquiring new
knowledge. A-LTM exploits the non-convex nature of deep neural networks and
actively maintains knowledge of previously learned, inactive tasks using a
distillation loss. Distortions of the learned input-output map are penalized
but hidden layers are free to traverse towards new local optima that are more
favorable for the multi-task objective. We re-frame McClelland's seminal
hippocampal theory with respect to Catastrophic Interference (CI) behavior
exhibited by modern deep architectures trained with back-propagation and
inhomogeneous sampling of latent factors across epochs. We present empirical
results of non-trivial CI during continual learning in Deep Linear Networks
trained on the same task, in Convolutional Neural Networks when the task shifts
from predicting semantic to graphical factors and during domain adaptation from
simple to complex environments. We present results of the A-LTM model's ability
to maintain viewpoint recognition learned in the highly controlled iLab-20M
dataset with 10 object categories and 88 camera viewpoints, while adapting to
the unstructured domain of ImageNet with 1,000 object categories.
| no_new_dataset | 0.944331 |
1606.02382 | Petteri Teikari | Petteri Teikari, Marc Santos, Charissa Poon, Kullervo Hynynen | Deep Learning Convolutional Networks for Multiphoton Microscopy
Vasculature Segmentation | 23 pages, 10 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently there has been an increasing trend to use deep learning frameworks
for both 2D consumer images and for 3D medical images. However, there has been
little effort to use deep frameworks for volumetric vascular segmentation. We
wanted to address this by providing a freely available dataset of 12 annotated
two-photon vasculature microscopy stacks. We demonstrated the use of a deep
learning framework consisting of both 2D and 3D convolutional filters (ConvNet).
Our hybrid 2D-3D architecture produced promising segmentation results. We
derived the architectures from Lee et al. who used the ZNN framework initially
designed for electron microscope image segmentation. We hope that by sharing
our volumetric vasculature datasets, we will inspire other researchers to
experiment with vasculature datasets and improve on the network architectures used.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 02:57:00 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Teikari",
"Petteri",
""
],
[
"Santos",
"Marc",
""
],
[
"Poon",
"Charissa",
""
],
[
"Hynynen",
"Kullervo",
""
]
] | TITLE: Deep Learning Convolutional Networks for Multiphoton Microscopy
Vasculature Segmentation
ABSTRACT: Recently there has been an increasing trend to use deep learning frameworks
for both 2D consumer images and for 3D medical images. However, there has been
little effort to use deep frameworks for volumetric vascular segmentation. We
wanted to address this by providing a freely available dataset of 12 annotated
two-photon vasculature microscopy stacks. We demonstrated the use of a deep
learning framework consisting of both 2D and 3D convolutional filters (ConvNet).
Our hybrid 2D-3D architecture produced promising segmentation results. We
derived the architectures from Lee et al. who used the ZNN framework initially
designed for electron microscope image segmentation. We hope that by sharing
our volumetric vasculature datasets, we will inspire other researchers to
experiment with vasculature datasets and improve on the network architectures used.
| new_dataset | 0.958731 |
1606.02542 | Christian Walder Dr | Christian Walder | Symbolic Music Data Version 1.0 | arXiv admin note: substantial text overlap with arXiv:1606.01368 | null | null | null | cs.SD cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this document, we introduce a new dataset designed for training machine
learning models of symbolic music data. Five datasets are provided, one of
which is from a newly collected corpus of 20K MIDI files. We describe our
preprocessing and cleaning pipeline, which includes the exclusion of a number
of files based on scores from a previously developed probabilistic machine
learning model. We also define training, testing and validation splits for the
new dataset, based on a clustering scheme which we also describe. Some simple
histograms are included.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 13:19:01 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Walder",
"Christian",
""
]
] | TITLE: Symbolic Music Data Version 1.0
ABSTRACT: In this document, we introduce a new dataset designed for training machine
learning models of symbolic music data. Five datasets are provided, one of
which is from a newly collected corpus of 20K MIDI files. We describe our
preprocessing and cleaning pipeline, which includes the exclusion of a number
of files based on scores from a previously developed probabilistic machine
learning model. We also define training, testing and validation splits for the
new dataset, based on a clustering scheme which we also describe. Some simple
histograms are included.
| new_dataset | 0.961098 |
1606.02580 | Chrisantha Fernando Dr | Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse,
David Pfau, Max Jaderberg, Marc Lanctot, Daan Wierstra | Convolution by Evolution: Differentiable Pattern Producing Networks | null | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we introduce a differentiable version of the Compositional
Pattern Producing Network, called the DPPN. Unlike a standard CPPN, the
topology of a DPPN is evolved but the weights are learned. A Lamarckian
algorithm, that combines evolution and learning, produces DPPNs to reconstruct
an image. Our main result is that DPPNs can be evolved/trained to compress the
weights of a denoising autoencoder from 157684 to roughly 200 parameters, while
achieving a reconstruction accuracy comparable to a fully connected network
with more than two orders of magnitude more parameters. The regularization
ability of the DPPN allows it to rediscover (approximate) convolutional network
architectures embedded within a fully connected architecture. Such
convolutional architectures are the current state of the art for many computer
vision applications, so it is satisfying that DPPNs are capable of discovering
this structure rather than having to build it in by design. DPPNs exhibit
better generalization when tested on the Omniglot dataset after being trained
on MNIST, than directly encoded fully connected autoencoders. DPPNs are
therefore a new framework for integrating learning and evolution.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 14:37:39 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Fernando",
"Chrisantha",
""
],
[
"Banarse",
"Dylan",
""
],
[
"Reynolds",
"Malcolm",
""
],
[
"Besse",
"Frederic",
""
],
[
"Pfau",
"David",
""
],
[
"Jaderberg",
"Max",
""
],
[
"Lanctot",
"Marc",
""
],
[
"Wierstra",
"Daan",
""
]
] | TITLE: Convolution by Evolution: Differentiable Pattern Producing Networks
ABSTRACT: In this work we introduce a differentiable version of the Compositional
Pattern Producing Network, called the DPPN. Unlike a standard CPPN, the
topology of a DPPN is evolved but the weights are learned. A Lamarckian
algorithm, that combines evolution and learning, produces DPPNs to reconstruct
an image. Our main result is that DPPNs can be evolved/trained to compress the
weights of a denoising autoencoder from 157684 to roughly 200 parameters, while
achieving a reconstruction accuracy comparable to a fully connected network
with more than two orders of magnitude more parameters. The regularization
ability of the DPPN allows it to rediscover (approximate) convolutional network
architectures embedded within a fully connected architecture. Such
convolutional architectures are the current state of the art for many computer
vision applications, so it is satisfying that DPPNs are capable of discovering
this structure rather than having to build it in by design. DPPNs exhibit
better generalization when tested on the Omniglot dataset after being trained
on MNIST, than directly encoded fully connected autoencoders. DPPNs are
therefore a new framework for integrating learning and evolution.
| no_new_dataset | 0.947575 |
1606.02638 | Preethi Raghavan | Chaitanya Shivade, Preethi Raghavan, Siddharth Patwardhan | Addressing Limited Data for Textual Entailment Across Domains | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We seek to address the lack of labeled data (and high cost of annotation) for
textual entailment in some domains. To that end, we first create (for
experimental purposes) an entailment dataset for the clinical domain, and a
highly competitive supervised entailment system, ENT, that is effective (out of
the box) on two domains. We then explore self-training and active learning
strategies to address the lack of labeled data. With self-training, we
successfully exploit unlabeled data to improve over ENT by 15% F-score on the
newswire domain, and 13% F-score on clinical data. On the other hand, our
active learning experiments demonstrate that we can match (and even beat) ENT
using only 6.6% of the training data in the clinical domain, and only 5.8% of
the training data in the newswire domain.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2016 16:56:19 GMT"
}
] | 2016-06-09T00:00:00 | [
[
"Shivade",
"Chaitanya",
""
],
[
"Raghavan",
"Preethi",
""
],
[
"Patwardhan",
"Siddharth",
""
]
] | TITLE: Addressing Limited Data for Textual Entailment Across Domains
ABSTRACT: We seek to address the lack of labeled data (and high cost of annotation) for
textual entailment in some domains. To that end, we first create (for
experimental purposes) an entailment dataset for the clinical domain, and a
highly competitive supervised entailment system, ENT, that is effective (out of
the box) on two domains. We then explore self-training and active learning
strategies to address the lack of labeled data. With self-training, we
successfully exploit unlabeled data to improve over ENT by 15% F-score on the
newswire domain, and 13% F-score on clinical data. On the other hand, our
active learning experiments demonstrate that we can match (and even beat) ENT
using only 6.6% of the training data in the clinical domain, and only 5.8% of
the training data in the newswire domain.
| new_dataset | 0.887009 |
1504.07968 | Ubai Sandouk | Ubai Sandouk and Ke Chen | Learning Contextualized Music Semantics from Tags via a Siamese Network | 20 pages. To appear in ACM TIST: Intelligent Music Systems and
Applications | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music information retrieval faces a challenge in modeling contextualized
musical concepts formulated by a set of co-occurring tags. In this paper, we
investigate the suitability of our recently proposed approach based on a
Siamese neural network in fighting off this challenge. By means of tag features
and probabilistic topic models, the network captures contextualized semantics
from tags via unsupervised learning. This leads to a distributed semantics
space and a potential solution to the out of vocabulary problem which has yet
to be sufficiently addressed. We explore the nature of the resultant
music-based semantics and address computational needs. We conduct experiments
on three public music tag collections - namely, CAL500, MagTag5K and Million
Song Dataset - and compare our approach to a number of state-of-the-art
semantics learning approaches. Comparative results suggest that this approach
outperforms previous approaches in terms of semantic priming and music tag
completion.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 19:05:06 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2016 16:46:27 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Sandouk",
"Ubai",
""
],
[
"Chen",
"Ke",
""
]
] | TITLE: Learning Contextualized Music Semantics from Tags via a Siamese Network
ABSTRACT: Music information retrieval faces a challenge in modeling contextualized
musical concepts formulated by a set of co-occurring tags. In this paper, we
investigate the suitability of our recently proposed approach based on a
Siamese neural network in fighting off this challenge. By means of tag features
and probabilistic topic models, the network captures contextualized semantics
from tags via unsupervised learning. This leads to a distributed semantics
space and a potential solution to the out of vocabulary problem which has yet
to be sufficiently addressed. We explore the nature of the resultant
music-based semantics and address computational needs. We conduct experiments
on three public music tag collections - namely, CAL500, MagTag5K and Million
Song Dataset - and compare our approach to a number of state-of-the-art
semantics learning approaches. Comparative results suggest that this approach
outperforms previous approaches in terms of semantic priming and music tag
completion.
| no_new_dataset | 0.940572 |
1505.04364 | Kai-Fu Yang | Kai-Fu Yang, Hui Li, Chao-Yi Li, and Yong-Jie Li | Salient Structure Detection by Context-Guided Visual Search | 13 pages, 15 figures | IEEE Transactions on Image Processing (TIP), 2016 | 10.1109/TIP.2016.2572600 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define the task of salient structure (SS) detection to unify the
saliency-related tasks like fixation prediction, salient object detection, and
other detection of structures of interest. In this study, we propose a unified
framework for SS detection by modeling the two-pathway-based guided search
strategy of biological vision. Firstly, context-based spatial prior (CBSP) is
extracted based on the layout of edges in the given scene along a fast visual
pathway, called the non-selective pathway. This is a rough and non-selective
estimation of the locations where the potential SSs are present. Secondly, another
flow of local feature extraction is executed in parallel along the selective
pathway. Finally, Bayesian inference is used to integrate local cues guided by
CBSP, and to predict the exact locations of SSs in the input scene. The
proposed model is invariant to size and features of objects. Experimental
results on four datasets (two fixation prediction datasets and two salient
object datasets) demonstrate that our system achieves competitive performance
for SS detection (i.e., both the tasks of fixation prediction and salient
object detection) compared to the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 17 May 2015 07:15:25 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Yang",
"Kai-Fu",
""
],
[
"Li",
"Hui",
""
],
[
"Li",
"Chao-Yi",
""
],
[
"Li",
"Yong-Jie",
""
]
] | TITLE: Salient Structure Detection by Context-Guided Visual Search
ABSTRACT: We define the task of salient structure (SS) detection to unify the
saliency-related tasks like fixation prediction, salient object detection, and
other detection of structures of interest. In this study, we propose a unified
framework for SS detection by modeling the two-pathway-based guided search
strategy of biological vision. Firstly, context-based spatial prior (CBSP) is
extracted based on the layout of edges in the given scene along a fast visual
pathway, called the non-selective pathway. This is a rough and non-selective
estimation of the locations where the potential SSs are present. Secondly, another
flow of local feature extraction is executed in parallel along the selective
pathway. Finally, Bayesian inference is used to integrate local cues guided by
CBSP, and to predict the exact locations of SSs in the input scene. The
proposed model is invariant to size and features of objects. Experimental
results on four datasets (two fixation prediction datasets and two salient
object datasets) demonstrate that our system achieves competitive performance
for SS detection (i.e., both the tasks of fixation prediction and salient
object detection) compared to the state-of-the-art methods.
| no_new_dataset | 0.950824 |
1508.01134 | Maciej Mrowinski | Maciej J. Mrowinski, Agata Fronczak, Piotr Fronczak, Olgica Nedic,
Marcel Ausloos | Review times in peer review: quantitative analysis of editorial
workflows | null | Scientometrics 107 (2016) 271-286 | 10.1007/s11192-016-1871-z | null | physics.soc-ph cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine selected aspects of peer review and suggest possible improvements.
To this end, we analyse a dataset containing information about 300 papers
submitted to the Biochemistry and Biotechnology section of the Journal of the
Serbian Chemical Society. After separating the peer review process into stages
that each review has to go through, we use a weighted directed graph to
describe it in a probabilistic manner and test the impact of some modifications
of the editorial policy on the efficiency of the whole process.
| [
{
"version": "v1",
"created": "Wed, 5 Aug 2015 17:11:14 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Mrowinski",
"Maciej J.",
""
],
[
"Fronczak",
"Agata",
""
],
[
"Fronczak",
"Piotr",
""
],
[
"Nedic",
"Olgica",
""
],
[
"Ausloos",
"Marcel",
""
]
] | TITLE: Review times in peer review: quantitative analysis of editorial
workflows
ABSTRACT: We examine selected aspects of peer review and suggest possible improvements.
To this end, we analyse a dataset containing information about 300 papers
submitted to the Biochemistry and Biotechnology section of the Journal of the
Serbian Chemical Society. After separating the peer review process into stages
that each review has to go through, we use a weighted directed graph to
describe it in a probabilistic manner and test the impact of some modifications
of the editorial policy on the efficiency of the whole process.
| no_new_dataset | 0.946646 |
1601.01280 | Li Dong | Li Dong, Mirella Lapata | Language to Logical Form with Neural Attention | Accepted by ACL-16 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations.
| [
{
"version": "v1",
"created": "Wed, 6 Jan 2016 19:13:12 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2016 21:06:55 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Dong",
"Li",
""
],
[
"Lapata",
"Mirella",
""
]
] | TITLE: Language to Logical Form with Neural Attention
ABSTRACT: Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations.
| no_new_dataset | 0.945197 |
1604.07706 | Shuai Li | Nathan Korda and Balazs Szorenyi and Shuai Li | Distributed Clustering of Linear Bandits in Peer to Peer Networks | The 33rd ICML, 2016 | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide two distributed confidence ball algorithms for solving linear
bandit problems in peer to peer networks with limited communication
capabilities. For the first, we assume that all the peers are solving the same
linear bandit problem, and prove that our algorithm achieves the optimal
asymptotic regret rate of any centralised algorithm that can instantly
communicate information between the peers. For the second, we assume that there
are clusters of peers solving the same bandit problem within each cluster, and
we prove that our algorithm discovers these clusters, while achieving the
optimal asymptotic regret rate within each one. Through experiments on several
real-world datasets, we demonstrate the performance of the proposed algorithms
compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2016 14:59:43 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 06:12:46 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jun 2016 08:06:23 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Korda",
"Nathan",
""
],
[
"Szorenyi",
"Balazs",
""
],
[
"Li",
"Shuai",
""
]
] | TITLE: Distributed Clustering of Linear Bandits in Peer to Peer Networks
ABSTRACT: We provide two distributed confidence ball algorithms for solving linear
bandit problems in peer to peer networks with limited communication
capabilities. For the first, we assume that all the peers are solving the same
linear bandit problem, and prove that our algorithm achieves the optimal
asymptotic regret rate of any centralised algorithm that can instantly
communicate information between the peers. For the second, we assume that there
are clusters of peers solving the same bandit problem within each cluster, and
we prove that our algorithm discovers these clusters, while achieving the
optimal asymptotic regret rate within each one. Through experiments on several
real-world datasets, we demonstrate the performance of the proposed algorithms
compared to the state-of-the-art.
| no_new_dataset | 0.952574 |
1606.01981 | Paul Merolla | Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K. Esser,
Dharmendra Modha | Deep neural networks are robust to weight binarization and other
non-linear distortions | null | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent results show that deep neural networks achieve excellent performance
even when, during training, weights are quantized and projected to a binary
representation. Here, we show that this is just the tip of the iceberg: these
same networks, during testing, also exhibit a remarkable robustness to
distortions beyond quantization, including additive and multiplicative noise,
and a class of non-linear projections where binarization is just a special
case. To quantify this robustness, we show that one such network achieves 11%
test error on CIFAR-10 even with 0.68 effective bits per weight. Furthermore,
we find that a common training heuristic--namely, projecting quantized weights
during backpropagation--can be altered (or even removed) and networks still
achieve a base level of robustness during testing. Specifically, training with
weight projections other than quantization also works, as does simply clipping
the weights, both of which have never been reported before. We confirm our
results for CIFAR-10 and ImageNet datasets. Finally, drawing from these ideas,
we propose a stochastic projection rule that leads to a new state of the art
network with 7.64% test error on CIFAR-10 using no data augmentation.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 00:28:42 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Merolla",
"Paul",
""
],
[
"Appuswamy",
"Rathinakumar",
""
],
[
"Arthur",
"John",
""
],
[
"Esser",
"Steve K.",
""
],
[
"Modha",
"Dharmendra",
""
]
] | TITLE: Deep neural networks are robust to weight binarization and other
non-linear distortions
ABSTRACT: Recent results show that deep neural networks achieve excellent performance
even when, during training, weights are quantized and projected to a binary
representation. Here, we show that this is just the tip of the iceberg: these
same networks, during testing, also exhibit a remarkable robustness to
distortions beyond quantization, including additive and multiplicative noise,
and a class of non-linear projections where binarization is just a special
case. To quantify this robustness, we show that one such network achieves 11%
test error on CIFAR-10 even with 0.68 effective bits per weight. Furthermore,
we find that a common training heuristic--namely, projecting quantized weights
during backpropagation--can be altered (or even removed) and networks still
achieve a base level of robustness during testing. Specifically, training with
weight projections other than quantization also works, as does simply clipping
the weights, both of which have never been reported before. We confirm our
results for CIFAR-10 and ImageNet datasets. Finally, drawing from these ideas,
we propose a stochastic projection rule that leads to a new state of the art
network with 7.64% test error on CIFAR-10 using no data augmentation.
| no_new_dataset | 0.947381 |
1606.02031 | Li Cheng | Chi Xu, Lakshmi Narasimhan Govindarajan, Li Cheng | Hand Action Detection from Ego-centric Depth Sequences with
Error-correcting Hough Transform | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting hand actions from ego-centric depth sequences is a practically
challenging problem, owing mostly to the complex and dexterous nature of hand
articulations as well as non-stationary camera motion. We address this problem
via a Hough transform based approach coupled with a discriminatively learned
error-correcting component to tackle the well known issue of incorrect votes
from the Hough transform. In this framework, local parts vote collectively for
the start and end positions of each action over time. We also construct an
in-house annotated dataset of 300 long videos, containing 3,177 single-action
subsequences over 16 action classes collected from 26 individuals. Our system
is empirically evaluated on this real-life dataset for both the action
recognition and detection tasks, and is shown to produce satisfactory results.
To facilitate reproduction, the new dataset and our implementation are also
provided online.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 05:02:14 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Xu",
"Chi",
""
],
[
"Govindarajan",
"Lakshmi Narasimhan",
""
],
[
"Cheng",
"Li",
""
]
] | TITLE: Hand Action Detection from Ego-centric Depth Sequences with
Error-correcting Hough Transform
ABSTRACT: Detecting hand actions from ego-centric depth sequences is a practically
challenging problem, owing mostly to the complex and dexterous nature of hand
articulations as well as non-stationary camera motion. We address this problem
via a Hough transform based approach coupled with a discriminatively learned
error-correcting component to tackle the well known issue of incorrect votes
from the Hough transform. In this framework, local parts vote collectively for
the start and end positions of each action over time. We also construct an
in-house annotated dataset of 300 long videos, containing 3,177 single-action
subsequences over 16 action classes collected from 26 individuals. Our system
is empirically evaluated on this real-life dataset for both the action
recognition and detection tasks, and is shown to produce satisfactory results.
To facilitate reproduction, the new dataset and our implementation are also
provided online.
| new_dataset | 0.956022 |
1606.02077 | Nagarajan Natarajan | Prateek Jain and Nagarajan Natarajan | Regret Bounds for Non-decomposable Metrics with Missing Labels | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of recommending relevant labels (items) for a given
data point (user). In particular, we are interested in the practically
important setting where the evaluation is with respect to non-decomposable
(over labels) performance metrics like the $F_1$ measure, and the training data
has missing labels. To this end, we propose a generic framework that, given a
performance metric $\Psi$, can devise a regularized objective function and a
threshold such that all the values in the predicted score vector above and only
above the threshold are selected to be positive. We show that the regret or
generalization error in the given metric $\Psi$ is bounded ultimately by
estimation error of certain underlying parameters. In particular, we derive
regret bounds under three popular settings: a) collaborative filtering, b)
multilabel classification, and c) PU (positive-unlabeled) learning. For each of
the above problems, we can obtain a precise non-asymptotic regret bound, which is
small even when a large fraction of labels is missing. Our empirical results on
synthetic and benchmark datasets demonstrate that by explicitly modeling for
missing labels and optimizing the desired performance metric, our algorithm
indeed achieves significantly better performance (like $F_1$ score) when
compared to methods that do not model missing label information carefully.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 10:00:30 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Jain",
"Prateek",
""
],
[
"Natarajan",
"Nagarajan",
""
]
] | TITLE: Regret Bounds for Non-decomposable Metrics with Missing Labels
ABSTRACT: We consider the problem of recommending relevant labels (items) for a given
data point (user). In particular, we are interested in the practically
important setting where the evaluation is with respect to non-decomposable
(over labels) performance metrics like the $F_1$ measure, and the training data
has missing labels. To this end, we propose a generic framework that, given a
performance metric $\Psi$, can devise a regularized objective function and a
threshold such that all the values in the predicted score vector above and only
above the threshold are selected to be positive. We show that the regret or
generalization error in the given metric $\Psi$ is bounded ultimately by
estimation error of certain underlying parameters. In particular, we derive
regret bounds under three popular settings: a) collaborative filtering, b)
multilabel classification, and c) PU (positive-unlabeled) learning. For each of
the above problems, we can obtain a precise non-asymptotic regret bound, which is
small even when a large fraction of labels is missing. Our empirical results on
synthetic and benchmark datasets demonstrate that by explicitly modeling for
missing labels and optimizing the desired performance metric, our algorithm
indeed achieves significantly better performance (like $F_1$ score) when
compared to methods that do not model missing label information carefully.
| no_new_dataset | 0.946745 |
1606.02147 | Adam Paszke | Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello | ENet: A Deep Neural Network Architecture for Real-Time Semantic
Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to perform pixel-wise semantic segmentation in real-time is of
paramount importance in mobile applications. Recent deep neural networks aimed
at this task have the disadvantage of requiring a large number of floating
point operations and have long run-times that hinder their usability. In this
paper, we propose a novel deep neural network architecture named ENet
(efficient neural network), created specifically for tasks requiring low
latency operation. ENet is up to 18$\times$ faster, requires 75$\times$ fewer
FLOPs, has 79$\times$ fewer parameters, and provides similar or better accuracy
to existing models. We have tested it on CamVid, Cityscapes and SUN datasets
and report on comparisons with existing state-of-the-art methods, and the
trade-offs between accuracy and processing time of a network. We present
performance measurements of the proposed architecture on embedded systems and
suggest possible software improvements that could make ENet even faster.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 14:09:27 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Paszke",
"Adam",
""
],
[
"Chaurasia",
"Abhishek",
""
],
[
"Kim",
"Sangpil",
""
],
[
"Culurciello",
"Eugenio",
""
]
] | TITLE: ENet: A Deep Neural Network Architecture for Real-Time Semantic
Segmentation
ABSTRACT: The ability to perform pixel-wise semantic segmentation in real-time is of
paramount importance in mobile applications. Recent deep neural networks aimed
at this task have the disadvantage of requiring a large number of floating
point operations and have long run-times that hinder their usability. In this
paper, we propose a novel deep neural network architecture named ENet
(efficient neural network), created specifically for tasks requiring low
latency operation. ENet is up to 18$\times$ faster, requires 75$\times$ fewer
FLOPs, has 79$\times$ fewer parameters, and provides similar or better accuracy
to existing models. We have tested it on CamVid, Cityscapes and SUN datasets
and report on comparisons with existing state-of-the-art methods, and the
trade-offs between accuracy and processing time of a network. We present
performance measurements of the proposed architecture on embedded systems and
suggest possible software improvements that could make ENet even faster.
| no_new_dataset | 0.953362 |
1606.02275 | Roger Grosse | Roger B. Grosse and Siddharth Ancha and Daniel M. Roy | Measuring the reliability of MCMC inference with bidirectional Monte
Carlo | null | null | null | null | cs.LG stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov chain Monte Carlo (MCMC) is one of the main workhorses of
probabilistic inference, but it is notoriously hard to measure the quality of
approximate posterior samples. This challenge is particularly salient in black
box inference methods, which can hide details and obscure inference failures.
In this work, we extend the recently introduced bidirectional Monte Carlo
technique to evaluate MCMC-based posterior inference algorithms. By running
annealed importance sampling (AIS) chains both from prior to posterior and vice
versa on simulated data, we upper bound in expectation the symmetrized KL
divergence between the true posterior distribution and the distribution of
approximate samples. We present Bounding Divergences with REverse Annealing
(BREAD), a protocol for validating the relevance of simulated data experiments
to real datasets, and integrate it into two probabilistic programming
languages: WebPPL and Stan. As an example of how BREAD can be used to guide the
design of inference algorithms, we apply it to study the effectiveness of
different model representations in both WebPPL and Stan.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 19:39:02 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Grosse",
"Roger B.",
""
],
[
"Ancha",
"Siddharth",
""
],
[
"Roy",
"Daniel M.",
""
]
] | TITLE: Measuring the reliability of MCMC inference with bidirectional Monte
Carlo
ABSTRACT: Markov chain Monte Carlo (MCMC) is one of the main workhorses of
probabilistic inference, but it is notoriously hard to measure the quality of
approximate posterior samples. This challenge is particularly salient in black
box inference methods, which can hide details and obscure inference failures.
In this work, we extend the recently introduced bidirectional Monte Carlo
technique to evaluate MCMC-based posterior inference algorithms. By running
annealed importance sampling (AIS) chains both from prior to posterior and vice
versa on simulated data, we upper bound in expectation the symmetrized KL
divergence between the true posterior distribution and the distribution of
approximate samples. We present Bounding Divergences with REverse Annealing
(BREAD), a protocol for validating the relevance of simulated data experiments
to real datasets, and integrate it into two probabilistic programming
languages: WebPPL and Stan. As an example of how BREAD can be used to guide the
design of inference algorithms, we apply it to study the effectiveness of
different model representations in both WebPPL and Stan.
| no_new_dataset | 0.944331 |
1606.02280 | Huiling Wang | Huiling Wang, Tapani Raiko, Lasse Lensu, Tinghuai Wang, Juha Karhunen | Semi-Supervised Domain Adaptation for Weakly Labeled Semantic Video
Object Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (CNNs) have been immensely successful in
many high-level computer vision tasks given large labeled datasets. However,
for video semantic object segmentation, a domain where labels are scarce,
effectively exploiting the representation power of CNN with limited training
data remains a challenge. Simply borrowing the existing pretrained CNN image
recognition model for the video segmentation task can severely hurt performance. We
propose a semi-supervised approach to adapting a CNN image recognition model
trained on labeled image data to the target domain, exploiting both semantic
evidence learned from CNN, and the intrinsic structures of video data. By
explicitly modeling and compensating for the domain shift from the source
domain to the target domain, this proposed approach underpins a robust semantic
object segmentation method against the changes in appearance, shape and
occlusion in natural videos. We present extensive experiments on challenging
datasets that demonstrate the superior performance of our approach compared
with the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 19:54:53 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Wang",
"Huiling",
""
],
[
"Raiko",
"Tapani",
""
],
[
"Lensu",
"Lasse",
""
],
[
"Wang",
"Tinghuai",
""
],
[
"Karhunen",
"Juha",
""
]
] | TITLE: Semi-Supervised Domain Adaptation for Weakly Labeled Semantic Video
Object Segmentation
ABSTRACT: Deep convolutional neural networks (CNNs) have been immensely successful in
many high-level computer vision tasks given large labeled datasets. However,
for video semantic object segmentation, a domain where labels are scarce,
effectively exploiting the representation power of CNN with limited training
data remains a challenge. Simply borrowing an existing pretrained CNN image
recognition model for the video segmentation task can severely hurt performance. We
propose a semi-supervised approach to adapting a CNN image recognition model
trained on labeled image data to the target domain, exploiting both semantic
evidence learned from the CNN and the intrinsic structures of video data. By
explicitly modeling and compensating for the domain shift from the source
domain to the target domain, the proposed approach underpins a semantic object
segmentation method that is robust against changes in appearance, shape, and
occlusion in natural videos. We present extensive experiments on challenging
datasets that demonstrate the superior performance of our approach compared
with the state-of-the-art methods.
| no_new_dataset | 0.951863 |
1606.02283 | Yang Yang | Brian Uzzi, Yang Yang, Kevin Gaughan | The Formation and Imprinting of Network Effects Among the Business Elite | null | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The business elite constitutes a small but strikingly influential subset of
the population, oftentimes affecting important societal outcomes such as the
consolidation of political power, the adoption of corporate governance
practices, and the stability of national economies more broadly. Here we
analyze a unique dataset of all MBA students at a top 5 MBA program. After
matching students on all available characteristics (e.g., age, grade scores,
industry experience, etc.) - i.e. creating twin pairs - we find that the
distinguishing characteristics between students who do well in job placement
and those who do not is their network. Further, we find that the network
differences between the successful and unsuccessful students develops within
the first month of class and persists thereafter, suggesting a network
imprinting that is persistent. Finally, we find that these effects are
pronounced for students who are at the extreme ends of the distribution on
other measures of success - students with the best expected job placement do
particularly poorly without the right network (descenders), whereas students
with the worst expected job placement pull themselves to the top of the placement
hierarchy (ascenders) with the right network.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 19:58:12 GMT"
}
] | 2016-06-08T00:00:00 | [
[
"Uzzi",
"Brian",
""
],
[
"Yang",
"Yang",
""
],
[
"Gaughan",
"Kevin",
""
]
] | TITLE: The Formation and Imprinting of Network Effects Among the Business Elite
ABSTRACT: The business elite constitutes a small but strikingly influential subset of
the population, oftentimes affecting important societal outcomes such as the
consolidation of political power, the adoption of corporate governance
practices, and the stability of national economies more broadly. Here we
analyze a unique dataset of all MBA students at a top 5 MBA program. After
matching students on all available characteristics (e.g., age, grade scores,
industry experience, etc.) - i.e. creating twin pairs - we find that the
distinguishing characteristic between students who do well in job placement
and those who do not is their network. Further, we find that the network
differences between the successful and unsuccessful students develop within
the first month of class and persist thereafter, suggesting a persistent
network imprinting. Finally, we find that these effects are
pronounced for students who are at the extreme ends of the distribution on
other measures of success - students with the best expected job placement do
particularly poorly without the right network (descenders), whereas students
with the worst expected job placement pull themselves to the top of the placement
hierarchy (ascenders) with the right network.
| new_dataset | 0.853119 |
1206.6426 | Andriy Mnih | Andriy Mnih (University College London), Yee Whye Teh (University
College London) | A Fast and Simple Algorithm for Training Neural Probabilistic Language
Models | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | In Proceedings of the 29th International Conference on Machine
Learning, pages 1751-1758, 2012 | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In spite of their superior performance, neural probabilistic language models
(NPLMs) remain far less widely used than n-gram models due to their notoriously
long training times, which are measured in weeks even for moderately-sized
datasets. Training NPLMs is computationally expensive because they are
explicitly normalized, which leads to having to consider all words in the
vocabulary when computing the log-likelihood gradients.
We propose a fast and simple algorithm for training NPLMs based on
noise-contrastive estimation, a newly introduced procedure for estimating
unnormalized continuous distributions. We investigate the behaviour of the
algorithm on the Penn Treebank corpus and show that it reduces the training
times by more than an order of magnitude without affecting the quality of the
resulting models. The algorithm is also more efficient and much more stable
than importance sampling because it requires far fewer noise samples to perform
well.
We demonstrate the scalability of the proposed approach by training several
neural language models on a 47M-word corpus with an 80K-word vocabulary,
obtaining state-of-the-art results on the Microsoft Research Sentence
Completion Challenge dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Mnih",
"Andriy",
"",
"University College London"
],
[
"Teh",
"Yee Whye",
"",
"University\n College London"
]
] | TITLE: A Fast and Simple Algorithm for Training Neural Probabilistic Language
Models
ABSTRACT: In spite of their superior performance, neural probabilistic language models
(NPLMs) remain far less widely used than n-gram models due to their notoriously
long training times, which are measured in weeks even for moderately-sized
datasets. Training NPLMs is computationally expensive because they are
explicitly normalized, which leads to having to consider all words in the
vocabulary when computing the log-likelihood gradients.
We propose a fast and simple algorithm for training NPLMs based on
noise-contrastive estimation, a newly introduced procedure for estimating
unnormalized continuous distributions. We investigate the behaviour of the
algorithm on the Penn Treebank corpus and show that it reduces the training
times by more than an order of magnitude without affecting the quality of the
resulting models. The algorithm is also more efficient and much more stable
than importance sampling because it requires far fewer noise samples to perform
well.
We demonstrate the scalability of the proposed approach by training several
neural language models on a 47M-word corpus with an 80K-word vocabulary,
obtaining state-of-the-art results on the Microsoft Research Sentence
Completion Challenge dataset.
| no_new_dataset | 0.951188 |
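The speed-up described above comes from replacing the normalized softmax likelihood with a binary classification objective against noise samples. Below is a minimal NumPy sketch of the per-example NCE loss; the score table and the uniform noise distribution are toy stand-ins for a real NPLM and unigram noise.

```python
import numpy as np

rng = np.random.default_rng(0)
V, k = 1000, 25                        # vocabulary size, noise samples/datum

def sigmoid(a): return 1.0 / (1.0 + np.exp(-a))

def nce_loss(score_fn, target, context, noise_probs):
    """Negative NCE objective for one (context, target) pair.

    score_fn(w, h) is the model's *unnormalized* log-probability of word w
    given context h; only the target and k noise words are ever scored,
    which is where the speed-up over full softmax training comes from.
    """
    noise = rng.choice(V, size=k, p=noise_probs)
    # Delta(w) = s(w, h) - log(k * q(w)); data words should get Delta >> 0.
    d_data = score_fn(target, context) - np.log(k * noise_probs[target])
    d_noise = (np.array([score_fn(w, context) for w in noise])
               - np.log(k * noise_probs[noise]))
    return -(np.log(sigmoid(d_data)) + np.log(sigmoid(-d_noise)).sum())

# Toy "model": a score table indexed by (context word, target word).
table = rng.normal(scale=0.1, size=(V, V))
score = lambda w, h: table[h, w]
q = np.full(V, 1.0 / V)                # noise distribution (uniform here)
print(nce_loss(score, target=3, context=7, noise_probs=q))
```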
1411.1132 | Forough Arabshahi | Forough Arabshahi, Furong Huang, Animashree Anandkumar, Carter T.
Butts, Sean M. Fitshugh | Are you going to the party: depends, who else is coming? [Learning
hidden group dynamics via conditional latent tree models] | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scalable probabilistic modeling and prediction in high dimensional
multivariate time-series is a challenging problem, particularly for systems
with hidden sources of dependence and/or homogeneity. Examples of such problems
include dynamic social networks with co-evolving nodes and edges and dynamic
student learning in online courses. Here, we address these problems through the
discovery of hierarchical latent groups. We introduce a family of Conditional
Latent Tree Models (CLTM), in which tree-structured latent variables
incorporate the unknown groups. The latent tree itself is conditioned on
observed covariates such as seasonality, historical activity, and node
attributes. We propose a statistically efficient framework for learning both
the hierarchical tree structure and the parameters of the CLTM. We demonstrate
competitive performance on multiple real-world datasets from different domains.
These include a dataset on students' attempts at answering questions in a
psychology MOOC, Twitter users participating in an emergency management
discussion and interacting with one another, and windsurfers interacting on a
beach in Southern California. In addition, our modeling framework provides
valuable and interpretable information about the hidden group structures and
their effect on the evolution of the time series.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2014 02:36:58 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Nov 2014 20:07:53 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Nov 2014 11:34:26 GMT"
},
{
"version": "v4",
"created": "Sat, 28 Feb 2015 17:05:34 GMT"
},
{
"version": "v5",
"created": "Wed, 17 Jun 2015 15:39:37 GMT"
},
{
"version": "v6",
"created": "Fri, 19 Jun 2015 11:12:04 GMT"
},
{
"version": "v7",
"created": "Sun, 5 Jun 2016 16:19:24 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Arabshahi",
"Forough",
""
],
[
"Huang",
"Furong",
""
],
[
"Anandkumar",
"Animashree",
""
],
[
"Butts",
"Carter T.",
""
],
[
"Fitshugh",
"Sean M.",
""
]
] | TITLE: Are you going to the party: depends, who else is coming? [Learning
hidden group dynamics via conditional latent tree models]
ABSTRACT: Scalable probabilistic modeling and prediction in high dimensional
multivariate time-series is a challenging problem, particularly for systems
with hidden sources of dependence and/or homogeneity. Examples of such problems
include dynamic social networks with co-evolving nodes and edges and dynamic
student learning in online courses. Here, we address these problems through the
discovery of hierarchical latent groups. We introduce a family of Conditional
Latent Tree Models (CLTM), in which tree-structured latent variables
incorporate the unknown groups. The latent tree itself is conditioned on
observed covariates such as seasonality, historical activity, and node
attributes. We propose a statistically efficient framework for learning both
the hierarchical tree structure and the parameters of the CLTM. We demonstrate
competitive performance on multiple real-world datasets from different domains.
These include a dataset on students' attempts at answering questions in a
psychology MOOC, Twitter users participating in an emergency management
discussion and interacting with one another, and windsurfers interacting on a
beach in Southern California. In addition, our modeling framework provides
valuable and interpretable information about the hidden group structures and
their effect on the evolution of the time series.
| no_new_dataset | 0.942401 |
1504.01013 | Chunhua Shen | Guosheng Lin, Chunhua Shen, Anton van dan Hengel, Ian Reid | Efficient piecewise training of deep structured models for semantic
segmentation | Appearing in IEEE Conf. Computer Vision and Pattern Recognition
(CVPR) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in semantic image segmentation have mostly been achieved by
training deep convolutional neural networks (CNNs). We show how to improve
semantic segmentation through the use of contextual information; specifically,
we explore `patch-patch' context between image regions, and `patch-background'
context. For learning from the patch-patch context, we formulate Conditional
Random Fields (CRFs) with CNN-based pairwise potential functions to capture
semantic correlations between neighboring patches. Efficient piecewise training
of the proposed deep structured model is then applied to avoid repeated
expensive CRF inference during back propagation. For capturing the
patch-background context, we show that a network design with traditional
multi-scale image input and sliding pyramid pooling is effective for improving
performance. Our experimental results set new state-of-the-art performance on a
number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC
2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an
intersection-over-union score of 78.0 on the challenging PASCAL VOC 2012
dataset.
| [
{
"version": "v1",
"created": "Sat, 4 Apr 2015 14:26:23 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Apr 2015 02:05:01 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2016 03:07:34 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Jun 2016 00:26:44 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Lin",
"Guosheng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van dan",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Efficient piecewise training of deep structured models for semantic
segmentation
ABSTRACT: Recent advances in semantic image segmentation have mostly been achieved by
training deep convolutional neural networks (CNNs). We show how to improve
semantic segmentation through the use of contextual information; specifically,
we explore `patch-patch' context between image regions, and `patch-background'
context. For learning from the patch-patch context, we formulate Conditional
Random Fields (CRFs) with CNN-based pairwise potential functions to capture
semantic correlations between neighboring patches. Efficient piecewise training
of the proposed deep structured model is then applied to avoid repeated
expensive CRF inference during back propagation. For capturing the
patch-background context, we show that a network design with traditional
multi-scale image input and sliding pyramid pooling is effective for improving
performance. Our experimental results set new state-of-the-art performance on a
number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC
2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an
intersection-over-union score of 78.0 on the challenging PASCAL VOC 2012
dataset.
| no_new_dataset | 0.951414 |
1506.03365 | Fisher Yu | Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser,
Jianxiong Xiao | LSUN: Construction of a Large-scale Image Dataset using Deep Learning
with Humans in the Loop | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While there has been remarkable progress in the performance of visual
recognition algorithms, the state-of-the-art models tend to be exceptionally
data-hungry. Large labeled training datasets, expensive and tedious to produce,
are required to optimize millions of parameters in deep network models. Lagging
behind the growth in model capacity, the available datasets are quickly
becoming outdated in terms of size and density. To circumvent this bottleneck,
we propose to amplify human effort through a partially automated labeling
scheme, leveraging deep learning with humans in the loop. Starting from a large
set of candidate images for each category, we iteratively sample a subset, ask
people to label them, classify the others with a trained model, split the set
into positives, negatives, and unlabeled based on the classification
confidence, and then iterate with the unlabeled set. To assess the
effectiveness of this cascading procedure and enable further progress in visual
recognition research, we construct a new image dataset, LSUN. It contains
around one million labeled images for each of 10 scene categories and 20 object
categories. We experiment with training popular convolutional networks and find
that they achieve substantial performance gains when trained on this dataset.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 15:38:47 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2015 19:12:05 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Jun 2016 09:51:30 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Yu",
"Fisher",
""
],
[
"Seff",
"Ari",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Song",
"Shuran",
""
],
[
"Funkhouser",
"Thomas",
""
],
[
"Xiao",
"Jianxiong",
""
]
] | TITLE: LSUN: Construction of a Large-scale Image Dataset using Deep Learning
with Humans in the Loop
ABSTRACT: While there has been remarkable progress in the performance of visual
recognition algorithms, the state-of-the-art models tend to be exceptionally
data-hungry. Large labeled training datasets, expensive and tedious to produce,
are required to optimize millions of parameters in deep network models. Lagging
behind the growth in model capacity, the available datasets are quickly
becoming outdated in terms of size and density. To circumvent this bottleneck,
we propose to amplify human effort through a partially automated labeling
scheme, leveraging deep learning with humans in the loop. Starting from a large
set of candidate images for each category, we iteratively sample a subset, ask
people to label them, classify the others with a trained model, split the set
into positives, negatives, and unlabeled based on the classification
confidence, and then iterate with the unlabeled set. To assess the
effectiveness of this cascading procedure and enable further progress in visual
recognition research, we construct a new image dataset, LSUN. It contains
around one million labeled images for each of 10 scene categories and 20 object
categories. We experiment with training popular convolutional networks and find
that they achieve substantial performance gains when trained on this dataset.
| new_dataset | 0.958538 |
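The cascade described in this abstract alternates human labeling with confident automatic labeling. The following is a schematic sketch on synthetic 2-D data with a simulated oracle; the classifier, confidence thresholds, and batch size are illustrative assumptions, not the settings used to build LSUN.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N = 20000
X = rng.normal(size=(N, 2))
oracle_y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # simulated "human"

unlabeled = np.arange(N)
X_lab, y_lab = [], []
hi, lo, batch = 0.95, 0.05, 500        # confidence thresholds, human budget

while len(unlabeled) > 0:
    # 1) Ask people (here: the oracle) to label a small random subset.
    ask = rng.choice(unlabeled, size=min(batch, len(unlabeled)), replace=False)
    X_lab.append(X[ask]); y_lab.append(oracle_y[ask])
    unlabeled = np.setdiff1d(unlabeled, ask)
    if len(unlabeled) == 0:
        break
    # 2) Train on all labels collected so far and score the rest.
    clf = LogisticRegression().fit(np.vstack(X_lab), np.concatenate(y_lab))
    p = clf.predict_proba(X[unlabeled])[:, 1]
    # 3) Auto-accept confident positives/negatives; iterate on the rest.
    conf = (p > hi) | (p < lo)
    X_lab.append(X[unlabeled[conf]])
    y_lab.append((p[conf] > 0.5).astype(int))
    unlabeled = unlabeled[~conf]
    print(f"auto-labeled {conf.sum():5d}, {len(unlabeled):5d} remaining")
```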
1601.06602 | Markus Schneider | Markus Schneider and Wolfgang Ertel and Fabio Ramos | Expected Similarity Estimation for Large-Scale Batch and Streaming
Anomaly Detection | null | null | 10.1007/s10994-016-5567-7 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel algorithm for anomaly detection on very large datasets and
data streams. The method, named EXPected Similarity Estimation (EXPoSE), is
kernel-based and able to efficiently compute the similarity between new data
points and the distribution of regular data. The estimator is formulated as an
inner product with a reproducing kernel Hilbert space embedding and makes no
assumption about the type or shape of the underlying data distribution. We show
that offline (batch) learning with EXPoSE can be done in linear time and online
(incremental) learning takes constant time per instance and model update.
Furthermore, EXPoSE can make predictions in constant time, while it requires
only constant memory. In addition, we propose different methodologies for
concept drift adaptation on evolving data streams. On several real datasets we
demonstrate that our approach can compete with state-of-the-art algorithms for
anomaly detection while being an order of magnitude faster than most other
approaches.
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 13:56:59 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2016 12:37:33 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2016 13:48:17 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Schneider",
"Markus",
""
],
[
"Ertel",
"Wolfgang",
""
],
[
"Ramos",
"Fabio",
""
]
] | TITLE: Expected Similarity Estimation for Large-Scale Batch and Streaming
Anomaly Detection
ABSTRACT: We present a novel algorithm for anomaly detection on very large datasets and
data streams. The method, named EXPected Similarity Estimation (EXPoSE), is
kernel-based and able to efficiently compute the similarity between new data
points and the distribution of regular data. The estimator is formulated as an
inner product with a reproducing kernel Hilbert space embedding and makes no
assumption about the type or shape of the underlying data distribution. We show
that offline (batch) learning with EXPoSE can be done in linear time and online
(incremental) learning takes constant time per instance and model update.
Furthermore, EXPoSE can make predictions in constant time, while it requires
only constant memory. In addition, we propose different methodologies for
concept drift adaptation on evolving data streams. On several real datasets we
demonstrate that our approach can compete with state-of-the-art algorithms for
anomaly detection while being an order of magnitude faster than most other
approaches.
| no_new_dataset | 0.94625 |
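The constant-time prediction claimed above follows from writing the anomaly score as an inner product with a fixed-dimensional kernel mean embedding. A compact sketch using random Fourier features to approximate an RBF kernel; the feature dimension and bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 512, 0.5              # input dim, feature dim, bandwidth

W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(X):
    """Random Fourier features approximating an RBF kernel (rows = points)."""
    return np.sqrt(2.0 / D) * np.cos(np.atleast_2d(X) @ W.T + b)

# Batch "training": the model is just the mean embedding of normal data.
X_train = rng.normal(size=(10000, d))
mu = phi(X_train).mean(axis=0)         # O(n) once, O(D) memory

score = lambda x: (phi(x) @ mu).item() # O(1) per query; low = anomalous
print(score(np.zeros(d)), score(8.0 * np.ones(d)))

# Streaming variant: the embedding is a running mean, so each model update
# costs constant time, matching the online claims above.
mu_s, n = np.zeros(D), 0
for x in X_train[:100]:
    n += 1
    mu_s += (phi(x)[0] - mu_s) / n
```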
1603.04525 | Chunhua Shen | Qichang Hu, Peng Wang, Chunhua Shen, Anton van den Hengel, Fatih
Porikli | Pushing the Limits of Deep CNNs for Pedestrian Detection | Fixed some typos | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to other applications in computer vision, convolutional neural
networks have under-performed on pedestrian detection. A breakthrough was made
very recently by using sophisticated deep CNN models combined with a number of
hand-crafted features or an explicit occlusion-handling mechanism. In this work,
we show that by re-using the convolutional feature maps (CFMs) of a deep
convolutional neural network (DCNN) model as image features to train an
ensemble of boosted decision models, we are able to achieve the best reported
accuracy without using specially designed learning algorithms. We empirically
identify and disclose important implementation details. We also show that pixel
labelling may be simply combined with a detector to boost the detection
performance. By adding complementary hand-crafted features such as optical
flow, the DCNN based detector can be further improved. We set a new record on
the Caltech pedestrian dataset, lowering the log-average miss rate from
$11.7\%$ to $8.9\%$, a relative improvement of $24\%$. We also achieve a
result comparable to state-of-the-art approaches on the KITTI dataset.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 01:55:14 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2016 06:36:15 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Hu",
"Qichang",
""
],
[
"Wang",
"Peng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Porikli",
"Fatih",
""
]
] | TITLE: Pushing the Limits of Deep CNNs for Pedestrian Detection
ABSTRACT: Compared to other applications in computer vision, convolutional neural
networks have under-performed on pedestrian detection. A breakthrough was made
very recently by using sophisticated deep CNN models combined with a number of
hand-crafted features or an explicit occlusion-handling mechanism. In this work,
we show that by re-using the convolutional feature maps (CFMs) of a deep
convolutional neural network (DCNN) model as image features to train an
ensemble of boosted decision models, we are able to achieve the best reported
accuracy without using specially designed learning algorithms. We empirically
identify and disclose important implementation details. We also show that pixel
labelling may be simply combined with a detector to boost the detection
performance. By adding complementary hand-crafted features such as optical
flow, the DCNN based detector can be further improved. We set a new record on
the Caltech pedestrian dataset, lowering the log-average miss rate from
$11.7\%$ to $8.9\%$, a relative improvement of $24\%$. We also achieve a
result comparable to state-of-the-art approaches on the KITTI dataset.
| no_new_dataset | 0.946597 |
1604.00727 | David Golub | David Golub, Xiaodong He | Character-Level Question Answering with Attention | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that a character-level encoder-decoder framework can be successfully
applied to question answering with a structured knowledge base. We use our
model for single-relation question answering and demonstrate the effectiveness
of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we
improve state-of-the-art accuracy from 63.9% to 70.9%, without use of
ensembles. Importantly, our character-level model has 16x fewer parameters than
an equivalent word-level model, can be learned with significantly less data
compared to previous work, which relies on data augmentation, and is robust to
new entities in testing.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 02:43:23 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 23:09:31 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2016 21:12:47 GMT"
},
{
"version": "v4",
"created": "Sun, 5 Jun 2016 02:02:10 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Golub",
"David",
""
],
[
"He",
"Xiaodong",
""
]
] | TITLE: Character-Level Question Answering with Attention
ABSTRACT: We show that a character-level encoder-decoder framework can be successfully
applied to question answering with a structured knowledge base. We use our
model for single-relation question answering and demonstrate the effectiveness
of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we
improve state-of-the-art accuracy from 63.9% to 70.9%, without use of
ensembles. Importantly, our character-level model has 16x fewer parameters than
an equivalent word-level model, can be learned with significantly less data
compared to previous work, which relies on data augmentation, and is robust to
new entities in testing.
| no_new_dataset | 0.95222 |
1605.07866 | Martin Rajchl PhD | Martin Rajchl, Matthew C.H. Lee, Ozan Oktay, Konstantinos Kamnitsas,
Jonathan Passerat-Palmbach, Wenjia Bai, Mellisa Damodaram, Mary A.
Rutherford, Joseph V. Hajnal, Bernhard Kainz, Daniel Rueckert | DeepCut: Object Segmentation from Bounding Box Annotations using
Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose DeepCut, a method to obtain pixelwise object
segmentations given an image dataset labelled with bounding box annotations. It
extends the approach of the well-known GrabCut method to include machine
learning by training a neural network classifier from bounding box annotations.
We formulate the problem as an energy minimisation problem over a
densely-connected conditional random field and iteratively update the training
targets to obtain pixelwise object segmentations. Additionally, we propose
variants of the DeepCut method and compare those to a naive approach to CNN
training under weak supervision. We test its applicability to solve brain and
lung segmentation problems on a challenging fetal magnetic resonance dataset
and obtain encouraging results in terms of accuracy.
| [
{
"version": "v1",
"created": "Wed, 25 May 2016 13:03:48 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2016 22:00:49 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Rajchl",
"Martin",
""
],
[
"Lee",
"Matthew C. H.",
""
],
[
"Oktay",
"Ozan",
""
],
[
"Kamnitsas",
"Konstantinos",
""
],
[
"Passerat-Palmbach",
"Jonathan",
""
],
[
"Bai",
"Wenjia",
""
],
[
"Damodaram",
"Mellisa",
""
],
[
"Rutherford",
"Mary A.",
""
],
[
"Hajnal",
"Joseph V.",
""
],
[
"Kainz",
"Bernhard",
""
],
[
"Rueckert",
"Daniel",
""
]
] | TITLE: DeepCut: Object Segmentation from Bounding Box Annotations using
Convolutional Neural Networks
ABSTRACT: In this paper, we propose DeepCut, a method to obtain pixelwise object
segmentations given an image dataset labelled with bounding box annotations. It
extends the approach of the well-known GrabCut method to include machine
learning by training a neural network classifier from bounding box annotations.
We formulate the problem as an energy minimisation problem over a
densely-connected conditional random field and iteratively update the training
targets to obtain pixelwise object segmentations. Additionally, we propose
variants of the DeepCut method and compare those to a naive approach to CNN
training under weak supervision. We test its applicability to solve brain and
lung segmentation problems on a challenging fetal magnetic resonance dataset
and obtain encouraging results in terms of accuracy.
| no_new_dataset | 0.953622 |
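The iterative target-update loop at the core of DeepCut can be sketched on toy data. In the simplified version below, the dense-CRF refinement step is replaced by a plain threshold, so this is a structural illustration of the loop, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
H, W = 64, 64
feats = rng.normal(size=(H, W, 3))
feats[16:48, 16:48] += 2.0             # a bright "object" inside the image

# Bounding-box annotation: everything outside the box is certain background.
box = np.zeros((H, W), dtype=bool); box[8:56, 8:56] = True
targets = box.copy()                   # init: the whole box is foreground

X = feats.reshape(-1, 3)
for it in range(5):
    y = targets.reshape(-1).astype(int)
    clf = LogisticRegression(max_iter=200).fit(X, y)
    p = clf.predict_proba(X)[:, 1].reshape(H, W)
    # Target update; DeepCut refines p with a dense CRF here, which this
    # sketch replaces with a plain threshold.
    targets = (p > 0.5) & box
    print(f"iter {it}: {targets.sum()} foreground pixels")
```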
1605.08512 | Milad Mohammadi | Milad Mohammadi, Subhasis Das | SNN: Stacked Neural Networks | 8 pages | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been shown that transfer learning provides an easy way to achieve
state-of-the-art accuracies on several vision tasks by training a simple
classifier on top of features obtained from pre-trained neural networks. The
goal of this work is to generate better features for transfer learning from
multiple publicly available pre-trained neural networks. To this end, we
propose a novel architecture called Stacked Neural Networks which leverages the
fast training time of transfer learning while simultaneously being much more
accurate. We show that using a stacked NN architecture can result in up to 8%
improvements in accuracy over state-of-the-art techniques using only one
pre-trained network for transfer learning. A second aim of this work is to make
network fine-tuning retain the generalizability of the base network to unseen
tasks. To this end, we propose a new technique called "joint fine-tuning" that
is able to give accuracies comparable to fine-tuning the same network
individually over two datasets. We also show that a jointly fine-tuned network
generalizes better to unseen tasks when compared to a network fine-tuned over a
single task.
| [
{
"version": "v1",
"created": "Fri, 27 May 2016 06:02:48 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Mohammadi",
"Milad",
""
],
[
"Das",
"Subhasis",
""
]
] | TITLE: SNN: Stacked Neural Networks
ABSTRACT: It has been shown that transfer learning provides an easy way to achieve
state-of-the-art accuracies on several vision tasks by training a simple
classifier on top of features obtained from pre-trained neural networks. The
goal of this work is to generate better features for transfer learning from
multiple publicly available pre-trained neural networks. To this end, we
propose a novel architecture called Stacked Neural Networks which leverages the
fast training time of transfer learning while simultaneously being much more
accurate. We show that using a stacked NN architecture can result in up to 8%
improvements in accuracy over state-of-the-art techniques using only one
pre-trained network for transfer learning. A second aim of this work is to make
network fine-tuning retain the generalizability of the base network to unseen
tasks. To this end, we propose a new technique called "joint fine-tuning" that
is able to give accuracies comparable to fine-tuning the same network
individually over two datasets. We also show that a jointly fine-tuned network
generalizes better to unseen tasks when compared to a network fine-tuned over a
single task.
| no_new_dataset | 0.949763 |
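The stacking idea above amounts to concatenating frozen features from several pre-trained networks before fitting a simple classifier. In the sketch below, two random nonlinear projections stand in for real pre-trained CNN features; any gain of "stacked" over either branch alone is only meant to illustrate the mechanics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 64
X_img = rng.normal(size=(n, d))
y = (X_img[:, :4].sum(axis=1) > 0).astype(int)

# Two random nonlinear projections as stand-ins for frozen pre-trained
# networks with different inductive biases.
P1, P2 = rng.normal(size=(d, 128)), rng.normal(size=(d, 128))
feat1, feat2 = np.tanh(X_img @ P1), np.tanh(X_img @ P2)

stacked = np.hstack([feat1, feat2])    # the "stacked" representation
for name, F in [("net1 only", feat1), ("net2 only", feat2),
                ("stacked  ", stacked)]:
    Xtr, Xte, ytr, yte = train_test_split(F, y, random_state=0)
    acc = LogisticRegression(max_iter=500).fit(Xtr, ytr).score(Xte, yte)
    print(name, round(acc, 3))
```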
1605.08664 | Gabor Gyorgy Gulyas PhD | Gabor Gyorgy Gulyas, Gergely Acs, Claude Castelluccia | Near-Optimal Fingerprinting with Constraints | null | null | 10.1515/popets-2016-0051 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several recent studies have demonstrated that people show large behavioural
uniqueness. This has serious privacy implications as most individuals become
increasingly re-identifiable in large datasets or can be tracked while they are
browsing the web using only a couple of their attributes, called their
fingerprints. Often, the success of these attacks depends on explicit
constraints on the number of attributes learnable about individuals, i.e., the
size of their fingerprints. These constraints can be budgetary as well as
technical constraints imposed by the data holder. For instance, Apple restricts
the number of applications that can be called by another application on iOS in
order to mitigate the potential privacy threats of leaking the list of
installed applications on a device. In this work, we address the problem of
identifying the attributes (e.g., smartphone applications) that can serve as a
fingerprint of users given constraints on the size of the fingerprint. We give
the best fingerprinting algorithms in general, and evaluate their effectiveness
on several real-world datasets. Our results show that current privacy guards
limiting the number of attributes that can be queried about individuals are
insufficient to mitigate their potential privacy risks in many practical cases.
| [
{
"version": "v1",
"created": "Fri, 27 May 2016 14:31:26 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 21:07:43 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Gulyas",
"Gabor Gyorgy",
""
],
[
"Acs",
"Gergely",
""
],
[
"Castelluccia",
"Claude",
""
]
] | TITLE: Near-Optimal Fingerprinting with Constraints
ABSTRACT: Several recent studies have demonstrated that people show large behavioural
uniqueness. This has serious privacy implications as most individuals become
increasingly re-identifiable in large datasets or can be tracked while they are
browsing the web using only a couple of their attributes, called their
fingerprints. Often, the success of these attacks depends on explicit
constraints on the number of attributes learnable about individuals, i.e., the
size of their fingerprints. These constraints can be budgetary as well as
technical constraints imposed by the data holder. For instance, Apple restricts
the number of applications that can be called by another application on iOS in
order to mitigate the potential privacy threats of leaking the list of
installed applications on a device. In this work, we address the problem of
identifying the attributes (e.g., smartphone applications) that can serve as a
fingerprint of users given constraints on the size of the fingerprint. We give
the best fingerprinting algorithms in general, and evaluate their effectiveness
on several real-world datasets. Our results show that current privacy guards
limiting the number of attributes that can be queried about individuals are
insufficient to mitigate their potential privacy risks in many practical cases.
| no_new_dataset | 0.939582 |
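The core computational problem described above is choosing at most k attributes whose combination makes as many users as possible unique. A greedy heuristic on a synthetic attribute matrix gives the flavor; the paper's near-optimal algorithms are more refined than this sketch.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_users, n_attrs, k = 500, 40, 5
# Binary attribute matrix, e.g. "user i has application j installed".
A = (rng.random((n_users, n_attrs)) < rng.random(n_attrs)).astype(int)

def n_unique(cols):
    """How many users have a unique fingerprint on the chosen columns."""
    rows = [tuple(r) for r in A[:, cols]]
    counts = Counter(rows)
    return sum(1 for r in rows if counts[r] == 1)

chosen = []
for _ in range(k):                     # greedily add the most revealing attr
    best = max((j for j in range(n_attrs) if j not in chosen),
               key=lambda j: n_unique(chosen + [j]))
    chosen.append(best)
    print(f"attrs={chosen}  uniquely identified={n_unique(chosen)}/{n_users}")
```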
1605.09673 | Xu Jia | Bert De Brabandere, Xu Jia, Tinne Tuytelaars, Luc Van Gool | Dynamic Filter Networks | submitted to NIPS16; X. Jia and B. De Brabandere contributed equally
to this work and are listed in alphabetical order | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a traditional convolutional layer, the learned filters stay fixed after
training. In contrast, we introduce a new framework, the Dynamic Filter
Network, where filters are generated dynamically conditioned on an input. We
show that this architecture is a powerful one, with increased flexibility
thanks to its adaptive nature, yet without an excessive increase in the number
of model parameters. A wide variety of filtering operations can be learned this
way, including local spatial transformations, but also others like selective
(de)blurring or adaptive feature extraction. Moreover, multiple such layers can
be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness
of the dynamic filter network on the tasks of video and stereo prediction, and
reach state-of-the-art performance on the moving MNIST dataset with a much
smaller model. By visualizing the learned filters, we illustrate that the
network has picked up flow information by only looking at unlabelled training
data. This suggests that the network can be used to pretrain networks for
various supervised tasks in an unsupervised way, like optical flow and depth
estimation.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 15:29:36 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2016 15:39:10 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"De Brabandere",
"Bert",
""
],
[
"Jia",
"Xu",
""
],
[
"Tuytelaars",
"Tinne",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Dynamic Filter Networks
ABSTRACT: In a traditional convolutional layer, the learned filters stay fixed after
training. In contrast, we introduce a new framework, the Dynamic Filter
Network, where filters are generated dynamically conditioned on an input. We
show that this architecture is a powerful one, with increased flexibility
thanks to its adaptive nature, yet without an excessive increase in the number
of model parameters. A wide variety of filtering operations can be learned this
way, including local spatial transformations, but also others like selective
(de)blurring or adaptive feature extraction. Moreover, multiple such layers can
be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness
of the dynamic filter network on the tasks of video and stereo prediction, and
reach state-of-the-art performance on the moving MNIST dataset with a much
smaller model. By visualizing the learned filters, we illustrate that the
network has picked up flow information by only looking at unlabelled training
data. This suggests that the network can be used to pretrain networks for
various supervised tasks in an unsupervised way, like optical flow and depth
estimation.
| no_new_dataset | 0.950686 |
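The dynamic filtering idea is straightforward to sketch in PyTorch: a small filter-generating network emits one filter per input sample, and a grouped convolution applies each sample's filter to that sample. All sizes below are illustrative, and the softmax normalization of the generated filter is an assumption made for stability, not necessarily the paper's choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterLayer(nn.Module):
    def __init__(self, k=5):
        super().__init__()
        self.k = k
        # Filter-generating network: input image -> k*k filter parameters.
        self.gen = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, k * k),
        )

    def forward(self, x):                        # x: (B, 1, H, W)
        B, _, H, W = x.shape
        filt = self.gen(x)                       # one filter per sample
        filt = torch.softmax(filt, dim=1)        # normalize (an assumption)
        filt = filt.view(B, 1, self.k, self.k)
        # Grouped-convolution trick: fold the batch into channels so every
        # sample is convolved with its own dynamically generated filter.
        out = F.conv2d(x.view(1, B, H, W), filt, padding=self.k // 2, groups=B)
        return out.view(B, 1, H, W)

layer = DynamicFilterLayer()
print(layer(torch.randn(4, 1, 32, 32)).shape)    # torch.Size([4, 1, 32, 32])
```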
1606.01368 | Christian Walder Dr | Christian Walder | Modelling Symbolic Music: Beyond the Piano Roll | null | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of probabilistically modelling
symbolic music data. We introduce a representation which reduces polyphonic
music to a univariate categorical sequence. In this way, we are able to apply
state of the art natural language processing techniques, namely the long
short-term memory sequence model. The representation we employ permits
arbitrary rhythmic structure, which we assume to be given. We show that our
model is effective on four out of four piano-roll-based benchmark datasets. We
further improve our model by augmenting our training data set with
transpositions of the original pieces through all musical keys, thereby
convincingly advancing the state of the art on these benchmark problems. We
also fit models to music which is unconstrained in its rhythmic structure,
discuss the properties of this model, and provide musical samples which are
more sophisticated than previously possible with this class of recurrent neural
network sequence models. We also provide our newly preprocessed data set of
non-piano-roll music data.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2016 10:51:24 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Walder",
"Christian",
""
]
] | TITLE: Modelling Symbolic Music: Beyond the Piano Roll
ABSTRACT: In this paper, we consider the problem of probabilistically modelling
symbolic music data. We introduce a representation which reduces polyphonic
music to a univariate categorical sequence. In this way, we are able to apply
state of the art natural language processing techniques, namely the long
short-term memory sequence model. The representation we employ permits
arbitrary rhythmic structure, which we assume to be given. We show that our
model is effective on four out of four piano-roll-based benchmark datasets. We
further improve our model by augmenting our training data set with
transpositions of the original pieces through all musical keys, thereby
convincingly advancing the state of the art on these benchmark problems. We
also fit models to music which is unconstrained in its rhythmic structure,
discuss the properties of this model, and provide musical samples which are
more sophisticated than previously possible with this class of recurrent neural
network sequence models. We also provide our newly preprocessed data set of
non-piano-roll music data.
| no_new_dataset | 0.866302 |
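The transposition augmentation mentioned above multiplies the training set by shifting every piece through all twelve semitone offsets. A minimal sketch, assuming pitches are MIDI numbers and rests use a sentinel value:

```python
REST = -1                                # sentinel for rests (an assumption)

def transpose(piece, shift, low=21, high=108):   # MIDI piano range
    """Shift all pitches; drop the copy if any note leaves the instrument."""
    out = [p if p == REST else p + shift for p in piece]
    pitches = [p for p in out if p != REST]
    if pitches and (min(pitches) < low or max(pitches) > high):
        return None
    return out

def augment(dataset):
    augmented = []
    for piece in dataset:
        for shift in range(-6, 6):       # 12 offsets = all musical keys
            t = transpose(piece, shift)
            if t is not None:
                augmented.append(t)
    return augmented

data = [[60, 62, 64, REST, 65, 67]]      # a C-major fragment
print(len(augment(data)), "training sequences from 1 original piece")
```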
1606.01535 | Kevin Jarrett | Kevin Jarrett, Koray Kvukcuoglu, Karol Gregor and Yann LeCun | What is the Best Feature Learning Procedure in Hierarchical Recognition
Architectures? | 17 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | (This paper was written in November 2011 and never published. It is posted on
arXiv.org in its original form in June 2016). Many recent object recognition
systems have proposed using a two phase training procedure to learn sparse
convolutional feature hierarchies: unsupervised pre-training followed by
supervised fine-tuning. Recent results suggest that these methods provide
little improvement over purely supervised systems when the appropriate
nonlinearities are included. This paper presents an empirical exploration of
the space of learning procedures for sparse convolutional networks to assess
which method produces the best performance. In our study, we introduce an
augmentation of the Predictive Sparse Decomposition method that includes a
discriminative term (DPSD). We also introduce a new single-phase supervised
learning procedure that places an L1 penalty on the output state of each layer
of the network. This forces the network to produce sparse codes without the
expensive pre-training phase. Using DPSD with a new, complex predictor that
incorporates lateral inhibition, combined with multi-scale feature pooling, and
supervised refinement, the system achieves a 70.6\% recognition rate on
Caltech-101. With the addition of convolutional training, a 77\% recognition
rate was obtained on the CIFAR-10 dataset.
| [
{
"version": "v1",
"created": "Sun, 5 Jun 2016 17:31:39 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Jarrett",
"Kevin",
""
],
[
"Kvukcuoglu",
"Koray",
""
],
[
"Gregor",
"Karol",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: What is the Best Feature Learning Procedure in Hierarchical Recognition
Architectures?
ABSTRACT: (This paper was written in November 2011 and never published. It is posted on
arXiv.org in its original form in June 2016). Many recent object recognition
systems have proposed using a two-phase training procedure to learn sparse
convolutional feature hierarchies: unsupervised pre-training followed by
supervised fine-tuning. Recent results suggest that these methods provide
little improvement over purely supervised systems when the appropriate
nonlinearities are included. This paper presents an empirical exploration of
the space of learning procedures for sparse convolutional networks to assess
which method produces the best performance. In our study, we introduce an
augmentation of the Predictive Sparse Decomposition method that includes a
discriminative term (DPSD). We also introduce a new single-phase supervised
learning procedure that places an L1 penalty on the output state of each layer
of the network. This forces the network to produce sparse codes without the
expensive pre-training phase. Using DPSD with a new, complex predictor that
incorporates lateral inhibition, combined with multi-scale feature pooling, and
supervised refinement, the system achieves a 70.6\% recognition rate on
Caltech-101. With the addition of convolutional training, a 77\% recognition
rate was obtained on the CIFAR-10 dataset.
| no_new_dataset | 0.948155 |
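The single-phase procedure described above simply adds an L1 penalty on each layer's output state to the supervised loss. A short PyTorch sketch with an illustrative architecture and penalty weight:

```python
import torch
import torch.nn as nn

class SparseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
            nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
        ])
        self.head = nn.Linear(128, 10)

    def forward(self, x):
        states = []
        for layer in self.layers:
            x = layer(x)
            states.append(x)             # keep each layer's output state
        return self.head(x), states

model = SparseNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 1e-4                               # sparsity penalty weight

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits, states = model(x)
loss = (nn.functional.cross_entropy(logits, y)
        + lam * sum(s.abs().mean() for s in states))  # L1 on output states
loss.backward()
opt.step()
print(float(loss))
```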
1606.01601 | Jiaping Zhao | Jiaping Zhao and Laurent Itti | shapeDTW: shape Dynamic Time Warping | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with
possible local non-linear distortions, and has been widely applied to audio,
video and graphics data alignments. DTW is essentially a point-to-point
matching method under some boundary and temporal consistency constraints.
Although DTW obtains a globally optimal solution, it does not necessarily achieve
locally sensible matchings. Concretely, two temporal points with entirely
dissimilar local structures may be matched by DTW. To address this problem, we
propose an improved alignment algorithm, named shape Dynamic Time Warping
(shapeDTW), which enhances DTW by taking point-wise local structural
information into consideration. shapeDTW is inherently a DTW algorithm, but
additionally attempts to pair locally similar structures and to avoid matching
points with distinct neighborhood structures. We apply shapeDTW to align audio
signal pairs having ground-truth alignments, as well as artificially simulated
pairs of aligned sequences, and obtain quantitatively much lower alignment
errors than DTW and its two variants. When shapeDTW is used as a distance
measure in a nearest neighbor classifier (NN-shapeDTW) to classify time series,
it beats DTW on 64 out of 84 UCR time series datasets, with significantly
improved classification accuracies. By using a properly designed local
structure descriptor, shapeDTW improves accuracies by more than 10% on 18
datasets. To the best of our knowledge, shapeDTW is the first distance measure
under the nearest neighbor classifier scheme to significantly outperform DTW,
which had been widely recognized as the best distance measure to date. Our code
is publicly accessible at: https://github.com/jiapingz/shapeDTW.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2016 02:38:01 GMT"
}
] | 2016-06-07T00:00:00 | [
[
"Zhao",
"Jiaping",
""
],
[
"Itti",
"Laurent",
""
]
] | TITLE: shapeDTW: shape Dynamic Time Warping
ABSTRACT: Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with
possible local non-linear distortions, and has been widely applied to audio,
video and graphics data alignments. DTW is essentially a point-to-point
matching method under some boundary and temporal consistency constraints.
Although DTW obtains a globally optimal solution, it does not necessarily achieve
locally sensible matchings. Concretely, two temporal points with entirely
dissimilar local structures may be matched by DTW. To address this problem, we
propose an improved alignment algorithm, named shape Dynamic Time Warping
(shapeDTW), which enhances DTW by taking point-wise local structural
information into consideration. shapeDTW is inherently a DTW algorithm, but
additionally attempts to pair locally similar structures and to avoid matching
points with distinct neighborhood structures. We apply shapeDTW to align audio
signal pairs having ground-truth alignments, as well as artificially simulated
pairs of aligned sequences, and obtain quantitatively much lower alignment
errors than DTW and its two variants. When shapeDTW is used as a distance
measure in a nearest neighbor classifier (NN-shapeDTW) to classify time series,
it beats DTW on 64 out of 84 UCR time series datasets, with significantly
improved classification accuracies. By using a properly designed local
structure descriptor, shapeDTW improves accuracies by more than 10% on 18
datasets. To the best of our knowledge, shapeDTW is the first distance measure
under the nearest neighbor classifier scheme to significantly outperform DTW,
which had been widely recognized as the best distance measure to date. Our code
is publicly accessible at: https://github.com/jiapingz/shapeDTW.
| no_new_dataset | 0.948346 |
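shapeDTW can be sketched directly from the description above: attach a local descriptor to every time point and run ordinary DTW on descriptor distances. The sketch below uses the simplest descriptor, the raw subsequence around each point; the window size and test signals are illustrative.

```python
import numpy as np

def descriptors(x, w=5):
    """Raw-subsequence descriptor: a length-(2w+1) window around each point."""
    pad = np.pad(x, w, mode="edge")
    return np.stack([pad[i:i + 2 * w + 1] for i in range(len(x))])

def dtw(cost):
    """Classic DTW on a precomputed pairwise cost matrix."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1],
                                               D[i - 1, j - 1])
    return D[n, m]

def shape_dtw(x, y, w=5):
    dx, dy = descriptors(x, w), descriptors(y, w)
    cost = np.linalg.norm(dx[:, None, :] - dy[None, :, :], axis=2)
    return dtw(cost)

t = np.linspace(0, 2 * np.pi, 100)
a, b = np.sin(t), np.sin(t + 0.3)
print("DTW:", dtw(np.abs(a[:, None] - b[None, :])), "shapeDTW:", shape_dtw(a, b))
```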
1402.0030 | Andriy Mnih | Andriy Mnih, Karol Gregor | Neural Variational Inference and Learning in Belief Networks | null | Proceedings of the 31st International Conference on Machine
Learning (ICML), JMLR: W&CP volume 32, 2014 pgs 1791-1799 | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Highly expressive directed latent variable models, such as sigmoid belief
networks, are difficult to train on large datasets because exact inference in
them is intractable and none of the approximate inference methods that have
been applied to them scale well. We propose a fast non-iterative approximate
inference method that uses a feedforward network to implement efficient exact
sampling from the variational posterior. The model and this inference network
are trained jointly by maximizing a variational lower bound on the
log-likelihood. Although the naive estimator of the inference model gradient is
too high-variance to be useful, we make it practical by applying several
straightforward model-independent variance reduction techniques. Applying our
approach to training sigmoid belief networks and deep autoregressive networks,
we show that it outperforms the wake-sleep algorithm on MNIST and achieves
state-of-the-art results on the Reuters RCV1 document dataset.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2014 23:33:21 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jun 2014 17:12:03 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Mnih",
"Andriy",
""
],
[
"Gregor",
"Karol",
""
]
] | TITLE: Neural Variational Inference and Learning in Belief Networks
ABSTRACT: Highly expressive directed latent variable models, such as sigmoid belief
networks, are difficult to train on large datasets because exact inference in
them is intractable and none of the approximate inference methods that have
been applied to them scale well. We propose a fast non-iterative approximate
inference method that uses a feedforward network to implement efficient exact
sampling from the variational posterior. The model and this inference network
are trained jointly by maximizing a variational lower bound on the
log-likelihood. Although the naive estimator of the inference model gradient is
too high-variance to be useful, we make it practical by applying several
straightforward model-independent variance reduction techniques. Applying our
approach to training sigmoid belief networks and deep autoregressive networks,
we show that it outperforms the wake-sleep algorithm on MNIST and achieves
state-of-the-art results on the Reuters RCV1 document dataset.
| no_new_dataset | 0.946941 |
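The "naive estimator" and its variance reduction can be illustrated on a one-latent-variable toy model. The sketch below implements the score-function gradient for the inference network and centers the learning signal with a running baseline, which is only one of the variance-reduction techniques the paper combines.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Fixed generative model: z ~ Bern(0.5), x | z ~ N(2z, 1).
def log_joint(x, z):
    return np.log(0.5) - 0.5 * (x - 2.0 * z) ** 2 - 0.5 * np.log(2 * np.pi)

phi, baseline = 0.0, 0.0                 # inference weight, running baseline
lr, alpha = 0.05, 0.9
for step in range(2000):
    x = 2.0 * (rng.random() < 0.5) + rng.normal()    # simulated observation
    q = sigmoid(phi * x)                 # inference network: q(z=1 | x)
    z = float(rng.random() < q)
    log_q = np.log(q if z else 1.0 - q)
    signal = log_joint(x, z) - log_q     # per-sample variational bound term
    baseline = alpha * baseline + (1 - alpha) * signal  # variance reduction
    grad_log_q = (z - q) * x             # d log q(z|x) / d phi
    phi += lr * (signal - baseline) * grad_log_q        # centered estimator
print("learned inference weight:", round(phi, 3))
```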
1405.1297 | Dong Huang | Dong Huang and Jian-Huang Lai and Chang-Dong Wang | Combining Multiple Clusterings via Crowd Agreement Estimation and
Multi-Granularity Link Analysis | The MATLAB source code of this work is available at:
https://www.researchgate.net/publication/281970316 | Neurocomputing, 2015, vol.170, pp.240-250 | 10.1016/j.neucom.2014.05.094 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The clustering ensemble technique aims to combine multiple clusterings into a
probably better and more robust clustering and has been receiving increasing
attention in recent years. There are two main limitations in the
existing clustering ensemble approaches. Firstly, many approaches lack the
ability to weight the base clusterings without access to the original data and
can be affected significantly by low-quality, or even ill, clusterings.
Secondly, they generally focus on the instance level or cluster level in the
ensemble system and fail to integrate multi-granularity cues into a unified
model. To address these two limitations, this paper proposes to solve the
clustering ensemble problem via crowd agreement estimation and
multi-granularity link analysis. We present the normalized crowd agreement
index (NCAI) to evaluate the quality of base clusterings in an unsupervised
manner and thus weight the base clusterings in accordance with their clustering
validity. To explore the relationship between clusters, the source-aware
connected triple (SACT) similarity is introduced with regard to their common
neighbors and the source reliability. Based on NCAI and multi-granularity
information collected among base clusterings, clusters, and data instances, we
further propose two novel consensus functions, termed weighted evidence
accumulation clustering (WEAC) and graph partitioning with multi-granularity
link analysis (GP-MGLA) respectively. The experiments are conducted on eight
real-world datasets. The experimental results demonstrate the effectiveness and
robustness of the proposed methods.
| [
{
"version": "v1",
"created": "Tue, 6 May 2014 15:05:02 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 16:10:19 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Huang",
"Dong",
""
],
[
"Lai",
"Jian-Huang",
""
],
[
"Wang",
"Chang-Dong",
""
]
] | TITLE: Combining Multiple Clusterings via Crowd Agreement Estimation and
Multi-Granularity Link Analysis
ABSTRACT: The clustering ensemble technique aims to combine multiple clusterings into a
probably better and more robust clustering and has been receiving increasing
attention in recent years. There are two main limitations in the
existing clustering ensemble approaches. Firstly, many approaches lack the
ability to weight the base clusterings without access to the original data and
can be affected significantly by low-quality, or even ill, clusterings.
Secondly, they generally focus on the instance level or cluster level in the
ensemble system and fail to integrate multi-granularity cues into a unified
model. To address these two limitations, this paper proposes to solve the
clustering ensemble problem via crowd agreement estimation and
multi-granularity link analysis. We present the normalized crowd agreement
index (NCAI) to evaluate the quality of base clusterings in an unsupervised
manner and thus weight the base clusterings in accordance with their clustering
validity. To explore the relationship between clusters, the source-aware
connected triple (SACT) similarity is introduced with regard to their common
neighbors and the source reliability. Based on NCAI and multi-granularity
information collected among base clusterings, clusters, and data instances, we
further propose two novel consensus functions, termed weighted evidence
accumulation clustering (WEAC) and graph partitioning with multi-granularity
link analysis (GP-MGLA) respectively. The experiments are conducted on eight
real-world datasets. The experimental results demonstrate the effectiveness and
robustness of the proposed methods.
| no_new_dataset | 0.951729 |
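WEAC-style consensus can be sketched as a co-association (evidence accumulation) matrix in which each base clustering votes in proportion to a quality weight, followed by a hierarchical cut. The weights below are mock values standing in for NCAI, and the final step is a generic average-linkage cut rather than the paper's exact consensus functions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n = 30
labels_true = np.repeat([0, 1, 2], n // 3)

def noisy(p_flip):
    """A base clustering: the true partition with some labels flipped."""
    lab = labels_true.copy()
    flip = rng.random(n) < p_flip
    lab[flip] = rng.integers(0, 3, flip.sum())
    return lab

ensemble = [noisy(0.1), noisy(0.15), noisy(0.9)]   # last one is low quality
weights = np.array([0.45, 0.45, 0.10])             # mock NCAI-style weights

co = np.zeros((n, n))
for lab, w in zip(ensemble, weights):
    co += w * (lab[:, None] == lab[None, :])       # weighted co-association

dist = 1.0 - co / weights.sum()
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
print(fcluster(Z, t=3, criterion="maxclust"))
```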
1504.05843 | Hao Yang Mr | Hao Yang, Joey Tianyi Zhou, Yu Zhang, Bin-Bin Gao, Jianxin Wu, Jianfei
Cai | Exploit Bounding Box Annotations for Multi-label Object Recognition | Accepted in CVPR 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks (CNNs) have shown great performance as general
feature representations for object recognition applications. However, for
multi-label images that contain multiple objects from different categories,
scales and locations, global CNN features are not optimal. In this paper, we
incorporate local information to enhance the feature discriminative power. In
particular, we first extract object proposals from each image. With each image
treated as a bag and object proposals extracted from it treated as instances,
we transform the multi-label recognition problem into a multi-class
multi-instance learning problem. Then, in addition to extracting the typical
CNN feature representation from each proposal, we propose to make use of
ground-truth bounding box annotations (strong labels) to add another level of
local information by using nearest-neighbor relationships of local regions to
form a multi-view pipeline. The proposed multi-view multi-instance framework
utilizes both weak and strong labels effectively, and more importantly it has
the generalization ability to even boost the performance on unseen categories
using partial strong labels from other categories. Our framework is extensively
compared with state-of-the-art hand-crafted feature based methods and CNN based
methods on two multi-label benchmark datasets. The experimental results
validate the discriminative power and the generalization ability of the
proposed framework. With strong labels, our framework is able to achieve
state-of-the-art results in both datasets.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 15:01:29 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 09:44:35 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Yang",
"Hao",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Zhang",
"Yu",
""
],
[
"Gao",
"Bin-Bin",
""
],
[
"Wu",
"Jianxin",
""
],
[
"Cai",
"Jianfei",
""
]
] | TITLE: Exploit Bounding Box Annotations for Multi-label Object Recognition
ABSTRACT: Convolutional neural networks (CNNs) have shown great performance as general
feature representations for object recognition applications. However, for
multi-label images that contain multiple objects from different categories,
scales and locations, global CNN features are not optimal. In this paper, we
incorporate local information to enhance the feature discriminative power. In
particular, we first extract object proposals from each image. With each image
treated as a bag and object proposals extracted from it treated as instances,
we transform the multi-label recognition problem into a multi-class
multi-instance learning problem. Then, in addition to extracting the typical
CNN feature representation from each proposal, we propose to make use of
ground-truth bounding box annotations (strong labels) to add another level of
local information by using nearest-neighbor relationships of local regions to
form a multi-view pipeline. The proposed multi-view multi-instance framework
utilizes both weak and strong labels effectively, and more importantly it has
the generalization ability to even boost the performance on unseen categories
using partial strong labels from other categories. Our framework is extensively
compared with state-of-the-art hand-crafted feature based methods and CNN based
methods on two multi-label benchmark datasets. The experimental results
validate the discriminative power and the generalization ability of the
proposed framework. With strong labels, our framework is able to achieve
state-of-the-art results on both datasets.
| no_new_dataset | 0.948822 |
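A minimal Python sketch of the bag-of-proposals reduction described in the abstract above, assuming proposal features are already extracted; the random features, weight scale, and class count are illustrative stand-ins, and the paper's strong-label (bounding-box) view is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, feat_dim = 20, 4096

# One bag: a variable number of object-proposal feature vectors per image.
# Real features would come from a CNN; random vectors stand in here.
proposals = rng.normal(size=(30, feat_dim))

# A linear scoring layer; in practice W and b would be learned.
W = rng.normal(scale=0.01, size=(feat_dim, n_classes))
b = np.zeros(n_classes)

instance_scores = proposals @ W + b        # (30, n_classes)
# Max-pool over instances: a bag is positive for a class if at least one
# of its proposals scores highly for that class.
bag_scores = instance_scores.max(axis=0)   # (n_classes,)
bag_probs = 1.0 / (1.0 + np.exp(-bag_scores))

predicted_labels = np.flatnonzero(bag_probs > 0.5)
print(predicted_labels)
```

Max-pooling over instance scores is only one common aggregation choice; the paper's actual multi-view fusion with nearest-neighbor strong-label information is more involved.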
1602.09013 | Anastasia Podosinnikova | Anastasia Podosinnikova, Francis Bach, and Simon Lacoste-Julien | Beyond CCA: Moment Matching for Multi-View Models | Appears in: Proceedings of the 33rd International Conference on
Machine Learning (ICML 2016). 22 pages | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce three novel semi-parametric extensions of probabilistic
canonical correlation analysis with identifiability guarantees. We consider
moment matching techniques for estimation in these models. To this end, by drawing
explicit links between the new models and a discrete version of independent
component analysis (DICA), we first extend the DICA cumulant tensors to the new
discrete version of CCA. By further using a close connection with independent
component analysis, we introduce generalized covariance matrices, which can
replace the cumulant tensors in the moment matching framework, and, therefore,
improve sample complexity and simplify derivations and algorithms
significantly. As neither the tensor power method nor orthogonal joint
diagonalization is applicable in the new setting, we use non-orthogonal joint
diagonalization techniques to match the cumulants. We demonstrate the
performance of the proposed models and estimation techniques in experiments
with both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Feb 2016 15:51:50 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 14:06:23 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Podosinnikova",
"Anastasia",
""
],
[
"Bach",
"Francis",
""
],
[
"Lacoste-Julien",
"Simon",
""
]
] | TITLE: Beyond CCA: Moment Matching for Multi-View Models
ABSTRACT: We introduce three novel semi-parametric extensions of probabilistic
canonical correlation analysis with identifiability guarantees. We consider
moment matching techniques for estimation in these models. To this end, by drawing
explicit links between the new models and a discrete version of independent
component analysis (DICA), we first extend the DICA cumulant tensors to the new
discrete version of CCA. By further using a close connection with independent
component analysis, we introduce generalized covariance matrices, which can
replace the cumulant tensors in the moment matching framework, and, therefore,
improve sample complexity and simplify derivations and algorithms
significantly. As neither the tensor power method nor orthogonal joint
diagonalization is applicable in the new setting, we use non-orthogonal joint
diagonalization techniques to match the cumulants. We demonstrate the
performance of the proposed models and estimation techniques in experiments
with both synthetic and real datasets.
| no_new_dataset | 0.946101 |
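A hedged illustration of the non-orthogonal joint diagonalization step mentioned above, reduced to the two-matrix case where a generalized symmetric-definite eigenproblem suffices; the shared mixing matrix A and the matrix sizes are synthetic assumptions, not the paper's estimator:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
k = 4
A = rng.normal(size=(k, k))          # shared, non-orthogonal mixing matrix
d1 = rng.uniform(0.5, 2.0, size=k)
d2 = rng.uniform(0.5, 2.0, size=k)
M1 = A @ np.diag(d1) @ A.T           # two symmetric matrices sharing A
M2 = A @ np.diag(d2) @ A.T           # M2 is symmetric positive definite

# Generalized symmetric-definite eigenproblem: M1 V = M2 V diag(w),
# with V normalized so that V.T @ M2 @ V = I.
w, V = eigh(M1, M2)

D1 = V.T @ M1 @ V                    # = diag(w), up to round-off
D2 = V.T @ M2 @ V                    # = identity, up to round-off
off = lambda D: np.abs(D - np.diag(np.diag(D))).max()
print(off(D1) < 1e-8, off(D2) < 1e-8)  # True True
```

With more than two cumulant matrices, as in the paper, an approximate non-orthogonal joint diagonalization routine replaces this exact two-matrix solve.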
1603.09260 | Vladimir Jojic | Tianxiang Gao and Vladimir Jojic | Degrees of Freedom in Deep Neural Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore degrees of freedom in deep sigmoidal neural
networks. We show that the degrees of freedom in these models are related to the
expected optimism, which is the expected difference between test error and
training error. We provide an efficient Monte-Carlo method to estimate the
degrees of freedom for multi-class classification methods. We show that the
degrees of freedom are lower than the parameter count in a simple XOR network. We extend
these results to neural nets trained on synthetic and real data, and
investigate the impact of network architecture and different regularization
choices. The degrees of freedom in deep networks are dramatically smaller than
the number of parameters, in some real datasets by several orders of magnitude.
Further, we observe that for a fixed number of parameters, deeper networks have
fewer degrees of freedom, exhibiting a regularization-by-depth effect.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 16:16:57 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 14:45:35 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Gao",
"Tianxiang",
""
],
[
"Jojic",
"Vladimir",
""
]
] | TITLE: Degrees of Freedom in Deep Neural Networks
ABSTRACT: In this paper, we explore degrees of freedom in deep sigmoidal neural
networks. We show that the degrees of freedom in these models are related to the
expected optimism, which is the expected difference between test error and
training error. We provide an efficient Monte-Carlo method to estimate the
degrees of freedom for multi-class classification methods. We show that the
degrees of freedom are lower than the parameter count in a simple XOR network. We extend
these results to neural nets trained on synthetic and real data, and
investigate the impact of network architecture and different regularization
choices. The degrees of freedom in deep networks are dramatically smaller than
the number of parameters, in some real datasets by several orders of magnitude.
Further, we observe that for a fixed number of parameters, deeper networks have
fewer degrees of freedom, exhibiting a regularization-by-depth effect.
| no_new_dataset | 0.952486 |
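A sketch of the Monte-Carlo degrees-of-freedom estimator on ridge regression, where the answer is known in closed form as the trace of the hat matrix; the data sizes and the ridge model are assumptions used only to check the estimator, not the paper's deep-network setting:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, lam, sigma = 200, 10, 5.0, 1.0
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + sigma * rng.normal(size=n)

def fit_predict(X, y):
    """Ridge fit returning in-sample predictions; for a deep net this
    would be a full training run followed by a forward pass."""
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return X @ beta

# Monte-Carlo estimator of expected optimism / degrees of freedom:
# df ~= (1/sigma^2) * E[ eps . (f(y + eps) - f(y)) ].
base = fit_predict(X, y)
draws = []
for _ in range(500):
    eps = sigma * rng.normal(size=n)
    draws.append(eps @ (fit_predict(X, y + eps) - base) / sigma ** 2)
df_mc = float(np.mean(draws))

# Closed-form check: the trace of the ridge hat matrix.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
print(df_mc, np.trace(H))   # the two estimates should roughly agree
```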
1606.00868 | Aykut Firat | Aykut Firat | Unified Framework for Quantification | 9 pages, 4 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantification is the machine learning task of estimating test-data class
proportions that are not necessarily similar to those in training. Apart from
its intrinsic value as an aggregate statistic, quantification output can also
be used to optimize classifier probabilities, thereby increasing classification
accuracy. We unify major quantification approaches under a constrained
multivariate regression framework, and use mathematical programming to
estimate class proportions for different loss functions. With this modeling
approach, we extend existing binary-only quantification approaches to
multi-class settings as well. We empirically verify our unified framework by
experimenting with several multi-class datasets including the Stanford
Sentiment Treebank and CIFAR-10.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 20:42:31 GMT"
}
] | 2016-06-06T00:00:00 | [
[
"Firat",
"Aykut",
""
]
] | TITLE: Unified Framework for Quantification
ABSTRACT: Quantification is the machine learning task of estimating test-data class
proportions that are not necessarily similar to those in training. Apart from
its intrinsic value as an aggregate statistic, quantification output can also
be used to optimize classifier probabilities, thereby increasing classification
accuracy. We unify major quantification approaches under a constrained
multivariate regression framework, and use mathematical programming to
estimate class proportions for different loss functions. With this modeling
approach, we extend existing binary-only quantification approaches to
multi-class settings as well. We empirically verify our unified framework by
experimenting with several multi-class datasets including the Stanford
Sentiment Treebank and CIFAR-10.
| no_new_dataset | 0.945601 |
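A minimal sketch of quantification as constrained regression in the spirit of the framework above; the confusion profile C, the squared loss, and the three-class setup are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Columns of C give P(predicted label | true class), estimated on
# training data; the values here are made up for illustration.
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])
p_true = np.array([0.6, 0.3, 0.1])   # unseen test class proportions
q = C @ p_true                       # observed predicted-label distribution

# Solve: minimize ||C p - q||^2 subject to p >= 0 and sum(p) = 1.
loss = lambda p: float(np.sum((C @ p - q) ** 2))
res = minimize(loss, x0=np.full(3, 1.0 / 3.0), method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
print(res.x)                         # recovers roughly [0.6, 0.3, 0.1]
```

Swapping the squared loss for another divergence changes only the objective, which is how a single constrained-regression template can cover several quantification methods.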