id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.09367 | Sepehr Valipour | Sepehr Valipour, Mennatullah Siam, Eleni Stroulia, Martin Jagersand | Parking Stall Vacancy Indicator System Based on Deep Convolutional
Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parking management systems, and vacancy-indication services in particular,
can play a valuable role in reducing traffic and energy waste in large cities.
Visual detection methods represent a cost-effective option, since they can take
advantage of hardware usually already available in many parking lots, namely
cameras. However, visual detection methods can be fragile and not easily
generalizable. In this paper, we present a robust detection algorithm based on
deep convolutional neural networks. We implemented and tested our algorithm on
a large baseline dataset, and also on a set of image feeds from actual cameras
already installed in parking lots. We have developed a fully functional system,
from server-side image analysis to front-end user interface, to demonstrate the
practicality of our method.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 06:57:11 GMT"
}
] | 2016-07-01T00:00:00 | [
[
"Valipour",
"Sepehr",
""
],
[
"Siam",
"Mennatullah",
""
],
[
"Stroulia",
"Eleni",
""
],
[
"Jagersand",
"Martin",
""
]
] | TITLE: Parking Stall Vacancy Indicator System Based on Deep Convolutional
Neural Networks
ABSTRACT: Parking management systems, and vacancy-indication services in particular,
can play a valuable role in reducing traffic and energy waste in large cities.
Visual detection methods represent a cost-effective option, since they can take
advantage of hardware usually already available in many parking lots, namely
cameras. However, visual detection methods can be fragile and not easily
generalizable. In this paper, we present a robust detection algorithm based on
deep convolutional neural networks. We implemented and tested our algorithm on
a large baseline dataset, and also on a set of image feeds from actual cameras
already installed in parking lots. We have developed a fully functional system,
from server-side image analysis to front-end user interface, to demonstrate the
practicality of our method.
| no_new_dataset | 0.948346 |
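A minimal sketch of the kind of binary occupied/vacant CNN classifier this abstract describes (PyTorch; the 64x64 input size, layer widths, and toy batch are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class StallNet(nn.Module):
    """Tiny CNN mapping a cropped parking-stall image to an occupancy probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 RGB crops

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x).flatten(1)))

model = StallNet()
occupancy = model(torch.randn(8, 3, 64, 64))  # probabilities for 8 stall crops
```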
1606.09370 | Sunil Sahu | Sunil Kumar Sahu, Ashish Anand, Krishnadev Oruganty, Mahanandeeshwar
Gattu | Relation extraction from clinical texts using domain invariant
convolutional neural network | This paper has been accepted at the ACL BioNLP 2016 Workshop | null | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | In recent years, extracting relevant information from biomedical and clinical
texts such as research articles, discharge summaries, or electronic health
records has been the subject of many research efforts and shared challenges.
Relation extraction is the process of detecting and classifying the semantic
relation among entities in a given piece of text. Existing models for this
task in the biomedical domain use either manually engineered features or kernel
methods to create feature vectors. These features are then fed to a classifier
for the prediction of the correct class. The results of these methods are
highly dependent on the quality of user-designed features and also suffer from
the curse of dimensionality. In this work we focus on extracting relations from
clinical discharge summaries. Our main objective is to exploit the power of
convolutional neural networks (CNNs) to learn features automatically and thus
reduce the dependency on manual feature engineering. We evaluate the
performance of the proposed model on the i2b2-2010 clinical relation extraction
challenge dataset. Our results indicate that a convolutional neural network can
be a good model for relation extraction in clinical text without depending on
expert knowledge for defining quality features.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 07:10:07 GMT"
}
] | 2016-07-01T00:00:00 | [
[
"Sahu",
"Sunil Kumar",
""
],
[
"Anand",
"Ashish",
""
],
[
"Oruganty",
"Krishnadev",
""
],
[
"Gattu",
"Mahanandeeshwar",
""
]
] | TITLE: Relation extraction from clinical texts using domain invariant
convolutional neural network
ABSTRACT: In recent years, extracting relevant information from biomedical and
clinical texts such as research articles, discharge summaries, or electronic
health records has been the subject of many research efforts and shared
challenges. Relation extraction is the process of detecting and classifying the
semantic relation among entities in a given piece of text. Existing models for
this task in the biomedical domain use either manually engineered features or
kernel methods to create feature vectors. These features are then fed to a
classifier for the prediction of the correct class. The results of these
methods are highly dependent on the quality of user-designed features and also
suffer from the curse of dimensionality. In this work we focus on extracting
relations from clinical discharge summaries. Our main objective is to exploit
the power of convolutional neural networks (CNNs) to learn features
automatically and thus reduce the dependency on manual feature engineering. We
evaluate the performance of the proposed model on the i2b2-2010 clinical
relation extraction challenge dataset. Our results indicate that a
convolutional neural network can be a good model for relation extraction in
clinical text without depending on expert knowledge for defining quality
features.
| no_new_dataset | 0.949248 |
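A sketch of a sentence-level CNN relation classifier in the spirit of this abstract (PyTorch; the vocabulary size, filter count, and number of relation classes are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    """Embeds tokens, convolves over the sentence, max-pools over time, and
    classifies the relation between two marked clinical entities."""
    def __init__(self, vocab_size, n_relations, emb_dim=100, n_filters=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters, n_relations)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)         # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x)).max(dim=2).values  # max pooling over time
        return self.out(h)                              # relation logits

logits = RelationCNN(vocab_size=20000, n_relations=8)(torch.randint(0, 20000, (4, 50)))
```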
1606.09371 | Sunil Sahu | Sunil Kumar Sahu, Ashish Anand | Recurrent neural network models for disease name recognition using
domain invariant features | This work has been accepted at ACL-2016 as a long paper | null | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Hand-crafted features based on linguistic and domain knowledge play a crucial
role in determining the performance of disease name recognition systems. Such
methods are further limited by the scope of these features, or in other words,
their ability to cover the contexts or word dependencies within a sentence. In
this work, we focus on reducing such dependencies and propose a
domain-invariant framework for the disease name recognition task. In
particular, we propose various end-to-end recurrent neural network (RNN) models
for the tasks of disease name recognition and their classification into four
pre-defined categories. We also utilize a convolutional neural network (CNN) in
cascade with the RNN to obtain character-based embedded features and employ
them together with word-embedded features in our model. We compare our models
with the state-of-the-art results for the two tasks on the NCBI disease
dataset. Our results for the disease mention recognition task indicate that
state-of-the-art performance can be obtained without relying on feature
engineering. Further, the proposed models obtain improved performance on the
classification task of disease names.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2016 07:15:56 GMT"
}
] | 2016-07-01T00:00:00 | [
[
"Sahu",
"Sunil Kumar",
""
],
[
"Anand",
"Ashish",
""
]
] | TITLE: Recurrent neural network models for disease name recognition using
domain invariant features
ABSTRACT: Hand-crafted features based on linguistic and domain knowledge play a
crucial role in determining the performance of disease name recognition
systems. Such methods are further limited by the scope of these features, or in
other words, their ability to cover the contexts or word dependencies within a
sentence. In this work, we focus on reducing such dependencies and propose a
domain-invariant framework for the disease name recognition task. In
particular, we propose various end-to-end recurrent neural network (RNN) models
for the tasks of disease name recognition and their classification into four
pre-defined categories. We also utilize a convolutional neural network (CNN) in
cascade with the RNN to obtain character-based embedded features and employ
them together with word-embedded features in our model. We compare our models
with the state-of-the-art results for the two tasks on the NCBI disease
dataset. Our results for the disease mention recognition task indicate that
state-of-the-art performance can be obtained without relying on feature
engineering. Further, the proposed models obtain improved performance on the
classification task of disease names.
| no_new_dataset | 0.947914 |
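A compact sketch of the cascade described here: a character-level CNN produces character features that are concatenated with word embeddings and fed to a bidirectional RNN tagger (PyTorch; all sizes are assumptions):

```python
import torch
import torch.nn as nn

class CharCNNWordRNN(nn.Module):
    """BiLSTM tagger over word embeddings concatenated with char-CNN features."""
    def __init__(self, vocab, chars, n_tags, wdim=100, cdim=25, cfilters=30):
        super().__init__()
        self.wemb = nn.Embedding(vocab, wdim)
        self.cemb = nn.Embedding(chars, cdim)
        self.cconv = nn.Conv1d(cdim, cfilters, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(wdim + cfilters, 64, bidirectional=True, batch_first=True)
        self.out = nn.Linear(128, n_tags)

    def forward(self, words, char_ids):                # (B, T) and (B, T, C)
        B, T, C = char_ids.shape
        ch = self.cemb(char_ids.view(B * T, C)).transpose(1, 2)  # (B*T, cdim, C)
        ch = torch.relu(self.cconv(ch)).max(dim=2).values.view(B, T, -1)
        h, _ = self.rnn(torch.cat([self.wemb(words), ch], dim=-1))
        return self.out(h)                             # per-token tag logits

model = CharCNNWordRNN(vocab=10000, chars=80, n_tags=9)
tags = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 80, (2, 12, 15)))
```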
1509.04771 | Moo K. Chung | Moo K. Chung, Victoria Vilalta-Gil, Paul J. Rathouz, Benjamin B.
Lahey, David H. Zald | Mapping Heritability of Large-Scale Brain Networks with a Billion
Connections {\em via} Persistent Homology | null | null | null | null | cs.AI q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many human brain network studies, we do not have a sufficient number (n) of
images relative to the number (p) of voxels due to the prohibitively expensive
cost of scanning enough subjects. Thus, brain network models usually suffer from the
small-n large-p problem. Such a problem is often remedied by sparse network
models, which are usually solved numerically by optimizing L1-penalties.
Unfortunately, due to the computational bottleneck associated with optimizing
L1-penalties, it is not practical to apply such methods to construct
large-scale brain networks at the voxel-level. In this paper, we propose a new
scalable sparse network model using cross-correlations that bypasses the
computational bottleneck. Our model can build sparse brain networks at the
voxel level with p > 25000. Instead of using a single sparsity parameter that
may not be optimal in other studies and datasets, the computational speed gain
enables us to analyze the collection of networks at every possible sparsity
parameter in a coherent mathematical framework via persistent homology. The
method is subsequently applied in determining the extent of heritability on a
functional brain network at the voxel-level for the first time using twin fMRI.
| [
{
"version": "v1",
"created": "Tue, 15 Sep 2015 23:54:12 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 13:28:31 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Chung",
"Moo K.",
""
],
[
"Vilalta-Gil",
"Victoria",
""
],
[
"Rathouz",
"Paul J.",
""
],
[
"Lahey",
"Benjamin B.",
""
],
[
"Zald",
"David H.",
""
]
] | TITLE: Mapping Heritability of Large-Scale Brain Networks with a Billion
Connections {\em via} Persistent Homology
ABSTRACT: In many human brain network studies, we do not have a sufficient number (n) of
images relative to the number (p) of voxels due to the prohibitively expensive
cost of scanning enough subjects. Thus, brain network models usually suffer from the
small-n large-p problem. Such a problem is often remedied by sparse network
models, which are usually solved numerically by optimizing L1-penalties.
Unfortunately, due to the computational bottleneck associated with optimizing
L1-penalties, it is not practical to apply such methods to construct
large-scale brain networks at the voxel-level. In this paper, we propose a new
scalable sparse network model using cross-correlations that bypasses the
computational bottleneck. Our model can build sparse brain networks at the
voxel level with p > 25000. Instead of using a single sparsity parameter that
may not be optimal in other studies and datasets, the computational speed gain
enables us to analyze the collection of networks at every possible sparsity
parameter in a coherent mathematical framework via persistent homology. The
method is subsequently applied in determining the extent of heritability on a
functional brain network at the voxel-level for the first time using twin fMRI.
| no_new_dataset | 0.952882 |
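The core idea, building a correlation network and tracking its topology at every sparsity level instead of committing to one, can be illustrated on toy data (NumPy/SciPy; here Betti-0, the number of connected components, stands in for the paper's full persistent-homology summary):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
signals = rng.standard_normal((50, 200))   # 50 "voxels" x 200 time points
corr = np.corrcoef(signals)                # 50 x 50 cross-correlation matrix
np.fill_diagonal(corr, 0)

for thr in np.linspace(0.0, 0.5, 6):       # sweep the sparsity parameter
    adj = csr_matrix((np.abs(corr) > thr).astype(int))
    n_comp, _ = connected_components(adj, directed=False)
    print(f"threshold {thr:.2f}: {n_comp} connected components")
```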
1511.07067 | Satwik Kottur | Satwik Kottur, Ramakrishna Vedantam, Jos\'e M. F. Moura, Devi Parikh | Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings
Using Abstract Scenes | 15 pages, 11 figures | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a model to learn visually grounded word embeddings (vis-w2v) to
capture visual notions of semantic relatedness. While word embeddings trained
using text have been extremely successful, they cannot uncover notions of
semantic relatedness implicit in our visual world. For instance, although
"eats" and "stares at" seem unrelated in text, they share semantics visually.
When people are eating something, they also tend to stare at the food.
Grounding diverse relations like "eats" and "stares at" into vision remains
challenging, despite recent progress in vision. We note that the visual
grounding of words depends on semantics, and not the literal pixels. We thus
use abstract scenes created from clipart to provide the visual grounding. We
find that the embeddings we learn capture fine-grained, visually grounded
notions of semantic relatedness. We show improvements over text-only word
embeddings (word2vec) on three tasks: common-sense assertion classification,
visual paraphrasing and text-based image retrieval. Our code and datasets are
available online.
| [
{
"version": "v1",
"created": "Sun, 22 Nov 2015 20:46:42 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 18:15:25 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Kottur",
"Satwik",
""
],
[
"Vedantam",
"Ramakrishna",
""
],
[
"Moura",
"José M. F.",
""
],
[
"Parikh",
"Devi",
""
]
] | TITLE: Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings
Using Abstract Scenes
ABSTRACT: We propose a model to learn visually grounded word embeddings (vis-w2v) to
capture visual notions of semantic relatedness. While word embeddings trained
using text have been extremely successful, they cannot uncover notions of
semantic relatedness implicit in our visual world. For instance, although
"eats" and "stares at" seem unrelated in text, they share semantics visually.
When people are eating something, they also tend to stare at the food.
Grounding diverse relations like "eats" and "stares at" into vision remains
challenging, despite recent progress in vision. We note that the visual
grounding of words depends on semantics, and not the literal pixels. We thus
use abstract scenes created from clipart to provide the visual grounding. We
find that the embeddings we learn capture fine-grained, visually grounded
notions of semantic relatedness. We show improvements over text-only word
embeddings (word2vec) on three tasks: common-sense assertion classification,
visual paraphrasing and text-based image retrieval. Our code and datasets are
available online.
| no_new_dataset | 0.946498 |
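A sketch of the training signal described here: word vectors, initialized from word2vec in practice, are fine-tuned so that a text window predicts a cluster of the co-occurring visual (abstract-scene) features (PyTorch; the surrogate-class setup and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class VisW2V(nn.Module):
    """CBOW-style model: average a text window and predict a visual cluster."""
    def __init__(self, vocab_size, n_visual_clusters, dim=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # init from word2vec in practice
        self.out = nn.Linear(dim, n_visual_clusters)

    def forward(self, window_ids):                 # (batch, window)
        ctx = self.emb(window_ids).mean(dim=1)     # average the text window
        return self.out(ctx)                       # logits over visual clusters

model = VisW2V(vocab_size=5000, n_visual_clusters=25)
loss = nn.CrossEntropyLoss()(model(torch.randint(0, 5000, (16, 4))),
                             torch.randint(0, 25, (16,)))
```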
1606.08805 | Mario Valerio Giuffrida | Mario Valerio Giuffrida and Sotirios A. Tsaftaris | Theta-RBM: Unfactored Gated Restricted Boltzmann Machine for
Rotation-Invariant Representations | 9 pages, 2 figures, 3 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning invariant representations is a critical task in computer vision. In
this paper, we propose the Theta-Restricted Boltzmann Machine ({\theta}-RBM for
short), which builds upon the original RBM formulation and injects the notion
of rotation-invariance during the learning procedure. In contrast to previous
approaches, we do not transform the training set with all possible rotations.
Instead, we rotate the gradient filters when they are computed during the
Contrastive Divergence algorithm. We formulate our model as an unfactored gated
Boltzmann machine, where another input layer is used to modulate the input
visible layer to drive the optimisation procedure. Among our contributions is a
mathematical proof that demonstrates that {\theta}-RBM is able to learn
rotation-invariant features according to a recently proposed invariance
measure. Our method reaches an invariance score of ~90% on the mnist-rot
dataset, which is the highest result compared with the baseline methods and the
current state of the art in transformation-invariant feature learning in RBMs.
Using an SVM classifier, we also show that our network learns discriminative
features as well, obtaining a testing error of ~10%.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 18:02:32 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 09:57:08 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Giuffrida",
"Mario Valerio",
""
],
[
"Tsaftaris",
"Sotirios A.",
""
]
] | TITLE: Theta-RBM: Unfactored Gated Restricted Boltzmann Machine for
Rotation-Invariant Representations
ABSTRACT: Learning invariant representations is a critical task in computer vision. In
this paper, we propose the Theta-Restricted Boltzmann Machine ({\theta}-RBM for
short), which builds upon the original RBM formulation and injects the notion
of rotation-invariance during the learning procedure. In contrast to previous
approaches, we do not transform the training set with all possible rotations.
Instead, we rotate the gradient filters when they are computed during the
Contrastive Divergence algorithm. We formulate our model as an unfactored gated
Boltzmann machine, where another input layer is used to modulate the input
visible layer to drive the optimisation procedure. Among our contributions is a
mathematical proof that demonstrates that {\theta}-RBM is able to learn
rotation-invariant features according to a recently proposed invariance
measure. Our method reaches an invariance score of ~90% on the mnist-rot
dataset, which is the highest result compared with the baseline methods and the
current state of the art in transformation-invariant feature learning in RBMs.
Using an SVM classifier, we also show that our network learns discriminative
features as well, obtaining a testing error of ~10%.
| no_new_dataset | 0.953665 |
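The key trick, rotating the gradient filters rather than rotating the whole training set, might look like this (NumPy/SciPy; the angle set and interpolation settings are assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_filters(filters, angle_deg):
    """Rotate each square filter in place of augmenting the data.
    filters: (n_filters, h, w) array."""
    return np.stack([rotate(f, angle_deg, reshape=False, order=1, mode="nearest")
                     for f in filters])

filters = np.random.randn(8, 7, 7)
for angle in (0, 45, 90, 135):                 # candidate orientations
    rotated = rotate_filters(filters, angle)
    # ... use `rotated` when accumulating the CD gradient for this orientation
```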
1606.08927 | Huiyuan Zhang | Huiyuan Zhang, Dung T. Nguyen, Soham Das, Huiling Zhang and My T. Thai | Least Cost Influence Maximization Across Multiple Social Networks | 21 pages, published in IEEE/ACM Transactions on Networking | IEEE/ACM Transactions on Networking, 24(2), 929-939, March 12,
2015 | 10.1109/TNET.2015.2394793 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently in Online Social Networks (OSNs), the Least Cost Influence (LCI)
problem has become one of the central research topics. It aims at identifying a
minimum number of seed users who can trigger a wide cascade of information
propagation. Most of the existing literature has investigated the LCI problem
based only on an individual network. However, nowadays users often join several
OSNs, so that information can be spread across different networks
simultaneously. Therefore, in order to obtain the best set of seed users, it is
crucial to consider the role of overlapping users under these circumstances.
In this article, we propose a unified framework to represent and analyze the
influence diffusion in multiplex networks. More specifically, we tackle the LCI
problem by mapping a set of networks into a single one via lossless and lossy
coupling schemes. The lossless coupling scheme preserves all properties of
original networks to achieve high quality solutions, while the lossy coupling
scheme offers an attractive alternative when the running time and memory
consumption are of primary concern. Various experiments conducted on both real
and synthesized datasets have validated the effectiveness of the coupling
schemes, which also provide some interesting insights into the process of
influence propagation in multiplex networks.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 01:04:29 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Zhang",
"Huiyuan",
""
],
[
"Nguyen",
"Dung T.",
""
],
[
"Das",
"Soham",
""
],
[
"Zhang",
"Huiling",
""
],
[
"Thai",
"My T.",
""
]
] | TITLE: Least Cost Influence Maximization Across Multiple Social Networks
ABSTRACT: Recently in Online Social Networks (OSNs), the Least Cost Influence (LCI)
problem has become one of the central research topics. It aims at identifying a
minimum number of seed users who can trigger a wide cascade of information
propagation. Most of the existing literature has investigated the LCI problem
based only on an individual network. However, nowadays users often join several
OSNs, so that information can be spread across different networks
simultaneously. Therefore, in order to obtain the best set of seed users, it is
crucial to consider the role of overlapping users under these circumstances.
In this article, we propose a unified framework to represent and analyze the
influence diffusion in multiplex networks. More specifically, we tackle the LCI
problem by mapping a set of networks into a single one via lossless and lossy
coupling schemes. The lossless coupling scheme preserves all properties of
original networks to achieve high quality solutions, while the lossy coupling
scheme offers an attractive alternative when the running time and memory
consumption are of primary concern. Various experiments conducted on both real
and synthesized datasets have validated the effectiveness of the coupling
schemes, which also provide some interesting insights into the process of
influence propagation in multiplex networks.
| no_new_dataset | 0.948917 |
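A toy sketch of coupling several networks into one via their overlapping users (networkx; here each overlapping user's copies are merged into a single node, a simplification of the paper's lossless/lossy coupling schemes):

```python
import networkx as nx

def couple_networks(networks, overlap):
    """Map each network into one graph; `overlap` sends (net_index, node)
    to a shared user id, so overlapping users collapse to one node."""
    coupled = nx.Graph()
    for i, g in enumerate(networks):
        mapping = {v: overlap.get((i, v), f"net{i}:{v}") for v in g}
        coupled = nx.compose(coupled, nx.relabel_nodes(g, mapping))
    return coupled

g1 = nx.path_graph(["a", "b", "c"])
g2 = nx.path_graph(["x", "y", "z"])
merged = couple_networks([g1, g2], {(0, "b"): "user42", (1, "y"): "user42"})
print(merged.number_of_nodes())  # 5: "b" and "y" collapse into "user42"
```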
1606.08928 | Annamalai Narayanan | Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu and
Santhoshkumar Saminathan | subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs
from Large Graphs | null | null | null | null | cs.LG cs.AI cs.CR cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present subgraph2vec, a novel approach for learning latent
representations of rooted subgraphs from large graphs inspired by recent
advancements in Deep Learning and Graph Kernels. These latent representations
encode semantic substructure dependencies in a continuous vector space, which
is easily exploited by statistical models for tasks such as graph
classification, clustering, link prediction and community detection.
subgraph2vec leverages local information obtained from neighbourhoods of
nodes to learn their latent representations in an unsupervised fashion. We
demonstrate that subgraph vectors learnt by our approach could be used in
conjunction with classifiers such as CNNs, SVMs and relational data clustering
algorithms to achieve significantly superior accuracies. Also, we show that the
subgraph vectors could be used for building a deep learning variant of the
Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and
large-scale real-world datasets reveal that subgraph2vec achieves significant
improvements in accuracies over existing graph kernels on both supervised and
unsupervised learning tasks. Specifically, on two real-world program analysis
tasks, namely, code clone and malware detection, subgraph2vec outperforms
state-of-the-art kernels by more than 17% and 4%, respectively.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 01:05:36 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Narayanan",
"Annamalai",
""
],
[
"Chandramohan",
"Mahinthan",
""
],
[
"Chen",
"Lihui",
""
],
[
"Liu",
"Yang",
""
],
[
"Saminathan",
"Santhoshkumar",
""
]
] | TITLE: subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs
from Large Graphs
ABSTRACT: In this paper, we present subgraph2vec, a novel approach for learning latent
representations of rooted subgraphs from large graphs inspired by recent
advancements in Deep Learning and Graph Kernels. These latent representations
encode semantic substructure dependencies in a continuous vector space, which
is easily exploited by statistical models for tasks such as graph
classification, clustering, link prediction and community detection.
subgraph2vec leverages local information obtained from neighbourhoods of
nodes to learn their latent representations in an unsupervised fashion. We
demonstrate that subgraph vectors learnt by our approach could be used in
conjunction with classifiers such as CNNs, SVMs and relational data clustering
algorithms to achieve significantly superior accuracies. Also, we show that the
subgraph vectors could be used for building a deep learning variant of the
Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and
large-scale real-world datasets reveal that subgraph2vec achieves significant
improvements in accuracies over existing graph kernels on both supervised and
unsupervised learning tasks. Specifically, on two real-world program analysis
tasks, namely, code clone and malware detection, subgraph2vec outperforms
state-of-the-art kernels by more than 17% and 4%, respectively.
| no_new_dataset | 0.947332 |
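The rooted subgraphs that such a model embeds can be enumerated with Weisfeiler-Lehman relabeling; a compact sketch (networkx; seeding the labels with node degrees is an assumption about the initial labeling):

```python
import networkx as nx

def wl_subgraph_labels(g, depth):
    """After d WL iterations, each node's label stands for its rooted
    subgraph of height d -- the 'words' whose embeddings are learned."""
    labels = {v: str(g.degree(v)) for v in g}
    vocab = [dict(labels)]
    for _ in range(depth):
        labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in g[v]))
                  for v in g}
        vocab.append(dict(labels))
    return vocab  # one label map per height 0..depth

g = nx.karate_club_graph()
print(wl_subgraph_labels(g, depth=2)[2][0][:60])  # node 0's height-2 label
```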
1606.08955 | Vinay Bettadapura | Vinay Bettadapura, Caroline Pantofaru, Irfan Essa | Leveraging Contextual Cues for Generating Basketball Highlights | Proceedings of ACM Multimedia 2016 | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The massive growth of sports videos has resulted in a need for automatic
generation of sports highlights that are comparable in quality to the
hand-edited highlights produced by broadcasters such as ESPN. Unlike previous
works that mostly use audio-visual cues derived from the video, we propose an
approach that additionally leverages contextual cues derived from the
environment that the game is being played in. The contextual cues provide
information about the excitement levels in the game, which can be ranked and
selected to automatically produce high-quality basketball highlights. We
introduce a new dataset of 25 NCAA games along with their play-by-play stats
and the ground-truth excitement data for each basket. We explore the
informativeness of five different cues derived from the video and from the
environment through user studies. Our experiments show that for our study
participants, the highlights produced by our system are comparable to the ones
produced by ESPN for the same games.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 05:04:27 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Bettadapura",
"Vinay",
""
],
[
"Pantofaru",
"Caroline",
""
],
[
"Essa",
"Irfan",
""
]
] | TITLE: Leveraging Contextual Cues for Generating Basketball Highlights
ABSTRACT: The massive growth of sports videos has resulted in a need for automatic
generation of sports highlights that are comparable in quality to the
hand-edited highlights produced by broadcasters such as ESPN. Unlike previous
works that mostly use audio-visual cues derived from the video, we propose an
approach that additionally leverages contextual cues derived from the
environment that the game is being played in. The contextual cues provide
information about the excitement levels in the game, which can be ranked and
selected to automatically produce high-quality basketball highlights. We
introduce a new dataset of 25 NCAA games along with their play-by-play stats
and the ground-truth excitement data for each basket. We explore the
informativeness of five different cues derived from the video and from the
environment through user studies. Our experiments show that for our study
participants, the highlights produced by our system are comparable to the ones
produced by ESPN for the same games.
| new_dataset | 0.95452 |
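The selection step reduces to scoring each basket by its excitement cues and keeping the top clips; a sketch with made-up cue names and weights, not the paper's learned ranking:

```python
def rank_highlights(baskets, weights, top_k=5):
    """Score each basket as a weighted sum of its cue values, highest first."""
    scored = [(sum(weights[c] * b["cues"][c] for c in weights), b) for b in baskets]
    return [b for _, b in sorted(scored, key=lambda s: -s[0])[:top_k]]

baskets = [{"clip": f"basket_{i}.mp4",
            "cues": {"crowd_audio": (i * 7) % 10, "score_margin": 0.1 * i}}
           for i in range(20)]
top = rank_highlights(baskets, {"crowd_audio": 0.7, "score_margin": 0.3})
print([b["clip"] for b in top])
```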
1606.09058 | Dimitrios Alikaniotis | Dimitrios Alikaniotis and John N. Williams | A Distributional Semantics Approach to Implicit Language Learning | 5 pages, 7 figures, NetWords 2015 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present paper we show that distributional information is particularly
important when considering concept availability under implicit language
learning conditions. Based on results from different behavioural experiments we
argue that the implicit learnability of semantic regularities depends on the
degree to which the relevant concept is reflected in language use. In our
simulations, we train a Vector-Space model on either an English or a Chinese
corpus and then feed the resulting representations to a feed-forward neural
network. The task of the neural network was to find a mapping between the word
representations and the novel words. Using datasets from four behavioural
experiments, which used different semantic manipulations, we were able to
obtain learning patterns very similar to those obtained by humans.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 12:08:51 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Alikaniotis",
"Dimitrios",
""
],
[
"Williams",
"John N.",
""
]
] | TITLE: A Distributional Semantics Approach to Implicit Language Learning
ABSTRACT: In the present paper we show that distributional information is particularly
important when considering concept availability under implicit language
learning conditions. Based on results from different behavioural experiments we
argue that the implicit learnability of semantic regularities depends on the
degree to which the relevant concept is reflected in language use. In our
simulations, we train a Vector-Space model on either an English or a Chinese
corpus and then feed the resulting representations to a feed-forward neural
network. The task of the neural network was to find a mapping between the word
representations and the novel words. Using datasets from four behavioural
experiments, which used different semantic manipulations, we were able to
obtain learning patterns very similar to those obtained by humans.
| no_new_dataset | 0.950411 |
1606.09184 | Peter Schulam | Peter Schulam and Raman Arora | Disease Trajectory Maps | null | null | null | null | stat.ML cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical researchers are coming to appreciate that many diseases are in fact
complex, heterogeneous syndromes composed of subpopulations that express
different variants of a related complication. Time series data extracted from
individual electronic health records (EHR) offer an exciting new way to study
subtle differences in the way these diseases progress over time. In this paper,
we focus on answering two questions that can be asked using these databases of
time series. First, we want to understand whether there are individuals with
similar disease trajectories and whether there are a small number of degrees of
freedom that account for differences in trajectories across the population.
Second, we want to understand how important clinical outcomes are associated
with disease trajectories. To answer these questions, we propose the Disease
Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional
representations of sparse and irregularly sampled time series. We propose a
stochastic variational inference algorithm for learning the DTM that allows the
model to scale to large modern medical datasets. To demonstrate the DTM, we
analyze data collected on patients with the complex autoimmune disease,
scleroderma. We find that DTM learns meaningful representations of disease
trajectories and that the representations are significantly associated with
important clinical outcomes.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2016 17:06:45 GMT"
}
] | 2016-06-30T00:00:00 | [
[
"Schulam",
"Peter",
""
],
[
"Arora",
"Raman",
""
]
] | TITLE: Disease Trajectory Maps
ABSTRACT: Medical researchers are coming to appreciate that many diseases are in fact
complex, heterogeneous syndromes composed of subpopulations that express
different variants of a related complication. Time series data extracted from
individual electronic health records (EHR) offer an exciting new way to study
subtle differences in the way these diseases progress over time. In this paper,
we focus on answering two questions that can be asked using these databases of
time series. First, we want to understand whether there are individuals with
similar disease trajectories and whether there are a small number of degrees of
freedom that account for differences in trajectories across the population.
Second, we want to understand how important clinical outcomes are associated
with disease trajectories. To answer these questions, we propose the Disease
Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional
representations of sparse and irregularly sampled time series. We propose a
stochastic variational inference algorithm for learning the DTM that allows the
model to scale to large modern medical datasets. To demonstrate the DTM, we
analyze data collected on patients with the complex autoimmune disease,
scleroderma. We find that DTM learns meaningful representations of disease
trajectories and that the representations are significantly associated with
important clinical outcomes.
| no_new_dataset | 0.947039 |
1506.05196 | Salman Khan Mr. | Salman H. Khan, Munawar Hayat, Mohammed Bennamoun, Roberto Togneri,
and Ferdous Sohel | A Discriminative Representation of Convolutional Features for Indoor
Scene Recognition | null | null | 10.1109/TIP.2016.2567076 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indoor scene recognition is a multi-faceted and challenging problem due to
the diverse intra-class variations and the confusing inter-class similarities.
This paper presents a novel approach which exploits rich mid-level
convolutional features to categorize indoor scenes. Traditionally used
convolutional features preserve the global spatial structure, which is a
desirable property for general object recognition. However, we argue that this
structuredness is not very helpful when we have large variations in scene
layouts, e.g., in indoor scenes. We propose to transform the structured
convolutional activations to another highly discriminative feature space. The
representation in the transformed space not only incorporates the
discriminative aspects of the target dataset, but it also encodes the features
in terms of the general object categories that are present in indoor scenes. To
this end, we introduce a new large-scale dataset of 1300 object categories
which are commonly present in indoor scenes. Our proposed approach achieves a
significant performance boost over previous state-of-the-art approaches on five
major scene classification datasets.
| [
{
"version": "v1",
"created": "Wed, 17 Jun 2015 03:55:19 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Khan",
"Salman H.",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Togneri",
"Roberto",
""
],
[
"Sohel",
"Ferdous",
""
]
] | TITLE: A Discriminative Representation of Convolutional Features for Indoor
Scene Recognition
ABSTRACT: Indoor scene recognition is a multi-faceted and challenging problem due to
the diverse intra-class variations and the confusing inter-class similarities.
This paper presents a novel approach which exploits rich mid-level
convolutional features to categorize indoor scenes. Traditionally used
convolutional features preserve the global spatial structure, which is a
desirable property for general object recognition. However, we argue that this
structuredness is not very helpful when we have large variations in scene
layouts, e.g., in indoor scenes. We propose to transform the structured
convolutional activations to another highly discriminative feature space. The
representation in the transformed space not only incorporates the
discriminative aspects of the target dataset, but it also encodes the features
in terms of the general object categories that are present in indoor scenes. To
this end, we introduce a new large-scale dataset of 1300 object categories
which are commonly present in indoor scenes. Our proposed approach achieves a
significant performance boost over previous state-of-the-art approaches on five
major scene classification datasets.
| new_dataset | 0.962638 |
1506.09215 | Simon Lacoste-Julien | Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic,
Ivan Laptev, Simon Lacoste-Julien | Unsupervised Learning from Narrated Instruction Videos | Appears in: 2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2016). 21 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of automatically learning the main steps to complete a
certain task, such as changing a car tire, from a set of narrated instruction
videos. The contributions of this paper are three-fold. First, we develop a new
unsupervised learning approach that takes advantage of the complementary nature
of the input video and the associated narration. The method solves two
clustering problems, one in text and one in video, applied one after the other
and linked by joint constraints to obtain a single coherent sequence of steps
in both modalities. Second, we collect and annotate a new challenging dataset
of real-world instruction videos from the Internet. The dataset contains about
800,000 frames for five different tasks that include complex interactions
between people and objects, and are captured in a variety of indoor and outdoor
settings. Third, we experimentally demonstrate that the proposed method can
automatically discover, in an unsupervised manner, the main steps to achieve
the task and locate the steps in the input videos.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 19:55:37 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jul 2015 16:43:36 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Nov 2015 18:10:53 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Jun 2016 18:43:37 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Alayrac",
"Jean-Baptiste",
""
],
[
"Bojanowski",
"Piotr",
""
],
[
"Agrawal",
"Nishant",
""
],
[
"Sivic",
"Josef",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Lacoste-Julien",
"Simon",
""
]
] | TITLE: Unsupervised Learning from Narrated Instruction Videos
ABSTRACT: We address the problem of automatically learning the main steps to complete a
certain task, such as changing a car tire, from a set of narrated instruction
videos. The contributions of this paper are three-fold. First, we develop a new
unsupervised learning approach that takes advantage of the complementary nature
of the input video and the associated narration. The method solves two
clustering problems, one in text and one in video, applied one after the other
and linked by joint constraints to obtain a single coherent sequence of steps
in both modalities. Second, we collect and annotate a new challenging dataset
of real-world instruction videos from the Internet. The dataset contains about
800,000 frames for five different tasks that include complex interactions
between people and objects, and are captured in a variety of indoor and outdoor
settings. Third, we experimentally demonstrate that the proposed method can
automatically discover, in an unsupervised manner, the main steps to achieve
the task and locate the steps in the input videos.
| new_dataset | 0.946448 |
1510.00297 | Akshay Gadde | Aamir Anis, Akshay Gadde, Antonio Ortega | Efficient Sampling Set Selection for Bandlimited Graph Signals Using
Graph Spectral Proxies | 14 pages, 3 figures, 4 tables, Accepted for publication in IEEE
Transactions on Signal Processing | null | 10.1109/TSP.2016.2546233 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of selecting the best sampling set for bandlimited
reconstruction of signals on graphs. A frequency domain representation for
graph signals can be defined using the eigenvectors and eigenvalues of
variation operators that take into account the underlying graph connectivity.
Smoothly varying signals defined on the nodes are of particular interest in
various applications, and tend to be approximately bandlimited in the frequency
basis. Sampling theory for graph signals deals with the problem of choosing the
best subset of nodes for reconstructing a bandlimited signal from its samples.
Most approaches to this problem require a computation of the frequency basis
(i.e., the eigenvectors of the variation operator), followed by a search
procedure using the basis elements. This can be impractical, in terms of
storage and time complexity, for real datasets involving very large graphs. We
circumvent this issue in our formulation by introducing quantities called graph
spectral proxies, defined using the powers of the variation operator, in order
to approximate the spectral content of graph signals. This allows us to
formulate a direct sampling set selection approach that does not require the
computation and storage of the basis elements. We show that our approach also
provides stable reconstruction when the samples are noisy or when the original
signal is only approximately bandlimited. Furthermore, the proposed approach is
valid for any choice of the variation operator, thereby covering a wide range
of graphs and applications. We demonstrate its effectiveness through various
numerical experiments.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2015 16:14:35 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2016 01:45:07 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Anis",
"Aamir",
""
],
[
"Gadde",
"Akshay",
""
],
[
"Ortega",
"Antonio",
""
]
] | TITLE: Efficient Sampling Set Selection for Bandlimited Graph Signals Using
Graph Spectral Proxies
ABSTRACT: We study the problem of selecting the best sampling set for bandlimited
reconstruction of signals on graphs. A frequency domain representation for
graph signals can be defined using the eigenvectors and eigenvalues of
variation operators that take into account the underlying graph connectivity.
Smoothly varying signals defined on the nodes are of particular interest in
various applications, and tend to be approximately bandlimited in the frequency
basis. Sampling theory for graph signals deals with the problem of choosing the
best subset of nodes for reconstructing a bandlimited signal from its samples.
Most approaches to this problem require a computation of the frequency basis
(i.e., the eigenvectors of the variation operator), followed by a search
procedure using the basis elements. This can be impractical, in terms of
storage and time complexity, for real datasets involving very large graphs. We
circumvent this issue in our formulation by introducing quantities called graph
spectral proxies, defined using the powers of the variation operator, in order
to approximate the spectral content of graph signals. This allows us to
formulate a direct sampling set selection approach that does not require the
computation and storage of the basis elements. We show that our approach also
provides stable reconstruction when the samples are noisy or when the original
signal is only approximately bandlimited. Furthermore, the proposed approach is
valid for any choice of the variation operator, thereby covering a wide range
of graphs and applications. We demonstrate its effectiveness through various
numerical experiments.
| no_new_dataset | 0.951774 |
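A sketch of a k-th order spectral proxy: a bandwidth estimate computed from matrix-vector products with the variation operator alone, with no eigendecomposition (NumPy; this follows one common form of the proxy and may differ in detail from the paper's):

```python
import numpy as np

def spectral_proxy(L, f, k):
    """Estimate the highest graph frequency in f as (||L^k f|| / ||f||)^(1/k),
    using only k applications of the variation operator L."""
    g = f.copy()
    for _ in range(k):
        g = L @ g
    return (np.linalg.norm(g) / np.linalg.norm(f)) ** (1.0 / k)

n = 20
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal toy operator
smooth = np.linspace(0.0, 1.0, n)                     # low-frequency signal
wiggly = np.cos(np.pi * np.arange(n))                 # alternating +-1 signal
print(spectral_proxy(L, smooth, 4), spectral_proxy(L, wiggly, 4))  # small vs ~4
```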
1603.00546 | Jan Egger | Jan Egger, Philip Voglreiter, Mark Dokter, Michael Hofmann, Xiaojun
Chen, Wolfram G. Zoller, Dieter Schmalstieg, Alexander Hann | US-Cut: Interactive Algorithm for rapid Detection and Segmentation of
Liver Tumors in Ultrasound Acquisitions | 6 pages, 6 figures, 1 table, 32 references | SPIE Medical Imaging Conference 2016, Paper 9790-47 | 10.1117/12.2216509 | null | cs.CV cs.CE cs.CG cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasound (US) is the most commonly used liver imaging modality worldwide.
It plays an important role in follow-up of cancer patients with liver
metastases. We present an interactive segmentation approach for liver tumors in
US acquisitions. Due to the low image quality and the low contrast between the
tumors and the surrounding tissue in US images, the segmentation is very
challenging. Thus, the clinical practice still relies on manual measurement and
outlining of the tumors in the US images. We target this problem by applying an
interactive segmentation algorithm to the US data, allowing the user to get
real-time feedback of the segmentation results. The algorithm has been
developed and tested hand-in-hand by physicians and computer scientists to make
sure a future practical usage in a clinical setting is feasible. To cover
typical acquisitions from the clinical routine, the approach has been evaluated
with dozens of datasets where the tumors are hyperechoic (brighter), hypoechoic
(darker) or isoechoic (similar) in comparison to the surrounding liver tissue.
Due to the interactive real-time behavior of the approach, it was possible even
in difficult cases to find satisfying segmentations of the tumors within
seconds and without parameter settings, and the average tumor deviation was
only 1.4mm compared with manual measurements. However, the long term goal is to
ease the volumetric acquisition of liver tumors in order to evaluate for
treatment response. Additional aim is the registration of intraoperative US
images via the interactive segmentations to the patient's pre-interventional CT
acquisitions.
| [
{
"version": "v1",
"created": "Wed, 2 Mar 2016 01:42:48 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Egger",
"Jan",
""
],
[
"Voglreiter",
"Philip",
""
],
[
"Dokter",
"Mark",
""
],
[
"Hofmann",
"Michael",
""
],
[
"Chen",
"Xiaojun",
""
],
[
"Zoller",
"Wolfram G.",
""
],
[
"Schmalstieg",
"Dieter",
""
],
[
"Hann",
"Alexander",
""
]
] | TITLE: US-Cut: Interactive Algorithm for rapid Detection and Segmentation of
Liver Tumors in Ultrasound Acquisitions
ABSTRACT: Ultrasound (US) is the most commonly used liver imaging modality worldwide.
It plays an important role in follow-up of cancer patients with liver
metastases. We present an interactive segmentation approach for liver tumors in
US acquisitions. Due to the low image quality and the low contrast between the
tumors and the surrounding tissue in US images, the segmentation is very
challenging. Thus, the clinical practice still relies on manual measurement and
outlining of the tumors in the US images. We target this problem by applying an
interactive segmentation algorithm to the US data, allowing the user to get
real-time feedback of the segmentation results. The algorithm has been
developed and tested hand-in-hand by physicians and computer scientists to make
sure a future practical usage in a clinical setting is feasible. To cover
typical acquisitions from the clinical routine, the approach has been evaluated
with dozens of datasets where the tumors are hyperechoic (brighter), hypoechoic
(darker) or isoechoic (similar) in comparison to the surrounding liver tissue.
Due to the interactive real-time behavior of the approach, it was possible even
in difficult cases to find satisfying segmentations of the tumors within
seconds and without parameter settings, and the average tumor deviation was
only 1.4mm compared with manual measurements. However, the long-term goal is to
ease the volumetric acquisition of liver tumors in order to evaluate treatment
response. An additional aim is the registration of intraoperative US
images via the interactive segmentations to the patient's pre-interventional CT
acquisitions.
| no_new_dataset | 0.951323 |
1603.07302 | Michelle Fritz | Michelle Fritz, Marivi Fernandez-Serra, Jose M. Soler | Optimization of an exchange-correlation density functional for water | 10 pages, 10 figures | null | 10.1063/1.4953081 | null | physics.chem-ph cond-mat.other physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a method, that we call data projection onto parameter space
(DPPS), to optimize an energy functional of the electron density, so that it
reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes
theorem, constrains the optimized functional not to depart unphysically from
existing ab initio functionals. The resulting functional maximizes the
probability of being the "correct" parametrization of a given functional form,
in the sense of Bayes theory. The application of DPPS to water sheds new light
on why density functional theory has performed rather poorly for liquid water,
on what improvements are needed, and on the intrinsic limitations of the
generalized gradient approximation to electron exchange and correlation.
Finally, we present tests of our water-optimized functional, that we call
vdW-DF-w, showing that it performs very well for a variety of condensed water
systems.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 19:02:44 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2016 11:31:21 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Fritz",
"Michelle",
""
],
[
"Fernandez-Serra",
"Marivi",
""
],
[
"Soler",
"Jose M.",
""
]
] | TITLE: Optimization of an exchange-correlation density functional for water
ABSTRACT: We describe a method, that we call data projection onto parameter space
(DPPS), to optimize an energy functional of the electron density, so that it
reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes
theorem, constrains the optimized functional not to depart unphysically from
existing ab initio functionals. The resulting functional maximizes the
probability of being the "correct" parametrization of a given functional form,
in the sense of Bayes theory. The application of DPPS to water sheds new light
on why density functional theory has performed rather poorly for liquid water,
on what improvements are needed, and on the intrinsic limitations of the
generalized gradient approximation to electron exchange and correlation.
Finally, we present tests of our water-optimized functional, that we call
vdW-DF-w, showing that it performs very well for a variety of condensed water
systems.
| no_new_dataset | 0.948106 |
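A linear-Gaussian caricature of the DPPS idea: fit functional parameters to experimental data while a Bayesian prior keeps them close to an existing ab initio parametrization (NumPy; all quantities here are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((30, 5))     # sensitivities d(data)/d(parameters)
p0 = np.ones(5)                      # prior: the existing functional's parameters
y = J @ (p0 + 0.1 * rng.standard_normal(5))   # "experimental" magnitudes
lam = 1.0                            # prior strength (noise-to-prior ratio)

# Posterior mode of the linear-Gaussian model: ridge regression toward p0,
# so the optimized parameters cannot depart unphysically from the prior.
p = np.linalg.solve(J.T @ J + lam * np.eye(5), J.T @ y + lam * p0)
print(p)
```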
1606.08495 | Mihajlo Grbovic | Erik Ordentlich, Lee Yang, Andy Feng, Peter Cnudde, Mihajlo Grbovic,
Nemanja Djuric, Vladan Radosavljevic, Gavin Owens | Network-Efficient Distributed Word2vec Training System for Large
Vocabularies | 10 pages, 2 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Word2vec is a popular family of algorithms for unsupervised training of dense
vector representations of words on large text corpora. The resulting vectors
have been shown to capture semantic relationships among their corresponding
words, and have shown promise in reducing a number of natural language
processing (NLP) tasks to mathematical operations on these vectors. While
heretofore applications of word2vec have centered around vocabularies with a
few million words, wherein the vocabulary is the set of words for which vectors
are simultaneously trained, novel applications are emerging in areas outside of
NLP with vocabularies comprising several hundred million words. Existing word2vec
training systems are impractical for training such large vocabularies as they
either require that the vectors of all vocabulary words be stored in the memory
of a single server or suffer unacceptable training latency due to massive
network data transfer. In this paper, we present a novel distributed, parallel
training system that enables unprecedented practical training of vectors for
vocabularies with several hundred million words on a shared cluster of commodity
servers, using far less network traffic than the existing solutions. We
evaluate the proposed system on a benchmark dataset, showing that the quality
of vectors does not degrade relative to non-distributed training. Finally, for
several quarters, the system has been deployed for the purpose of matching
queries to ads in Gemini, the sponsored search advertising platform at Yahoo,
resulting in significant improvement of business metrics.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 22:00:21 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Ordentlich",
"Erik",
""
],
[
"Yang",
"Lee",
""
],
[
"Feng",
"Andy",
""
],
[
"Cnudde",
"Peter",
""
],
[
"Grbovic",
"Mihajlo",
""
],
[
"Djuric",
"Nemanja",
""
],
[
"Radosavljevic",
"Vladan",
""
],
[
"Owens",
"Gavin",
""
]
] | TITLE: Network-Efficient Distributed Word2vec Training System for Large
Vocabularies
ABSTRACT: Word2vec is a popular family of algorithms for unsupervised training of dense
vector representations of words on large text corpora. The resulting vectors
have been shown to capture semantic relationships among their corresponding
words, and have shown promise in reducing a number of natural language
processing (NLP) tasks to mathematical operations on these vectors. While
heretofore applications of word2vec have centered around vocabularies with a
few million words, wherein the vocabulary is the set of words for which vectors
are simultaneously trained, novel applications are emerging in areas outside of
NLP with vocabularies comprising several hundred million words. Existing word2vec
training systems are impractical for training such large vocabularies as they
either require that the vectors of all vocabulary words be stored in the memory
of a single server or suffer unacceptable training latency due to massive
network data transfer. In this paper, we present a novel distributed, parallel
training system that enables unprecedented practical training of vectors for
vocabularies with several hundred million words on a shared cluster of commodity
servers, using far less network traffic than the existing solutions. We
evaluate the proposed system on a benchmark dataset, showing that the quality
of vectors does not degrade relative to non-distributed training. Finally, for
several quarters, the system has been deployed for the purpose of matching
queries to ads in Gemini, the sponsored search advertising platform at Yahoo,
resulting in significant improvement of business metrics.
| no_new_dataset | 0.944995 |
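A sketch of the column-sharding idea behind such systems: each parameter-server node holds the same slice of every word's vector, so dot products are assembled from per-shard partials and only scalars cross the network (the shard layout and API here are assumptions, not the deployed system's):

```python
import numpy as np

DIM, SHARDS, VOCAB = 300, 4, 10000
bounds = np.linspace(0, DIM, SHARDS + 1).astype(int)

class Shard:
    """One parameter-server node: columns [lo, hi) of every word vector."""
    def __init__(self, lo, hi):
        self.vecs = 0.01 * np.random.randn(VOCAB, hi - lo)

    def partial_dot(self, w1, w2):   # computed server-side, returns a scalar
        return float(self.vecs[w1] @ self.vecs[w2])

shards = [Shard(lo, hi) for lo, hi in zip(bounds[:-1], bounds[1:])]
score = sum(s.partial_dot(17, 42) for s in shards)   # full 300-dim dot product
```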
1606.08534 | Ian Wesley-Smith | Ian Wesley-Smith, Carl T. Bergstrom, Jevin D. West | Static Ranking of Scholarly Papers using Article-Level Eigenfactor
(ALEF) | null | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microsoft Research hosted the 2016 WSDM Cup Challenge based on the Microsoft
Academic Graph. The goal was to provide static rankings for the articles that
make up the graph, with the rankings to be evaluated against those of human
judges. While the Microsoft Academic Graph provided metadata about many aspects
of each scholarly document, we focused more narrowly on citation data and used
this contest as an opportunity to test the Article Level Eigenfactor (ALEF), a
novel citation-based ranking algorithm, and evaluate its performance against
competing algorithms that drew upon multiple facets of the data from a large,
real-world dataset (122M papers and 757M citations). Our final submission to
this contest was scored at 0.676, earning second place.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 01:55:56 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Wesley-Smith",
"Ian",
""
],
[
"Bergstrom",
"Carl T.",
""
],
[
"West",
"Jevin D.",
""
]
] | TITLE: Static Ranking of Scholarly Papers using Article-Level Eigenfactor
(ALEF)
ABSTRACT: Microsoft Research hosted the 2016 WSDM Cup Challenge based on the Microsoft
Academic Graph. The goal was to provide static rankings for the articles that
make up the graph, with the rankings to be evaluated against those of human
judges. While the Microsoft Academic Graph provided metadata about many aspects
of each scholarly document, we focused more narrowly on citation data and used
this contest as an opportunity to test the Article Level Eigenfactor (ALEF), a
novel citation-based ranking algorithm, and evaluate its performance against
competing algorithms that drew upon multiple facets of the data from a large,
real-world dataset (122M papers and 757M citations). Our final submission to
this contest was scored at 0.676, earning second place.
| no_new_dataset | 0.945601 |
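For intuition, a PageRank-style static ranking on a tiny citation graph (NumPy); this is a generic eigenvector ranking, not the ALEF algorithm itself:

```python
import numpy as np

def citation_rank(adj, alpha=0.85, iters=50):
    """adj[i, j] = 1 if paper i cites paper j; score flows along citations."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
    P[out.ravel() == 0] = 1.0 / n          # dangling papers spread evenly
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) / n
    return r

adj = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
print(citation_rank(adj))   # the twice-cited paper (index 2) ranks highest
```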
1606.08821 | Zhenhao Ge | Zhenhao Ge, Aravind Ganapathiraju, Ananth N. Iyer, Scott A. Randal and
Felix I. Wyss | Generation and Pruning of Pronunciation Variants to Improve ASR Accuracy | Interspeech 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech recognition, especially name recognition, is widely used in phone
services such as company directory dialers, stock quote providers or location
finders. It is usually challenging due to pronunciation variations. This paper
proposes an efficient and robust data-driven technique which automatically
learns acceptable word pronunciations and updates the pronunciation dictionary
to build a better lexicon without affecting recognition of other words similar
to the target word. It generalizes well on datasets of various sizes, and
reduces the error rate on a database with 13000+ human names by 42%, compared
to a baseline with regular dictionaries already covering canonical
pronunciations of 97%+ words in names, plus a well-trained
spelling-to-pronunciation (STP) engine.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 18:44:38 GMT"
}
] | 2016-06-29T00:00:00 | [
[
"Ge",
"Zhenhao",
""
],
[
"Ganapathiraju",
"Aravind",
""
],
[
"Iyer",
"Ananth N.",
""
],
[
"Randal",
"Scott A.",
""
],
[
"Wyss",
"Felix I.",
""
]
] | TITLE: Generation and Pruning of Pronunciation Variants to Improve ASR Accuracy
ABSTRACT: Speech recognition, especially name recognition, is widely used in phone
services such as company directory dialers, stock quote providers or location
finders. It is usually challenging due to pronunciation variations. This paper
proposes an efficient and robust data-driven technique which automatically
learns acceptable word pronunciations and updates the pronunciation dictionary
to build a better lexicon without affecting recognition of other words similar
to the target word. It generalizes well on datasets with various sizes, and
reduces the error rate on a database with 13000+ human names by 42%, compared
to a baseline with regular dictionaries already covering canonical
pronunciations of 97%+ words in names, plus a well-trained
spelling-to-pronunciation (STP) engine.
| no_new_dataset | 0.950778 |
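As a toy illustration of the generate-and-prune idea in the record above: keep candidate pronunciations that were decoded often enough for the target word, while rejecting any variant that would collide with an existing pronunciation of a different word. All names, pronunciations, and counts below are hypothetical, and the paper's actual pruning criteria are recognition-accuracy based.

```python
# Toy pruning of candidate pronunciation variants: keep variants that were
# decoded often enough and do not collide with another word's pronunciation.

from collections import Counter

lexicon = {"anna": {"AE N AH", "AA N AH"}, "ana": {"AA N AH"}}
decoded = Counter({("xiomara", "SH OW M AA R AH"): 7,
                   ("xiomara", "Z IY OW M AA R AH"): 1})

def prune(word, decoded, lexicon, min_count=3):
    taken = {p for w, ps in lexicon.items() if w != word for p in ps}
    return {p for (w, p), c in decoded.items()
            if w == word and c >= min_count and p not in taken}

lexicon["xiomara"] = prune("xiomara", decoded, lexicon)
print(lexicon["xiomara"])   # {'SH OW M AA R AH'}: frequent and collision-free
```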
1411.0541 | Baharan Mirzasoleiman | Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause | Distributed Submodular Maximization | null | null | null | null | cs.LG cs.AI cs.DC cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many large-scale machine learning problems--clustering, non-parametric
learning, kernel machines, etc.--require selecting a small yet representative
subset from a large dataset. Such problems can often be reduced to maximizing a
submodular set function subject to various constraints. Classical approaches to
submodular optimization require centralized access to the full dataset, which
is impractical for truly large-scale problems. In this paper, we consider the
problem of submodular function maximization in a distributed fashion. We
develop a simple, two-stage protocol, GreeDi, that is easily implemented using
MapReduce style computations. We theoretically analyze our approach, and show
that under certain natural conditions, performance close to the centralized
approach can be achieved. We begin with monotone submodular maximization
subject to a cardinality constraint, and then extend this approach to obtain
approximation guarantees for (not necessarily monotone) submodular maximization
subject to more general constraints including matroid or knapsack constraints.
In our extensive experiments, we demonstrate the effectiveness of our approach
on several applications, including sparse Gaussian process inference and
exemplar based clustering on tens of millions of examples using Hadoop.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2014 16:03:05 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2016 16:32:35 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Mirzasoleiman",
"Baharan",
""
],
[
"Karbasi",
"Amin",
""
],
[
"Sarkar",
"Rik",
""
],
[
"Krause",
"Andreas",
""
]
] | TITLE: Distributed Submodular Maximization
ABSTRACT: Many large-scale machine learning problems--clustering, non-parametric
learning, kernel machines, etc.--require selecting a small yet representative
subset from a large dataset. Such problems can often be reduced to maximizing a
submodular set function subject to various constraints. Classical approaches to
submodular optimization require centralized access to the full dataset, which
is impractical for truly large-scale problems. In this paper, we consider the
problem of submodular function maximization in a distributed fashion. We
develop a simple, two-stage protocol, GreeDi, that is easily implemented using
MapReduce style computations. We theoretically analyze our approach, and show
that under certain natural conditions, performance close to the centralized
approach can be achieved. We begin with monotone submodular maximization
subject to a cardinality constraint, and then extend this approach to obtain
approximation guarantees for (not necessarily monotone) submodular maximization
subject to more general constraints including matroid or knapsack constraints.
In our extensive experiments, we demonstrate the effectiveness of our approach
on several applications, including sparse Gaussian process inference and
exemplar based clustering on tens of millions of examples using Hadoop.
| no_new_dataset | 0.9463 |
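A compact sketch of the two-stage GreeDi pattern from the abstract above, using set coverage as the monotone submodular objective. The sharding here is simulated in-process; in practice stage 1 runs as parallel MapReduce-style tasks, and the toy data is made up.

```python
# Two-stage distributed greedy (GreeDi-style sketch). f must be a monotone
# submodular set function; here f is coverage of a universe of items.

def greedy(f, candidates, k):
    S = []
    for _ in range(k):
        best = max((e for e in candidates if e not in S),
                   key=lambda e: f(S + [e]) - f(S))   # largest marginal gain
        S.append(best)
    return S

def greedi(f, data, k, num_machines):
    shards = [data[i::num_machines] for i in range(num_machines)]
    shortlist = [e for shard in shards for e in greedy(f, shard, k)]  # stage 1
    return greedy(f, shortlist, k)                                    # stage 2

sets = [{1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {1, 6}]
cover = lambda S: len(set().union(*S)) if S else 0
print(greedi(cover, sets, k=2, num_machines=2))   # e.g. [{3, 4, 5}, {1, 2}]
```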
1510.08160 | Jianan Li | Jianan Li, Xiaodan Liang, ShengMei Shen, Tingfa Xu, Jiashi Feng,
Shuicheng Yan | Scale-aware Fast R-CNN for Pedestrian Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we consider the problem of pedestrian detection in natural
scenes. Intuitively, instances of pedestrians with different spatial scales may
exhibit dramatically different features. Thus, large variance in instance
scales, which results in undesirable large intra-category variance in features,
may severely hurt the performance of modern object instance detection methods.
We argue that this issue can be substantially alleviated by the
divide-and-conquer philosophy. Taking pedestrian detection as an example, we
illustrate how we can leverage this philosophy to develop a Scale-Aware Fast
R-CNN (SAF R-CNN) framework. The model introduces multiple built-in
sub-networks which detect pedestrians with scales from disjoint ranges. Outputs
from all the sub-networks are then adaptively combined to generate the final
detection results that are shown to be robust to large variance in instance
scales, via a gate function defined over the sizes of object proposals.
Extensive evaluations on several challenging pedestrian detection datasets well
demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our
method achieves state-of-the-art performance on Caltech, INRIA, and ETH, and
obtains competitive results on KITTI.
| [
{
"version": "v1",
"created": "Wed, 28 Oct 2015 01:59:14 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Nov 2015 06:08:18 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Jun 2016 09:26:07 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Li",
"Jianan",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Shen",
"ShengMei",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Scale-aware Fast R-CNN for Pedestrian Detection
ABSTRACT: In this work, we consider the problem of pedestrian detection in natural
scenes. Intuitively, instances of pedestrians with different spatial scales may
exhibit dramatically different features. Thus, large variance in instance
scales, which results in undesirable large intra-category variance in features,
may severely hurt the performance of modern object instance detection methods.
We argue that this issue can be substantially alleviated by the
divide-and-conquer philosophy. Taking pedestrian detection as an example, we
illustrate how we can leverage this philosophy to develop a Scale-Aware Fast
R-CNN (SAF R-CNN) framework. The model introduces multiple built-in
sub-networks which detect pedestrians with scales from disjoint ranges. Outputs
from all the sub-networks are then adaptively combined to generate the final
detection results that are shown to be robust to large variance in instance
scales, via a gate function defined over the sizes of object proposals.
Extensive evaluations on several challenging pedestrian detection datasets well
demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our
method achieves state-of-the-art performance on Caltech, INRIA, and ETH, and
obtains competitive results on KITTI.
| no_new_dataset | 0.946646 |
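The key mechanism in the record above is a gate over proposal size that blends two scale-specialized detectors. A numpy sketch with made-up gate parameters and scores (the real sub-networks and gate are learned end to end inside SAF R-CNN):

```python
import numpy as np

# Scale-aware gating sketch: blend a "large-scale" and a "small-scale"
# detector score with a sigmoid gate over proposal height. Toy parameters.

def gate(height, mid=80.0, sharpness=0.1):
    """Weight given to the large-scale sub-network, in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-sharpness * (height - mid)))

def fuse(score_large, score_small, height):
    w = gate(height)
    return w * score_large + (1.0 - w) * score_small

for h in (30, 80, 200):   # pedestrian proposal heights in pixels
    print(h, round(float(fuse(0.9, 0.6, h)), 3))
```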
1604.01792 | Tom Sercu | Tom Sercu, Vaibhava Goel | Advances in Very Deep Convolutional Neural Networks for LVCSR | Proc. Interspeech 2016 | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very deep CNNs with small 3x3 kernels have recently been shown to achieve
very strong performance as acoustic models in hybrid NN-HMM speech recognition
systems. In this paper we investigate how to efficiently scale these models to
larger datasets. Specifically, we address the design choice of pooling and
padding along the time dimension which renders convolutional evaluation of
sequences highly inefficient. We propose a new CNN design without timepadding
and without timepooling, which is slightly suboptimal for accuracy, but has two
significant advantages: it enables sequence training and deployment by allowing
efficient convolutional evaluation of full utterances, and, it allows for batch
normalization to be straightforwardly adopted to CNNs on sequence data. Through
batch normalization, we recover the lost performance from removing the
time-pooling, while keeping the benefit of efficient convolutional evaluation.
We demonstrate the performance of our models both on larger scale data than
before, and after sequence training. Our very deep CNN model sequence trained
on the 2000h switchboard dataset obtains 9.4 word error rate on the Hub5
test-set, matching with a single model the performance of the 2015 IBM system
combination, which was the previous best published result.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 20:07:52 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Jun 2016 00:27:19 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Sercu",
"Tom",
""
],
[
"Goel",
"Vaibhava",
""
]
] | TITLE: Advances in Very Deep Convolutional Neural Networks for LVCSR
ABSTRACT: Very deep CNNs with small 3x3 kernels have recently been shown to achieve
very strong performance as acoustic models in hybrid NN-HMM speech recognition
systems. In this paper we investigate how to efficiently scale these models to
larger datasets. Specifically, we address the design choice of pooling and
padding along the time dimension which renders convolutional evaluation of
sequences highly inefficient. We propose a new CNN design without timepadding
and without timepooling, which is slightly suboptimal for accuracy, but has two
significant advantages: it enables sequence training and deployment by allowing
efficient convolutional evaluation of full utterances, and, it allows for batch
normalization to be straightforwardly adopted to CNNs on sequence data. Through
batch normalization, we recover the lost performance from removing the
time-pooling, while keeping the benefit of efficient convolutional evaluation.
We demonstrate the performance of our models both on larger scale data than
before, and after sequence training. Our very deep CNN model sequence trained
on the 2000h switchboard dataset obtains 9.4 word error rate on the Hub5
test-set, matching with a single model the performance of the 2015 IBM system
combination, which was the previous best published result.
| no_new_dataset | 0.951097 |
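A minimal PyTorch sketch of the design choice described above: convolutions pad and pool only along the frequency axis, never along time, so a full utterance can be evaluated convolutionally and BatchNorm applies cleanly per feature map. Layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Conv block with padding/pooling only along frequency; time uses valid convs.

block = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=(0, 1)),   # pad frequency only
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=(0, 1)),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=(1, 2)),                  # pool frequency only
)

x = torch.randn(1, 1, 200, 40)    # (batch, chan, time, freq) for one utterance
print(block(x).shape)              # time shrinks only via valid convs: 200 -> 196
```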
1606.07103 | Sai Praneeth Suggu | Sai Praneeth Suggu, Kushwanth N. Goutham, Manoj K. Chinnakotla and
Manish Shrivastava | Deep Feature Fusion Network for Answer Quality Prediction in Community
Question Answering | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval, July 21,
2016, Pisa, Italy | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community Question Answering (cQA) forums have become a popular medium for
soliciting direct answers to specific questions of users from experts or other
experienced users on a given topic. However, for a given question, users
sometimes have to sift through a large number of low-quality or irrelevant
answers to find out the answer which satisfies their information need. To
alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict
the quality of an answer posted in response to a forum question. Current AQP
systems either learn models using - a) various hand-crafted features (HCF) or
b) use deep learning (DL) techniques which automatically learn the required
feature representations.
In this paper, we propose a novel approach for AQP known as - "Deep Feature
Fusion Network (DFFN)" which leverages the advantages of both hand-crafted
features and deep learning based systems. Given a question-answer pair along
with its metadata, DFFN independently - a) learns deep features using a
Convolutional Neural Network (CNN) and b) computes hand-crafted features using
various external resources and then combines them using a deep neural network
trained to predict the final answer quality. DFFN achieves state-of-the-art
performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets
and outperforms baseline approaches which individually employ either HCF or DL
based techniques alone.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 20:58:08 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2016 05:54:51 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Suggu",
"Sai Praneeth",
""
],
[
"Goutham",
"Kushwanth N.",
""
],
[
"Chinnakotla",
"Manoj K.",
""
],
[
"Shrivastava",
"Manish",
""
]
] | TITLE: Deep Feature Fusion Network for Answer Quality Prediction in Community
Question Answering
ABSTRACT: Community Question Answering (cQA) forums have become a popular medium for
soliciting direct answers to specific questions of users from experts or other
experienced users on a given topic. However, for a given question, users
sometimes have to sift through a large number of low-quality or irrelevant
answers to find out the answer which satisfies their information need. To
alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict
the quality of an answer posted in response to a forum question. Current AQP
systems either learn models using - a) various hand-crafted features (HCF) or
b) use deep learning (DL) techniques which automatically learn the required
feature representations.
In this paper, we propose a novel approach for AQP known as - "Deep Feature
Fusion Network (DFFN)" which leverages the advantages of both hand-crafted
features and deep learning based systems. Given a question-answer pair along
with its metadata, DFFN independently - a) learns deep features using a
Convolutional Neural Network (CNN) and b) computes hand-crafted features using
various external resources and then combines them using a deep neural network
trained to predict the final answer quality. DFFN achieves state-of-the-art
performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets
and outperforms baseline approaches which individually employ either HCF or DL
based techniques alone.
| no_new_dataset | 0.952397 |
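A toy forward pass showing the fusion idea from the abstract above: concatenate a learned (CNN) representation with hand-crafted features and score answer quality with a small network. Random weights stand in for trained ones, and the feature dimensions are made up.

```python
import numpy as np

# Fusion sketch: concatenate CNN features with hand-crafted features and
# score answer quality with a tiny two-layer network (untrained stand-in).

rng = np.random.default_rng(0)
cnn_feat = rng.normal(size=300)          # pooled CNN representation of a Q/A pair
hcf = np.array([0.7, 12.0, 0.3, 1.0])    # e.g. overlap, answer length, metadata

x = np.concatenate([cnn_feat, hcf])
W1, b1 = rng.normal(size=(64, x.size)) * 0.05, np.zeros(64)
W2, b2 = rng.normal(size=64) * 0.05, 0.0

h = np.tanh(W1 @ x + b1)
quality = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # P(answer is good)
print(round(float(quality), 3))
```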
1606.07285 | Wojciech Samek | Farhad Arbabzadah and Gr\'egoire Montavon and Klaus-Robert M\"uller
and Wojciech Samek | Identifying individual facial expressions by deconstructing a neural
network | 12 pages, 7 figures, Paper accepted for GCPR 2016 | null | null | null | cs.CV cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the problem of explaining predictions of psychological
attributes such as attractiveness, happiness, confidence and intelligence from
face photographs using deep neural networks. Since psychological attribute
datasets typically suffer from small sample sizes, we apply transfer learning
with two base models to avoid overfitting. These models were trained on an age
and gender prediction task, respectively. Using a novel explanation method we
extract heatmaps that highlight the parts of the image most responsible for the
prediction. We further observe that the explanation method provides important
insights into the nature of features of the base model, which allow one to
assess the aptitude of the base model for a given transfer learning task.
Finally, we observe that the multiclass model is more feature rich than its
binary counterpart. The experimental evaluation is performed on the 2222 images
from the 10k US faces dataset containing psychological attribute labels as well
as on a subset of KDEF images.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 12:24:45 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2016 00:41:35 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Arbabzadah",
"Farhad",
""
],
[
"Montavon",
"Grégoire",
""
],
[
"Müller",
"Klaus-Robert",
""
],
[
"Samek",
"Wojciech",
""
]
] | TITLE: Identifying individual facial expressions by deconstructing a neural
network
ABSTRACT: This paper focuses on the problem of explaining predictions of psychological
attributes such as attractiveness, happiness, confidence and intelligence from
face photographs using deep neural networks. Since psychological attribute
datasets typically suffer from small sample sizes, we apply transfer learning
with two base models to avoid overfitting. These models were trained on an age
and gender prediction task, respectively. Using a novel explanation method we
extract heatmaps that highlight the parts of the image most responsible for the
prediction. We further observe that the explanation method provides important
insights into the nature of features of the base model, which allow one to
assess the aptitude of the base model for a given transfer learning task.
Finally, we observe that the multiclass model is more feature rich than its
binary counterpart. The experimental evaluation is performed on the 2222 images
from the 10k US faces dataset containing psychological attribute labels as well
as on a subset of KDEF images.
| no_new_dataset | 0.951594 |
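For intuition about attribute heatmaps like those in the record above, here is a gradient-times-input saliency sketch in PyTorch. This is a simpler stand-in, not the paper's relevance-propagation method, and the toy classifier is untrained.

```python
import torch
import torch.nn as nn

# Gradient x input saliency: a per-pixel heatmap for one output class.
# Stand-in for the paper's explanation method; same interface, simpler math.

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
img = torch.randn(1, 3, 32, 32, requires_grad=True)

score = model(img)[0, 3]           # logit of the attribute/class of interest
score.backward()
heatmap = (img.grad * img).sum(dim=1).squeeze(0).abs()  # aggregate channels
print(heatmap.shape)                # 32 x 32 relevance map
```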
1606.07827 | Dan Xie | Dan Xie and Tianmin Shu and Sinisa Todorovic and Song-Chun Zhu | Modeling and Inferring Human Intents and Latent Functional Objects for
Trajectory Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is about detecting functional objects and inferring human
intentions in surveillance videos of public spaces. People in the videos are
expected to intentionally take shortest paths toward functional objects subject
to obstacles, where people can satisfy certain needs (e.g., a vending machine
can quench thirst), by following one of three possible intent behaviors: reach
a single functional object and stop, or sequentially visit several functional
objects, or initially start moving toward one goal but then change the intent
to move toward another. Since detecting functional objects in low-resolution
surveillance videos is typically unreliable, we call them "dark matter"
characterized by the functionality to attract people. We formulate the
Agent-based Lagrangian Mechanics wherein human trajectories are
probabilistically modeled as motions of agents in many layers of "dark-energy"
fields, where each agent can select a particular force field to affect its
motions, and thus define the minimum-energy Dijkstra path toward the
corresponding source "dark matter". For evaluation, we compiled and annotated a
new dataset. The results demonstrate our effectiveness in predicting human
intent behaviors and trajectories, and localizing functional objects, as well
as discovering distinct functional classes of objects by clustering human
motion behavior in the vicinity of functional objects.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 20:15:12 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Xie",
"Dan",
""
],
[
"Shu",
"Tianmin",
""
],
[
"Todorovic",
"Sinisa",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: Modeling and Inferring Human Intents and Latent Functional Objects for
Trajectory Prediction
ABSTRACT: This paper is about detecting functional objects and inferring human
intentions in surveillance videos of public spaces. People in the videos are
expected to intentionally take shortest paths toward functional objects subject
to obstacles, where people can satisfy certain needs (e.g., a vending machine
can quench thirst), by following one of three possible intent behaviors: reach
a single functional object and stop, or sequentially visit several functional
objects, or initially start moving toward one goal but then change the intent
to move toward another. Since detecting functional objects in low-resolution
surveillance videos is typically unreliable, we call them "dark matter"
characterized by the functionality to attract people. We formulate the
Agent-based Lagrangian Mechanics wherein human trajectories are
probabilistically modeled as motions of agents in many layers of "dark-energy"
fields, where each agent can select a particular force field to affect its
motions, and thus define the minimum-energy Dijkstra path toward the
corresponding source "dark matter". For evaluation, we compiled and annotated a
new dataset. The results demonstrate our effectiveness in predicting human
intent behaviors and trajectories, and localizing functional objects, as well
as discovering distinct functional classes of objects by clustering human
motion behavior in the vicinity of functional objects.
| new_dataset | 0.950915 |
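The abstract above reduces intent-conditioned motion to a minimum-energy Dijkstra path through cost fields. A self-contained sketch on a toy grid of per-cell traversal costs, standing in for an agent moving through a "dark-energy" field toward a functional object:

```python
import heapq

# Minimum-cost Dijkstra path on a grid; high-cost cells act like obstacles.

def dijkstra(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                          # walk back to the start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

field = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]   # 9 = near-obstacle cost
print(dijkstra(field, (0, 0), (2, 2)))
```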
1606.07869 | Dwaipayan Roy | Dwaipayan Roy, Debasis Ganguly, Mandar Mitra, Gareth J.F. Jones | Representing Documents and Queries as Sets of Word Embedded Vectors for
Information Retrieval | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval July 21,
2016, Pisa, Italy | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major difficulty in applying word vector embeddings in IR is in devising an
effective and efficient strategy for obtaining representations of compound
units of text, such as whole documents, (in comparison to the atomic words),
for the purpose of indexing and scoring documents. Instead of striving for a
suitable method for obtaining a single vector representation of a large
document of text, we rather aim for developing a similarity metric that makes
use of the similarities between the individual embedded word vectors in a
document and a query. More specifically, we represent a document and a query as
sets of word vectors, and use a standard notion of similarity measure between
these sets, computed as a function of the similarities between each constituent
word pair from these sets. We then make use of this similarity measure in
combination with standard IR based similarities for document ranking. The
results of our initial experimental investigations show that our proposed
method improves MAP by up to $5.77\%$, in comparison to standard text-based
language model similarity, on the TREC ad-hoc dataset.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2016 04:35:47 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Roy",
"Dwaipayan",
""
],
[
"Ganguly",
"Debasis",
""
],
[
"Mitra",
"Mandar",
""
],
[
"Jones",
"Gareth J. F.",
""
]
] | TITLE: Representing Documents and Queries as Sets of Word Embedded Vectors for
Information Retrieval
ABSTRACT: A major difficulty in applying word vector embeddings in IR is in devising an
effective and efficient strategy for obtaining representations of compound
units of text, such as whole documents, (in comparison to the atomic words),
for the purpose of indexing and scoring documents. Instead of striving for a
suitable method for obtaining a single vector representation of a large
document of text, we rather aim for developing a similarity metric that makes
use of the similarities between the individual embedded word vectors in a
document and a query. More specifically, we represent a document and a query as
sets of word vectors, and use a standard notion of similarity measure between
these sets, computed as a function of the similarities between each constituent
word pair from these sets. We then make use of this similarity measure in
combination with standard IR based similarities for document ranking. The
results of our initial experimental investigations show that our proposed
method improves MAP by up to $5.77\%$, in comparison to standard text-based
language model similarity, on the TREC ad-hoc dataset.
| no_new_dataset | 0.94743 |
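One concrete instance of the set-based similarity described above is to score each embedded query term by its best cosine match in the document and average; the paper treats this family of pairwise aggregates generically and combines the score with standard language-model similarity. Vectors below are random stand-ins for trained embeddings.

```python
import numpy as np

# Set-of-word-vectors similarity sketch: average-of-best-match cosine.

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def set_similarity(query_vecs, doc_vecs):
    return float(np.mean([max(cos(q, d) for d in doc_vecs) for q in query_vecs]))

rng = np.random.default_rng(1)
query = [rng.normal(size=50) for _ in range(3)]    # embedded query terms
doc = [rng.normal(size=50) for _ in range(40)]     # embedded document terms
print(round(set_similarity(query, doc), 3))
```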
1606.07921 | Gonzalo Vaca-Castano | Gonzalo Vaca-Castano | Finding the Topic of a Set of Images | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce the problem of determining the topic that a set of
images is describing, where every topic is represented as a set of words.
Different from other problems like tag assignment or similar, a) we assume
multiple images are used as input instead of single image, b) Input images are
typically not visually related, c) Input images are not necessarily
semantically close, and d) Output word space is unconstrained. In our proposed
solution, visual information of each query image is used to retrieve similar
images with text labels (tags) from an image database. We consider a scenario
where the tags are very noisy and diverse, given that they were obtained by
implicit crowd-sourcing in a database of 1 million images and over seventy
seven thousand tags. The words or tags associated to each query are processed
jointly in a word selection algorithm using random walks that allows to refine
the search topic, rejecting words that are not part of the topic and produce a
set of words that fairly describe the topic. Experiments on a dataset of 300
topics, with up to twenty images per topic, show that our algorithm performs
better than the proposed baseline for any number of query images. We also
present a new Conditional Random Field (CRF) word mapping algorithm that
preserves the semantic similarity of the mapped words, increasing the
performance of the results over the baseline.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2016 15:06:27 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Vaca-Castano",
"Gonzalo",
""
]
] | TITLE: Finding the Topic of a Set of Images
ABSTRACT: In this paper we introduce the problem of determining the topic that a set of
images is describing, where every topic is represented as a set of words.
Different from other problems like tag assignment or similar, a) we assume
multiple images are used as input instead of single image, b) Input images are
typically not visually related, c) Input images are not necessarily
semantically close, and d) Output word space is unconstrained. In our proposed
solution, visual information of each query image is used to retrieve similar
images with text labels (tags) from an image database. We consider a scenario
where the tags are very noisy and diverse, given that they were obtained by
implicit crowd-sourcing in a database of 1 million images and over seventy
seven thousand tags. The words or tags associated to each query are processed
jointly in a word selection algorithm using random walks that allows to refine
the search topic, rejecting words that are not part of the topic and produce a
set of words that fairly describe the topic. Experiments on a dataset of 300
topics, with up to twenty images per topic, show that our algorithm performs
better than the proposed baseline for any number of query images. We also
present a new Conditional Random Field (CRF) word mapping algorithm that
preserves the semantic similarity of the mapped words, increasing the
performance of the results over the baseline.
| no_new_dataset | 0.724139 |
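A sketch of the random-walk word-selection step described above: run a walk with restart over a word-similarity graph and keep words whose stationary mass clears a threshold, rejecting off-topic words. The adjacency weights, restart rate, and threshold below are toy choices, not the paper's.

```python
import numpy as np

# Random walk with restart over a word graph; off-topic words lose mass.

words = ["beach", "sea", "sand", "invoice"]
A = np.array([[0, .9, .8, .0],
              [.9, 0, .7, .1],
              [.8, .7, 0, .0],
              [.0, .1, .0, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
restart = np.full(len(words), 1 / len(words))
p = restart.copy()
for _ in range(100):
    p = 0.85 * P.T @ p + 0.15 * restart    # walk step with restart

topic = [w for w, s in zip(words, p) if s > 0.8 / len(words)]
print(dict(zip(words, p.round(3))), topic)  # "invoice" falls below threshold
```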
1606.08003 | Guy Edward Toh Emerson | Guy Emerson, Ann Copestake | Functional Distributional Semantics | Published at Representation Learning for NLP workshop at ACL 2016,
https://sites.google.com/site/repl4nlp2016/ | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector space models have become popular in distributional semantics, despite
the challenges they face in capturing various semantic phenomena. We propose a
novel probabilistic framework which draws on both formal semantics and recent
advances in machine learning. In particular, we separate predicates from the
entities they refer to, allowing us to perform Bayesian inference based on
logical forms. We describe an implementation of this framework using a
combination of Restricted Boltzmann Machines and feedforward neural networks.
Finally, we demonstrate the feasibility of this approach by training it on a
parsed corpus and evaluating it on established similarity datasets.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2016 07:44:08 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Emerson",
"Guy",
""
],
[
"Copestake",
"Ann",
""
]
] | TITLE: Functional Distributional Semantics
ABSTRACT: Vector space models have become popular in distributional semantics, despite
the challenges they face in capturing various semantic phenomena. We propose a
novel probabilistic framework which draws on both formal semantics and recent
advances in machine learning. In particular, we separate predicates from the
entities they refer to, allowing us to perform Bayesian inference based on
logical forms. We describe an implementation of this framework using a
combination of Restricted Boltzmann Machines and feedforward neural networks.
Finally, we demonstrate the feasibility of this approach by training it on a
parsed corpus and evaluating it on established similarity datasets.
| no_new_dataset | 0.94625 |
1606.08057 | Lawrence Jackel | Artem Provodin, Liila Torabi, Beat Flepp, Yann LeCun, Michael Sergio,
L. D. Jackel, Urs Muller, Jure Zbontar | Fast Incremental Learning for Off-Road Robot Navigation | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A promising approach to autonomous driving is machine learning. In such
systems, training datasets are created that capture the sensory input to a
vehicle as well as the desired response. A disadvantage of using a learned
navigation system is that the learning process itself may require a huge number
of training examples and a large amount of computing. To avoid the need to
collect a large training set of driving examples, we describe a system that
takes advantage of the huge number of training examples provided by ImageNet,
but is able to adapt quickly using a small training set for the specific
driving environment.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2016 17:31:02 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Provodin",
"Artem",
""
],
[
"Torabi",
"Liila",
""
],
[
"Flepp",
"Beat",
""
],
[
"LeCun",
"Yann",
""
],
[
"Sergio",
"Michael",
""
],
[
"Jackel",
"L. D.",
""
],
[
"Muller",
"Urs",
""
],
[
"Zbontar",
"Jure",
""
]
] | TITLE: Fast Incremental Learning for Off-Road Robot Navigation
ABSTRACT: A promising approach to autonomous driving is machine learning. In such
systems, training datasets are created that capture the sensory input to a
vehicle as well as the desired response. A disadvantage of using a learned
navigation system is that the learning process itself may require a huge number
of training examples and a large amount of computing. To avoid the need to
collect a large training set of driving examples, we describe a system that
takes advantage of the huge number of training examples provided by ImageNet,
but is able to adapt quickly using a small training set for the specific
driving environment.
| no_new_dataset | 0.927034 |
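The standard recipe behind the adaptation described above is to freeze an ImageNet-pretrained backbone and train only a small head on the few environment-specific examples. A PyTorch/torchvision sketch (torchvision >= 0.13 weights API; the head size and steering labels are hypothetical, not the paper's system):

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze pretrained ImageNet features; train only a small new head.

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                          # keep ImageNet features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # e.g. left/straight/right

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # small batch of environment images (random here)
y = torch.randint(0, 3, (8,))
opt.zero_grad()
loss = loss_fn(backbone(x), y)
loss.backward()
opt.step()
print(float(loss))
```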
1606.08132 | Iva Bojic | Iva Bojic, Alexander Belyi, Carlo Ratti, Stanislav Sobolevsky | Scaling of foreign attractiveness for countries and states | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People's behavior on online social networks, which store geo-tagged
information showing where people were or are at the moment, can provide
information about their offline life as well. In this paper we present one
possible research direction that can be taken using Flickr dataset of publicly
available geo-tagged media objects (e.g., photographs, videos). Namely, our
focus is on investigating attractiveness of countries or smaller large-scale
composite regions (e.g., US states) for foreign visitors where attractiveness
is defined as the absolute number of media objects taken in a certain state or
country by its foreign visitors compared to its population size. We also
consider it together with attractiveness of the destination for the
international migration, measured through publicly available dataset provided
by United Nations. By having those two datasets, we are able to look at
attractiveness from two different perspectives: short-term and long-term one.
As our previous study showed, city attractiveness for Spanish cities
follows a superlinear trend; here we want to see if the same law is also
applicable to country/state (i.e., composite regions) attractiveness. Finally,
we provide one possible explanation for the obtained results.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 05:54:30 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Bojic",
"Iva",
""
],
[
"Belyi",
"Alexander",
""
],
[
"Ratti",
"Carlo",
""
],
[
"Sobolevsky",
"Stanislav",
""
]
] | TITLE: Scaling of foreign attractiveness for countries and states
ABSTRACT: People's behavior on online social networks, which store geo-tagged
information showing where people were or are at the moment, can provide
information about their offline life as well. In this paper we present one
possible research direction that can be taken using the Flickr dataset of publicly
available geo-tagged media objects (e.g., photographs, videos). Namely, our
focus is on investigating attractiveness of countries or smaller large-scale
composite regions (e.g., US states) for foreign visitors where attractiveness
is defined as the absolute number of media objects taken in a certain state or
country by its foreign visitors compared to its population size. We also
consider it together with attractiveness of the destination for the
international migration, measured through publicly available dataset provided
by United Nations. By having those two datasets, we are able to look at
attractiveness from two different perspectives: short-term and long-term one.
As our previous study showed, city attractiveness for Spanish cities
follows a superlinear trend; here we want to see if the same law is also
applicable to country/state (i.e., composite regions) attractiveness. Finally,
we provide one possible explanation for the obtained results.
| no_new_dataset | 0.937954 |
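Scaling claims of this kind are usually checked by fitting attractiveness ~ population^beta in log-log space; beta above 1 means superlinear, below 1 sublinear. A numpy sketch on made-up numbers:

```python
import numpy as np

# Estimate the scaling exponent beta via OLS on log-log data (toy values).

population = np.array([1e6, 5e6, 2e7, 8e7, 3e8])
attractiveness = np.array([4e3, 1.5e4, 5e4, 1.6e5, 4.5e5])  # photo-count proxy

beta, log_c = np.polyfit(np.log(population), np.log(attractiveness), 1)
print(f"beta = {beta:.2f} ({'super' if beta > 1 else 'sub'}linear)")
```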
1606.08270 | Naomi Saphra | Naomi Saphra and Adam Lopez | Evaluating Informal-Domain Word Representations With UrbanDictionary | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Existing corpora for intrinsic evaluation are not targeted towards tasks in
informal domains such as Twitter or news comment forums. We want to test
whether a representation of informal words fulfills the promise of eliding
explicit text normalization as a preprocessing step. One possible evaluation
metric for such domains is the proximity of spelling variants. We propose how
such a metric might be computed and how a spelling variant dataset can be
collected using UrbanDictionary.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 13:39:54 GMT"
}
] | 2016-06-28T00:00:00 | [
[
"Saphra",
"Naomi",
""
],
[
"Lopez",
"Adam",
""
]
] | TITLE: Evaluating Informal-Domain Word Representations With UrbanDictionary
ABSTRACT: Existing corpora for intrinsic evaluation are not targeted towards tasks in
informal domains such as Twitter or news comment forums. We want to test
whether a representation of informal words fulfills the promise of eliding
explicit text normalization as a preprocessing step. One possible evaluation
metric for such domains is the proximity of spelling variants. We propose how
such a metric might be computed and how a spelling variant dataset can be
collected using UrbanDictionary.
| new_dataset | 0.754259 |
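The proposed metric reduces to averaging embedding similarity over known spelling-variant pairs. A sketch with random stand-in vectors (a trained informal-domain embedding should score well above this random baseline; the variant pairs are illustrative):

```python
import numpy as np

# Proximity-of-variants metric: mean cosine similarity over variant pairs.

rng = np.random.default_rng(2)
emb = {w: rng.normal(size=100) for w in ["tomorrow", "tmrw", "because", "bc"]}
variant_pairs = [("tomorrow", "tmrw"), ("because", "bc")]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score = np.mean([cos(emb[a], emb[b]) for a, b in variant_pairs])
print(round(float(score), 3))   # near 0 for random vectors; higher is better
```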
1601.02306 | Iva Bojic | Iva Bojic, Ivana Nizetic-Kosovic, Alexander Belyi, Vedran Podobnik,
Stanislav Sobolevsky, Carlo Ratti | Sublinear scaling of country attractiveness observed from Flickr dataset | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of people who decide to share their photographs publicly increases
every day, consequently making available new almost real-time insights of human
behavior while traveling. Rather than having this statistic once a month or
yearly, urban planners and touristic workers now can make decisions almost
simultaneously with the emergence of new events. Moreover, these datasets can
be used not only to compare how popular different touristic places are, but
also predict how popular they should be taking into an account their
characteristics. In this paper we investigate how country attractiveness scales
with its population and size using number of foreign users taking photographs,
which is observed from Flickr dataset, as a proxy for attractiveness. The
results showed two things: to a certain extent country attractiveness scales
with population, but does not with its size; and unlike in case of Spanish
cities, country attractiveness scales sublinearly with population, and not
superlinearly.
| [
{
"version": "v1",
"created": "Mon, 11 Jan 2016 02:41:20 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Bojic",
"Iva",
""
],
[
"Nizetic-Kosovic",
"Ivana",
""
],
[
"Belyi",
"Alexander",
""
],
[
"Podobnik",
"Vedran",
""
],
[
"Sobolevsky",
"Stanislav",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Sublinear scaling of country attractiveness observed from Flickr dataset
ABSTRACT: The number of people who decide to share their photographs publicly increases
every day, consequently making available new, almost real-time insights into human
behavior while traveling. Rather than having this statistic once a month or
yearly, urban planners and touristic workers now can make decisions almost
simultaneously with the emergence of new events. Moreover, these datasets can
be used not only to compare how popular different touristic places are, but
also to predict how popular they should be, taking into account their
characteristics. In this paper we investigate how country attractiveness scales
with its population and size using the number of foreign users taking photographs,
which is observed from the Flickr dataset, as a proxy for attractiveness. The
results showed two things: to a certain extent country attractiveness scales
with population, but does not with its size; and unlike in the case of Spanish
cities, country attractiveness scales sublinearly with population, and not
superlinearly.
| no_new_dataset | 0.942718 |
1602.06468 | Yuyu Zhang | Yuyu Zhang, Mohammad Taha Bahadori, Hang Su, Jimeng Sun | FLASH: Fast Bayesian Optimization for Data Analytic Pipelines | 21 pages, KDD 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern data science relies on data analytic pipelines to organize
interdependent computational steps. Such analytic pipelines often involve
different algorithms across multiple steps, each with its own hyperparameters.
To achieve the best performance, it is often critical to select optimal
algorithms and to set appropriate hyperparameters, which requires large
computational efforts. Bayesian optimization provides a principled way for
searching optimal hyperparameters for a single algorithm. However, many
challenges remain in solving pipeline optimization problems with
high-dimensional and highly conditional search space. In this work, we propose
Fast LineAr SearcH (FLASH), an efficient method for tuning analytic pipelines.
FLASH is a two-layer Bayesian optimization framework, which firstly uses a
parametric model to select promising algorithms, then computes a nonparametric
model to fine-tune hyperparameters of the promising algorithms. FLASH also
includes an effective caching algorithm which can further accelerate the search
process. Extensive experiments on a number of benchmark datasets have
demonstrated that FLASH significantly outperforms previous state-of-the-art
methods in both search speed and accuracy. Using 50% of the time budget, FLASH
achieves up to 20% improvement on test error rate compared to the baselines.
FLASH also yields state-of-the-art performance on a real-world application for
healthcare predictive modeling.
| [
{
"version": "v1",
"created": "Sat, 20 Feb 2016 21:56:49 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2016 02:06:27 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Jun 2016 01:28:23 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Zhang",
"Yuyu",
""
],
[
"Bahadori",
"Mohammad Taha",
""
],
[
"Su",
"Hang",
""
],
[
"Sun",
"Jimeng",
""
]
] | TITLE: FLASH: Fast Bayesian Optimization for Data Analytic Pipelines
ABSTRACT: Modern data science relies on data analytic pipelines to organize
interdependent computational steps. Such analytic pipelines often involve
different algorithms across multiple steps, each with its own hyperparameters.
To achieve the best performance, it is often critical to select optimal
algorithms and to set appropriate hyperparameters, which requires large
computational efforts. Bayesian optimization provides a principled way for
searching optimal hyperparameters for a single algorithm. However, many
challenges remain in solving pipeline optimization problems with
high-dimensional and highly conditional search space. In this work, we propose
Fast LineAr SearcH (FLASH), an efficient method for tuning analytic pipelines.
FLASH is a two-layer Bayesian optimization framework, which firstly uses a
parametric model to select promising algorithms, then computes a nonparametric
model to fine-tune hyperparameters of the promising algorithms. FLASH also
includes an effective caching algorithm which can further accelerate the search
process. Extensive experiments on a number of benchmark datasets have
demonstrated that FLASH significantly outperforms previous state-of-the-art
methods in both search speed and accuracy. Using 50% of the time budget, FLASH
achieves up to 20% improvement on test error rate compared to the baselines.
FLASH also yields state-of-the-art performance on a real-world application for
healthcare predictive modeling.
| no_new_dataset | 0.946349 |
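A deliberately tiny caricature of the two-layer shape described above: first screen algorithm combinations with cheap probe evaluations, then fine-tune hyperparameters only for the promising combination. FLASH proper fits a parametric model over pipeline choices, a nonparametric model for hyperparameters, and adds caching; everything below (the steps, the synthetic error surface) is made up.

```python
import itertools
import random

# Two-layer pipeline tuning sketch: screen combos, then fine-tune one.

random.seed(0)
steps = {"impute": ["mean", "knn"], "clf": ["logreg", "tree"]}

def error(combo, hp):                       # synthetic pipeline "evaluation"
    base = {("mean", "logreg"): .30, ("mean", "tree"): .25,
            ("knn", "logreg"): .28, ("knn", "tree"): .22}[combo]
    return base + abs(hp - 0.4) * 0.1 + random.gauss(0, .01)

# Layer 1: probe each algorithm combination once at a default hyperparameter.
combos = list(itertools.product(*steps.values()))
best_combo = min(combos, key=lambda c: error(c, hp=0.5))

# Layer 2: fine-tune the hyperparameter of the promising combination only.
best_hp = min((random.random() for _ in range(30)),
              key=lambda h: error(best_combo, h))
print(best_combo, round(best_hp, 2))
```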
1603.01547 | Ondrej Bajgar | Rudolf Kadlec, Martin Schmid, Ondrej Bajgar and Jan Kleindienst | Text Understanding with the Attention Sum Reader Network | Presented at ACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several large cloze-style context-question-answer datasets have been
introduced recently: the CNN and Daily Mail news data and the Children's Book
Test. Thanks to the size of these datasets, the associated text comprehension
task is well suited for deep-learning techniques that currently seem to
outperform all alternative approaches. We present a new, simple model that uses
attention to directly pick the answer from the context as opposed to computing
the answer using a blended representation of words in the document as is usual
in similar models. This makes the model particularly suitable for
question-answering problems where the answer is a single word from the
document. Ensemble of our models sets new state of the art on all evaluated
datasets.
| [
{
"version": "v1",
"created": "Fri, 4 Mar 2016 17:32:42 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jun 2016 13:04:47 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Kadlec",
"Rudolf",
""
],
[
"Schmid",
"Martin",
""
],
[
"Bajgar",
"Ondrej",
""
],
[
"Kleindienst",
"Jan",
""
]
] | TITLE: Text Understanding with the Attention Sum Reader Network
ABSTRACT: Several large cloze-style context-question-answer datasets have been
introduced recently: the CNN and Daily Mail news data and the Children's Book
Test. Thanks to the size of these datasets, the associated text comprehension
task is well suited for deep-learning techniques that currently seem to
outperform all alternative approaches. We present a new, simple model that uses
attention to directly pick the answer from the context as opposed to computing
the answer using a blended representation of words in the document as is usual
in similar models. This makes the model particularly suitable for
question-answering problems where the answer is a single word from the
document. Ensemble of our models sets new state of the art on all evaluated
datasets.
| no_new_dataset | 0.935817 |
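The attention-sum mechanism from the record above is simple enough to show directly: take softmax attention over document tokens, then sum the mass landing on every occurrence of each candidate answer. Token scores below are toy numbers; the model derives them from recurrent encodings of the document and query.

```python
import numpy as np

# Attention-sum sketch: pooled attention over candidate occurrences.

doc = ["mary", "went", "to", "the", "garden", "mary", "smiled"]
scores = np.array([2.0, 0.1, 0.0, 0.0, 1.5, 1.8, 0.2])   # toy match scores

att = np.exp(scores) / np.exp(scores).sum()               # softmax attention
candidates = {"mary", "garden"}
answer = max(candidates,
             key=lambda w: att[[i for i, t in enumerate(doc) if t == w]].sum())
print(answer)   # "mary": its two occurrences pool their attention mass
```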
1606.07239 | Samuel St-Jean | Samuel St-Jean, Pierrick Coup\'e and Maxime Descoteaux | Non Local Spatial and Angular Matching : Enabling higher spatial
resolution diffusion MRI datasets through adaptive denoising | Code available : https://github.com/samuelstjean/nlsam Datasets
available : https://github.com/samuelstjean/nlsam_data, Medical Image
Analysis, 2016 | Medical Image Analysis , Volume 32 , 115 - 130, 2016 | 10.1016/j.media.2016.02.010 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion magnetic resonance imaging datasets suffer from low Signal-to-Noise
Ratio, especially at high b-values. Acquiring data at high b-values contains
relevant information and is now of great interest for microstructural and
connectomics studies. High noise levels bias the measurements due to the
non-Gaussian nature of the noise, which in turn can lead to a false and biased
estimation of the diffusion parameters. Additionally, the usage of in-plane
acceleration techniques during the acquisition leads to a spatially varying
noise distribution, which depends on the parallel acceleration method
implemented on the scanner. This paper proposes a novel diffusion MRI denoising
technique that can be used on all existing data, without adding to the scanning
time. We first apply a statistical framework to convert the noise to Gaussian
distributed noise, effectively removing the bias. We then introduce a spatially
and angular adaptive denoising technique, the Non Local Spatial and Angular
Matching (NLSAM) algorithm. Each volume is first decomposed in small 4D
overlapping patches to capture the structure of the diffusion data and a
dictionary of atoms is learned on those patches. A local sparse decomposition
is then found by bounding the reconstruction error with the local noise
variance. We compare against three other state-of-the-art denoising methods and
show quantitative local and connectivity results on a synthetic phantom and on
an in-vivo high resolution dataset. Overall, our method restores perceptual
information, removes the noise bias in common diffusion metrics, restores the
extracted peaks coherence and improves reproducibility of tractography. Our
work paves the way for higher spatial resolution acquisition of diffusion MRI
datasets, which could in turn reveal new anatomical details that are not
discernible at the spatial resolution currently used by the diffusion MRI
community.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 09:28:29 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"St-Jean",
"Samuel",
""
],
[
"Coupé",
"Pierrick",
""
],
[
"Descoteaux",
"Maxime",
""
]
] | TITLE: Non Local Spatial and Angular Matching : Enabling higher spatial
resolution diffusion MRI datasets through adaptive denoising
ABSTRACT: Diffusion magnetic resonance imaging datasets suffer from low Signal-to-Noise
Ratio, especially at high b-values. Acquiring data at high b-values contains
relevant information and is now of great interest for microstructural and
connectomics studies. High noise levels bias the measurements due to the
non-Gaussian nature of the noise, which in turn can lead to a false and biased
estimation of the diffusion parameters. Additionally, the usage of in-plane
acceleration techniques during the acquisition leads to a spatially varying
noise distribution, which depends on the parallel acceleration method
implemented on the scanner. This paper proposes a novel diffusion MRI denoising
technique that can be used on all existing data, without adding to the scanning
time. We first apply a statistical framework to convert the noise to Gaussian
distributed noise, effectively removing the bias. We then introduce a spatially
and angular adaptive denoising technique, the Non Local Spatial and Angular
Matching (NLSAM) algorithm. Each volume is first decomposed in small 4D
overlapping patches to capture the structure of the diffusion data and a
dictionary of atoms is learned on those patches. A local sparse decomposition
is then found by bounding the reconstruction error with the local noise
variance. We compare against three other state-of-the-art denoising methods and
show quantitative local and connectivity results on a synthetic phantom and on
an in-vivo high resolution dataset. Overall, our method restores perceptual
information, removes the noise bias in common diffusion metrics, restores the
extracted peaks coherence and improves reproducibility of tractography. Our
work paves the way for higher spatial resolution acquisition of diffusion MRI
datasets, which could in turn reveal new anatomical details that are not
discernible at the spatial resolution currently used by the diffusion MRI
community.
| no_new_dataset | 0.955899 |
1606.07496 | Roberto Camacho Barranco | Roberto Camacho Barranco (1), Laura M. Rodriguez (1), Rebecca Urbina
(1), and M. Shahriar Hossain (1) ((1) The University of Texas at El Paso) | Is a Picture Worth Ten Thousand Words in a Review Dataset? | 10 pages, 11 figures, "for associated results, see
http://auto-captioning.herokuapp.com/" "submitted to DLRS 2016
workshop" | null | null | null | cs.CV cs.CL cs.IR cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While textual reviews have become prominent in many recommendation-based
systems, automated frameworks to provide relevant visual cues against text
reviews where pictures are not available is a new form of task confronted by
data mining and machine learning researchers. Suggestions of pictures that are
relevant to the content of a review could significantly benefit the users by
increasing the effectiveness of a review. We propose a deep learning-based
framework to automatically: (1) tag the images available in a review dataset,
(2) generate a caption for each image that does not have one, and (3) enhance
each review by recommending relevant images that might not be uploaded by the
corresponding reviewer. We evaluate the proposed framework using the Yelp
Challenge Dataset. While a subset of the images in this particular dataset are
correctly captioned, the majority of the pictures do not have any associated
text. Moreover, there is no mapping between reviews and images. Each image has
a corresponding business-tag where the picture was taken, though. The overall
data setting and unavailability of crucial pieces required for a mapping make
the problem of recommending images for reviews a major challenge. Qualitative
and quantitative evaluations indicate that our proposed framework provides high
quality enhancements through automatic captioning, tagging, and recommendation
for mapping reviews and images.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 22:04:08 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Barranco",
"Roberto Camacho",
"",
"The University of Texas at El Paso"
],
[
"Rodriguez",
"Laura M.",
"",
"The University of Texas at El Paso"
],
[
"Urbina",
"Rebecca",
"",
"The University of Texas at El Paso"
],
[
"Hossain",
"M. Shahriar",
"",
"The University of Texas at El Paso"
]
] | TITLE: Is a Picture Worth Ten Thousand Words in a Review Dataset?
ABSTRACT: While textual reviews have become prominent in many recommendation-based
systems, automated frameworks to provide relevant visual cues against text
reviews where pictures are not available is a new form of task confronted by
data mining and machine learning researchers. Suggestions of pictures that are
relevant to the content of a review could significantly benefit the users by
increasing the effectiveness of a review. We propose a deep learning-based
framework to automatically: (1) tag the images available in a review dataset,
(2) generate a caption for each image that does not have one, and (3) enhance
each review by recommending relevant images that might not be uploaded by the
corresponding reviewer. We evaluate the proposed framework using the Yelp
Challenge Dataset. While a subset of the images in this particular dataset are
correctly captioned, the majority of the pictures do not have any associated
text. Moreover, there is no mapping between reviews and images. Each image has
a corresponding business-tag where the picture was taken, though. The overall
data setting and unavailability of crucial pieces required for a mapping make
the problem of recommending images for reviews a major challenge. Qualitative
and quantitative evaluations indicate that our proposed framework provides high
quality enhancements through automatic captioning, tagging, and recommendation
for mapping reviews and images.
| no_new_dataset | 0.949153 |
1606.07550 | Rok Sosic | Jure Leskovec and Rok Sosic | SNAP: A General Purpose Network Analysis and Graph Mining Library | null | null | null | null | cs.SI cs.DB physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large networks are becoming a widely used abstraction for studying complex
systems in a broad set of disciplines, ranging from social network analysis to
molecular biology and neuroscience. Despite an increasing need to analyze and
manipulate large networks, only a limited number of tools are available for
this task.
Here, we describe Stanford Network Analysis Platform (SNAP), a
general-purpose, high-performance system that provides easy to use, high-level
operations for analysis and manipulation of large networks. We present SNAP
functionality, describe its implementational details, and give performance
benchmarks. SNAP has been developed for single big-memory machines and it
balances the trade-off between maximum performance, compact in-memory graph
representation, and the ability to handle dynamic graphs where nodes and edges
are being added or removed over time. SNAP can process massive networks with
hundreds of millions of nodes and billions of edges. SNAP offers over 140
different graph algorithms that can efficiently manipulate large graphs,
calculate structural properties, generate regular and random graphs, and handle
attributes and meta-data on nodes and edges. Besides being able to handle large
graphs, an additional strength of SNAP is that networks and their attributes
are fully dynamic, they can be modified during the computation at low cost.
SNAP is provided as an open source library in C++ as well as a module in
Python.
We also describe the Stanford Large Network Dataset, a set of social and
information real-world networks and datasets, which we make publicly available.
The collection is a complementary resource to our SNAP software and is widely
used for development and benchmarking of graph analytics algorithms.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 03:17:12 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Sosic",
"Rok",
""
]
] | TITLE: SNAP: A General Purpose Network Analysis and Graph Mining Library
ABSTRACT: Large networks are becoming a widely used abstraction for studying complex
systems in a broad set of disciplines, ranging from social network analysis to
molecular biology and neuroscience. Despite an increasing need to analyze and
manipulate large networks, only a limited number of tools are available for
this task.
Here, we describe Stanford Network Analysis Platform (SNAP), a
general-purpose, high-performance system that provides easy to use, high-level
operations for analysis and manipulation of large networks. We present SNAP
functionality, describe its implementational details, and give performance
benchmarks. SNAP has been developed for single big-memory machines and it
balances the trade-off between maximum performance, compact in-memory graph
representation, and the ability to handle dynamic graphs where nodes and edges
are being added or removed over time. SNAP can process massive networks with
hundreds of millions of nodes and billions of edges. SNAP offers over 140
different graph algorithms that can efficiently manipulate large graphs,
calculate structural properties, generate regular and random graphs, and handle
attributes and meta-data on nodes and edges. Besides being able to handle large
graphs, an additional strength of SNAP is that networks and their attributes
are fully dynamic, they can be modified during the computation at low cost.
SNAP is provided as an open source library in C++ as well as a module in
Python.
We also describe the Stanford Large Network Dataset, a set of social and
information real-world networks and datasets, which we make publicly available.
The collection is a complementary resource to our SNAP software and is widely
used for development and benchmarking of graph analytics algorithms.
| new_dataset | 0.659707 |
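For orientation, a minimal snap.py session in the style of the SNAP tutorial. The calls below (GenRndGnm, GetClustCf, GetMxWcc) exist in the snap-stanford Python package, but treat the snippet as a sketch since interfaces vary slightly across SNAP versions.

```python
import snap  # Stanford SNAP Python module (snap-stanford package)

# Generate a random directed graph and compute a few structural properties.
G = snap.GenRndGnm(snap.PNGraph, 1000, 5000)   # 1000 nodes, 5000 edges
print("nodes:", G.GetNodes(), "edges:", G.GetEdges())
print("clustering coefficient:", snap.GetClustCf(G))

mx_wcc = snap.GetMxWcc(G)                      # largest weakly connected component
print("largest WCC size:", mx_wcc.GetNodes())
```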
1606.07565 | Daniel Cohen | Daniel Cohen, Qingyao Ai, W. Bruce Croft | Adaptability of Neural Networks on Varying Granularity IR Tasks | 4 pages, Neu-IR'16 SIGIR Workshop on Neural Information Retrieval,
July 21, 2016, Pisa, Italy | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work in Information Retrieval (IR) using Deep Learning models has
yielded state of the art results on a variety of IR tasks. Deep neural networks
(DNN) are capable of learning ideal representations of data during the training
process, removing the need for independently extracting features. However, the
structures of these DNNs are often tailored to perform on specific datasets. In
addition, IR tasks deal with text at varying levels of granularity from single
factoids to documents containing thousands of words. In this paper, we examine
the role of granularity in the performance of common state-of-the-art DNN
structures in IR.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 04:40:48 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Cohen",
"Daniel",
""
],
[
"Ai",
"Qingyao",
""
],
[
"Croft",
"W. Bruce",
""
]
] | TITLE: Adaptability of Neural Networks on Varying Granularity IR Tasks
ABSTRACT: Recent work in Information Retrieval (IR) using Deep Learning models has
yielded state of the art results on a variety of IR tasks. Deep neural networks
(DNN) are capable of learning ideal representations of data during the training
process, removing the need for independently extracting features. However, the
structures of these DNNs are often tailored to perform on specific datasets. In
addition, IR tasks deal with text at varying levels of granularity from single
factoids to documents containing thousands of words. In this paper, we examine
the role of granularity in the performance of common state-of-the-art DNN
structures in IR.
| no_new_dataset | 0.953101 |
1606.07575 | Arash Shahriari | Arash Shahriari | Multipartite Ranking-Selection of Low-Dimensional Instances by
Supervised Projection to High-Dimensional Space | 15 pages, 1 figure, 2 tables, 3 algorithms, 1 appendix | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pruning of redundant or irrelevant instances of data is a key to every
successful solution for pattern recognition. In this paper, we present a novel
ranking-selection framework for low-length but highly correlated instances.
Instead of working in the low-dimensional instance space, we learn a supervised
projection to high-dimensional space spanned by the number of classes in the
dataset under study. Imposing higher distinctions by exposing the notion of
labels to the instances lets us deploy one-versus-all ranking for each
individual class and select quality instances via adaptive thresholding of
the overall scores. To prove the efficiency of our paradigm, we employ it for
the purpose of texture understanding which is a hard recognition challenge due
to high similarity of texture pixels and low dimensionality of their color
features. Our experiments show considerable improvements in recognition
performance over other local descriptors on several publicly available
datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 06:15:45 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Shahriari",
"Arash",
""
]
] | TITLE: Multipartite Ranking-Selection of Low-Dimensional Instances by
Supervised Projection to High-Dimensional Space
ABSTRACT: Pruning of redundant or irrelevant instances of data is a key to every
successful solution for pattern recognition. In this paper, we present a novel
ranking-selection framework for low-length but highly correlated instances.
Instead of working in the low-dimensional instance space, we learn a supervised
projection to high-dimensional space spanned by the number of classes in the
dataset under study. Imposing higher distinctions by exposing the notion of
labels to the instances lets us deploy one-versus-all ranking for each
individual class and select quality instances via adaptive thresholding of
the overall scores. To prove the efficiency of our paradigm, we employ it for
the purpose of texture understanding which is a hard recognition challenge due
to high similarity of texture pixels and low dimensionality of their color
features. Our experiments show considerable improvements in recognition
performance over other local descriptors on several publicly available
datasets.
| no_new_dataset | 0.953966 |
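As a rough illustration of the ranking-selection idea above — project instances into a space spanned by the class scores, rank them one-versus-all per class, and keep those passing an adaptive threshold — the sketch below uses a multinomial logistic model as the supervised projection and a mean-plus-std threshold. Both choices are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch of supervised projection + per-class ranking-selection.
# Assumes three or more classes so decision_function returns (n, C) scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_select(X, y, keep_std=0.5):
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.decision_function(X)           # (n, C): one-vs-all scores
    selected = set()
    for c in range(scores.shape[1]):
        s = scores[:, c]
        thr = s.mean() + keep_std * s.std()     # adaptive per-class threshold
        selected.update(np.where(s > thr)[0])   # keep high-ranking instances
    return sorted(selected)
```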
1606.07783 | Ngoc Thang Vu | Ngoc Thang Vu | Sequential Convolutional Neural Networks for Slot Filling in Spoken
Language Understanding | Accepted at Interspeech 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the usage of convolutional neural networks (CNNs) for the slot
filling task in spoken language understanding. We propose a novel CNN
architecture for sequence labeling which takes into account the previous
context words with preserved order information and pays special attention to
the current word with its surrounding context. Moreover, it combines the
information from the past and the future words for classification. Our proposed
CNN architecture outperforms even the previously best ensembling recurrent
neural network model and achieves state-of-the-art results with an F1-score of
95.61% on the ATIS benchmark dataset without using any additional linguistic
knowledge and resources.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 18:35:56 GMT"
}
] | 2016-06-27T00:00:00 | [
[
"Vu",
"Ngoc Thang",
""
]
] | TITLE: Sequential Convolutional Neural Networks for Slot Filling in Spoken
Language Understanding
ABSTRACT: We investigate the usage of convolutional neural networks (CNNs) for the slot
filling task in spoken language understanding. We propose a novel CNN
architecture for sequence labeling which takes into account the previous
context words with preserved order information and pays special attention to
the current word with its surrounding context. Moreover, it combines the
information from the past and the future words for classification. Our proposed
CNN architecture outperforms even the previously best ensembling recurrent
neural network model and achieves state-of-the-art results with an F1-score of
95.61% on the ATIS benchmark dataset without using any additional linguistic
knowledge and resources.
| no_new_dataset | 0.956186 |
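A generic CNN sequence labeler conveys the flavour of the slot-filling model above: tag each word from a convolution over its context window. This simplification ignores the paper's separate past/future context encodings, and the PyTorch layer sizes below are placeholders.

```python
# Minimal PyTorch sketch of CNN-based slot tagging (not the paper's exact
# architecture). Vocabulary size, embedding dim, and slot count are placeholders.
import torch
import torch.nn as nn

class CnnTagger(nn.Module):
    def __init__(self, vocab, emb_dim, n_slots, width=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, width, padding=width // 2)
        self.out = nn.Linear(128, n_slots)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))           # context window per position
        return self.out(h.transpose(1, 2))     # (batch, seq_len, n_slots)

model = CnnTagger(vocab=5000, emb_dim=100, n_slots=128)
logits = model(torch.randint(0, 5000, (2, 12)))  # per-token slot scores
```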
1510.05328 | Pedro Tabacof | Pedro Tabacof, Eduardo Valle | Exploring the Space of Adversarial Images | Copyright 2016 IEEE. This manuscript was accepted at the IEEE
International Joint Conference on Neural Networks (IJCNN) 2016. We will link
the published version as soon as the DOI is available | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial examples have raised questions regarding the robustness and
security of deep neural networks. In this work we formalize the problem of
adversarial images given a pretrained classifier, showing that even in the
linear case the resulting optimization problem is nonconvex. We generate
adversarial images using shallow and deep classifiers on the MNIST and ImageNet
datasets. We probe the pixel space of adversarial images using noise of varying
intensity and distribution. We bring novel visualizations that showcase the
phenomenon and its high variability. We show that adversarial images appear in
large regions in the pixel space, but that, for the same task, a shallow
classifier seems more robust to adversarial images than a deep convolutional
network.
| [
{
"version": "v1",
"created": "Mon, 19 Oct 2015 00:54:37 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Oct 2015 17:40:25 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Nov 2015 01:14:49 GMT"
},
{
"version": "v4",
"created": "Tue, 10 May 2016 22:36:20 GMT"
},
{
"version": "v5",
"created": "Thu, 23 Jun 2016 04:14:32 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Tabacof",
"Pedro",
""
],
[
"Valle",
"Eduardo",
""
]
] | TITLE: Exploring the Space of Adversarial Images
ABSTRACT: Adversarial examples have raised questions regarding the robustness and
security of deep neural networks. In this work we formalize the problem of
adversarial images given a pretrained classifier, showing that even in the
linear case the resulting optimization problem is nonconvex. We generate
adversarial images using shallow and deep classifiers on the MNIST and ImageNet
datasets. We probe the pixel space of adversarial images using noise of varying
intensity and distribution. We bring novel visualizations that showcase the
phenomenon and its high variability. We show that adversarial images appear in
large regions in the pixel space, but that, for the same task, a shallow
classifier seems more robust to adversarial images than a deep convolutional
network.
| no_new_dataset | 0.952309 |
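The paper above frames adversarial image generation as a (nonconvex) optimization over the pixel space. As a simpler stand-in, the one-step fast-gradient-sign method (FGSM) below perturbs each pixel in the direction that increases the classifier's loss; it is not the paper's procedure, just an executable illustration of the phenomenon.

```python
# FGSM sketch: works with any differentiable PyTorch classifier `model`,
# a batched image tensor in [0, 1], and integer class labels.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.05):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()   # move pixels up the loss gradient
    return adv.clamp(0.0, 1.0).detach()     # stay inside valid pixel range
```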
1604.06433 | Jing Wang | Jing Wang, Yu Cheng, Rogerio Schmidt Feris | Walk and Learn: Facial Attribute Representation Learning from Egocentric
Video and Contextual Data | Paper accepted by CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The way people look in terms of facial attributes (ethnicity, hair color,
facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat,
hoodies, etc.) is highly dependent on geo-location and weather conditions,
respectively. This work explores, for the first time, the use of this
contextual information, as people with wearable cameras walk across different
neighborhoods of a city, in order to learn a rich feature representation for
facial attribute classification, without the costly manual annotation required
by previous methods. By tracking the faces of casual walkers on more than 40
hours of egocentric video, we are able to cover tens of thousands of different
identities and automatically extract nearly 5 million pairs of images connected
by or from different face tracks, along with their weather and location
context, under pose and lighting variations. These image pairs are then fed
into a deep network that preserves similarity of images connected by the same
track, in order to capture identity-related attribute features, and optimizes
for location and weather prediction to capture additional facial attribute
features. Finally, the network is fine-tuned with manually annotated samples.
We perform an extensive experimental analysis on wearable data and two standard
benchmark datasets based on web images (LFWA and CelebA). Our method
outperforms by a large margin a network trained from scratch. Moreover, even
without using manually annotated identity labels for pre-training as in
previous methods, our approach achieves results that are better than the state
of the art.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 19:21:55 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 17:07:33 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jun 2016 20:51:33 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Wang",
"Jing",
""
],
[
"Cheng",
"Yu",
""
],
[
"Feris",
"Rogerio Schmidt",
""
]
] | TITLE: Walk and Learn: Facial Attribute Representation Learning from Egocentric
Video and Contextual Data
ABSTRACT: The way people look in terms of facial attributes (ethnicity, hair color,
facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat,
hoodies, etc.) is highly dependent on geo-location and weather conditions,
respectively. This work explores, for the first time, the use of this
contextual information, as people with wearable cameras walk across different
neighborhoods of a city, in order to learn a rich feature representation for
facial attribute classification, without the costly manual annotation required
by previous methods. By tracking the faces of casual walkers on more than 40
hours of egocentric video, we are able to cover tens of thousands of different
identities and automatically extract nearly 5 million pairs of images connected
by or from different face tracks, along with their weather and location
context, under pose and lighting variations. These image pairs are then fed
into a deep network that preserves similarity of images connected by the same
track, in order to capture identity-related attribute features, and optimizes
for location and weather prediction to capture additional facial attribute
features. Finally, the network is fine-tuned with manually annotated samples.
We perform an extensive experimental analysis on wearable data and two standard
benchmark datasets based on web images (LFWA and CelebA). Our method
outperforms by a large margin a network trained from scratch. Moreover, even
without using manually annotated identity labels for pre-training as in
previous methods, our approach achieves results that are better than the state
of the art.
| no_new_dataset | 0.943764 |
1605.04655 | Petr Baudi\v{s} | Petr Baudis, Silvestr Stanko and Jan Sedivy | Joint Learning of Sentence Embeddings for Relevance and Entailment | repl4nlp workshop at ACL Berlin 2016 | null | null | null | cs.CL cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of Recognizing Textual Entailment within an
Information Retrieval context, where we must simultaneously determine the
relevancy as well as degree of entailment for individual pieces of evidence to
determine a yes/no answer to a binary natural language question.
We compare several variants of neural networks for sentence embeddings in a
setting of decision-making based on evidence of varying relevance. We propose a
basic model to integrate evidence for entailment, show that joint training of
the sentence embeddings to model relevance and entailment is feasible even with
no explicit per-evidence supervision, and show the importance of evaluating
strong baselines. We also demonstrate the benefit of carrying over a text
comprehension model trained on an unrelated task for our small datasets.
Our research is motivated primarily by a new open dataset we introduce,
consisting of binary questions and news-based evidence snippets. We also apply
the proposed relevance-entailment model on a similar task of ranking
multiple-choice test answers, evaluating it on a preliminary dataset of school
test questions as well as the standard MCTest dataset, where we improve the
neural-model state of the art.
| [
{
"version": "v1",
"created": "Mon, 16 May 2016 05:50:54 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jun 2016 22:41:26 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Baudis",
"Petr",
""
],
[
"Stanko",
"Silvestr",
""
],
[
"Sedivy",
"Jan",
""
]
] | TITLE: Joint Learning of Sentence Embeddings for Relevance and Entailment
ABSTRACT: We consider the problem of Recognizing Textual Entailment within an
Information Retrieval context, where we must simultaneously determine the
relevancy as well as degree of entailment for individual pieces of evidence to
determine a yes/no answer to a binary natural language question.
We compare several variants of neural networks for sentence embeddings in a
setting of decision-making based on evidence of varying relevance. We propose a
basic model to integrate evidence for entailment, show that joint training of
the sentence embeddings to model relevance and entailment is feasible even with
no explicit per-evidence supervision, and show the importance of evaluating
strong baselines. We also demonstrate the benefit of carrying over a text
comprehension model trained on an unrelated task for our small datasets.
Our research is motivated primarily by a new open dataset we introduce,
consisting of binary questions and news-based evidence snippets. We also apply
the proposed relevance-entailment model on a similar task of ranking
multiple-choice test answers, evaluating it on a preliminary dataset of school
test questions as well as the standard MCTest dataset, where we improve the
neural-model state of the art.
| new_dataset | 0.961642 |
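One concrete way to read the evidence-integration step described above is a relevance-weighted vote over per-snippet entailment scores; the weighting rule below is an illustrative assumption, not the paper's exact aggregation model.

```python
# Hedged sketch: combine per-evidence relevance and entailment scores
# into one yes/no answer for a binary question.
import numpy as np

def answer(relevance, entailment, threshold=0.5):
    """relevance, entailment: per-snippet scores in [0, 1]."""
    r, e = np.asarray(relevance), np.asarray(entailment)
    belief = (r * e).sum() / max(r.sum(), 1e-9)  # relevance-weighted mean
    return belief > threshold

print(answer([0.9, 0.1, 0.7], [0.8, 0.2, 0.9]))  # -> True
```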
1606.06368 | Fereshte Khani | Fereshte Khani, Martin Rinard, Percy Liang | Unanimous Prediction for 100% Precision with Application to Learning
Semantic Mappings | ACL 2016, Removed the duplicate author name of the previous version | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can we train a system that, on any new input, either says "don't know" or
makes a prediction that is guaranteed to be correct? We answer the question in
the affirmative provided our model family is well-specified. Specifically, we
introduce the unanimity principle: only predict when all models consistent with
the training data predict the same output. We operationalize this principle for
semantic parsing, the task of mapping utterances to logical forms. We develop a
simple, efficient method that reasons over the infinite set of all consistent
models by only checking two of the models. We prove that our method obtains
100% precision even with a modest amount of training data from a possibly
adversarial distribution. Empirically, we demonstrate the effectiveness of our
approach on the standard GeoQuery dataset.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 23:59:25 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jun 2016 07:33:01 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Khani",
"Fereshte",
""
],
[
"Rinard",
"Martin",
""
],
[
"Liang",
"Percy",
""
]
] | TITLE: Unanimous Prediction for 100% Precision with Application to Learning
Semantic Mappings
ABSTRACT: Can we train a system that, on any new input, either says "don't know" or
makes a prediction that is guaranteed to be correct? We answer the question in
the affirmative provided our model family is well-specified. Specifically, we
introduce the unanimity principle: only predict when all models consistent with
the training data predict the same output. We operationalize this principle for
semantic parsing, the task of mapping utterances to logical forms. We develop a
simple, efficient method that reasons over the infinite set of all consistent
models by only checking two of the models. We prove that our method obtains
100% precision even with a modest amount of training data from a possibly
adversarial distribution. Empirically, we demonstrate the effectiveness of our
approach on the standard GeoQuery dataset.
| no_new_dataset | 0.941601 |
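The unanimity principle above is easiest to see with the most trivial hypothesis class, lookup tables: every model consistent with the training data agrees on an input exactly when that input was observed. The paper's contribution is making this check efficient for real semantic parsing models; the toy below only illustrates the predict-or-abstain behaviour.

```python
# Toy unanimous predictor over the lookup-table hypothesis class.
def unanimous_predictor(train_pairs):
    table = dict(train_pairs)
    def predict(x):
        # All consistent models agree only on inputs seen in training.
        return table.get(x, "don't know")
    return predict

p = unanimous_predictor([("two plus two", "4"), ("capital of france", "paris")])
print(p("capital of france"))  # -> paris (unanimous, so 100% precision)
print(p("capital of spain"))   # -> don't know (consistent models disagree)
```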
1606.07088 | Carlos Kamienski | Rogerio Minhano, Stenio Fernandes, Carlos Kamienski | Revealing Hidden Connections in Recommendation Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Companies have been increasingly seeking new mechanisms for making their
electronic marketing campaigns go viral, thus obtaining a cascading
recommendation effect similar to word-of-mouth. We analysed a dataset of a
magazine publisher that uses email as the main marketing strategy and found out
that networks emerging from those campaigns form a very sparse graph. We show
that online social networks can be effectively used as a means to expand
recommendation networks. Starting from a set of users, called seeders, we
crawled Google's Orkut and collected about 20 million users and 80 million
relationships. Next, we extended the original recommendation network by adding
new edges using Orkut relationships that built a much denser network.
Therefore, we advocate that online social networks are much more effective than
email-based marketing campaigns.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 20:16:46 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Minhano",
"Rogerio",
""
],
[
"Fernandes",
"Stenio",
""
],
[
"Kamienski",
"Carlos",
""
]
] | TITLE: Revealing Hidden Connections in Recommendation Networks
ABSTRACT: Companies have been increasingly seeking new mechanisms for making their
electronic marketing campaigns go viral, thus obtaining a cascading
recommendation effect similar to word-of-mouth. We analysed a dataset of a
magazine publisher that uses email as the main marketing strategy and found out
that networks emerging from those campaigns form a very sparse graph. We show
that online social networks can be effectively used as a means to expand
recommendation networks. Starting from a set of users, called seeders, we
crawled Google's Orkut and collected about 20 million users and 80 million
relationships. Next, we extended the original recommendation network by adding
new edges using Orkut relationships that built a much denser network.
Therefore, we advocate that online social networks are much more effective than
email-based marketing campaigns.
| no_new_dataset | 0.93744 |
1606.07351 | Caiing Dong | Cailing Dong and Arvind Agarwal | A Relevant Content Filtering Based Framework For Data Stream
Summarization | 8 pages, 8 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media platforms are a rich source of information these days; however,
of all the available information, only a small fraction is of users' interest.
To help users catch up with the latest topics of their interests from the large
amount of information available in social media, we present a relevant content
filtering based framework for data stream summarization. More specifically,
given the topic or event of interest, this framework can dynamically discover
and filter out relevant information from irrelevant information in the stream
of text provided by social media platforms. It then further captures the most
representative and up-to-date information to generate a sequential summary or
event story line along with the evolution of the topic or event. Our framework
does not depend on any labeled data, it instead uses the weak supervision
provided by the user, which matches the real scenarios of users searching for
information about an ongoing event. We experimented on two real events traced
by a Twitter dataset from TREC 2011. The results verified the effectiveness of
relevant content filtering and sequential summary generation of the proposed
framework. It also shows its robustness of using the most easy-to-obtain weak
supervision, i.e., trending topic or hashtag. Thus, this framework can be
easily integrated into social media platforms such as Twitter to generate
sequential summaries for the events of interest. We also make the manually
generated gold-standard sequential summaries of the two test events publicly
available for future use in the community.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 15:49:08 GMT"
}
] | 2016-06-24T00:00:00 | [
[
"Dong",
"Cailing",
""
],
[
"Agarwal",
"Arvind",
""
]
] | TITLE: A Relevant Content Filtering Based Framework For Data Stream
Summarization
ABSTRACT: Social media platforms are a rich source of information these days; however,
of all the available information, only a small fraction is of users' interest.
To help users catch up with the latest topics of their interests from the large
amount of information available in social media, we present a relevant content
filtering based framework for data stream summarization. More specifically,
given the topic or event of interest, this framework can dynamically discover
and filter out relevant information from irrelevant information in the stream
of text provided by social media platforms. It then further captures the most
representative and up-to-date information to generate a sequential summary or
event story line along with the evolution of the topic or event. Our framework
does not depend on any labeled data; instead, it uses the weak supervision
provided by the user, which matches the real scenarios of users searching for
information about an ongoing event. We experimented on two real events traced
by a Twitter dataset from TREC 2011. The results verified the effectiveness of
relevant content filtering and sequential summary generation of the proposed
framework. It also shows its robustness of using the most easy-to-obtain weak
supervision, i.e., trending topic or hashtag. Thus, this framework can be
easily integrated into social media platforms such as Twitter to generate
sequential summaries for the events of interest. We also make the manually
generated gold-standard sequential summaries of the two test events publicly
available for future use in the community.
| no_new_dataset | 0.951908 |
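A minimal version of the weakly supervised relevance-filtering step above can be sketched with bag-of-words cosine similarity against a centroid seeded by the trending topic or hashtag; the drift-update and threshold below are illustrative choices, not the paper's model.

```python
# Hedged sketch: filter a text stream for topic-relevant posts using only
# the topic string as weak supervision.
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def filter_stream(posts, topic, threshold=0.1):
    centroid = Counter(topic.lower().split())   # seeded by the weak label
    for post in posts:
        vec = Counter(post.lower().split())
        if cosine(vec, centroid) >= threshold:
            centroid.update(vec)                # let the topic evolve
            yield post
```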
1202.4679 | Marco van Hulten MSc | Marco van Hulten, Andreas Sterl, Alessandro Tagliabue, Jean-Claude
Dutay, Marion Gehlen, Hein J. W. de Baar, Rob Middag | Aluminium in an ocean general circulation model compared with the West
Atlantic Geotraces cruises | J. Mar. Syst. (2012), ISSN 0924-7963. 22 pages, 30 figures, on the
occasion of the May 2011 GEOTRACES colloquium | J.Mar.Syst. 126 (2013) 3-23 | 10.1016/j.jmarsys.2012.05.005 | null | physics.ao-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | A model of aluminium has been developed and implemented in an Ocean General
Circulation Model (NEMO-PISCES). In the model, aluminium enters the ocean by
means of dust deposition. The internal oceanic processes are described by
advection, mixing and reversible scavenging. The model has been evaluated
against a number of selected high-quality datasets covering much of the world
ocean, especially those from the West Atlantic Geotraces cruises of 2010 and
2011. Generally, the model results are in fair agreement with the observations.
However, the model does not describe well the vertical distribution of
dissolved Al in the North Atlantic Ocean. The model may require changes in the
physical forcing and the vertical dependence of the sinking velocity of
biogenic silica to account for other discrepancies. To explore the model
behaviour, sensitivity experiments have been performed, in which we changed the
key parameters of the scavenging process as well as the input of aluminium into
the ocean. This resulted in a better understanding of aluminium in the ocean,
and it is now clear which parameter has what effect on the dissolved aluminium
distribution and which processes might be missing in the model, among which
boundary scavenging and biological incorporation of aluminium into diatoms.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2012 15:49:20 GMT"
},
{
"version": "v2",
"created": "Fri, 18 May 2012 14:57:08 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Sep 2012 13:39:16 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"van Hulten",
"Marco",
""
],
[
"Sterl",
"Andreas",
""
],
[
"Tagliabue",
"Alessandro",
""
],
[
"Dutay",
"Jean-Claude",
""
],
[
"Gehlen",
"Marion",
""
],
[
"de Baar",
"Hein J. W.",
""
],
[
"Middag",
"Rob",
""
]
] | TITLE: Aluminium in an ocean general circulation model compared with the West
Atlantic Geotraces cruises
ABSTRACT: A model of aluminium has been developed and implemented in an Ocean General
Circulation Model (NEMO-PISCES). In the model, aluminium enters the ocean by
means of dust deposition. The internal oceanic processes are described by
advection, mixing and reversible scavenging. The model has been evaluated
against a number of selected high-quality datasets covering much of the world
ocean, especially those from the West Atlantic Geotraces cruises of 2010 and
2011. Generally, the model results are in fair agreement with the observations.
However, the model does not describe well the vertical distribution of
dissolved Al in the North Atlantic Ocean. The model may require changes in the
physical forcing and the vertical dependence of the sinking velocity of
biogenic silica to account for other discrepancies. To explore the model
behaviour, sensitivity experiments have been performed, in which we changed the
key parameters of the scavenging process as well as the input of aluminium into
the ocean. This resulted in a better understanding of aluminium in the ocean,
and it is now clear which parameter has what effect on the dissolved aluminium
distribution and which processes might be missing in the model, among which
boundary scavenging and biological incorporation of aluminium into diatoms.
| no_new_dataset | 0.946448 |
1605.02276 | Manaal Faruqui | Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, Chris Dyer | Problems With Evaluation of Word Embeddings Using Word Similarity Tasks | The First Workshop on Evaluating Vector Space Representations for NLP | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lacking standardized extrinsic evaluation methods for vector representations
of words, the NLP community has relied heavily on word similarity tasks as a
proxy for intrinsic evaluation of word vectors. Word similarity evaluation,
which correlates the distance between vectors and human judgments of semantic
similarity is attractive, because it is computationally inexpensive and fast.
In this paper we present several problems associated with the evaluation of
word vectors on word similarity datasets, and summarize existing solutions. Our
study suggests that the use of word similarity tasks for evaluation of word
vectors is not sustainable and calls for further research on evaluation
methods.
| [
{
"version": "v1",
"created": "Sun, 8 May 2016 05:09:28 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2016 04:48:34 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jun 2016 02:41:04 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"Faruqui",
"Manaal",
""
],
[
"Tsvetkov",
"Yulia",
""
],
[
"Rastogi",
"Pushpendre",
""
],
[
"Dyer",
"Chris",
""
]
] | TITLE: Problems With Evaluation of Word Embeddings Using Word Similarity Tasks
ABSTRACT: Lacking standardized extrinsic evaluation methods for vector representations
of words, the NLP community has relied heavily on word similarity tasks as a
proxy for intrinsic evaluation of word vectors. Word similarity evaluation,
which correlates the distance between vectors and human judgments of semantic
similarity, is attractive because it is computationally inexpensive and fast.
In this paper we present several problems associated with the evaluation of
word vectors on word similarity datasets, and summarize existing solutions. Our
study suggests that the use of word similarity tasks for evaluation of word
vectors is not sustainable and calls for further research on evaluation
methods.
| no_new_dataset | 0.944022 |
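For readers unfamiliar with the protocol the paper critiques, word-similarity evaluation boils down to a few lines: score each word pair by cosine similarity of its vectors and report the Spearman correlation with the human judgments.

```python
# The standard word-similarity evaluation loop in sketch form.
import numpy as np
from scipy.stats import spearmanr

def evaluate(vectors, pairs, human_scores):
    """vectors: dict word -> np.ndarray; pairs: list of (w1, w2) tuples."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    model_scores = [cos(vectors[a], vectors[b]) for a, b in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```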
1606.05409 | Linfeng Song | Linfeng Song, Zhiguo Wang, Haitao Mi and Daniel Gildea | Sense Embedding Learning for Word Sense Induction | 6 pages, no figures in *SEM 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional word sense induction (WSI) methods usually represent each
instance with discrete linguistic features or cooccurrence features, and train
a model for each polysemous word individually. In this work, we propose to
learn sense embeddings for the WSI task. In the training stage, our method
induces several sense centroids (embedding) for each polysemous word. In the
testing stage, our method represents each instance as a contextual vector, and
induces its sense by finding the nearest sense centroid in the embedding space.
The advantages of our method are (1) distributed sense vectors are taken as the
knowledge representations which are trained discriminatively, and usually have
better performance than traditional count-based distributional models, and (2)
a general model for the whole vocabulary is jointly trained to induce sense
centroids under the multitask learning framework. Evaluated on SemEval-2010 WSI
dataset, our method outperforms all participants and most of the recent
state-of-the-art methods. We further verify the two advantages by comparing
with carefully designed baselines.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 02:49:52 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jun 2016 04:59:08 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"Song",
"Linfeng",
""
],
[
"Wang",
"Zhiguo",
""
],
[
"Mi",
"Haitao",
""
],
[
"Gildea",
"Daniel",
""
]
] | TITLE: Sense Embedding Learning for Word Sense Induction
ABSTRACT: Conventional word sense induction (WSI) methods usually represent each
instance with discrete linguistic features or cooccurrence features, and train
a model for each polysemous word individually. In this work, we propose to
learn sense embeddings for the WSI task. In the training stage, our method
induces several sense centroids (embedding) for each polysemous word. In the
testing stage, our method represents each instance as a contextual vector, and
induces its sense by finding the nearest sense centroid in the embedding space.
The advantages of our method are (1) distributed sense vectors are taken as the
knowledge representations which are trained discriminatively, and usually have
better performance than traditional count-based distributional models, and (2)
a general model for the whole vocabulary is jointly trained to induce sense
centroids under the multitask learning framework. Evaluated on SemEval-2010 WSI
dataset, our method outperforms all participants and most of the recent
state-of-the-art methods. We further verify the two advantages by comparing
with carefully designed baselines.
| no_new_dataset | 0.947527 |
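The induce-then-assign loop described above can be sketched with an online clustering of context vectors: spawn a new sense centroid when no existing centroid is close enough, otherwise refine the nearest one. The cosine threshold is an illustrative simplification of the paper's training procedure.

```python
# Hedged sketch of sense-centroid induction and nearest-centroid assignment.
import numpy as np

def induce_senses(context_vecs, new_sense_thresh=0.4):
    centroids = []
    for v in context_vecs:
        v = v / np.linalg.norm(v)
        sims = [c @ v / np.linalg.norm(c) for c in centroids]
        if not sims or max(sims) < new_sense_thresh:
            centroids.append(v.copy())             # new sense discovered
        else:
            centroids[int(np.argmax(sims))] += v   # refine nearest sense
    return centroids

def assign_sense(centroids, v):
    sims = [c @ v / (np.linalg.norm(c) * np.linalg.norm(v)) for c in centroids]
    return int(np.argmax(sims))                    # nearest sense centroid
```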
1606.06769 | Kai Zhao | Kai Zhao | Network Analysis of Urban Traffic with Big Bus Data | This technical report won the best hack award in Big Data Science
Hackathon, Helsinki,2015 | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Urban traffic analysis is crucial for traffic forecasting systems, urban
planning and, more recently, various mobile and network applications. In this
paper, we analyse urban traffic with network and statistical methods. Our
analysis is based on one big bus dataset containing 45 million bus arrival
samples in Helsinki. We mainly address the following questions: 1. How can we
identify the areas that cause most of the traffic in the city? 2. Why is there
urban traffic? Is bus traffic a key cause of urban traffic? 3. How can we
improve urban traffic systems? To answer these questions, first, betweenness
is used to identify the most important areas that cause most of the traffic.
Second, using statistical methods, we find that bus traffic is not an
important cause of urban traffic. We differentiate between the urban traffic
and the bus traffic in a city, using bus delay as an indicator of the urban
traffic and the number of buses as an indicator of the bus traffic. Third, we
give our solutions on how to improve urban traffic through traffic simulation
on road networks. We show that adding more buses during peak times and
providing better bus schedules in hot areas such as railway stations, metro
stations, and shopping malls will reduce urban traffic.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2016 21:04:06 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"Zhao",
"Kai",
""
]
] | TITLE: Network Analysis of Urban Traffic with Big Bus Data
ABSTRACT: Urban traffic analysis is crucial for traffic forecasting systems, urban
planning and, more recently, various mobile and network applications. In this
paper, we analyse urban traffic with network and statistical methods. Our
analysis is based on one big bus dataset containing 45 million bus arrival
samples in Helsinki. We mainly address the following questions: 1. How can we
identify the areas that cause most of the traffic in the city? 2. Why is there
urban traffic? Is bus traffic a key cause of urban traffic? 3. How can we
improve urban traffic systems? To answer these questions, first, betweenness
is used to identify the most important areas that cause most of the traffic.
Second, using statistical methods, we find that bus traffic is not an
important cause of urban traffic. We differentiate between the urban traffic
and the bus traffic in a city, using bus delay as an indicator of the urban
traffic and the number of buses as an indicator of the bus traffic. Third, we
give our solutions on how to improve urban traffic through traffic simulation
on road networks. We show that adding more buses during peak times and
providing better bus schedules in hot areas such as railway stations, metro
stations, and shopping malls will reduce urban traffic.
| no_new_dataset | 0.907476 |
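The first analysis step above — ranking areas by betweenness to locate traffic bottlenecks — is a one-liner with networkx; the toy edge list stands in for the Helsinki bus network and is purely illustrative.

```python
# Sketch: betweenness centrality flags areas that sit on many shortest paths.
import networkx as nx

G = nx.Graph([("railway", "metro"), ("metro", "mall"), ("railway", "mall"),
              ("mall", "suburb"), ("suburb", "depot")])
bc = nx.betweenness_centrality(G)
for area, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{area:8s} {score:.3f}")  # 'mall' bridges the most shortest paths
```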
1606.06854 | Xingyi Zhou | Xingyi Zhou, Qingfu Wan, Wei Zhang, Xiangyang Xue, Yichen Wei | Model-based Deep Hand Pose Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous learning-based hand pose estimation methods do not fully exploit
the prior information in hand model geometry. Instead, they usually rely on a
separate model fitting step to generate valid hand poses. Such post-processing
is inconvenient and sub-optimal. In this work, we propose a model
based deep learning approach that adopts a forward kinematics based layer to
ensure the geometric validity of estimated poses. For the first time, we show
that embedding such a non-linear generative process in deep learning is
feasible for hand pose estimation. Our approach is verified on challenging
public datasets and achieves state-of-the-art performance.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 08:47:06 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"Zhou",
"Xingyi",
""
],
[
"Wan",
"Qingfu",
""
],
[
"Zhang",
"Wei",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Wei",
"Yichen",
""
]
] | TITLE: Model-based Deep Hand Pose Estimation
ABSTRACT: Previous learning-based hand pose estimation methods do not fully exploit
the prior information in hand model geometry. Instead, they usually rely on a
separate model fitting step to generate valid hand poses. Such post-processing
is inconvenient and sub-optimal. In this work, we propose a model
based deep learning approach that adopts a forward kinematics based layer to
ensure the geometric validity of estimated poses. For the first time, we show
that embedding such a non-linear generative process in deep learning is
feasible for hand pose estimation. Our approach is verified on challenging
public datasets and achieves state-of-the-art performance.
| no_new_dataset | 0.952086 |
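The forward-kinematics idea above — map joint angles through the kinematic chain so that every output pose is geometrically valid by construction — is easy to see on a planar finger; a real hand layer is 3D with per-joint limits, so the 2D sketch below is only illustrative.

```python
# Planar forward-kinematics sketch: angles + bone lengths -> joint positions.
import numpy as np

def forward_kinematics(angles, bone_lengths):
    pts, theta, p = [], 0.0, np.zeros(2)
    for a, l in zip(angles, bone_lengths):
        theta += a                                        # accumulate rotation
        p = p + l * np.array([np.cos(theta), np.sin(theta)])
        pts.append(p.copy())
    return np.stack(pts)                                  # valid by construction

print(forward_kinematics([np.pi / 2, -np.pi / 4], [1.0, 0.8]))
```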
1606.06975 | Yuehaw Khoo | Yuehaw Khoo, Amit Singer, David Cowburn | Bias Correction in Saupe Tensor Estimation | 24 pages, 5 figures | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimation of the Saupe tensor is central to the determination of molecular
structures from residual dipolar couplings (RDC) or chemical shift
anisotropies. Assuming a given template structure, the singular value
decomposition (SVD) method proposed in Losonczi et al. 1999 has been used
traditionally to estimate the Saupe tensor. Despite its simplicity, whenever
the template structure has large structural noise, the eigenvalues of the
estimated tensor have a magnitude systematically smaller than their actual
values. This leads to systematic error when calculating the eigenvalue
dependent parameters, magnitude and rhombicity. We propose here a Monte Carlo
simulation method to remove such bias. We further demonstrate the effectiveness
of our method in the setting when the eigenvalue estimates from multiple
template protein fragments are available and their average is used as an
improved eigenvalue estimator. For both synthetic and experimental RDC datasets
of ubiquitin, when using template fragments corrupted by large noise, the
magnitude of our proposed bias-reduced estimator generally reaches at least 90%
of the actual value, whereas the magnitude of SVD estimator can be shrunk below
80% of the true value.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 15:05:23 GMT"
}
] | 2016-06-23T00:00:00 | [
[
"Khoo",
"Yuehaw",
""
],
[
"Singer",
"Amit",
""
],
[
"Cowburn",
"David",
""
]
] | TITLE: Bias Correction in Saupe Tensor Estimation
ABSTRACT: Estimation of the Saupe tensor is central to the determination of molecular
structures from residual dipolar couplings (RDC) or chemical shift
anisotropies. Assuming a given template structure, the singular value
decomposition (SVD) method proposed in Losonczi et al. 1999 has been used
traditionally to estimate the Saupe tensor. Despite its simplicity, whenever
the template structure has large structural noise, the eigenvalues of the
estimated tensor have a magnitude systematically smaller than their actual
values. This leads to systematic error when calculating the eigenvalue
dependent parameters, magnitude and rhombicity. We propose here a Monte Carlo
simulation method to remove such bias. We further demonstrate the effectiveness
of our method in the setting when the eigenvalue estimates from multiple
template protein fragments are available and their average is used as an
improved eigenvalue estimator. For both synthetic and experimental RDC datasets
of ubiquitin, when using template fragments corrupted by large noise, the
magnitude of our proposed bias-reduced estimator generally reaches at least 90%
of the actual value, whereas the magnitude of SVD estimator can be shrunk below
80% of the true value.
| no_new_dataset | 0.948106 |
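The SVD step of Losonczi et al. that the paper builds on amounts to a linear least-squares fit: each RDC contributes one equation in the five independent elements of the symmetric, traceless Saupe tensor. The sketch below folds the scaling constant into the couplings and is a hedged illustration, not the authors' code.

```python
# Least-squares (SVD) Saupe tensor estimate from normalized RDCs.
import numpy as np

def estimate_saupe(bond_vectors, rdcs):
    """bond_vectors: (n, 3) unit vectors; rdcs: (n,) normalized couplings."""
    x, y, z = bond_vectors.T
    # Unknowns: (Sxx, Syy, Sxy, Sxz, Syz), with Szz = -(Sxx + Syy).
    A = np.column_stack([x*x - z*z, y*y - z*z, 2*x*y, 2*x*z, 2*y*z])
    s, *_ = np.linalg.lstsq(A, rdcs, rcond=None)
    S = np.array([[s[0], s[2], s[3]],
                  [s[2], s[1], s[4]],
                  [s[3], s[4], -s[0] - s[1]]])
    return S  # eigenvalues give the (noise-biased) magnitude and rhombicity
```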
1601.00881 | Tomoyuki Obuchi | Tomoyuki Obuchi and Yoshiyuki Kabashima | Cross validation in LASSO and its acceleration | 32 pages, 7 figures | null | 10.1088/1742-5468/2016/05/053304 | null | cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate leave-one-out cross validation (CV) as a means of determining the
weight of the penalty term in the least absolute shrinkage and selection
operator (LASSO). First, on the basis of the message passing algorithm and a
perturbative discussion assuming that the number of observations is
sufficiently large, we provide simple formulas for approximately assessing two
types of CV errors, which enable us to significantly reduce the necessary cost
of computation. These formulas also provide a simple connection of the CV
errors to the residual sums of squares between the reconstructed and the given
measurements. Second, on the basis of this finding, we analytically evaluate
the CV errors when the design matrix is given as a simple random matrix in the
large size limit by using the replica method. Finally, these results are
compared with those of numerical simulations on finite-size systems and are
confirmed to be correct. We also apply the simple formulas of the first type of
CV error to an actual dataset of the supernovae.
| [
{
"version": "v1",
"created": "Tue, 29 Dec 2015 02:50:51 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2016 09:34:44 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Obuchi",
"Tomoyuki",
""
],
[
"Kabashima",
"Yoshiyuki",
""
]
] | TITLE: Cross validation in LASSO and its acceleration
ABSTRACT: We investigate leave-one-out cross validation (CV) as a means of determining the
weight of the penalty term in the least absolute shrinkage and selection
operator (LASSO). First, on the basis of the message passing algorithm and a
perturbative discussion assuming that the number of observations is
sufficiently large, we provide simple formulas for approximately assessing two
types of CV errors, which enable us to significantly reduce the necessary cost
of computation. These formulas also provide a simple connection of the CV
errors to the residual sums of squares between the reconstructed and the given
measurements. Second, on the basis of this finding, we analytically evaluate
the CV errors when the design matrix is given as a simple random matrix in the
large size limit by using the replica method. Finally, these results are
compared with those of numerical simulations on finite-size systems and are
confirmed to be correct. We also apply the simple formulas of the first type of
CV error to an actual dataset of the supernovae.
| no_new_dataset | 0.941223 |
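For orientation, the quantity the paper approximates is the exact leave-one-out CV error, which naively requires refitting the LASSO once per observation; the paper's formulas replace this O(n)-refit loop with quantities from a single fit. A brute-force sketch of the exact version:

```python
# Exact leave-one-out CV for the LASSO (the expensive baseline).
import numpy as np
from sklearn.linear_model import Lasso

def loo_cv_error(X, y, alpha):
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        model = Lasso(alpha=alpha).fit(X[mask], y[mask])
        errs.append((y[i] - model.predict(X[i:i + 1])[0]) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=50)
print(min((loo_cv_error(X, y, a), a) for a in [0.01, 0.05, 0.1, 0.5]))
```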
1601.03055 | Yuqing Hou | Yuqing Hou, Zhouchen Lin, Jin-ge Yao | Subspace Clustering Based Tag Sharing for Inductive Tag Matrix
Refinement with Complex Errors | 4 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Annotating images with tags is useful for indexing and retrieving images.
However, many available annotation data include missing or inaccurate
annotations. In this paper, we propose an image annotation framework which
sequentially performs tag completion and refinement. We utilize the subspace
property of data via sparse subspace clustering for tag completion. Then we
propose a novel matrix completion model for tag refinement, integrating visual
correlation, semantic correlation, and the newly studied property of complex
errors. The proposed method outperforms the state-of-the-art approaches on
multiple benchmark datasets even when they contain certain levels of annotation
noise.
| [
{
"version": "v1",
"created": "Tue, 12 Jan 2016 21:03:43 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2016 04:41:53 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Jun 2016 15:48:06 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Hou",
"Yuqing",
""
],
[
"Lin",
"Zhouchen",
""
],
[
"Yao",
"Jin-ge",
""
]
] | TITLE: Subspace Clustering Based Tag Sharing for Inductive Tag Matrix
Refinement with Complex Errors
ABSTRACT: Annotating images with tags is useful for indexing and retrieving images.
However, many available annotation data include missing or inaccurate
annotations. In this paper, we propose an image annotation framework which
sequentially performs tag completion and refinement. We utilize the subspace
property of data via sparse subspace clustering for tag completion. Then we
propose a novel matrix completion model for tag refinement, integrating visual
correlation, semantic correlation, and the newly studied property of complex
errors. The proposed method outperforms the state-of-the-art approaches on
multiple benchmark datasets even when they contain certain levels of annotation
noise.
| no_new_dataset | 0.947914 |
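The matrix-completion core of the tag-refinement model above can be illustrated with Soft-Impute (iterative singular-value soft-thresholding); the paper's full model additionally couples visual/semantic correlations and a complex-error term, which this sketch omits.

```python
# Soft-Impute sketch: low-rank completion of a partially observed tag matrix.
import numpy as np

def soft_impute(T, observed, lam=1.0, iters=100):
    """T: tag matrix; observed: boolean mask of known entries."""
    X = np.where(observed, T, 0.0)
    Z = X
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt   # shrink singular values
        X = np.where(observed, T, Z)              # keep observed tags fixed
    return Z
```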
1602.01237 | Shanshan Zhang | Shanshan Zhang, Rodrigo Benenson, Mohamed Omran, Jan Hosang, and Bernt
Schiele | How Far are We from Solving Pedestrian Detection? | CVPR16 camera ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Encouraged by the recent progress in pedestrian detection, we investigate the
gap between current state-of-the-art methods and the "perfect single frame
detector". We enable our analysis by creating a human baseline for pedestrian
detection (over the Caltech dataset), and by manually clustering the recurrent
errors of a top detector. Our results characterize both localization and
background-versus-foreground errors. To address localization errors we study
the impact of training annotation noise on the detector performance, and show
that we can improve even with a small portion of sanitized training data. To
address background/foreground discrimination, we study convnets for pedestrian
detection, and discuss which factors affect their performance. Other than our
in-depth analysis, we report top performance on the Caltech dataset, and
provide a new sanitized set of training and test annotations.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 09:45:56 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2016 11:33:13 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Zhang",
"Shanshan",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Omran",
"Mohamed",
""
],
[
"Hosang",
"Jan",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: How Far are We from Solving Pedestrian Detection?
ABSTRACT: Encouraged by the recent progress in pedestrian detection, we investigate the
gap between current state-of-the-art methods and the "perfect single frame
detector". We enable our analysis by creating a human baseline for pedestrian
detection (over the Caltech dataset), and by manually clustering the recurrent
errors of a top detector. Our results characterize both localization and
background-versus-foreground errors. To address localization errors we study
the impact of training annotation noise on the detector performance, and show
that we can improve even with a small portion of sanitized training data. To
address background/foreground discrimination, we study convnets for pedestrian
detection, and discuss which factors affect their performance. Other than our
in-depth analysis, we report top performance on the Caltech dataset, and
provide a new sanitized set of training and test annotations.
| no_new_dataset | 0.919859 |
1602.01827 | Yang Zhong | Yang Zhong, Josephine Sullivan, Haibo Li | Leveraging Mid-Level Deep Representations For Predicting Face Attributes
in the Wild | In proceedings of 2016 International Conference on Image Processing
(ICIP) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting facial attributes from faces in the wild is very challenging due
to pose and lighting variations in the real world. The key to this problem is
to build proper feature representations to cope with these unfavourable
conditions. Given the success of Convolutional Neural Network (CNN) in image
classification, the high-level CNN feature, as an intuitive and reasonable
choice, has been widely utilized for this problem. In this paper, however, we
consider the mid-level CNN features as an alternative to the high-level ones
for attribute prediction. This is based on the observation that face attributes
are different: some of them are locally oriented while others are globally
defined. Our investigations reveal that the mid-level deep representations
outperform the prediction accuracy achieved by the (fine-tuned) high-level
abstractions. We empirically demonstrate that the mid-level representations
achieve state-of-the-art prediction performance on CelebA and LFWA datasets.
Our investigations also show that by utilizing the mid-level representations
one can employ a single deep network to achieve both face recognition and
attribute prediction.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 20:58:02 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Feb 2016 07:08:05 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Jun 2016 15:52:58 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Zhong",
"Yang",
""
],
[
"Sullivan",
"Josephine",
""
],
[
"Li",
"Haibo",
""
]
] | TITLE: Leveraging Mid-Level Deep Representations For Predicting Face Attributes
in the Wild
ABSTRACT: Predicting facial attributes from faces in the wild is very challenging due
to pose and lighting variations in the real world. The key to this problem is
to build proper feature representations to cope with these unfavourable
conditions. Given the success of Convolutional Neural Network (CNN) in image
classification, the high-level CNN feature, as an intuitive and reasonable
choice, has been widely utilized for this problem. In this paper, however, we
consider the mid-level CNN features as an alternative to the high-level ones
for attribute prediction. This is based on the observation that face attributes
are different: some of them are locally oriented while others are globally
defined. Our investigations reveal that the mid-level deep representations
outperform the prediction accuracy achieved by the (fine-tuned) high-level
abstractions. We empirically demonstrate that the mid-level representations
achieve state-of-the-art prediction performance on CelebA and LFWA datasets.
Our investigations also show that by utilizing the mid-level representations
one can employ a single deep network to achieve both face recognition and
attribute prediction.
| no_new_dataset | 0.948155 |
1602.03935 | Yang Zhong | Yang Zhong, Josephine Sullivan, Haibo Li | Face Attribute Prediction Using Off-the-Shelf CNN Features | In proceeding of 2016 International Conference on Biometrics (ICB) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting attributes from face images in the wild is a challenging computer
vision problem. To automatically describe face attributes from face containing
images, traditionally one needs to cascade three technical blocks --- face
localization, facial descriptor construction, and attribute classification ---
in a pipeline. As a typical classification problem, face attribute prediction
has been addressed using deep learning. Current state-of-the-art performance
was achieved by using two cascaded Convolutional Neural Networks (CNNs), which
were specifically trained to learn face localization and attribute description.
In this paper, we experiment with an alternative way of employing the power of
deep representations from CNNs. Combining with conventional face localization
techniques, we use off-the-shelf architectures trained for face recognition to
build facial descriptors. Recognizing that the describable face attributes are
diverse, our face descriptors are constructed from different levels of the CNNs
for different attributes to best facilitate face attribute prediction.
Experiments on two large datasets, LFWA and CelebA, show that our approach is
entirely comparable to the state-of-the-art. Our findings not only demonstrate
an efficient face attribute prediction approach, but also raise an important
question: how to leverage the power of off-the-shelf CNN representations for
novel tasks.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 00:44:16 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2016 14:27:33 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Zhong",
"Yang",
""
],
[
"Sullivan",
"Josephine",
""
],
[
"Li",
"Haibo",
""
]
] | TITLE: Face Attribute Prediction Using Off-the-Shelf CNN Features
ABSTRACT: Predicting attributes from face images in the wild is a challenging computer
vision problem. To automatically describe face attributes from face containing
images, traditionally one needs to cascade three technical blocks --- face
localization, facial descriptor construction, and attribute classification ---
in a pipeline. As a typical classification problem, face attribute prediction
has been addressed using deep learning. Current state-of-the-art performance
was achieved by using two cascaded Convolutional Neural Networks (CNNs), which
were specifically trained to learn face localization and attribute description.
In this paper, we experiment with an alternative way of employing the power of
deep representations from CNNs. Combining with conventional face localization
techniques, we use off-the-shelf architectures trained for face recognition to
build facial descriptors. Recognizing that the describable face attributes are
diverse, our face descriptors are constructed from different levels of the CNNs
for different attributes to best facilitate face attribute prediction.
Experiments on two large datasets, LFWA and CelebA, show that our approach is
entirely comparable to the state-of-the-art. Our findings not only demonstrate
an efficient face attribute prediction approach, but also raise an important
question: how to leverage the power of off-the-shelf CNN representations for
novel tasks.
| no_new_dataset | 0.946597 |
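The descriptor-construction step above — reading features from different depths of an off-the-shelf CNN — looks roughly like the torchvision sketch below; ResNet-18 pretrained on ImageNet stands in for the face-recognition networks used in the paper, and the API names are from recent torchvision versions.

```python
# Hedged sketch: extract mid- and high-level features from a pretrained CNN.
import torch
import torchvision.models as models
from torchvision.models.feature_extraction import create_feature_extractor

net = models.resnet18(weights="IMAGENET1K_V1")
extractor = create_feature_extractor(net, return_nodes=["layer2", "layer4"])
feats = extractor(torch.randn(1, 3, 224, 224))
local_desc = feats["layer2"].flatten(1)   # mid-level: suits local attributes
global_desc = feats["layer4"].flatten(1)  # high-level: suits global attributes
```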
1605.02699 | Saikat Basu | Saikat Basu, Manohar Karki, Robert DiBiano, Supratik Mukhopadhyay,
Sangram Ganguly, Ramakrishna Nemani and Shreekant Gayaka | A Theoretical Analysis of Deep Neural Networks for Texture
Classification | Accepted in International Joint Conference on Neural Networks, IJCNN
2016 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the use of Deep Neural Networks for the classification of
image datasets where texture features are important for generating
class-conditional discriminative representations. To this end, we first derive
the size of the feature space for some standard textural features extracted
from the input dataset and then use the theory of Vapnik-Chervonenkis dimension
to show that hand-crafted feature extraction creates low-dimensional
representations which help in reducing the overall excess error rate. As a
corollary to this analysis, we derive for the first time upper bounds on the VC
dimension of Convolutional Neural Network as well as Dropout and Dropconnect
networks and the relation between excess error rate of Dropout and Dropconnect
networks. The concept of intrinsic dimension is used to validate the intuition
that texture-based datasets are inherently higher dimensional as compared to
handwritten digits or other object recognition datasets, and hence more
difficult for neural networks to shatter. We then derive the mean distance
from the centroid to the nearest and farthest sampling points in an
n-dimensional manifold and show that the Relative Contrast of the sample data
vanishes as dimensionality of the underlying vector space tends to infinity.
| [
{
"version": "v1",
"created": "Mon, 9 May 2016 19:11:22 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2016 19:32:06 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Basu",
"Saikat",
""
],
[
"Karki",
"Manohar",
""
],
[
"DiBiano",
"Robert",
""
],
[
"Mukhopadhyay",
"Supratik",
""
],
[
"Ganguly",
"Sangram",
""
],
[
"Nemani",
"Ramakrishna",
""
],
[
"Gayaka",
"Shreekant",
""
]
] | TITLE: A Theoretical Analysis of Deep Neural Networks for Texture
Classification
ABSTRACT: We investigate the use of Deep Neural Networks for the classification of
image datasets where texture features are important for generating
class-conditional discriminative representations. To this end, we first derive
the size of the feature space for some standard textural features extracted
from the input dataset and then use the theory of Vapnik-Chervonenkis dimension
to show that hand-crafted feature extraction creates low-dimensional
representations which help in reducing the overall excess error rate. As a
corollary to this analysis, we derive for the first time upper bounds on the VC
dimension of Convolutional Neural Networks as well as Dropout and Dropconnect
networks, and the relation between the excess error rates of Dropout and
Dropconnect networks. The concept of intrinsic dimension is used to validate the
intuition that texture-based datasets are inherently higher dimensional than
handwritten digits or other object recognition datasets, and hence more
difficult for neural networks to shatter. We then derive the mean distance
from the centroid to the nearest and farthest sampling points in an
n-dimensional manifold and show that the Relative Contrast of the sample data
vanishes as dimensionality of the underlying vector space tends to infinity.
| no_new_dataset | 0.947332 |
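The vanishing Relative Contrast claim in the abstract above is easy to check empirically. The following is a minimal numpy sketch (illustrative, not the authors' code): sample points uniformly in the unit hypercube, measure the nearest and farthest distances from the centroid, and watch (d_max - d_min) / d_min shrink as the dimension n grows.

```python
# Hypothetical demo of vanishing Relative Contrast in high dimensions.
import numpy as np

rng = np.random.default_rng(0)
for n in [2, 10, 100, 1000, 10000]:
    points = rng.random((500, n))              # 500 samples in [0, 1]^n
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"n={n:>6}  relative contrast = {contrast:.4f}")
```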
1606.06357 | Th\'eo Trouillon | Th\'eo Trouillon, Johannes Welbl, Sebastian Riedel, \'Eric Gaussier,
Guillaume Bouchard | Complex Embeddings for Simple Link Prediction | 10+2 pages, accepted at ICML 2016 | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In statistical relational learning, the link prediction problem is key to
automatically understanding the structure of large knowledge bases. As in previous
studies, we propose to solve this problem through latent factorization.
However, here we make use of complex-valued embeddings. The composition of
complex embeddings can handle a large variety of binary relations, among them
symmetric and antisymmetric relations. Compared to state-of-the-art models such
as Neural Tensor Network and Holographic Embeddings, our approach based on
complex embeddings is arguably simpler, as it only uses the Hermitian dot
product, the complex counterpart of the standard dot product between real
vectors. Our approach is scalable to large datasets as it remains linear in
both space and time, while consistently outperforming alternative approaches on
standard link prediction benchmarks.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 22:52:48 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Trouillon",
"Théo",
""
],
[
"Welbl",
"Johannes",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Gaussier",
"Éric",
""
],
[
"Bouchard",
"Guillaume",
""
]
] | TITLE: Complex Embeddings for Simple Link Prediction
ABSTRACT: In statistical relational learning, the link prediction problem is key to
automatically understanding the structure of large knowledge bases. As in previous
studies, we propose to solve this problem through latent factorization.
However, here we make use of complex-valued embeddings. The composition of
complex embeddings can handle a large variety of binary relations, among them
symmetric and antisymmetric relations. Compared to state-of-the-art models such
as Neural Tensor Network and Holographic Embeddings, our approach based on
complex embeddings is arguably simpler, as it only uses the Hermitian dot
product, the complex counterpart of the standard dot product between real
vectors. Our approach is scalable to large datasets as it remains linear in
both space and time, while consistently outperforming alternative approaches on
standard link prediction benchmarks.
| no_new_dataset | 0.944074 |
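The scoring function behind the approach described above is compact enough to state in a few lines. Below is a hedged numpy sketch of scoring one triple with the Hermitian dot product Re(<e_s, w_r, conj(e_o)>); the embeddings are random placeholders rather than trained parameters, and the dimension k is an arbitrary choice.

```python
# Sketch of a ComplEx-style triple score; values are illustrative, not trained.
import numpy as np

rng = np.random.default_rng(1)
k = 8                                               # embedding dimension (assumed)
e_s = rng.normal(size=k) + 1j * rng.normal(size=k)  # subject embedding
w_r = rng.normal(size=k) + 1j * rng.normal(size=k)  # relation embedding
e_o = rng.normal(size=k) + 1j * rng.normal(size=k)  # object embedding

def score(e_s, w_r, e_o):
    """Re(sum_i e_s[i] * w_r[i] * conj(e_o[i])): linear in both space and time."""
    return float(np.real(np.sum(e_s * w_r * np.conj(e_o))))

print(score(e_s, w_r, e_o))
```

Swapping subject and object in this score is equivalent to conjugating w_r, so purely real relation embeddings model symmetric relations while purely imaginary ones model antisymmetric relations.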
1606.06437 | Raghudeep Gadde | Raghudeep Gadde and Varun Jampani and Renaud Marlet and Peter V.
Gehler | Efficient 2D and 3D Facade Segmentation using Auto-Context | 8 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a fast and efficient segmentation technique for 2D
images and 3D point clouds of building facades. Facades of buildings are highly
structured and consequently most methods that have been proposed for this
problem aim to make use of this strong prior information. Contrary to most
prior work, we describe a system that is almost domain-independent and
consists of standard segmentation methods. We train a sequence of boosted
decision trees using auto-context features. This is learned using stacked
generalization. We find that this technique performs better than, or comparably
with, all previously published methods, and present empirical results on all available
2D and 3D facade benchmark datasets. The proposed method is simple to
implement, easy to extend, and very efficient at test-time inference.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2016 06:50:35 GMT"
}
] | 2016-06-22T00:00:00 | [
[
"Gadde",
"Raghudeep",
""
],
[
"Jampani",
"Varun",
""
],
[
"Marlet",
"Renaud",
""
],
[
"Gehler",
"Peter V.",
""
]
] | TITLE: Efficient 2D and 3D Facade Segmentation using Auto-Context
ABSTRACT: This paper introduces a fast and efficient segmentation technique for 2D
images and 3D point clouds of building facades. Facades of buildings are highly
structured and consequently most methods that have been proposed for this
problem aim to make use of this strong prior information. Contrary to most
prior work, we describe a system that is almost domain-independent and
consists of standard segmentation methods. We train a sequence of boosted
decision trees using auto-context features. This is learned using stacked
generalization. We find that this technique performs better than, or comparably
with, all previously published methods, and present empirical results on all available
2D and 3D facade benchmark datasets. The proposed method is simple to
implement, easy to extend, and very efficient at test-time inference.
| no_new_dataset | 0.948202 |
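As a rough illustration of the auto-context pipeline the abstract sketches, the snippet below trains two boosting stages with scikit-learn on synthetic data: stage 2 sees the raw features plus stage-1 class probabilities, and those probabilities are produced out-of-fold (stacked generalization) so that stage 2 never trains on overfit stage-1 outputs. The dataset shape, fold count, and choice of GradientBoostingClassifier are assumptions for the sketch.

```python
# Minimal two-stage auto-context sketch with stacked generalization.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)

stage1 = GradientBoostingClassifier(random_state=0)
# Out-of-fold class probabilities: each sample's stage-1 output comes from a
# model that never saw that sample during training.
proba_oof = cross_val_predict(stage1, X, y, cv=5, method="predict_proba")
stage1.fit(X, y)                                  # final stage-1 model

stage2 = GradientBoostingClassifier(random_state=0)
stage2.fit(np.hstack([X, proba_oof]), y)          # raw features + auto-context
```

A real facade system would additionally pool stage-1 probabilities from neighboring pixels or points, which is what makes the context spatial.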
1410.2386 | Qibin Zhao Dr | Qibin Zhao, Guoxu Zhou, Liqing Zhang, Andrzej Cichocki, and Shun-ichi
Amari | Bayesian Robust Tensor Factorization for Incomplete Multiway Data | in IEEE Transactions on Neural Networks and Learning Systems, 2015 | null | 10.1109/TNNLS.2015.2423694 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a generative model for robust tensor factorization in the presence
of both missing data and outliers. The objective is to explicitly infer the
underlying low-CP-rank tensor capturing the global information and a sparse
tensor capturing the local information (also considered as outliers), thus
providing the robust predictive distribution over missing entries. The
low-CP-rank tensor is modeled by multilinear interactions between multiple
latent factors on which the column sparsity is enforced by a hierarchical
prior, while the sparse tensor is modeled by a hierarchical view of Student-$t$
distribution that associates an individual hyperparameter with each element
independently. For model learning, we develop an efficient closed-form
variational inference under a fully Bayesian treatment, which can effectively
prevent the overfitting problem and scales linearly with data size. In contrast
to existing related works, our method can perform model selection automatically
and implicitly without need of tuning parameters. More specifically, it can
discover the groundtruth of CP rank and automatically adapt the sparsity
inducing priors to various types of outliers. In addition, the tradeoff between
the low-rank approximation and the sparse representation can be optimized in
the sense of maximum model evidence. The extensive experiments and comparisons
with many state-of-the-art algorithms on both synthetic and real-world datasets
demonstrate the superiorities of our method from several perspectives.
| [
{
"version": "v1",
"created": "Thu, 9 Oct 2014 08:50:31 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Apr 2015 05:36:23 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Zhao",
"Qibin",
""
],
[
"Zhou",
"Guoxu",
""
],
[
"Zhang",
"Liqing",
""
],
[
"Cichocki",
"Andrzej",
""
],
[
"Amari",
"Shun-ichi",
""
]
] | TITLE: Bayesian Robust Tensor Factorization for Incomplete Multiway Data
ABSTRACT: We propose a generative model for robust tensor factorization in the presence
of both missing data and outliers. The objective is to explicitly infer the
underlying low-CP-rank tensor capturing the global information and a sparse
tensor capturing the local information (also considered as outliers), thus
providing the robust predictive distribution over missing entries. The
low-CP-rank tensor is modeled by multilinear interactions between multiple
latent factors on which the column sparsity is enforced by a hierarchical
prior, while the sparse tensor is modeled by a hierarchical view of Student-$t$
distribution that associates an individual hyperparameter with each element
independently. For model learning, we develop an efficient closed-form
variational inference under a fully Bayesian treatment, which can effectively
prevent the overfitting problem and scales linearly with data size. In contrast
to existing related works, our method can perform model selection automatically
and implicitly without need of tuning parameters. More specifically, it can
discover the groundtruth of CP rank and automatically adapt the sparsity
inducing priors to various types of outliers. In addition, the tradeoff between
the low-rank approximation and the sparse representation can be optimized in
the sense of maximum model evidence. The extensive experiments and comparisons
with many state-of-the-art algorithms on both synthetic and real-world datasets
demonstrate the superiorities of our method from several perspectives.
| no_new_dataset | 0.946001 |
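The generative model described above (a low-CP-rank part plus sparse outliers, observed through a missing-entry mask) can be written down directly; the numpy sketch below constructs synthetic data from it. The rank, sizes, outlier rate, and noise level are illustrative assumptions, and the paper's variational inference is not reproduced here.

```python
# Synthetic data from a low-CP-rank + sparse + noise model with missing entries.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 10, 12, 8, 3                           # tensor sizes, CP rank
U, V, W = (rng.standard_normal((d, R)) for d in (I, J, K))

low_rank = np.einsum("ir,jr,kr->ijk", U, V, W)      # multilinear (CP) part
sparse = np.where(rng.random((I, J, K)) < 0.01,     # rare, large outliers
                  10.0 * rng.standard_normal((I, J, K)), 0.0)
mask = rng.random((I, J, K)) < 0.7                  # ~70% of entries observed

noise = 0.1 * rng.standard_normal((I, J, K))
observed = np.where(mask, low_rank + sparse + noise, np.nan)
print(f"missing fraction: {np.isnan(observed).mean():.2f}")
```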
1505.05561 | Devansh Arpit | Devansh Arpit, Yingbo Zhou, Hung Ngo, Venu Govindaraju | Why Regularized Auto-Encoders learn Sparse Representation? | 8 pages of content, 1 page of reference, 4 pages of supplementary.
ICML 2016; bug fix in lemma 1 | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks-- \textit{Internal
Covariate Shift}-- the current solution has certain drawbacks. For instance, BN
depends on batch statistics for layerwise input normalization during training
which makes the estimates of mean and standard deviation of input
(distribution) to hidden layers inaccurate due to shifting parameter values
(especially during initial training epochs). Another fundamental problem with
BN is that it cannot be used with batch-size $ 1 $ during training. We address
these drawbacks of BN by proposing a non-adaptive normalization technique for
removing covariate shift, that we call \textit{Normalization Propagation}. Our
approach does not depend on batch statistics, but rather uses a
data-independent parametric estimate of mean and standard-deviation in every
layer thus being computationally faster compared with BN. We exploit the
observation that the pre-activation before Rectified Linear Units follow
Gaussian distribution in deep networks, and that once the first and second
order statistics of any given dataset are normalized, we can forward propagate
this normalization without the need for recalculating the approximate
statistics for hidden layers.
| [
{
"version": "v1",
"created": "Thu, 21 May 2015 00:10:46 GMT"
},
{
"version": "v2",
"created": "Fri, 29 May 2015 19:22:37 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2016 15:29:29 GMT"
},
{
"version": "v4",
"created": "Mon, 23 May 2016 23:04:21 GMT"
},
{
"version": "v5",
"created": "Fri, 17 Jun 2016 23:01:20 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Arpit",
"Devansh",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Ngo",
"Hung",
""
],
[
"Govindaraju",
"Venu",
""
]
] | TITLE: Why Regularized Auto-Encoders learn Sparse Representation?
ABSTRACT: While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks-- \textit{Internal
Covariate Shift}-- the current solution has certain drawbacks. For instance, BN
depends on batch statistics for layerwise input normalization during training
which makes the estimates of mean and standard deviation of input
(distribution) to hidden layers inaccurate due to shifting parameter values
(especially during initial training epochs). Another fundamental problem with
BN is that it cannot be used with batch-size $ 1 $ during training. We address
these drawbacks of BN by proposing a non-adaptive normalization technique for
removing covariate shift, that we call \textit{Normalization Propagation}. Our
approach does not depend on batch statistics, but rather uses a
data-independent parametric estimate of mean and standard-deviation in every
layer thus being computationally faster compared with BN. We exploit the
observation that the pre-activation before Rectified Linear Units follow
Gaussian distribution in deep networks, and that once the first and second
order statistics of any given dataset are normalized, we can forward propagate
this normalization without the need for recalculating the approximate
statistics for hidden layers.
| no_new_dataset | 0.944893 |
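The batch-independent normalization idea in the abstract above has a compact core: if pre-activations are (approximately) standard Gaussian, the mean and standard deviation of their ReLU outputs are known in closed form (E = 1/sqrt(2*pi), Var = 1/2 - 1/(2*pi)), so layers can be normalized without any batch statistics. The sketch below illustrates this for one dense layer; the weights and shapes are placeholders, and this is not the authors' implementation.

```python
# One dense layer normalized with closed-form post-ReLU statistics (no batch stats).
import numpy as np

RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)              # E[max(z, 0)], z ~ N(0, 1)
RELU_STD = np.sqrt(0.5 - 1.0 / (2.0 * np.pi))       # sqrt(Var[max(z, 0)])

def normprop_layer(x, W):
    # Unit-norm weight rows keep pre-activations approximately N(0, 1)
    # when the inputs are themselves normalized.
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)
    a = np.maximum(x @ W_hat.T, 0.0)                # ReLU
    return (a - RELU_MEAN) / RELU_STD               # data-independent normalization

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 16))                    # works even with batch size 1
print(normprop_layer(x, rng.standard_normal((8, 16))).shape)
```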
1505.07335 | Guy Karlebach | Guy Karlebach | A Novel Algorithm for the Maximal Fit Problem in Boolean Networks | null | null | null | null | q-bio.MN cs.CE cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene regulatory networks (GRNs) are increasingly used for explaining
biological processes with complex transcriptional regulation. A GRN links the
expression levels of a set of genes via regulatory controls that gene products
exert on one another. Boolean networks are a common modeling choice since they
balance between detail and ease of analysis. However, even for Boolean networks
the problem of fitting a given network model to an expression dataset is
NP-Complete. Previous methods have addressed this issue heuristically or by
focusing on acyclic networks and specific classes of regulation functions. In
this paper we introduce a novel algorithm for this problem that makes use of
sampling in order to handle large datasets. Our algorithm can handle time
series data for any network type and steady state data for acyclic networks.
Using in-silico time series data we demonstrate good performance on large
datasets with a significant level of noise.
| [
{
"version": "v1",
"created": "Mon, 25 May 2015 08:12:41 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 19:32:39 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jun 2016 02:29:12 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Karlebach",
"Guy",
""
]
] | TITLE: A Novel Algorithm for the Maximal Fit Problem in Boolean Networks
ABSTRACT: Gene regulatory networks (GRNs) are increasingly used for explaining
biological processes with complex transcriptional regulation. A GRN links the
expression levels of a set of genes via regulatory controls that gene products
exert on one another. Boolean networks are a common modeling choice since they
balance between detail and ease of analysis. However, even for Boolean networks
the problem of fitting a given network model to an expression dataset is
NP-Complete. Previous methods have addressed this issue heuristically or by
focusing on acyclic networks and specific classes of regulation functions. In
this paper we introduce a novel algorithm for this problem that makes use of
sampling in order to handle large datasets. Our algorithm can handle time
series data for any network type and steady state data for acyclic networks.
Using in-silico time series data we demonstrate good performance on large
datasets with a significant level of noise.
| no_new_dataset | 0.949669 |
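To make the fitting problem concrete, here is a toy sketch of the sampling idea the abstract relies on: candidate Boolean update rules for one gene are scored only against a random subset of the observed time-series transitions, which is what keeps the fit tractable on large datasets. The three-gene series and the candidate rules are invented for illustration and do not come from the paper.

```python
# Score candidate Boolean update rules on a sampled subset of transitions.
import random

random.seed(0)
# Synthetic synchronous time series: states of 3 genes at consecutive steps.
series = [(0, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 0), (0, 0, 1), (0, 1, 1)]
transitions = list(zip(series, series[1:]))
sample = random.sample(transitions, k=4)            # the sampling step

def score(rule, gene):
    """Fraction of sampled transitions the rule explains for one gene."""
    return sum(rule(s) == t[gene] for s, t in sample) / len(sample)

candidates = [lambda s: s[1] & s[2],                # gene 0 <- gene1 AND gene2
              lambda s: s[1] | s[2],                # gene 0 <- gene1 OR gene2
              lambda s: 1 - s[0]]                   # gene 0 <- NOT gene0
best = max(candidates, key=lambda r: score(r, 0))
print(score(best, 0))
```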
1508.01983 | Amr Bakry | Amr Bakry, Mohamed Elhoseiny, Tarek El-Gaaly and Ahmed Elgammal | Digging Deep into the layers of CNNs: In Search of How CNNs Achieve View
Invariance | This paper accepted in ICLR 2016 main conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is focused on studying the view-manifold structure in the feature
spaces implied by the different layers of Convolutional Neural Networks (CNN).
There are several questions that this paper aims to answer: Does the learned
CNN representation achieve viewpoint invariance? How does it achieve viewpoint
invariance? Is it achieved by collapsing the view manifolds, or separating them
while preserving them? At which layer is view invariance achieved? How can the
structure of the view manifold at each layer of a deep convolutional neural
network be quantified experimentally? How does fine-tuning of a pre-trained CNN
on a multi-view dataset affect the representation at each layer of the network?
In order to answer these questions we propose a methodology to quantify the
deformation and degeneracy of view manifolds in CNN layers. We apply this
methodology and report interesting results in this paper that answer the
aforementioned questions.
| [
{
"version": "v1",
"created": "Sun, 9 Aug 2015 04:02:51 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 09:22:40 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jan 2016 06:56:49 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Jun 2016 10:05:15 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Bakry",
"Amr",
""
],
[
"Elhoseiny",
"Mohamed",
""
],
[
"El-Gaaly",
"Tarek",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Digging Deep into the layers of CNNs: In Search of How CNNs Achieve View
Invariance
ABSTRACT: This paper is focused on studying the view-manifold structure in the feature
spaces implied by the different layers of Convolutional Neural Networks (CNN).
There are several questions that this paper aims to answer: Does the learned
CNN representation achieve viewpoint invariance? How does it achieve viewpoint
invariance? Is it achieved by collapsing the view manifolds, or separating them
while preserving them? At which layer is view invariance achieved? How can the
structure of the view manifold at each layer of a deep convolutional neural
network be quantified experimentally? How does fine-tuning of a pre-trained CNN
on a multi-view dataset affect the representation at each layer of the network?
In order to answer these questions we propose a methodology to quantify the
deformation and degeneracy of view manifolds in CNN layers. We apply this
methodology and report interesting results in this paper that answer the
aforementioned questions.
| no_new_dataset | 0.946597 |
1601.07140 | Andreas Veit | Andreas Veit and Tomas Matera and Lukas Neumann and Jiri Matas and
Serge Belongie | COCO-Text: Dataset and Benchmark for Text Detection and Recognition in
Natural Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the COCO-Text dataset. In recent years large-scale
datasets like SUN and Imagenet drove the advancement of scene understanding and
object recognition. The goal of COCO-Text is to advance state-of-the-art in
text detection and recognition in natural images. The dataset is based on the
MS COCO dataset, which contains images of complex everyday scenes. The images
were not collected with text in mind and thus contain a broad variety of text
instances. To reflect the diversity of text in natural scenes, we annotate text
with (a) location in terms of a bounding box, (b) fine-grained classification
into machine printed text and handwritten text, (c) classification into legible
and illegible text, (d) script of the text and (e) transcriptions of legible
text. The dataset contains over 173k text annotations in over 63k images. We
provide a statistical analysis of the accuracy of our annotations. In addition,
we present an analysis of three leading state-of-the-art photo Optical
Character Recognition (OCR) approaches on our dataset. While scene text
detection and recognition have enjoyed strong advances in recent years, we identify
significant shortcomings motivating future work.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 19:30:34 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Jun 2016 23:52:14 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Veit",
"Andreas",
""
],
[
"Matera",
"Tomas",
""
],
[
"Neumann",
"Lukas",
""
],
[
"Matas",
"Jiri",
""
],
[
"Belongie",
"Serge",
""
]
] | TITLE: COCO-Text: Dataset and Benchmark for Text Detection and Recognition in
Natural Images
ABSTRACT: This paper describes the COCO-Text dataset. In recent years large-scale
datasets like SUN and Imagenet drove the advancement of scene understanding and
object recognition. The goal of COCO-Text is to advance state-of-the-art in
text detection and recognition in natural images. The dataset is based on the
MS COCO dataset, which contains images of complex everyday scenes. The images
were not collected with text in mind and thus contain a broad variety of text
instances. To reflect the diversity of text in natural scenes, we annotate text
with (a) location in terms of a bounding box, (b) fine-grained classification
into machine printed text and handwritten text, (c) classification into legible
and illegible text, (d) script of the text and (e) transcriptions of legible
text. The dataset contains over 173k text annotations in over 63k images. We
provide a statistical analysis of the accuracy of our annotations. In addition,
we present an analysis of three leading state-of-the-art photo Optical
Character Recognition (OCR) approaches on our dataset. While scene text
detection and recognition enjoys strong advances in recent years, we identify
significant shortcomings motivating future work.
| new_dataset | 0.953319 |
1601.07255 | Chunhua Shen | Lin Wu, Chunhua Shen, Anton van den Hengel | PersonNet: Person Re-identification with Deep Convolutional Neural
Networks | 7 pages. Fixed Figure 4 (a) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a deep end-to-end neural network to
simultaneously learn high-level features and a corresponding similarity metric
for person re-identification. The network takes a pair of raw RGB images as
input, and outputs a similarity value indicating whether the two input images
depict the same person. A layer of computing neighborhood range differences
across two input images is employed to capture local relationship between
patches. This operation is to seek a robust feature from input images. By
increasing the depth to 10 weight layers and using very small (3$\times$3)
convolution filters, our architecture achieves a remarkable improvement on the
prior-art configurations. Meanwhile, an adaptive Root- Mean-Square (RMSProp)
gradient decent algorithm is integrated into our architecture, which is
beneficial to deep nets. Our method consistently outperforms state-of-the-art
on two large datasets (CUHK03 and Market-1501), and a medium-sized data set
(CUHK01).
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 03:49:34 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2016 06:43:54 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Wu",
"Lin",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: PersonNet: Person Re-identification with Deep Convolutional Neural
Networks
ABSTRACT: In this paper, we propose a deep end-to-end neural network to
simultaneously learn high-level features and a corresponding similarity metric
for person re-identification. The network takes a pair of raw RGB images as
input, and outputs a similarity value indicating whether the two input images
depict the same person. A layer of computing neighborhood range differences
across two input images is employed to capture local relationship between
patches. This operation is to seek a robust feature from input images. By
increasing the depth to 10 weight layers and using very small (3$\times$3)
convolution filters, our architecture achieves a remarkable improvement on the
prior-art configurations. Meanwhile, an adaptive Root-Mean-Square (RMSProp)
gradient descent algorithm is integrated into our architecture, which is
beneficial to deep nets. Our method consistently outperforms state-of-the-art
on two large datasets (CUHK03 and Market-1501), and a medium-sized data set
(CUHK01).
| no_new_dataset | 0.946745 |
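The neighborhood-difference layer the abstract mentions is simple to state: each feature value at (x, y) in one image's map is compared against a small window around (x, y) in the other image's map. A hedged single-channel numpy sketch follows; the 8x8 map size and 3x3 window are assumptions, and a real network applies this per channel inside the CNN.

```python
# Neighborhood range differences between two single-channel feature maps.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))                     # map from image 1
B = np.pad(rng.standard_normal((8, 8)), 1)          # map from image 2, zero-padded

# diff[x, y] is the 3x3 patch A[x, y] - B[neighborhood of (x, y)].
diff = np.empty((8, 8, 3, 3))
for x in range(8):
    for y in range(8):
        diff[x, y] = A[x, y] - B[x:x + 3, y:y + 3]
print(diff.shape)                                   # (8, 8, 3, 3)
```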
1602.04256 | Yihan Gao | Yihan Gao, Aditya Parameswaran | Squish: Near-Optimal Compression for Archival of Relational Datasets | null | null | 10.1145/2939672.2939867 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational datasets are being generated at an alarmingly rapid rate across
organizations and industries. Compressing these datasets could significantly
reduce storage and archival costs. Traditional compression algorithms, e.g.,
gzip, are suboptimal for compressing relational datasets since they ignore the
table structure and relationships between attributes.
We study compression algorithms that leverage the relational structure to
compress datasets to a much greater extent. We develop Squish, a system that
uses a combination of Bayesian Networks and Arithmetic Coding to capture
multiple kinds of dependencies among attributes and achieve near-entropy
compression rate. Squish also supports user-defined attributes: users can
instantiate new data types by simply implementing five functions for a new
class interface. We prove the asymptotic optimality of our compression
algorithm and conduct experiments to show the effectiveness of our system:
Squish achieves a reduction of over 50\% in storage size relative to systems
developed in prior work on a variety of real datasets.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 22:46:57 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Jun 2016 16:09:39 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Gao",
"Yihan",
""
],
[
"Parameswaran",
"Aditya",
""
]
] | TITLE: Squish: Near-Optimal Compression for Archival of Relational Datasets
ABSTRACT: Relational datasets are being generated at an alarmingly rapid rate across
organizations and industries. Compressing these datasets could significantly
reduce storage and archival costs. Traditional compression algorithms, e.g.,
gzip, are suboptimal for compressing relational datasets since they ignore the
table structure and relationships between attributes.
We study compression algorithms that leverage the relational structure to
compress datasets to a much greater extent. We develop Squish, a system that
uses a combination of Bayesian Networks and Arithmetic Coding to capture
multiple kinds of dependencies among attributes and achieve near-entropy
compression rate. Squish also supports user-defined attributes: users can
instantiate new data types by simply implementing five functions for a new
class interface. We prove the asymptotic optimality of our compression
algorithm and conduct experiments to show the effectiveness of our system:
Squish achieves a reduction of over 50\% in storage size relative to systems
developed in prior work on a variety of real datasets.
| no_new_dataset | 0.943348 |
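Arithmetic coding, one half of the Squish recipe above, narrows an interval in proportion to each symbol's probability, so a good probability model (in Squish, the Bayesian network) translates directly into fewer bits. The toy sketch below uses a fixed symbol distribution and float arithmetic instead of the integer renormalization a production coder needs, so it illustrates the principle only.

```python
# Toy arithmetic coder: interval narrowing under a fixed symbol distribution.
from math import log2

probs = {"a": 0.7, "b": 0.2, "c": 0.1}              # assumed model
message = "aabac"

low, high = 0.0, 1.0
for sym in message:
    span = high - low
    cum = 0.0
    for s, p in probs.items():
        if s == sym:
            low, high = low + cum * span, low + (cum + p) * span
            break
        cum += p

bits = -log2(high - low)                            # approximate code length
print(f"interval width = {high - low:.6f}, ~{bits:.2f} bits")
```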
1603.00223 | Liang Lu | Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith and Steve Renals | Segmental Recurrent Neural Networks for End-to-end Speech Recognition | 5 pages, 2 figures, accepted by Interspeech 2016 | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the segmental recurrent neural network for end-to-end acoustic
modelling. This model connects the segmental conditional random field (CRF)
with a recurrent neural network (RNN) used for feature extraction. Compared to
most previous CRF-based acoustic models, it does not rely on an external system
to provide features or segmentation boundaries. Instead, this model
marginalises out all the possible segmentations, and features are extracted
from the RNN trained together with the segmental CRF. In essence, this model is
self-contained and can be trained end-to-end. In this paper, we discuss
practical training and decoding issues as well as the method to speed up the
training in the context of speech recognition. We performed experiments on the
TIMIT dataset. We achieved a 17.3% phone error rate (PER) from first-pass
decoding --- the best reported result using CRFs, despite the fact that we used
only a zeroth-order CRF and no language model.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2016 10:43:43 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2016 10:29:23 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Lu",
"Liang",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Dyer",
"Chris",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Renals",
"Steve",
""
]
] | TITLE: Segmental Recurrent Neural Networks for End-to-end Speech Recognition
ABSTRACT: We study the segmental recurrent neural network for end-to-end acoustic
modelling. This model connects the segmental conditional random field (CRF)
with a recurrent neural network (RNN) used for feature extraction. Compared to
most previous CRF-based acoustic models, it does not rely on an external system
to provide features or segmentation boundaries. Instead, this model
marginalises out all the possible segmentations, and features are extracted
from the RNN trained together with the segmental CRF. In essence, this model is
self-contained and can be trained end-to-end. In this paper, we discuss
practical training and decoding issues as well as the method to speed up the
training in the context of speech recognition. We performed experiments on the
TIMIT dataset. We achieved a 17.3% phone error rate (PER) from first-pass
decoding --- the best reported result using CRFs, despite the fact that we used
only a zeroth-order CRF and no language model.
| no_new_dataset | 0.952574 |
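Marginalizing over all possible segmentations, the step the abstract emphasizes, is a classic dynamic program: alpha[t] accumulates the total score of every segmentation of the first t frames. The toy sketch below uses a random segment-score table in place of RNN-derived scores and an assumed maximum segment length, and omits the log-domain arithmetic a real implementation would use.

```python
# Forward (alpha) recursion over all segmentations of a length-T sequence.
import numpy as np

rng = np.random.default_rng(0)
T, L_MAX = 6, 3                                     # frames, max segment length
seg_score = rng.random((T + 1, T + 1))              # score of segment [s, t)

alpha = np.zeros(T + 1)
alpha[0] = 1.0
for t in range(1, T + 1):
    for s in range(max(0, t - L_MAX), t):
        alpha[t] += alpha[s] * seg_score[s, t]
print(alpha[T])                                     # sum over all segmentations
```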
1604.03628 | Jianwei Yang | Jianwei Yang, Devi Parikh, Dhruv Batra | Joint Unsupervised Learning of Deep Representations and Image Clusters | 19 pages, 11 figures, 14 tables, 2016 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a recurrent framework for Joint Unsupervised
LEarning (JULE) of deep representations and image clusters. In our framework,
successive operations in a clustering algorithm are expressed as steps in a
recurrent process, stacked on top of representations output by a Convolutional
Neural Network (CNN). During training, image clusters and representations are
updated jointly: image clustering is conducted in the forward pass, while
representation learning in the backward pass. Our key idea behind this
framework is that good representations are beneficial to image clustering and
clustering results provide supervisory signals to representation learning. By
integrating two processes into a single model with a unified weighted triplet
loss and optimizing it end-to-end, we can obtain not only more powerful
representations, but also more precise image clusters. Extensive experiments
show that our method outperforms the state-of-the-art on image clustering
across a variety of image datasets. Moreover, the learned representations
generalize well when transferred to other tasks.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2016 01:24:59 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 19:45:59 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jun 2016 19:56:16 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Yang",
"Jianwei",
""
],
[
"Parikh",
"Devi",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: Joint Unsupervised Learning of Deep Representations and Image Clusters
ABSTRACT: In this paper, we propose a recurrent framework for Joint Unsupervised
LEarning (JULE) of deep representations and image clusters. In our framework,
successive operations in a clustering algorithm are expressed as steps in a
recurrent process, stacked on top of representations output by a Convolutional
Neural Network (CNN). During training, image clusters and representations are
updated jointly: image clustering is conducted in the forward pass, while
representation learning in the backward pass. Our key idea behind this
framework is that good representations are beneficial to image clustering and
clustering results provide supervisory signals to representation learning. By
integrating two processes into a single model with a unified weighted triplet
loss and optimizing it end-to-end, we can obtain not only more powerful
representations, but also more precise image clusters. Extensive experiments
show that our method outperforms the state-of-the-art on image clustering
across a variety of image datasets. Moreover, the learned representations
generalize well when transferred to other tasks.
| no_new_dataset | 0.947672 |
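The unified weighted triplet loss that couples the two processes above can be sketched in a few lines: an anchor is pulled toward a positive from the same cluster and pushed away from a negative, with a per-triplet weight scaling the hinge. The margin, weight, and random embeddings are illustrative assumptions.

```python
# Weighted triplet hinge loss on embedding vectors.
import numpy as np

def weighted_triplet_loss(anchor, positive, negative, weight=1.0, margin=0.5):
    d_pos = np.sum((anchor - positive) ** 2)        # same-cluster distance
    d_neg = np.sum((anchor - negative) ** 2)        # cross-cluster distance
    return weight * max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a, p, n = rng.standard_normal((3, 16))              # placeholder embeddings
print(weighted_triplet_loss(a, p, n, weight=0.8))
```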
1604.05592 | Angjoo Kanazawa | Angjoo Kanazawa and David W. Jacobs and Manmohan Chandraker | WarpNet: Weakly Supervised Matching for Single-view Reconstruction | to appear in IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to matching images of objects in fine-grained datasets
without using part annotations, with an application to the challenging problem
of weakly supervised single-view reconstruction. This is in contrast to prior
works that require part annotations, since matching objects across class and
pose variations is challenging with appearance features alone. We overcome this
challenge through a novel deep learning architecture, WarpNet, that aligns an
object in one image with a different object in another. We exploit the
structure of the fine-grained dataset to create artificial data for training
this network in an unsupervised-discriminative learning approach. The output of
the network acts as a spatial prior that allows generalization at test time to
match real images across variations in appearance, viewpoint and articulation.
On the CUB-200-2011 dataset of bird categories, we improve the AP over an
appearance-only network by 13.6%. We further demonstrate that our WarpNet
matches, together with the structure of fine-grained datasets, allow
single-view reconstructions with quality comparable to using annotated point
correspondences.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2016 14:28:42 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2016 09:40:46 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Kanazawa",
"Angjoo",
""
],
[
"Jacobs",
"David W.",
""
],
[
"Chandraker",
"Manmohan",
""
]
] | TITLE: WarpNet: Weakly Supervised Matching for Single-view Reconstruction
ABSTRACT: We present an approach to matching images of objects in fine-grained datasets
without using part annotations, with an application to the challenging problem
of weakly supervised single-view reconstruction. This is in contrast to prior
works that require part annotations, since matching objects across class and
pose variations is challenging with appearance features alone. We overcome this
challenge through a novel deep learning architecture, WarpNet, that aligns an
object in one image with a different object in another. We exploit the
structure of the fine-grained dataset to create artificial data for training
this network in an unsupervised-discriminative learning approach. The output of
the network acts as a spatial prior that allows generalization at test time to
match real images across variations in appearance, viewpoint and articulation.
On the CUB-200-2011 dataset of bird categories, we improve the AP over an
appearance-only network by 13.6%. We further demonstrate that our WarpNet
matches, together with the structure of fine-grained datasets, allow
single-view reconstructions with quality comparable to using annotated point
correspondences.
| no_new_dataset | 0.951188 |
1606.05694 | Soroush Vosoughi Dr | Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi and Deb Roy | DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using
Character and Word-Level CNNs | SemEval 2016, San Diego, California. In Proceedings of the 10th
International Workshop on Semantic Evaluation (SemEval-2016). San Diego,
California | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes our approach for the Detecting Stance in Tweets task
(SemEval-2016 Task 6). We utilized recent advances in short text categorization
using deep learning to create word-level and character-level models. The choice
between word-level and character-level models in each particular case was
informed through validation performance. Our final system is a combination of
classifiers using word-level or character-level models. We also employed novel
data augmentation techniques to expand and diversify our training dataset, thus
making our system more robust. Our system achieved a macro-average precision,
recall, and F1-score of 0.67, 0.61, and 0.635, respectively.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 22:32:50 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Vijayaraghavan",
"Prashanth",
""
],
[
"Sysoev",
"Ivan",
""
],
[
"Vosoughi",
"Soroush",
""
],
[
"Roy",
"Deb",
""
]
] | TITLE: DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using
Character and Word-Level CNNs
ABSTRACT: This paper describes our approach for the Detecting Stance in Tweets task
(SemEval-2016 Task 6). We utilized recent advances in short text categorization
using deep learning to create word-level and character-level models. The choice
between word-level and character-level models in each particular case was
informed through validation performance. Our final system is a combination of
classifiers using word-level or character-level models. We also employed novel
data augmentation techniques to expand and diversify our training dataset, thus
making our system more robust. Our system achieved a macro-average precision,
recall, and F1-score of 0.67, 0.61, and 0.635, respectively.
| no_new_dataset | 0.950457 |
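For the character-level models mentioned above, the input encoding is the interesting part: each tweet becomes a one-hot matrix over a fixed alphabet that a 1-D convolution slides across. The sketch below shows that encoding plus a single random filter; the alphabet, maximum length, and filter width are assumptions, not the system's actual configuration.

```python
# One-hot character encoding and one 1-D convolution filter over it.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 #@"
INDEX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot(text, max_len=40):
    X = np.zeros((max_len, len(ALPHABET)))
    for i, c in enumerate(text.lower()[:max_len]):
        if c in INDEX:
            X[i, INDEX[c]] = 1.0
    return X

x = one_hot("climate change is real #semeval")
rng = np.random.default_rng(0)
w = rng.standard_normal((3, len(ALPHABET)))         # width-3 filter (assumed)
feat = np.array([np.sum(x[i:i + 3] * w) for i in range(x.shape[0] - 2)])
print(feat.shape)                                   # (38,) valid-conv feature map
```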
1606.05699 | Lu Wang | Lu Wang and Claire Cardie and Galen Marchetti | Socially-Informed Timeline Generation for Complex Events | NAACL 2015 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing timeline generation systems for complex events consider only
information from traditional media, ignoring the rich social context provided
by user-generated content that reveals representative public interests or
insightful opinions. We instead aim to generate socially-informed timelines
that contain both news article summaries and selected user comments. We present
an optimization framework designed to balance topical cohesion between the
article and comment summaries along with their informativeness and coverage of
the event. Automatic evaluations on real-world datasets that cover four complex
events show that our system produces more informative timelines than
state-of-the-art systems. In human evaluation, the associated comment summaries
are furthermore rated more insightful than editor's picks and comments ranked
highly by users.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 22:52:09 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Wang",
"Lu",
""
],
[
"Cardie",
"Claire",
""
],
[
"Marchetti",
"Galen",
""
]
] | TITLE: Socially-Informed Timeline Generation for Complex Events
ABSTRACT: Existing timeline generation systems for complex events consider only
information from traditional media, ignoring the rich social context provided
by user-generated content that reveals representative public interests or
insightful opinions. We instead aim to generate socially-informed timelines
that contain both news article summaries and selected user comments. We present
an optimization framework designed to balance topical cohesion between the
article and comment summaries along with their informativeness and coverage of
the event. Automatic evaluations on real-world datasets that cover four complex
events show that our system produces more informative timelines than
state-of-the-art systems. In human evaluation, the associated comment summaries
are furthermore rated more insightful than editor's picks and comments ranked
highly by users.
| no_new_dataset | 0.951414 |
1606.05706 | Lu Wang | Lu Wang and Claire Cardie | Improving Agreement and Disagreement Identification in Online
Discussions with A Socially-Tuned Sentiment Lexicon | ACL WASSA workshop 2014 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of agreement and disagreement detection in online
discussions. An isotonic Conditional Random Fields (isotonic CRF) based
sequential model is proposed to make predictions on sentence- or segment-level.
We automatically construct a socially-tuned lexicon that is bootstrapped from
existing general-purpose sentiment lexicons to further improve the performance.
We evaluate our agreement and disagreement tagging model on two disparate
online discussion corpora -- Wikipedia Talk pages and online debates. Our model
is shown to outperform the state-of-the-art approaches in both datasets. For
example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for
agreement and disagreement detection, whereas a linear chain CRF obtains 0.58 and
0.56 for the discussions on Wikipedia Talk pages.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 23:29:11 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Wang",
"Lu",
""
],
[
"Cardie",
"Claire",
""
]
] | TITLE: Improving Agreement and Disagreement Identification in Online
Discussions with A Socially-Tuned Sentiment Lexicon
ABSTRACT: We study the problem of agreement and disagreement detection in online
discussions. An isotonic Conditional Random Fields (isotonic CRF) based
sequential model is proposed to make predictions on sentence- or segment-level.
We automatically construct a socially-tuned lexicon that is bootstrapped from
existing general-purpose sentiment lexicons to further improve the performance.
We evaluate our agreement and disagreement tagging model on two disparate
online discussion corpora -- Wikipedia Talk pages and online debates. Our model
is shown to outperform the state-of-the-art approaches in both datasets. For
example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for
agreement and disagreement detection, whereas a linear chain CRF obtains 0.58 and
0.56 for the discussions on Wikipedia Talk pages.
| no_new_dataset | 0.948965 |
1606.05708 | Kristi Morton | Kristi Morton, Hannaneh Hajishirzi, Magdalena Balazinska, Dan Grossman | View-Driven Deduplication with Active Learning | 13 pgs | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual analytics systems such as Tableau are increasingly popular for
interactive data exploration. These tools, however, do not currently assist
users with detecting or resolving potential data quality problems including the
well-known deduplication problem. Recent approaches for deduplication focus on
cleaning entire datasets and commonly require hundreds to thousands of user
labels. In this paper, we address the problem of deduplication in the context
of visual data analytics. We present a new approach for record deduplication
that strives to produce the cleanest view possible with a limited budget for
data labeling. The key idea behind our approach is to consider the impact that
individual tuples have on a visualization and to monitor how the view changes
during cleaning. With experiments on nine different visualizations for two
real-world datasets, we show that our approach produces significantly cleaner
views for small labeling budgets than state-of-the-art alternatives and that it
also stops the cleaning process after requesting fewer labels.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 23:38:51 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Morton",
"Kristi",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Balazinska",
"Magdalena",
""
],
[
"Grossman",
"Dan",
""
]
] | TITLE: View-Driven Deduplication with Active Learning
ABSTRACT: Visual analytics systems such as Tableau are increasingly popular for
interactive data exploration. These tools, however, do not currently assist
users with detecting or resolving potential data quality problems including the
well-known deduplication problem. Recent approaches for deduplication focus on
cleaning entire datasets and commonly require hundreds to thousands of user
labels. In this paper, we address the problem of deduplication in the context
of visual data analytics. We present a new approach for record deduplication
that strives to produce the cleanest view possible with a limited budget for
data labeling. The key idea behind our approach is to consider the impact that
individual tuples have on a visualization and to monitor how the view changes
during cleaning. With experiments on nine different visualizations for two
real-world datasets, we show that our approach produces significantly cleaner
views for small labeling budgets than state-of-the-art alternatives and that it
also stops the cleaning process after requesting fewer labels.
| no_new_dataset | 0.951594 |
1606.05725 | Amirhossein Akbarnejad | Amirhossein Akbarnejad, Mahdieh Soleymani Baghshah | An Efficient Large-scale Semi-supervised Multi-label Classifier Capable
of Handling Missing labels | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label classification has received considerable interest in recent
years. Multi-label classifiers have to address many problems including:
handling large-scale datasets with many instances and a large set of labels,
compensating missing label assignments in the training set, considering
correlations between labels, as well as exploiting unlabeled data to improve
prediction performance. To tackle datasets with a large set of labels,
embedding-based methods have been proposed which seek to represent the label
assignments in a low-dimensional space. Many state-of-the-art embedding-based
methods use a linear dimensionality reduction to represent the label
assignments in a low-dimensional space. However, by doing so, these methods
actually neglect the tail labels - labels that are infrequently assigned to
instances. We propose an embedding-based method that non-linearly embeds the
label vectors using a stochastic approach, thereby predicting the tail labels
more accurately. Moreover, the proposed method has excellent mechanisms for
handling missing labels, dealing with large-scale datasets, and exploiting
unlabeled data. To the best of our knowledge, our proposed method is the first
multi-label classifier that simultaneously addresses all of the mentioned
challenges. Experiments on real-world datasets show that our method outperforms
state-of-the-art multi-label classifiers by a large margin, in terms of both
prediction performance and training time.
| [
{
"version": "v1",
"created": "Sat, 18 Jun 2016 07:49:13 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Akbarnejad",
"Amirhossein",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
] | TITLE: An Efficient Large-scale Semi-supervised Multi-label Classifier Capable
of Handling Missing labels
ABSTRACT: Multi-label classification has received considerable interest in recent
years. Multi-label classifiers have to address many problems including:
handling large-scale datasets with many instances and a large set of labels,
compensating missing label assignments in the training set, considering
correlations between labels, as well as exploiting unlabeled data to improve
prediction performance. To tackle datasets with a large set of labels,
embedding-based methods have been proposed which seek to represent the label
assignments in a low-dimensional space. Many state-of-the-art embedding-based
methods use a linear dimensionality reduction to represent the label
assignments in a low-dimensional space. However, by doing so, these methods
actually neglect the tail labels - labels that are infrequently assigned to
instances. We propose an embedding-based method that non-linearly embeds the
label vectors using a stochastic approach, thereby predicting the tail labels
more accurately. Moreover, the proposed method has excellent mechanisms for
handling missing labels, dealing with large-scale datasets, and exploiting
unlabeled data. To the best of our knowledge, our proposed method is the first
multi-label classifier that simultaneously addresses all of the mentioned
challenges. Experiments on real-world datasets show that our method outperforms
state-of-the-art multi-label classifiers by a large margin, in terms of both
prediction performance and training time.
| no_new_dataset | 0.948155 |
1606.05730 | Ruocheng Guo | Ruocheng Guo, Paulo Shakarian | A Comparison of Methods for Cascade Prediction | 8 pages, 29 figures, ASONAM 2016 (Industry Track) | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information cascades exist in a wide variety of platforms on Internet. A very
important real-world problem is to identify which information cascades can go
viral. A system addressing this problem can be used in a variety of
applications including public health, marketing and counter-terrorism. As a
cascade can be considered as compound of the social network and the time
series. However, in related literature where methods for solving the cascade
prediction problem were proposed, the experimental settings were often limited
to only a single metric for a specific problem formulation. Moreover, little
attention was paid to the run time of those methods. In this paper, we first
formulate the cascade prediction problem as both classification and regression.
Then we compare three categories of cascade prediction methods: centrality
based, feature based and point process based. We carry out the comparison
through evaluation of the methods by both accuracy metrics and run time. The
results show that feature based methods can outperform others in terms of
prediction accuracy but suffer from heavy overhead especially for large
datasets. Point process based methods can also run into the issue of long run
time when the model cannot adapt well to the data. This paper seeks to address
these issues in order to allow developers of systems for social network analysis to
select the most appropriate method for predicting viral information cascades.
| [
{
"version": "v1",
"created": "Sat, 18 Jun 2016 08:41:16 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Guo",
"Ruocheng",
""
],
[
"Shakarian",
"Paulo",
""
]
] | TITLE: A Comparison of Methods for Cascade Prediction
ABSTRACT: Information cascades exist in a wide variety of platforms on Internet. A very
important real-world problem is to identify which information cascades can go
viral. A system addressing this problem can be used in a variety of
applications including public health, marketing and counter-terrorism. As a
cascade can be considered as compound of the social network and the time
series. However, in related literature where methods for solving the cascade
prediction problem were proposed, the experimental settings were often limited
to only a single metric for a specific problem formulation. Moreover, little
attention was paid to the run time of those methods. In this paper, we first
formulate the cascade prediction problem as both classification and regression.
Then we compare three categories of cascade prediction methods: centrality
based, feature based and point process based. We carry out the comparison
through evaluation of the methods by both accuracy metrics and run time. The
results show that feature based methods can outperform others in terms of
prediction accuracy but suffer from heavy overhead especially for large
datasets. Point process based methods can also run into the issue of long run
time when the model cannot adapt well to the data. This paper seeks to address
these issues in order to allow developers of systems for social network analysis to
select the most appropriate method for predicting viral information cascades.
| no_new_dataset | 0.94743 |
1606.05752 | Chuxu Zhang | Chuxu Zhang, Chuang Liu, Lu Yu, Zi-Ke Zhang and Tao Zhou | Identifying the Academic Rising Stars | 12 pages | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the fast-rising young researchers (Academic Rising Stars) in the
future provides useful guidance to the research community, e.g., offering
universities competitive candidates for young faculty hiring, as they are
expected to have successful academic careers. In this work, given a set of young
researchers who have recently published their first first-author paper, we solve
the problem of how to effectively predict the top k% researchers who achieve
the highest citation increment in \Delta t years. We explore a series of
factors that can drive an author to be fast-rising and design a novel impact
increment ranking learning (IIRL) algorithm that leverages those factors to
predict the academic rising stars. Experimental results on the large ArnetMiner
dataset with over 1.7 million authors demonstrate the effectiveness of IIRL.
Specifically, it outperforms all given benchmark methods, with over 8% average
improvement. Further analysis demonstrates that the prediction models for
different research topics follow a similar pattern. We also find that
temporal features are the best indicators for rising stars prediction, while
venue features are less relevant.
| [
{
"version": "v1",
"created": "Sat, 18 Jun 2016 14:01:55 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Zhang",
"Chuxu",
""
],
[
"Liu",
"Chuang",
""
],
[
"Yu",
"Lu",
""
],
[
"Zhang",
"Zi-Ke",
""
],
[
"Zhou",
"Tao",
""
]
] | TITLE: Identifying the Academic Rising Stars
ABSTRACT: Predicting the fast-rising young researchers (Academic Rising Stars) in the
future provides useful guidance to the research community, e.g., offering
universities competitive candidates for young faculty hiring, as they are
expected to have successful academic careers. In this work, given a set of young
researchers who have recently published their first first-author paper, we solve
the problem of how to effectively predict the top k% researchers who achieve
the highest citation increment in \Delta t years. We explore a series of
factors that can drive an author to be fast-rising and design a novel impact
increment ranking learning (IIRL) algorithm that leverages those factors to
predict the academic rising stars. Experimental results on the large ArnetMiner
dataset with over 1.7 million authors demonstrate the effectiveness of IIRL.
Specifically, it outperforms all given benchmark methods, with over 8% average
improvement. Further analysis demonstrates that the prediction models for
different research topics follow the similar pattern. We also find that
temporal features are the best indicators for rising stars prediction, while
venue features are less relevant.
| no_new_dataset | 0.946597 |
1606.05814 | Kyle Krafka | Kyle Krafka and Aditya Khosla and Petr Kellnhofer and Harini Kannan
and Suchendra Bhandarkar and Wojciech Matusik and Antonio Torralba | Eye Tracking for Everyone | The IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From scientific research to commercial applications, eye tracking is an
important tool across many domains. Despite its range of applications, eye
tracking has yet to become a pervasive technology. We believe that we can put
the power of eye tracking in everyone's palm by building eye tracking software
that works on commodity hardware such as mobile phones and tablets, without the
need for additional sensors or devices. We tackle this problem by introducing
GazeCapture, the first large-scale dataset for eye tracking, containing data
from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we
train iTracker, a convolutional neural network for eye tracking, which achieves
a significant reduction in error over previous approaches while running in real
time (10-15fps) on a modern mobile device. Our model achieves a prediction
error of 1.71cm and 2.53cm without calibration on mobile phones and tablets
respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further,
we demonstrate that the features learned by iTracker generalize well to other
datasets, achieving state-of-the-art results. The code, data, and models are
available at http://gazecapture.csail.mit.edu.
| [
{
"version": "v1",
"created": "Sat, 18 Jun 2016 23:53:54 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Krafka",
"Kyle",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Kellnhofer",
"Petr",
""
],
[
"Kannan",
"Harini",
""
],
[
"Bhandarkar",
"Suchendra",
""
],
[
"Matusik",
"Wojciech",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Eye Tracking for Everyone
ABSTRACT: From scientific research to commercial applications, eye tracking is an
important tool across many domains. Despite its range of applications, eye
tracking has yet to become a pervasive technology. We believe that we can put
the power of eye tracking in everyone's palm by building eye tracking software
that works on commodity hardware such as mobile phones and tablets, without the
need for additional sensors or devices. We tackle this problem by introducing
GazeCapture, the first large-scale dataset for eye tracking, containing data
from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we
train iTracker, a convolutional neural network for eye tracking, which achieves
a significant reduction in error over previous approaches while running in real
time (10-15fps) on a modern mobile device. Our model achieves a prediction
error of 1.71cm and 2.53cm without calibration on mobile phones and tablets
respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further,
we demonstrate that the features learned by iTracker generalize well to other
datasets, achieving state-of-the-art results. The code, data, and models are
available at http://gazecapture.csail.mit.edu.
| new_dataset | 0.957118 |
1606.05893 | Neil Zhenqiang Gong | Neil Zhenqiang Gong and Bin Liu | You are Who You Know and How You Behave: Attribute Inference Attacks via
Users' Social Friends and Behaviors | Usenix Security Symposium 2016 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose new privacy attacks to infer attributes (e.g., locations,
occupations, and interests) of online social network users. Our attacks
leverage seemingly innocent user information that is publicly available in
online social networks to infer missing attributes of targeted users. Given the
increasing availability of (seemingly innocent) user information online, our
results have serious implications for Internet privacy -- private attributes
can be inferred from users' publicly available data unless we take steps to
protect users from such inference attacks.
To infer attributes of a targeted user, existing inference attacks leverage
either the user's publicly available social friends or the user's behavioral
records (e.g., the webpages that the user has liked on Facebook, the apps that
the user has reviewed on Google Play), but not both. As we will show, such
inference attacks achieve limited success rates. However, the problem becomes
qualitatively different if we consider both social friends and behavioral
records. To address this challenge, we develop a novel model to integrate
social friends and behavioral records and design new attacks based on our
model. We theoretically and experimentally demonstrate the effectiveness of our
attacks. For instance, we observe that, in a real-world large-scale dataset
with 1.1 million users, our attack can correctly infer the cities a user lived
in for 57% of the users; via confidence estimation, we are able to increase the
attack success rate to over 90% if the attacker selectively attacks half of
the users. Moreover, we show that our attack can correctly infer attributes for
significantly more users than previous attacks.
| [
{
"version": "v1",
"created": "Sun, 19 Jun 2016 17:34:50 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Liu",
"Bin",
""
]
] | TITLE: You are Who You Know and How You Behave: Attribute Inference Attacks via
Users' Social Friends and Behaviors
ABSTRACT: We propose new privacy attacks to infer attributes (e.g., locations,
occupations, and interests) of online social network users. Our attacks
leverage seemingly innocent user information that is publicly available in
online social networks to infer missing attributes of targeted users. Given the
increasing availability of (seemingly innocent) user information online, our
results have serious implications for Internet privacy -- private attributes
can be inferred from users' publicly available data unless we take steps to
protect users from such inference attacks.
To infer attributes of a targeted user, existing inference attacks leverage
either the user's publicly available social friends or the user's behavioral
records (e.g., the webpages that the user has liked on Facebook, the apps that
the user has reviewed on Google Play), but not both. As we will show, such
inference attacks achieve limited success rates. However, the problem becomes
qualitatively different if we consider both social friends and behavioral
records. To address this challenge, we develop a novel model to integrate
social friends and behavioral records and design new attacks based on our
model. We theoretically and experimentally demonstrate the effectiveness of our
attacks. For instance, we observe that, in a real-world large-scale dataset
with 1.1 million users, our attack can correctly infer the cities a user lived
in for 57% of the users; via confidence estimation, we are able to increase the
attack success rate to over 90% if the attacker selectively attacks half of
the users. Moreover, we show that our attack can correctly infer attributes for
significantly more users than previous attacks.
| no_new_dataset | 0.942929 |
1606.05967 | Amir Hossein Harati Nejad Torbati | Amir Hossein Harati Nejad Torbati, Joseph Picone | A Nonparametric Bayesian Approach for Spoken Term detection by Example
Query | interspeech 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State of the art speech recognition systems use data-intensive
context-dependent phonemes as acoustic units. However, these approaches do not
translate well to low-resourced languages where large amounts of training data
are not available. For such languages, automatic discovery of acoustic units is
critical. In this paper, we demonstrate the application of nonparametric
Bayesian models to acoustic unit discovery. We show that the discovered units
are correlated with phonemes and therefore are linguistically meaningful. We
also present a spoken term detection (STD) by example query algorithm based on
these automatically learned units. We show that our proposed system produces a
P@N of 61.2% and an EER of 13.95% on the TIMIT dataset. The improvement in the
EER is 5% while P@N is only slightly lower than the best reported system in the
literature.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 04:06:23 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Torbati",
"Amir Hossein Harati Nejad",
""
],
[
"Picone",
"Joseph",
""
]
] | TITLE: A Nonparametric Bayesian Approach for Spoken Term detection by Example
Query
ABSTRACT: State of the art speech recognition systems use data-intensive
context-dependent phonemes as acoustic units. However, these approaches do not
translate well to low-resourced languages where large amounts of training data
are not available. For such languages, automatic discovery of acoustic units is
critical. In this paper, we demonstrate the application of nonparametric
Bayesian models to acoustic unit discovery. We show that the discovered units
are correlated with phonemes and therefore are linguistically meaningful. We
also present a spoken term detection (STD) by example query algorithm based on
these automatically learned units. We show that our proposed system produces a
P@N of 61.2% and an EER of 13.95% on the TIMIT dataset. The improvement in the
EER is 5% while P@N is only slightly lower than the best reported system in the
literature.
| no_new_dataset | 0.953665 |
1606.06031 | Sandro Pezzelle | Denis Paperno (1), Germ\'an Kruszewski (1), Angeliki Lazaridou (1),
Quan Ngoc Pham (1), Raffaella Bernardi (1), Sandro Pezzelle (1), Marco Baroni
(1), Gemma Boleda (1), Raquel Fern\'andez (2) ((1) CIMeC - Center for
Mind/Brain Sciences, University of Trento, (2) Institute for Logic, Language
& Computation, University of Amsterdam) | The LAMBADA dataset: Word prediction requiring a broad discourse context | 10 pages, Accepted as a long paper for ACL 2016 | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce LAMBADA, a dataset to evaluate the capabilities of computational
models for text understanding by means of a word prediction task. LAMBADA is a
collection of narrative passages sharing the characteristic that human subjects
are able to guess their last word if they are exposed to the whole passage, but
not if they only see the last sentence preceding the target word. To succeed on
LAMBADA, computational models cannot simply rely on local context, but must be
able to keep track of information in the broader discourse. We show that
LAMBADA exemplifies a wide range of linguistic phenomena, and that none of
several state-of-the-art language models reaches accuracy above 1% on this
novel benchmark. We thus propose LAMBADA as a challenging test set, meant to
encourage the development of new models capable of genuine understanding of
broad context in natural language text.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 09:37:17 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Paperno",
"Denis",
""
],
[
"Kruszewski",
"Germán",
""
],
[
"Lazaridou",
"Angeliki",
""
],
[
"Pham",
"Quan Ngoc",
""
],
[
"Bernardi",
"Raffaella",
""
],
[
"Pezzelle",
"Sandro",
""
],
[
"Baroni",
"Marco",
""
],
[
"Boleda",
"Gemma",
""
],
[
"Fernández",
"Raquel",
""
]
] | TITLE: The LAMBADA dataset: Word prediction requiring a broad discourse context
ABSTRACT: We introduce LAMBADA, a dataset to evaluate the capabilities of computational
models for text understanding by means of a word prediction task. LAMBADA is a
collection of narrative passages sharing the characteristic that human subjects
are able to guess their last word if they are exposed to the whole passage, but
not if they only see the last sentence preceding the target word. To succeed on
LAMBADA, computational models cannot simply rely on local context, but must be
able to keep track of information in the broader discourse. We show that
LAMBADA exemplifies a wide range of linguistic phenomena, and that none of
several state-of-the-art language models reaches accuracy above 1% on this
novel benchmark. We thus propose LAMBADA as a challenging test set, meant to
encourage the development of new models capable of genuine understanding of
broad context in natural language text.
| new_dataset | 0.959383 |
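The evaluation protocol in the LAMBADA record above reduces to a last-word-prediction accuracy loop. The sketch below is a minimal, hypothetical illustration of that loop: `predict_last_word` (guess the most frequent context word) is a toy stand-in for a real language model, and the two passages are invented.

```python
# A minimal, hypothetical sketch of LAMBADA-style evaluation: the model
# must predict the final word of each passage from everything before it.
from collections import Counter

def predict_last_word(context_tokens):
    # Toy local-context baseline: return the most frequent context word.
    return Counter(context_tokens).most_common(1)[0][0]

def lambada_accuracy(passages):
    """passages: list of token lists; the last token is the target word."""
    correct = sum(
        predict_last_word(tokens[:-1]) == tokens[-1] for tokens in passages
    )
    return correct / len(passages)

passages = [
    "the dog chased the cat and then caught the dog".split(),
    "she opened the door and walked through the door".split(),
]
# Prints 0.0: a frequency baseline fails, echoing the abstract's point
# that local statistics are not enough to solve the task.
print(lambada_accuracy(passages))
```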
1606.06121 | Tolga Bolukbasi | Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam
Kalai | Quantifying and Reducing Stereotypes in Word Embeddings | presented at 2016 ICML Workshop on #Data4Good: Machine Learning in
Social Good Applications, New York, NY | null | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning algorithms are optimized to model statistical properties of
the training data. If the input data reflects stereotypes and biases of the
broader society, then the output of the learning algorithm also captures these
stereotypes. In this paper, we initiate the study of gender stereotypes in {\em
word embedding}, a popular framework to represent text data. As their use
becomes increasingly common, applications can inadvertently amplify unwanted
stereotypes. We show across multiple datasets that the embeddings contain
significant gender stereotypes, especially with regard to professions. We
created a novel gender analogy task and combined it with crowdsourcing to
systematically quantify the gender bias in a given embedding. We developed an
efficient algorithm that reduces gender stereotype using just a handful of
training examples while preserving the useful geometric properties of the
embedding. We evaluated our algorithm on several metrics. While we focus on
male/female stereotypes, our framework may be applicable to other types of
embedding biases.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 13:58:45 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Bolukbasi",
"Tolga",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Zou",
"James",
""
],
[
"Saligrama",
"Venkatesh",
""
],
[
"Kalai",
"Adam",
""
]
] | TITLE: Quantifying and Reducing Stereotypes in Word Embeddings
ABSTRACT: Machine learning algorithms are optimized to model statistical properties of
the training data. If the input data reflects stereotypes and biases of the
broader society, then the output of the learning algorithm also captures these
stereotypes. In this paper, we initiate the study of gender stereotypes in {\em
word embedding}, a popular framework to represent text data. As their use
becomes increasingly common, applications can inadvertently amplify unwanted
stereotypes. We show across multiple datasets that the embeddings contain
significant gender stereotypes, especially with regard to professions. We
created a novel gender analogy task and combined it with crowdsourcing to
systematically quantify the gender bias in a given embedding. We developed an
efficient algorithm that reduces gender stereotype using just a handful of
training examples while preserving the useful geometric properties of the
embedding. We evaluated our algorithm on several metrics. While we focus on
male/female stereotypes, our framework may be applicable to other types of
embedding biases.
| no_new_dataset | 0.950088 |
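One concrete way to act on the goal stated in the record above is to estimate a gender direction from definitional word pairs and remove its projection from gender-neutral word vectors. The sketch below illustrates this general projection-removal idea, not the authors' exact algorithm; the 3-d toy vectors and the single ("he", "she") pair are hypothetical.

```python
# A sketch of projection removal for reducing a gender direction in
# word embeddings; real use would load trained embeddings.
import numpy as np

def gender_direction(pairs, emb):
    # Average the difference vectors of definitional pairs and normalize.
    g = np.mean([emb[a] - emb[b] for a, b in pairs], axis=0)
    return g / np.linalg.norm(g)

def neutralize(v, g):
    # Remove the component of v along g; the rest of v is unchanged.
    return v - np.dot(v, g) * g

emb = {
    "he": np.array([1.0, 0.2, 0.0]),
    "she": np.array([-1.0, 0.2, 0.0]),
    "doctor": np.array([0.3, 0.9, 0.4]),  # hypothetical profession vector
}
g = gender_direction([("he", "she")], emb)
print(neutralize(emb["doctor"], g))  # [0.  0.9 0.4]: no component along g
```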
1606.06258 | Marco Crocco | Marco Crocco, Andrea Trucco, Alessio Del Bue | Uncalibrated 3D Room Reconstruction from Sound | The present work has been submitted to IEEE/ACM Transactions on Audio
Speech and Language Processing | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a method to reconstruct the 3D structure of generic
convex rooms from sound signals. Differently from most of the previous
approaches, the method is fully uncalibrated in the sense that no knowledge
about the positions of the microphones and sources is needed. Moreover, we demonstrate
that it is possible to bypass the well-known echo labeling problem, allowing us
to reconstruct the room shape in a reasonable computation time without the need of
additional hypotheses on the echoes' order of arrival. Finally, the method is
intrinsically robust to outliers and missing data in echo detection,
allowing it to work even in low-SNR conditions. The proposed pipeline formalises
the problem in different steps such as time of arrival estimation, microphones
and sources localization and walls estimation. After providing a solution to
these different problems we present a global optimization approach that links
together all the problems in a single optimization function. The accuracy and
robustness of the method are assessed on a wide set of simulated setups and in a
challenging real scenario. Moreover, we make freely available a challenging
dataset for 3D room reconstruction with accurate ground truth in a real
scenario.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 19:21:46 GMT"
}
] | 2016-06-21T00:00:00 | [
[
"Crocco",
"Marco",
""
],
[
"Trucco",
"Andrea",
""
],
[
"Del Bue",
"Alessio",
""
]
] | TITLE: Uncalibrated 3D Room Reconstruction from Sound
ABSTRACT: This paper presents a method to reconstruct the 3D structure of generic
convex rooms from sound signals. Differently from most of the previous
approaches, the method is fully uncalibrated in the sense that no knowledge
about the positions of the microphones and sources is needed. Moreover, we demonstrate
that it is possible to bypass the well-known echo labeling problem, allowing us
to reconstruct the room shape in a reasonable computation time without the need of
additional hypotheses on the echoes' order of arrival. Finally, the method is
intrinsically robust to outliers and missing data in echo detection,
allowing it to work even in low-SNR conditions. The proposed pipeline formalises
the problem in different steps such as time of arrival estimation, microphones
and sources localization and walls estimation. After providing a solution to
these different problems we present a global optimization approach that links
together all the problems in a single optimization function. The accuracy and
robustness of the method are assessed on a wide set of simulated setups and in a
challenging real scenario. Moreover, we make freely available a challenging
dataset for 3D room reconstruction with accurate ground truth in a real
scenario.
| no_new_dataset | 0.949763 |
1512.06790 | Siddharth Mahendran | Siddharth Mahendran and Ren\'e Vidal | Car Segmentation and Pose Estimation using 3D Object Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image segmentation and 3D pose estimation are two key cogs in any algorithm
for scene understanding. However, state-of-the-art CRF-based models for image
segmentation rely mostly on 2D object models to construct top-down high-order
potentials. In this paper, we propose new top-down potentials for image
segmentation and pose estimation based on the shape and volume of a 3D object
model. We show that these complex top-down potentials can be easily decomposed
into standard forms for efficient inference in both the segmentation and pose
estimation tasks. Experiments on a car dataset show that knowledge of
segmentation helps perform pose estimation better and vice versa.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2015 20:01:53 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2016 11:58:47 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Mahendran",
"Siddharth",
""
],
[
"Vidal",
"René",
""
]
] | TITLE: Car Segmentation and Pose Estimation using 3D Object Models
ABSTRACT: Image segmentation and 3D pose estimation are two key cogs in any algorithm
for scene understanding. However, state-of-the-art CRF-based models for image
segmentation rely mostly on 2D object models to construct top-down high-order
potentials. In this paper, we propose new top-down potentials for image
segmentation and pose estimation based on the shape and volume of a 3D object
model. We show that these complex top-down potentials can be easily decomposed
into standard forms for efficient inference in both the segmentation and pose
estimation tasks. Experiments on a car dataset show that knowledge of
segmentation helps perform pose estimation better and vice versa.
| no_new_dataset | 0.949201 |
1605.04553 | Dmitrijs Milajevs | Dmitrijs Milajevs and Sascha Griffiths | A Proposal for Linguistic Similarity Datasets Based on Commonality Lists | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity is a core notion that is used in psychology and two branches of
linguistics: theoretical and computational. The similarity datasets that come
from the two fields differ in design: psychological datasets are focused around
a certain topic such as fruit names, while linguistic datasets contain words
from various categories. The latter makes humans assign low similarity scores to
the words that have nothing in common and to the words that have contrast in
meaning, making similarity scores ambiguous. In this work we discuss the
similarity collection procedure for a multi-category dataset that avoids score
ambiguity and suggest changes to the evaluation procedure to reflect the
insights of psychological literature for word, phrase and sentence similarity.
We suggest to ask humans to provide a list of commonalities and differences
instead of numerical similarity scores and employ the structure of human
judgements beyond pairwise similarity for model evaluation. We believe that the
proposed approach will give rise to datasets that test meaning representation
models more thoroughly with respect to the human treatment of similarity.
| [
{
"version": "v1",
"created": "Sun, 15 May 2016 14:00:06 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2016 16:55:20 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Milajevs",
"Dmitrijs",
""
],
[
"Griffiths",
"Sascha",
""
]
] | TITLE: A Proposal for Linguistic Similarity Datasets Based on Commonality Lists
ABSTRACT: Similarity is a core notion that is used in psychology and two branches of
linguistics: theoretical and computational. The similarity datasets that come
from the two fields differ in design: psychological datasets are focused around
a certain topic such as fruit names, while linguistic datasets contain words
from various categories. The latter makes humans assign low similarity scores to
the words that have nothing in common and to the words that have contrast in
meaning, making similarity scores ambiguous. In this work we discuss the
similarity collection procedure for a multi-category dataset that avoids score
ambiguity and suggest changes to the evaluation procedure to reflect the
insights of psychological literature for word, phrase and sentence similarity.
We suggest to ask humans to provide a list of commonalities and differences
instead of numerical similarity scores and employ the structure of human
judgements beyond pairwise similarity for model evaluation. We believe that the
proposed approach will give rise to datasets that test meaning representation
models more thoroughly with respect to the human treatment of similarity.
| new_dataset | 0.959345 |
1606.03556 | Abhishek Das | Abhishek Das, Harsh Agrawal, C. Lawrence Zitnick, Devi Parikh, Dhruv
Batra | Human Attention in Visual Question Answering: Do Humans and Deep
Networks Look at the Same Regions? | 9 pages, 6 figures, 3 tables; Under review at EMNLP 2016 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We conduct large-scale studies on `human attention' in Visual Question
Answering (VQA) to understand where humans choose to look to answer questions
about images. We design and test multiple game-inspired novel
attention-annotation interfaces that require the subject to sharpen regions of
a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human
ATtention) dataset. We evaluate attention maps generated by state-of-the-art
VQA models against human attention both qualitatively (via visualizations) and
quantitatively (via rank-order correlation). Overall, our experiments show that
current attention models in VQA do not seem to be looking at the same regions
as humans.
| [
{
"version": "v1",
"created": "Sat, 11 Jun 2016 05:41:10 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2016 04:39:01 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Das",
"Abhishek",
""
],
[
"Agrawal",
"Harsh",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Parikh",
"Devi",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: Human Attention in Visual Question Answering: Do Humans and Deep
Networks Look at the Same Regions?
ABSTRACT: We conduct large-scale studies on `human attention' in Visual Question
Answering (VQA) to understand where humans choose to look to answer questions
about images. We design and test multiple game-inspired novel
attention-annotation interfaces that require the subject to sharpen regions of
a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human
ATtention) dataset. We evaluate attention maps generated by state-of-the-art
VQA models against human attention both qualitatively (via visualizations) and
quantitatively (via rank-order correlation). Overall, our experiments show that
current attention models in VQA do not seem to be looking at the same regions
as humans.
| new_dataset | 0.953708 |
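The quantitative comparison mentioned in the record above, rank-order correlation between model and human attention maps, can be computed with a standard Spearman correlation over flattened maps. In this sketch both maps are random stand-ins; real use would load maps from the dataset and a VQA model.

```python
# Rank-order (Spearman) correlation between two attention maps defined
# over the same spatial grid.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_map = rng.random((14, 14))                    # stand-in human map
model_map = human_map + 0.1 * rng.random((14, 14))  # correlated model map

rho, _ = spearmanr(human_map.ravel(), model_map.ravel())
print(f"rank-order correlation: {rho:.3f}")
```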
1606.05374 | Jacob Steinhardt | Jacob Steinhardt and Gregory Valiant and Moses Charikar | Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer
Prediction | 18 pages | null | null | null | cs.HC cs.CR cs.DS cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a crowdsourcing model in which $n$ workers are asked to rate the
quality of $n$ items previously generated by other workers. An unknown set of
$\alpha n$ workers generate reliable ratings, while the remaining workers may
behave arbitrarily and possibly adversarially. The manager of the experiment
can also manually evaluate the quality of a small number of items, and wishes
to curate together almost all of the high-quality items with at most an
$\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that
this is possible with an amount of work required of the manager, and each
worker, that does not scale with $n$: the dataset can be curated with
$\tilde{O}\Big(\frac{1}{\beta\alpha^3\epsilon^4}\Big)$ ratings per worker, and
$\tilde{O}\Big(\frac{1}{\beta\epsilon^2}\Big)$ ratings by the manager, where
$\beta$ is the fraction of high-quality items. Our results extend to the more
general setting of peer prediction, including peer grading in online
classrooms.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 21:45:14 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Steinhardt",
"Jacob",
""
],
[
"Valiant",
"Gregory",
""
],
[
"Charikar",
"Moses",
""
]
] | TITLE: Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer
Prediction
ABSTRACT: We consider a crowdsourcing model in which $n$ workers are asked to rate the
quality of $n$ items previously generated by other workers. An unknown set of
$\alpha n$ workers generate reliable ratings, while the remaining workers may
behave arbitrarily and possibly adversarially. The manager of the experiment
can also manually evaluate the quality of a small number of items, and wishes
to curate together almost all of the high-quality items with at most an
$\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that
this is possible with an amount of work required of the manager, and each
worker, that does not scale with $n$: the dataset can be curated with
$\tilde{O}\Big(\frac{1}{\beta\alpha^3\epsilon^4}\Big)$ ratings per worker, and
$\tilde{O}\Big(\frac{1}{\beta\epsilon^2}\Big)$ ratings by the manager, where
$\beta$ is the fraction of high-quality items. Our results extend to the more
general setting of peer prediction, including peer grading in online
classrooms.
| no_new_dataset | 0.940298 |
1606.05378 | Reginald Long | Reginald Long, Panupong Pasupat, Percy Liang | Simpler Context-Dependent Logical Forms via Model Projections | 10 pages, ACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 21:57:11 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Long",
"Reginald",
""
],
[
"Pasupat",
"Panupong",
""
],
[
"Liang",
"Percy",
""
]
] | TITLE: Simpler Context-Dependent Logical Forms via Model Projections
ABSTRACT: We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser.
| new_dataset | 0.952662 |
1606.05413 | Chenchen Zhu | Chenchen Zhu, Yutong Zheng, Khoa Luu, Marios Savvides | CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face
Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust face detection in the wild is one of the ultimate components to
support various facial related problems, i.e. unconstrained face recognition,
facial periocular recognition, facial landmarking and pose estimation, facial
expression recognition, 3D facial model construction, etc. Although the face
detection problem has been intensely studied for decades with various
commercial applications, it still meets problems in some real-world scenarios
due to numerous challenges, e.g. heavy facial occlusions, extremely low
resolutions, strong illumination, exceptionally pose variations, image or video
compression artifacts, etc. In this paper, we present a face detection approach
named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN)
to robustly solve the problems mentioned above. Similar to the region-based
CNNs, our proposed network consists of the region proposal component and the
region-of-interest (RoI) detection component. However, beyond that basic
structure, our proposed network makes two main contributions that play a
significant role in achieving state-of-the-art performance in face detection.
Firstly, the multi-scale information is grouped both in region proposal and RoI
detection to deal with tiny face regions. Secondly, our proposed network allows
explicit body contextual reasoning in the network, inspired by the intuition
of the human vision system. The proposed approach is benchmarked on two recent
challenging face detection databases, i.e. the WIDER FACE Dataset, which
contains a high degree of variability, as well as the Face Detection Dataset and
Benchmark (FDDB). The experimental results show that our proposed approach
trained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE
Dataset by a large margin, and consistently achieves competitive results on
FDDB against the recent state-of-the-art face detection methods.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 03:19:09 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Zhu",
"Chenchen",
""
],
[
"Zheng",
"Yutong",
""
],
[
"Luu",
"Khoa",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: CMS-RCNN: Contextual Multi-Scale Region-based CNN for Unconstrained Face
Detection
ABSTRACT: Robust face detection in the wild is one of the ultimate components to
support various facial related problems, i.e. unconstrained face recognition,
facial periocular recognition, facial landmarking and pose estimation, facial
expression recognition, 3D facial model construction, etc. Although the face
detection problem has been intensely studied for decades with various
commercial applications, it still meets problems in some real-world scenarios
due to numerous challenges, e.g. heavy facial occlusions, extremely low
resolutions, strong illumination, exceptionally pose variations, image or video
compression artifacts, etc. In this paper, we present a face detection approach
named Contextual Multi-Scale Region-based Convolution Neural Network (CMS-RCNN)
to robustly solve the problems mentioned above. Similar to the region-based
CNNs, our proposed network consists of the region proposal component and the
region-of-interest (RoI) detection component. However, beyond that basic
structure, our proposed network makes two main contributions that play a
significant role in achieving state-of-the-art performance in face detection.
Firstly, the multi-scale information is grouped both in region proposal and RoI
detection to deal with tiny face regions. Secondly, our proposed network allows
explicit body contextual reasoning in the network, inspired by the intuition
of the human vision system. The proposed approach is benchmarked on two recent
challenging face detection databases, i.e. the WIDER FACE Dataset, which
contains a high degree of variability, as well as the Face Detection Dataset and
Benchmark (FDDB). The experimental results show that our proposed approach
trained on WIDER FACE Dataset outperforms strong baselines on WIDER FACE
Dataset by a large margin, and consistently achieves competitive results on
FDDB against the recent state-of-the-art face detection methods.
| no_new_dataset | 0.945045 |
1606.05535 | Qibin Zhao Dr | Qibin Zhao, Guoxu Zhou, Shengli Xie, Liqing Zhang, and Andrzej
Cichocki | Tensor Ring Decomposition | null | null | null | null | cs.NA cs.CV cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tensor networks have in recent years emerged as powerful tools for
solving large-scale optimization problems. One of the most popular tensor
networks is the tensor train (TT) decomposition, which acts as a building block for
more complicated tensor networks. However, the TT decomposition highly depends
on permutations of tensor dimensions, due to its strictly sequential
multilinear products over latent cores, which leads to difficulties in finding
the optimal TT representation. In this paper, we introduce a fundamental tensor
decomposition model to represent a high-dimensional tensor by circular
multilinear products over a sequence of low dimensional cores, which can be
graphically interpreted as a cyclic interconnection of 3rd-order tensors, and
thus termed tensor ring (TR) decomposition. The key advantage of the TR model is
its circular dimensional permutation invariance, which is gained by employing
the trace operation and treating the latent cores equivalently. The TR model can be
viewed as a linear combination of TT decompositions, thus obtaining
powerful and generalized representation abilities. For the optimization of latent
cores, we present four different algorithms based on the sequential SVDs, ALS
scheme, and block-wise ALS techniques. Furthermore, the mathematical properties
of the TR model are investigated, showing that basic multilinear algebra
can be performed efficiently using TR representations and that classical
tensor decompositions can be conveniently transformed into the TR
representation. Finally, the experiments on both synthetic signals and
real-world datasets were conducted to evaluate the performance of different
algorithms.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 14:40:18 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Zhao",
"Qibin",
""
],
[
"Zhou",
"Guoxu",
""
],
[
"Xie",
"Shengli",
""
],
[
"Zhang",
"Liqing",
""
],
[
"Cichocki",
"Andrzej",
""
]
] | TITLE: Tensor Ring Decomposition
ABSTRACT: Tensor networks have in recent years emerged as powerful tools for
solving large-scale optimization problems. One of the most popular tensor
networks is the tensor train (TT) decomposition, which acts as a building block for
more complicated tensor networks. However, the TT decomposition highly depends
on permutations of tensor dimensions, due to its strictly sequential
multilinear products over latent cores, which leads to difficulties in finding
the optimal TT representation. In this paper, we introduce a fundamental tensor
decomposition model to represent a high-dimensional tensor by circular
multilinear products over a sequence of low dimensional cores, which can be
graphically interpreted as a cyclic interconnection of 3rd-order tensors, and
thus termed tensor ring (TR) decomposition. The key advantage of the TR model is
its circular dimensional permutation invariance, which is gained by employing
the trace operation and treating the latent cores equivalently. The TR model can be
viewed as a linear combination of TT decompositions, thus obtaining
powerful and generalized representation abilities. For the optimization of latent
cores, we present four different algorithms based on the sequential SVDs, ALS
scheme, and block-wise ALS techniques. Furthermore, the mathematical properties
of the TR model are investigated, showing that basic multilinear algebra
can be performed efficiently using TR representations and that classical
tensor decompositions can be conveniently transformed into the TR
representation. Finally, the experiments on both synthetic signals and
real-world datasets were conducted to evaluate the performance of different
algorithms.
| no_new_dataset | 0.949106 |
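The circular multilinear product that defines the TR format above can be made concrete in a few lines: each tensor entry is the trace of a ring of core slices. The sketch below assumes illustrative ranks and dimensions and evaluates one entry of a random TR tensor.

```python
# Tensor ring format: each entry of a d-way tensor is the trace of a
# circular product of core slices,
#   X[i1, ..., id] = Tr( G1[:, i1, :] @ G2[:, i2, :] @ ... @ Gd[:, id, :] ).
import numpy as np

def tr_entry(cores, index):
    # cores[k] has shape (r_k, n_k, r_{k+1}), with r_d wrapping back to r_0.
    prod = cores[0][:, index[0], :]
    for k in range(1, len(cores)):
        prod = prod @ cores[k][:, index[k], :]
    return np.trace(prod)  # the trace closes the ring

rng = np.random.default_rng(0)
ranks, dims = [2, 3, 2], [4, 5, 6]
cores = [rng.random((ranks[k], dims[k], ranks[(k + 1) % len(dims)]))
         for k in range(len(dims))]
print(tr_entry(cores, (1, 2, 3)))  # one entry of the represented tensor
```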
1606.05589 | Abhishek Das | Abhishek Das, Harsh Agrawal, C. Lawrence Zitnick, Devi Parikh, Dhruv
Batra | Human Attention in Visual Question Answering: Do Humans and Deep
Networks Look at the Same Regions? | 5 pages, 4 figures, 3 tables, presented at 2016 ICML Workshop on
Human Interpretability in Machine Learning (WHI 2016), New York, NY. arXiv
admin note: substantial text overlap with arXiv:1606.03556 | null | null | null | stat.ML cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We conduct large-scale studies on `human attention' in Visual Question
Answering (VQA) to understand where humans choose to look to answer questions
about images. We design and test multiple game-inspired novel
attention-annotation interfaces that require the subject to sharpen regions of
a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human
ATtention) dataset. We evaluate attention maps generated by state-of-the-art
VQA models against human attention both qualitatively (via visualizations) and
quantitatively (via rank-order correlation). Overall, our experiments show that
current attention models in VQA do not seem to be looking at the same regions
as humans.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2016 17:00:02 GMT"
}
] | 2016-06-20T00:00:00 | [
[
"Das",
"Abhishek",
""
],
[
"Agrawal",
"Harsh",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Parikh",
"Devi",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: Human Attention in Visual Question Answering: Do Humans and Deep
Networks Look at the Same Regions?
ABSTRACT: We conduct large-scale studies on `human attention' in Visual Question
Answering (VQA) to understand where humans choose to look to answer questions
about images. We design and test multiple game-inspired novel
attention-annotation interfaces that require the subject to sharpen regions of
a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human
ATtention) dataset. We evaluate attention maps generated by state-of-the-art
VQA models against human attention both qualitatively (via visualizations) and
quantitatively (via rank-order correlation). Overall, our experiments show that
current attention models in VQA do not seem to be looking at the same regions
as humans.
| new_dataset | 0.953708 |
1512.01655 | Nicola Pezzotti | Nicola Pezzotti, Boudewijn P.F. Lelieveldt, Laurens van der Maaten,
Thomas H\"ollt, Elmar Eisemann, and Anna Vilanova | Approximated and User Steerable tSNE for Progressive Visual Analytics | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique with several datasets, in a real-world research
scenario and for the real-time analysis of high-dimensional streams to
illustrate its effectiveness for interactive data analysis.
| [
{
"version": "v1",
"created": "Sat, 5 Dec 2015 12:05:52 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 14:56:25 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jun 2016 09:36:40 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Pezzotti",
"Nicola",
""
],
[
"Lelieveldt",
"Boudewijn P. F.",
""
],
[
"van der Maaten",
"Laurens",
""
],
[
"Höllt",
"Thomas",
""
],
[
"Eisemann",
"Elmar",
""
],
[
"Vilanova",
"Anna",
""
]
] | TITLE: Approximated and User Steerable tSNE for Progressive Visual Analytics
ABSTRACT: Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique with several datasets, in a real-world research
scenario and for the real-time analysis of high-dimensional streams to
illustrate its effectiveness for interactive data analysis.
| no_new_dataset | 0.944842 |
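A-tSNE itself is not in standard libraries, but the speed/accuracy trade-off described in the record above can be illustrated with scikit-learn's Barnes-Hut tSNE, whose `angle` parameter plays a similar dial role (smaller is more accurate but slower). This is a stand-in illustration of the trade-off, not the authors' algorithm; the data below is random.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.random((500, 50))  # hypothetical high-dimensional dataset

for angle in (0.2, 0.5, 0.8):
    emb = TSNE(n_components=2, method="barnes_hut", angle=angle,
               init="random", random_state=0).fit_transform(X)
    print(angle, emb.shape)  # (500, 2) embedding at each accuracy setting
```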
1602.05473 | Lars Maal{\o}e | Lars Maal{\o}e, Casper Kaae S{\o}nderby, S{\o}ren Kaae S{\o}nderby,
Ole Winther | Auxiliary Deep Generative Models | Proceedings of the 33rd International Conference on Machine Learning,
New York, NY, USA, 2016, JMLR: Workshop and Conference Proceedings volume 48,
Proceedings of the 33rd International Conference on Machine Learning, New
York, NY, USA, 2016 | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep generative models parameterized by neural networks have recently
achieved state-of-the-art performance in unsupervised and semi-supervised
learning. We extend deep generative models with auxiliary variables which
improves the variational approximation. The auxiliary variables leave the
generative model unchanged but make the variational distribution more
expressive. Inspired by the structure of the auxiliary variable we also propose
a model with two stochastic layers and skip connections. Our findings suggest
that more expressive and properly specified deep generative models converge
faster with better results. We show state-of-the-art performance within
semi-supervised learning on MNIST, SVHN and NORB datasets.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 16:24:50 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2016 10:21:34 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Jun 2016 09:19:21 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Jun 2016 06:39:08 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Maaløe",
"Lars",
""
],
[
"Sønderby",
"Casper Kaae",
""
],
[
"Sønderby",
"Søren Kaae",
""
],
[
"Winther",
"Ole",
""
]
] | TITLE: Auxiliary Deep Generative Models
ABSTRACT: Deep generative models parameterized by neural networks have recently
achieved state-of-the-art performance in unsupervised and semi-supervised
learning. We extend deep generative models with auxiliary variables which
improves the variational approximation. The auxiliary variables leave the
generative model unchanged but make the variational distribution more
expressive. Inspired by the structure of the auxiliary variable we also propose
a model with two stochastic layers and skip connections. Our findings suggest
that more expressive and properly specified deep generative models converge
faster with better results. We show state-of-the-art performance within
semi-supervised learning on MNIST, SVHN and NORB datasets.
| no_new_dataset | 0.947235 |
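For reference, a sketch of the standard auxiliary-variable evidence lower bound behind such models (an assumption about the usual formulation, not a quotation from the paper): the auxiliary variable $a$ appears in the inference model and in a matching term $p(a\mid z,x)$, leaving the generative model of $(x,z)$ unchanged.

```latex
% Auxiliary-variable lower bound (standard formulation, assumed here):
\log p(x) \;\ge\; \mathbb{E}_{q(a,z\mid x)}\!\left[
  \log \frac{p(a\mid z,x)\, p(x\mid z)\, p(z)}{q(a\mid x)\, q(z\mid a,x)}
\right],
\qquad q(a,z\mid x) = q(a\mid x)\, q(z\mid a,x).
```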
1606.04956 | Ashton Anderson | Ashton Anderson, Jon Kleinberg and Sendhil Mullainathan | Assessing Human Error Against a Benchmark of Perfection | KDD 2016; 10 pages | null | 10.1145/2939672.2939803 | null | cs.AI cs.GT cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number of domains are providing us with detailed trace data on
human decisions in settings where we can evaluate the quality of these
decisions via an algorithm. Motivated by this development, an emerging line of
work has begun to consider whether we can characterize and predict the kinds of
decisions where people are likely to make errors.
To investigate what a general framework for human error prediction might look
like, we focus on a model system with a rich history in the behavioral
sciences: the decisions made by chess players as they select moves in a game.
We carry out our analysis at a large scale, employing datasets with several
million recorded games, and using chess tablebases to acquire a form of ground
truth for a subset of chess positions that have been completely solved by
computers but remain challenging even for the best players in the world.
We organize our analysis around three categories of features that we argue
are present in most settings where the analysis of human error is applicable:
the skill of the decision-maker, the time available to make the decision, and
the inherent difficulty of the decision. We identify rich structure in all
three of these categories of features, and find strong evidence that in our
domain, features describing the inherent difficulty of an instance are
significantly more powerful than features based on skill or time.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 20:00:32 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Anderson",
"Ashton",
""
],
[
"Kleinberg",
"Jon",
""
],
[
"Mullainathan",
"Sendhil",
""
]
] | TITLE: Assessing Human Error Against a Benchmark of Perfection
ABSTRACT: An increasing number of domains are providing us with detailed trace data on
human decisions in settings where we can evaluate the quality of these
decisions via an algorithm. Motivated by this development, an emerging line of
work has begun to consider whether we can characterize and predict the kinds of
decisions where people are likely to make errors.
To investigate what a general framework for human error prediction might look
like, we focus on a model system with a rich history in the behavioral
sciences: the decisions made by chess players as they select moves in a game.
We carry out our analysis at a large scale, employing datasets with several
million recorded games, and using chess tablebases to acquire a form of ground
truth for a subset of chess positions that have been completely solved by
computers but remain challenging even for the best players in the world.
We organize our analysis around three categories of features that we argue
are present in most settings where the analysis of human error is applicable:
the skill of the decision-maker, the time available to make the decision, and
the inherent difficulty of the decision. We identify rich structure in all
three of these categories of features, and find strong evidence that in our
domain, features describing the inherent difficulty of an instance are
significantly more powerful than features based on skill or time.
| no_new_dataset | 0.942612 |
1606.04985 | Yanwei Cui | Yanwei Cui, Laetitia Chapel, S\'ebastien Lef\`evre | Combining multiscale features for classification of hyperspectral
images: a sequence based kernel approach | 8th IEEE GRSS Workshop on Hyperspectral Image and Signal Processing:
Evolution in Remote Sensing (WHISPERS 2016), UCLA in Los Angeles, California,
U.S | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, hyperspectral image classification widely copes with spatial
information to improve accuracy. One of the most popular ways to integrate such
information is to extract hierarchical features from a multiscale segmentation.
In the classification context, the extracted features are commonly concatenated
into a long vector (also called stacked vector), on which is applied a
conventional vector-based machine learning technique (e.g. SVM with Gaussian
kernel). In this paper, we rather propose to use a sequence structured kernel:
the spectrum kernel. We show that the conventional stacked vector-based kernel
is actually a special case of this kernel. Experiments conducted on various
publicly available hyperspectral datasets illustrate the improvement of the
proposed kernel w.r.t. conventional ones using the same hierarchical spatial
features.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 21:19:54 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Cui",
"Yanwei",
""
],
[
"Chapel",
"Laetitia",
""
],
[
"Lefèvre",
"Sébastien",
""
]
] | TITLE: Combining multiscale features for classification of hyperspectral
images: a sequence based kernel approach
ABSTRACT: Nowadays, hyperspectral image classification widely copes with spatial
information to improve accuracy. One of the most popular ways to integrate such
information is to extract hierarchical features from a multiscale segmentation.
In the classification context, the extracted features are commonly concatenated
into a long vector (also called stacked vector), on which is applied a
conventional vector-based machine learning technique (e.g. SVM with Gaussian
kernel). In this paper, we rather propose to use a sequence structured kernel:
the spectrum kernel. We show that the conventional stacked vector-based kernel
is actually a special case of this kernel. Experiments conducted on various
publicly available hyperspectral datasets illustrate the improvement of the
proposed kernel w.r.t. conventional ones using the same hierarchical spatial
features.
| no_new_dataset | 0.954009 |
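The spectrum kernel proposed in the record above has a simple generic form: an inner product of k-mer counts over two sequences. The sketch below uses character strings purely to stay self-contained; in the paper's setting the symbols would be per-scale region features, so this illustrates the kernel family, not the authors' exact construction.

```python
# p-spectrum kernel: inner product of counts of length-p contiguous
# subsequences (k-mers) of two sequences.
from collections import Counter

def spectrum_kernel(s, t, p=2):
    ngrams = lambda x: Counter(tuple(x[i:i + p])
                               for i in range(len(x) - p + 1))
    a, b = ngrams(s), ngrams(t)
    return sum(a[k] * b[k] for k in a)  # inner product of count vectors

print(spectrum_kernel("abcab", "abcba", p=2))  # prints 3
```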
1606.04991 | Aryan Mokhtari | Aryan Mokhtari and Alec Koppel and Alejandro Ribeiro | A Class of Parallel Doubly Stochastic Algorithms for Large-Scale
Learning | arXiv admin note: substantial text overlap with arXiv:1603.06782 | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider learning problems over training sets in which both the number of
training examples and the dimension of the feature vectors are large. To solve
these problems we propose the random parallel stochastic algorithm (RAPSA). We
call the algorithm random parallel because it utilizes multiple parallel
processors to operate on a randomly chosen subset of blocks of the feature
vector. We call the algorithm stochastic because processors choose training
subsets uniformly at random. Algorithms that are parallel in either of these
dimensions exist, but RAPSA is the first attempt at a methodology that is
parallel in both the selection of blocks and the selection of elements of the
training set. In RAPSA, processors utilize the randomly chosen functions to
compute the stochastic gradient component associated with a randomly chosen
block. The technical contribution of this paper is to show that this minimally
coordinated algorithm converges to the optimal classifier when the training
objective is convex. Moreover, we present an accelerated version of RAPSA
(ARAPSA) that incorporates the objective function curvature information by
premultiplying the descent direction by a Hessian approximation matrix. We
further extend the results for asynchronous settings and show that if the
processors perform their updates without any coordination the algorithms are
still convergent to the optimal argument. RAPSA and its extensions are then
numerically evaluated on a linear estimation problem and a binary image
classification task using the MNIST handwritten digit dataset.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 21:34:46 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Mokhtari",
"Aryan",
""
],
[
"Koppel",
"Alec",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] | TITLE: A Class of Parallel Doubly Stochastic Algorithms for Large-Scale
Learning
ABSTRACT: We consider learning problems over training sets in which both the number of
training examples and the dimension of the feature vectors are large. To solve
these problems we propose the random parallel stochastic algorithm (RAPSA). We
call the algorithm random parallel because it utilizes multiple parallel
processors to operate on a randomly chosen subset of blocks of the feature
vector. We call the algorithm stochastic because processors choose training
subsets uniformly at random. Algorithms that are parallel in either of these
dimensions exist, but RAPSA is the first attempt at a methodology that is
parallel in both the selection of blocks and the selection of elements of the
training set. In RAPSA, processors utilize the randomly chosen functions to
compute the stochastic gradient component associated with a randomly chosen
block. The technical contribution of this paper is to show that this minimally
coordinated algorithm converges to the optimal classifier when the training
objective is convex. Moreover, we present an accelerated version of RAPSA
(ARAPSA) that incorporates the objective function curvature information by
premultiplying the descent direction by a Hessian approximation matrix. We
further extend the results for asynchronous settings and show that if the
processors perform their updates without any coordination the algorithms are
still convergent to the optimal argument. RAPSA and its extensions are then
numerically evaluated on a linear estimation problem and a binary image
classification task using the MNIST handwritten digit dataset.
| no_new_dataset | 0.944995 |
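The doubly stochastic scheme described in the RAPSA record above (random blocks of coordinates and random subsets of examples) can be sketched on a stand-in least-squares objective. Everything below, problem sizes, step size, block count, is an illustrative assumption, not the authors' implementation.

```python
# Doubly stochastic block-coordinate SGD: each step draws a random block
# of coordinates and a random minibatch of examples, then updates only
# that block.
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 1000, 40, 4                  # samples, features, blocks
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d)         # noiseless targets for simplicity
w = np.zeros(d)
blocks = np.array_split(np.arange(d), B)

for _ in range(2000):
    blk = blocks[rng.integers(B)]      # random block of coordinates
    idx = rng.integers(0, n, size=32)  # random minibatch of examples
    resid = A[idx] @ w - y[idx]
    grad_blk = A[idx][:, blk].T @ resid / len(idx)
    w[blk] -= 0.05 * grad_blk          # update only the selected block

print(np.linalg.norm(A @ w - y) / np.linalg.norm(y))  # relative residual
```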
1606.05007 | Naoya Takahashi | Naoya Takahashi, Tofigh Naghibi, Beat Pfister | Automatic Pronunciation Generation by Utilizing a Semi-supervised Deep
Neural Networks | Proc. of 17th Interspeech (2016), San Francisco, California, USA | null | null | null | cs.CL cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phonemic or phonetic sub-word units are the most commonly used atomic
elements to represent speech signals in modern ASRs. However, they are not the
optimal choice for several reasons, such as the large amount of effort required
to handcraft a pronunciation dictionary, pronunciation variations, human
mistakes and under-resourced dialects and languages. Here, we propose a
data-driven pronunciation estimation and acoustic modeling method which only
takes the orthographic transcription to jointly estimate a set of sub-word
units and a reliable dictionary. Experimental results show that the proposed
method which is based on semi-supervised training of a deep neural network
largely outperforms phoneme based continuous speech recognition on the TIMIT
dataset.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 23:45:33 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Takahashi",
"Naoya",
""
],
[
"Naghibi",
"Tofigh",
""
],
[
"Pfister",
"Beat",
""
]
] | TITLE: Automatic Pronunciation Generation by Utilizing a Semi-supervised Deep
Neural Networks
ABSTRACT: Phonemic or phonetic sub-word units are the most commonly used atomic
elements to represent speech signals in modern ASRs. However, they are not the
optimal choice for several reasons, such as the large amount of effort required
to handcraft a pronunciation dictionary, pronunciation variations, human
mistakes and under-resourced dialects and languages. Here, we propose a
data-driven pronunciation estimation and acoustic modeling method which only
takes the orthographic transcription to jointly estimate a set of sub-word
units and a reliable dictionary. Experimental results show that the proposed
method which is based on semi-supervised training of a deep neural network
largely outperforms phoneme based continuous speech recognition on the TIMIT
dataset.
| no_new_dataset | 0.950778 |
1606.05032 | Yang Yang | Yang Yang, Weilun Chen, Yadan Luo, Fumin Shen, Jie Shao and Heng Tao
Shen | Zero-Shot Hashing via Transferring Supervised Knowledge | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashing has shown its efficiency and effectiveness in facilitating
large-scale multimedia applications. Supervised knowledge (e.g. semantic labels
or pair-wise relationships) associated with data is capable of significantly
improving the quality of hash codes and hash functions. However, confronted
with the rapid growth of newly-emerging concepts and multimedia data on the
Web, existing supervised hashing approaches may easily suffer from the scarcity
and validity of supervised information due to the expensive cost of manual
labelling. In this paper, we propose a novel hashing scheme, termed
\emph{zero-shot hashing} (ZSH), which compresses images of "unseen" categories
to binary codes with hash functions learned from limited training data of
"seen" categories. Specifically, we project independent data labels i.e.
0/1-form label vectors) into semantic embedding space, where semantic
relationships among all the labels can be precisely characterized and thus seen
supervised knowledge can be transferred to unseen classes. Moreover, in order
to cope with the semantic shift problem, we rotate the embedded space to more
suitably align the embedded semantics with the low-level visual feature space,
thereby alleviating the influence of the semantic gap. In the meantime, to exert
positive effects on learning high-quality hash functions, we further propose to
preserve local structural property and discrete nature in binary codes.
Besides, we develop an efficient alternating algorithm to solve the ZSH model.
Extensive experiments conducted on various real-life datasets show the superior
zero-shot image retrieval performance of ZSH as compared to several
state-of-the-art hashing methods.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 02:56:39 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Yang",
"Yang",
""
],
[
"Chen",
"Weilun",
""
],
[
"Luo",
"Yadan",
""
],
[
"Shen",
"Fumin",
""
],
[
"Shao",
"Jie",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Zero-Shot Hashing via Transferring Supervised Knowledge
ABSTRACT: Hashing has shown its efficiency and effectiveness in facilitating
large-scale multimedia applications. Supervised knowledge (e.g., semantic
labels or pair-wise relationships) associated with data can significantly
improve the quality of hash codes and hash functions. However, confronted with
the rapid growth of newly emerging concepts and multimedia data on the Web,
existing supervised hashing approaches may easily suffer from the scarcity and
limited validity of supervised information, due to the high cost of manual
labelling. In this paper, we propose a novel hashing scheme, termed
\emph{zero-shot hashing} (ZSH), which compresses images of "unseen" categories
to binary codes with hash functions learned from limited training data of
"seen" categories. Specifically, we project independent data labels (i.e.,
0/1-form label vectors) into a semantic embedding space, where semantic
relationships among all the labels can be precisely characterized, so that
supervised knowledge from seen classes can be transferred to unseen classes.
Moreover, to cope with the semantic shift problem, we rotate the embedding
space to better align the embedded semantics with the low-level visual feature
space, thereby alleviating the influence of the semantic gap. Meanwhile, to
support learning high-quality hash functions, we further propose to preserve
the local structural properties and the discrete nature of binary codes. In
addition, we develop an efficient alternating algorithm to solve the ZSH model.
Extensive experiments conducted on various real-life datasets show the superior
zero-shot image retrieval performance of ZSH compared to several
state-of-the-art hashing methods.
| no_new_dataset | 0.948775 |
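For this record, here is a minimal sketch of the label-embedding-plus-rotation idea, with binary codes obtained by a sign function. The random stand-in data, the single linear projection, and the Procrustes rotation (which aligns the projected visual features with the embedded semantics, a mirror image of the paper's rotation of the embedding space) are assumptions for illustration; the actual ZSH objective also preserves local structure and handles discreteness directly.

import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))        # visual features of "seen" images
E = rng.normal(size=(500, 32))         # semantic embeddings of their labels

W = rng.normal(size=(128, 32))         # an assumed linear hash projection
R, _ = orthogonal_procrustes(X @ W, E) # rotation toward the embedded semantics

codes = np.sign(X @ W @ R)             # binary codes for seen images
unseen = rng.normal(size=(10, 128))    # images of "unseen" categories
unseen_codes = np.sign(unseen @ W @ R) # hashed with the same learned functions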
1606.05060 | Feng Nan | Feng Nan, Joseph Wang, Venkatesh Saligrama | Pruning Random Forests for Prediction on a Budget | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to prune a random forest (RF) for resource-constrained prediction.
We first construct an RF and then prune it to optimize expected feature cost
and accuracy. We pose RF pruning as a novel 0-1 integer program with linear
constraints that encourage feature re-use. We establish total unimodularity of
the constraint set to prove that the corresponding LP relaxation solves the
original integer program. We then exploit connections to combinatorial
optimization and develop an efficient primal-dual algorithm that scales to
large datasets. In contrast to our bottom-up approach, which benefits from a
good RF initialization, conventional methods are top-down, acquiring features
based on their utility value, and are generally intractable, requiring
heuristics. Empirically, our pruning algorithm outperforms existing
state-of-the-art resource-constrained algorithms.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 05:56:36 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Nan",
"Feng",
""
],
[
"Wang",
"Joseph",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Pruning Random Forests for Prediction on a Budget
ABSTRACT: We propose to prune a random forest (RF) for resource-constrained prediction.
We first construct an RF and then prune it to optimize expected feature cost
and accuracy. We pose RF pruning as a novel 0-1 integer program with linear
constraints that encourage feature re-use. We establish total unimodularity of
the constraint set to prove that the corresponding LP relaxation solves the
original integer program. We then exploit connections to combinatorial
optimization and develop an efficient primal-dual algorithm that scales to
large datasets. In contrast to our bottom-up approach, which benefits from a
good RF initialization, conventional methods are top-down, acquiring features
based on their utility value, and are generally intractable, requiring
heuristics. Empirically, our pruning algorithm outperforms existing
state-of-the-art resource-constrained algorithms.
| no_new_dataset | 0.948822 |
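A toy version of the pruning program this record describes, solved via its LP relaxation: keep-tree variables are tied to pay-for-feature variables by rows with one +1 and one -1 entry, the network-like structure the total-unimodularity argument exploits. The instance itself (accuracies, costs, tree-feature usage, the minimum-tree constraint) is invented for illustration and is not from the paper.

import numpy as np
from scipy.optimize import linprog

acc  = np.array([0.8, 0.7, 0.6, 0.9])      # per-tree accuracy proxy (assumed)
cost = np.array([1.0, 2.0, 0.5])           # per-feature acquisition cost
uses = [(0, 0), (0, 1), (1, 1), (2, 2), (3, 0), (3, 2)]  # tree t uses feature f

n_t, n_f = len(acc), len(cost)
c = np.concatenate([-acc, cost])           # maximize accuracy, penalize cost

# z_t - u_f <= 0: keeping tree t forces paying for every feature it uses.
A_ub = np.zeros((len(uses) + 1, n_t + n_f))
for row, (t, f) in enumerate(uses):
    A_ub[row, t] = 1.0
    A_ub[row, n_t + f] = -1.0
A_ub[-1, :n_t] = -1.0                      # -(z_0+...+z_3) <= -2: keep >= 2 trees
b_ub = np.append(np.zeros(len(uses)), -2.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n_t + n_f))
print(np.round(res.x, 3))                  # optimum is integral for this instance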
1606.05242 | Pedro Saleiro | Pedro Saleiro, Lu\'is Gomes, Carlos Soares | Sentiment Aggregate Functions for Political Opinion Polling using
Microblog Streams | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automatic content analysis of mass media in the social sciences has
become both necessary and possible with the rise of social media and growing
computational power. One particularly promising avenue of research concerns the
use of sentiment analysis in microblog streams. However, one of the main
challenges is aggregating sentiment polarity in a timely fashion so that it can
be fed to the prediction method. We investigated a large set of sentiment
aggregate functions and performed a regression analysis using political opinion
polls as the gold standard. Our dataset contains nearly 233,000 tweets,
classified according to their polarity (positive, negative, or neutral) and
concerning the five main Portuguese political leaders during the Portuguese
bailout (2011-2014). Results show that different sentiment aggregate functions
exhibit different feature importance over time, while the error remains almost
unchanged.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2016 16:14:58 GMT"
}
] | 2016-06-17T00:00:00 | [
[
"Saleiro",
"Pedro",
""
],
[
"Gomes",
"Luís",
""
],
[
"Soares",
"Carlos",
""
]
] | TITLE: Sentiment Aggregate Functions for Political Opinion Polling using
Microblog Streams
ABSTRACT: The automatic content analysis of mass media in the social sciences has
become both necessary and possible with the rise of social media and growing
computational power. One particularly promising avenue of research concerns the
use of sentiment analysis in microblog streams. However, one of the main
challenges is aggregating sentiment polarity in a timely fashion so that it can
be fed to the prediction method. We investigated a large set of sentiment
aggregate functions and performed a regression analysis using political opinion
polls as the gold standard. Our dataset contains nearly 233,000 tweets,
classified according to their polarity (positive, negative, or neutral) and
concerning the five main Portuguese political leaders during the Portuguese
bailout (2011-2014). Results show that different sentiment aggregate functions
exhibit different feature importance over time, while the error remains almost
unchanged.
| new_dataset | 0.959837 |
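To illustrate the kind of aggregate functions this record refers to, here is a small sketch: a few candidate aggregates computed from polarity counts, regressed against poll values. The synthetic counts, the poll series, and the three aggregate forms are assumptions; the paper evaluates a much larger set of functions over time.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({                         # synthetic monthly polarity counts
    "pos": rng.integers(50, 500, 24),
    "neg": rng.integers(50, 500, 24),
    "neu": rng.integers(100, 1000, 24),
})
poll = rng.uniform(10, 40, 24)              # stand-in for poll percentages

feats = pd.DataFrame({                      # three candidate aggregate functions
    "share_pos": df.pos / (df.pos + df.neg + df.neu),
    "polarity":  (df.pos - df.neg) / (df.pos + df.neg),
    "log_odds":  np.log((1 + df.pos) / (1 + df.neg)),
})

model = LinearRegression().fit(feats, poll) # polls serve as the gold standard
print(dict(zip(feats.columns, np.round(model.coef_, 3))))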